• The bio-inspired sensor helps machines adjust to multiple levels of illumination.
  • Presented in a paper in Nature Electronics, the bio-inspired sensor is based on phototransistors made from molybdenum disulfide.

Machines and robots must be able to collect images and measurements under a wide range of background lighting conditions to track and navigate real-world environments. Over the last decade, engineers have been trying to design advanced sensors that can seamlessly integrate with robots, surveillance systems, and other technologies that benefit from sensing their surroundings.

Recently, a research team at Hong Kong Polytechnic University, Peking University, Yonsei University, and Fudan University created an advanced sensor that can gather data under multiple illumination conditions using a mechanism that mimics the functioning of the human retina. Presented in a paper in Nature Electronics, the bio-inspired sensor is built from phototransistors made of molybdenum disulfide.

“Our research team started the research on optoelectronic memory five years ago. This emerging device can output light-dependent and history-dependent signals, which enables image integration, weak signal accumulation, spectrum analysis, and other complicated image processing functions, integrating the multifunction of sensing, data storage, and data processing in a single device,” said one of the researchers who developed the sensor, Yang Chai, in an interview.

In 2018, Chai and his fellow researchers published their first paper on optoelectronic memories. They introduced a resistive switching memory device capable of performing photo-sensing and logic operations.

In 2019, they introduced a new optoelectronic resistive random-access memory device with three distinct capabilities: it could sense the environment, store data in memory, and execute neuromorphic visual pre-processing operations.

“In 2020, we examined the concept of near-sensor and in-sensor computing paradigms and provided our perspective in this field. Our new study builds on all of our previous efforts,” Chai said.

Natural light intensity varies enormously, spanning an overall range of about 280 dB. When perceiving external light signals, the human retina adjusts the photosensitivity of its photoreceptors based on the intensity of the signal. This allows the human eye to gradually adjust to various illumination levels and see clearly in both dark and bright settings, a characteristic known as “visual adaptation.”

“For example, when you enter a darkened movie theater from a bright hall, you can hardly see anything initially, but after a while in the theater, it becomes easier to see. This phenomenon is called scotopic adaptation. In contrast, if you come out of a dark movie theater on a sunny day, you feel very dazzled at first, and it takes a while to see the surrounding scenery. This process is the opposite of scotopic adaptation, called photopic adaptation,” Chai explained.

The core purpose of the recent work by Chai and his fellow researchers was to design a sensing device inspired by the human retina’s structure and function. To make this possible, they first studied how the human retina works, then developed a strategy that allowed them to replicate its visual adaptation capabilities artificially.

State-of-the-art image sensors built on silicon complementary metal-oxide-semiconductor (CMOS) technologies have a dynamic range of only about 70 dB, far below the roughly 280 dB range of natural lighting.
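For intuition, dynamic range in decibels maps to an intensity ratio via DR = 20·log10(Imax/Imin), the convention commonly used for image sensors. A short Python check (an illustration written for this article, not code from the paper) shows how large the gap is:

```python
def db_to_ratio(db):
    """Convert a dynamic range in dB to a max/min intensity ratio,
    using the 20*log10 convention common for image sensors."""
    return 10 ** (db / 20)

# CMOS sensor (~70 dB) vs. natural lighting (~280 dB)
print(f"70 dB  -> {db_to_ratio(70):.0e}x")
print(f"280 dB -> {db_to_ratio(280):.0e}x")
```

Running this shows roughly a 3×10³ intensity ratio at 70 dB versus 10¹⁴ at 280 dB, so a conventional sensor misses more than ten orders of magnitude that natural scenes can span.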

“To enable vision under a large illumination intensity range, researchers have explored the use of controlled optical apertures, liquid lenses, adjustable exposure times, and de-noising algorithms in post-processing. However, these approaches typically require complex hardware and software resources,” Chai added.

Optoelectronic devices that adapt visually and offer a wider perception range at the sensory terminals could have valuable applications. For example, they could enhance the performance of computer vision tools, reduce the complexity of the hardware needed to build robots and other sensing systems, and improve the accuracy of image recognition systems.

Other researchers had previously introduced optoelectronic devices that could adjust to various illumination conditions. However, most of these devices could only emulate the human retina’s photopic adaptation mechanism; simulating the scotopic adaptation process remained challenging.

Chai and his team built their bio-inspired vision sensor on phototransistors made from an ultrathin semiconductor material called molybdenum disulfide. These phototransistors contain multiple charge trap states that can trap or de-trap electrons in the channel under different gate voltages.

These trap states allow the researchers to dynamically modulate the conductance of their devices. As a result, they can artificially replicate both the scotopic and photopic adaptation mechanisms of the human retina, for instance by expanding the sensors’ perception range and tuning their response to different lighting conditions.
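The adaptation behavior can be sketched as a toy feedback model: sensitivity drifts toward a level matched to the background intensity, so the response recovers in the dark (scotopic adaptation) and relaxes from saturation in bright light (photopic adaptation). This is a minimal illustration written for this article, not the paper’s device physics; the `rate` parameter loosely stands in for the gate-voltage-controlled trapping dynamics:

```python
# Toy model of visual adaptation (illustrative only; not the paper's
# device physics). Sensitivity drifts toward the inverse of the ambient
# intensity, so the normalized response settles near 1 in any lighting.

def adapt(background, steps=50, rate=0.2, s0=1.0):
    """Return the sensor's normalized response over time after a
    sudden change to `background` intensity (arbitrary units)."""
    s = s0
    responses = []
    for _ in range(steps):
        responses.append(s * background)    # instantaneous response
        s += rate * (1.0 / background - s)  # adapt toward 1/background
    return responses

dark = adapt(background=0.01)   # scotopic: response recovers from near 0
bright = adapt(background=100)  # photopic: response relaxes from saturation

print(f"dark:   first={dark[0]:.3f}, last={dark[-1]:.3f}")
print(f"bright: first={bright[0]:.1f}, last={bright[-1]:.3f}")
```

In both conditions the normalized response settles near 1, mirroring how the eye (and the sensor) regains useful contrast after a sudden change in illumination.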

“Our sensor has several advantages and characteristics. Firstly, the visual adaptation function is realized in a single device, which substantially reduces its footprint. Second, it can achieve multiple functions with a single device, including light sensing, memory, and processing. Finally, it can be used to perform scotopic and photopic adaptation to different background light intensities, simply by controlling its gate voltages,” Chai explained.

Chai and his fellow researchers evaluated their bio-inspired sensor in a series of tests and found that it effectively emulates the functioning of the human retina, achieving strong results in both photopic and scotopic adaptation. It also offers a much broader perception range, 199 dB, than preceding devices.

In future research, they plan to enhance the sensor’s performance while implementing it in large-scale systems with multiple sensors. To achieve a wider field of view, the researchers want to build the sensor array on a flexible or hemispherical substrate.

“An area for improvement is our sensor’s adaptation time, as it is still not short enough to enable machine vision applications. Our target is to reduce the adaptation time to the microseconds level. The sensor array scale also needs further improvement. Our near-term goal of array scale is greater than 100×100. Finally, the heterogeneous integration of vision sensors and the post-processing units with Si-based control circuits is a very important step to move towards practical applications,” Chai added.

Experts’ views:

“There is still a long way to go before we can fully replicate the retina’s visual adaptation function. To move towards this goal, we designed a phototransistor-type vision sensor using an ultrathin semiconductor, which can control the degree of scotopic adaptation and photopic adaptation in the same device by applying different gate voltages. In this way, we emulate the functions of photoreceptors and horizontal cells in the retina and successfully realized the bio-inspired in-sensor visual adaptation devices with an expanded perception range of 199 dB,” said Chai.

“Our sensor can enrich machine vision functions, reduce hardware complexity and realize high image recognition efficiency. All of these benefits have great application prospects in the fields of automatic driving, face recognition, and industrial manufacturing in complex lighting environments,” Chai added.