Researchers have developed an AI imaging technology that works across a range of spectra including ultraviolet, visible light and infrared.

A team of scientists have come up with a device for artificial intelligence (AI) that mimics the retina of the eye.

The development could pave the way for advanced AI that can instantly recognise what it sees, such as automatic descriptions of pictures taken by a camera or a smartphone. It can also be applied to self-driving vehicles and robotics.

The study was published in the journal ACS Nano.

The device not only mimics the eye but exceeds its range, sensing ultraviolet and infrared light in addition to the visible spectrum. Current AI imaging technology handles sensing, memorisation and data processing as separate operations; the new device combines all three into one.

Because it marries the three functions into one, the device is many times faster than pre-existing technology, the researchers say. The technology is also quite small: hundreds of the devices can fit on a chip one inch wide.

“It will change the way artificial intelligence is realised today,” says study principal investigator Tania Roy, an assistant professor in the University of Central Florida’s Department of Materials Science and Engineering and NanoScience Technology Center.

“Today, everything is discrete components and running on conventional hardware. And here, we have the capacity to do in-sensor computing using a single device on one small platform.”

The team had previously worked on neuromorphic devices that can enable AI to work in remote regions and space.

“We had devices, which behaved like the synapses of the human brain, but still, we were not feeding them the image directly,” Roy says.

“Now, by adding image sensing ability to them, we have synapse-like devices that act like ‘smart pixels’ in a camera by sensing, processing and recognising images simultaneously.”

The adaptability of the device is supposed to enhance driving in a range of conditions for self-driving vehicles, including at night, says Molla Manjurul Islam, the study’s lead author and a doctoral student in UCF’s Department of Physics. He emphasises that the device can detect ultraviolet and infrared inputs in addition to visible light.

“If you are in your autonomous vehicle at night and the imaging system of the car operates only at a particular wavelength, say the visible wavelength, it will not see what is in front of it,” Islam says. “But in our case, with our device, it can actually see the entire condition.”

“There is no reported device like this, which can operate simultaneously in ultraviolet range and visible wavelength as well as infrared wavelength, so this is the most unique selling point for this device,” he says.

An essential part of the technology is “the engineering of nanoscale surfaces constructed from molybdenum disulfide (MoS2) and platinum ditelluride (PtTe2/Si) to allow for multi-wavelength sensing and memory.”

To test the device’s accuracy, the researchers used both ultraviolet and infrared images: the number “3” projected in ultraviolet, and its mirror image in infrared, the two together forming an “8”. They showed that the device could tell the patterns apart, recognising the “3” in ultraviolet and the “8” in infrared.
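As a toy illustration of this test pattern (a sketch only, not the researchers’ device or code), the two spectral channels can be modelled as binary pixel grids: read separately, each channel shows a “3”, while the pixel-wise combination of the two shows an “8”.

```python
# Toy model: a "3" in the UV channel and its mirror image in the IR channel
# overlap to form an "8". Grids are 5x3, with 1 marking a lit pixel.

THREE = [
    [1, 1, 1],
    [0, 0, 1],
    [1, 1, 1],
    [0, 0, 1],
    [1, 1, 1],
]

EIGHT = [
    [1, 1, 1],
    [1, 0, 1],
    [1, 1, 1],
    [1, 0, 1],
    [1, 1, 1],
]

def mirror(grid):
    """Flip each row left-to-right."""
    return [row[::-1] for row in grid]

def combine(a, b):
    """Pixel-wise OR of two channels, as if both spectra hit the same sensor."""
    return [[x | y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

uv_channel = THREE          # "3" sensed in ultraviolet
ir_channel = mirror(THREE)  # mirrored "3" sensed in infrared

# Each channel alone is a "3" (possibly mirrored); together they form an "8".
assert combine(uv_channel, ir_channel) == EIGHT
```

A device limited to one wavelength would see only one channel; sensing both channels at once is what lets the combined “8” be distinguished from the single-channel “3”.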

“We got 70 to 80% accuracy, which means they have very good chances that they can be realised in hardware,” says study co-author Adithi Krishnaprasad, a doctoral student in UCF’s Department of Electrical and Computer Engineering.

According to the researchers, the technology could become available for use within five to ten years.

Source: TRTWorld and agencies