We’ve Almost Gotten Full-Color Night Vision to Work


(Photo: Browne Lab, UC Irvine Department of Ophthalmology)
Current night vision technology has a significant drawback: it is useful, but it is largely monochromatic, which makes it difficult to properly identify objects and people. Thankfully, night vision appears to be getting a makeover, with full-color visibility made possible by deep learning.

Scientists at the University of California, Irvine, have experimented with reconstructing night vision scenes in color using a deep learning algorithm. The algorithm works from infrared images invisible to the naked eye: humans can only see light at wavelengths from about 400 nanometers (what we perceive as violet) to 700 nanometers (red), while infrared spans wavelengths from 700 nanometers up to about one millimeter. Infrared is therefore an essential component of night vision technology, as it allows humans to “see” in what we would normally perceive as total darkness.

Though thermal imaging has previously been used to color scenes captured in infrared, it isn’t perfect, either. Thermal imaging uses a technique called pseudocolor to “map” each shade from a monochromatic scale into color, which results in a helpful yet highly unrealistic image. This doesn’t solve the problem of identifying objects and individuals in low- or no-light conditions.
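To make the pseudocolor idea concrete, here is a minimal sketch of how a monochromatic intensity image can be “mapped” into color through a fixed lookup palette. The palette and interpolation scheme are illustrative assumptions, not the mapping used by any particular thermal camera; the point is that the resulting colors encode intensity, not an object's true color.

```python
import numpy as np

# A tiny hand-made palette (black -> red -> yellow -> white), purely
# illustrative -- real thermal cameras use much finer lookup tables.
PALETTE = np.array([
    [0.0, 0.0, 0.0],   # coldest -> black
    [1.0, 0.0, 0.0],   # -> red
    [1.0, 1.0, 0.0],   # -> yellow
    [1.0, 1.0, 1.0],   # hottest -> white
])

def pseudocolor(gray):
    """Map a normalized grayscale (infrared intensity) image to RGB by
    linearly interpolating through PALETTE -- the 'pseudocolor' mapping
    described above. Colors track intensity, not real object color."""
    idx = gray * (len(PALETTE) - 1)              # position along the palette
    lo = np.floor(idx).astype(int)               # lower palette entry
    hi = np.minimum(lo + 1, len(PALETTE) - 1)    # upper palette entry
    t = (idx - lo)[..., None]                    # per-pixel blend weight
    return (1 - t) * PALETTE[lo] + t * PALETTE[hi]

frame = np.array([[0.0, 1/3], [2/3, 1.0]])       # toy 2x2 "thermal" frame
rgb = pseudocolor(frame)
print(rgb.shape)   # (2, 2, 3)
```

Because every pixel's color is determined solely by its brightness, two objects at the same temperature get the same color regardless of what they actually look like, which is exactly the limitation the article describes.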

Paratroopers conducting a raid in Iraq, as seen through a traditional night vision device. (Photo: Spc. Lee Davis, US Army/Wikimedia Commons)

The scientists at UC Irvine, on the other hand, sought to create a solution that would produce an image similar to what a human would see in visible-spectrum light. They used a monochromatic camera sensitive to visible and near-infrared light to capture photographs of color palettes and faces. They then trained a convolutional neural network to predict visible-spectrum images using only the near-infrared images supplied. The training process produced three architectures: a baseline linear regression, a U-Net-inspired CNN (UNet), and an augmented U-Net (UNet-GAN), each of which was able to produce about three images per second.
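The defining feature of a U-Net is its skip connections: the input is encoded down to a coarse representation, decoded back up, and merged with full-resolution features saved from the encoder so fine spatial detail survives the bottleneck. The sketch below shows only that structure in plain numpy; it is a hypothetical skeleton with no learned weights, not the UC Irvine team's model, and a real colorizing UNet would apply learned convolutions at each stage and emit three RGB channels.

```python
import numpy as np

def down(x):
    """Encoder step: 2x2 average pooling halves the spatial resolution."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(x):
    """Decoder step: nearest-neighbor upsampling doubles the resolution."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_skeleton(ir):
    """Structural sketch of the U-Net idea applied to a near-IR frame:
    encode to a coarse bottleneck, decode back, then concatenate with
    the skip connection so fine detail is preserved."""
    skip = ir                     # full-resolution features saved for later
    coarse = down(ir)             # bottleneck: global context, low resolution
    decoded = up(coarse)          # back to the input resolution
    # skip connection: stack decoded features with the saved detail
    return np.stack([decoded, skip], axis=0)   # 2 "channels" x H x W

ir = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 near-IR frame
out = unet_skeleton(ir)
print(out.shape)   # (2, 4, 4)
```

The skip connection is what makes this architecture a good fit for colorization: the network can learn color from global context in the bottleneck while the skip path carries the sharp edges needed to place that color correctly.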

Once the neural network produced images in color, the team—made up of engineers, vision scientists, surgeons, computer scientists, and doctoral students—provided the images to graders, who selected which outputs subjectively appeared most similar to the ground truth image. This feedback helped the team select which neural network architecture was most effective, with UNet outperforming UNet-GAN except in zoomed-in conditions. 

The team at UC Irvine published their findings in the journal PLOS ONE on Wednesday. They hope the technology can be applied in security, military operations, and animal observation, and, drawing on their ophthalmology background, they suggest it could also help reduce vision damage during eye surgery.
