Imaging Through Random Diffusers Instantly without a Computer

LOS ANGELES - Jan. 26, 2022 - PRLog -- Imaging through scattering and diffusive media has been a challenge for many decades, with numerous solutions reported so far. In principle, images distorted by random diffusers (such as frosted glass) can be recovered using a computer. However, existing methods rely on sophisticated algorithms and codes running on computers that digitally process the distorted images to correct them.

A new paper published in eLight introduces an entirely new paradigm for imaging objects through diffusive media. In the paper, entitled "Computational Imaging Without a Computer: Seeing Through Random Diffusers at the Speed of Light," UCLA researchers led by Professor Aydogan Ozcan present a method for seeing through random diffusive media instantly, without any digital processing. The approach is computer-free: it all-optically reconstructs images of objects distorted by unknown, randomly generated phase diffusers.

To achieve this, they used deep learning to train a set of diffractive surfaces, or transmissive layers, that optically reconstruct the image of an unknown object placed entirely behind a random diffuser. The diffuser-distorted input optical field diffracts through the successive trained layers, so the image reconstruction is completed at the speed of light propagating through them. Each trained diffractive surface has tens of thousands of diffractive features (termed neurons) that collectively compute the desired image at the output.

During training, many different, randomly selected phase diffusers were used to help the optical network generalize. After this one-time, deep-learning-based design, the resulting layers are fabricated and assembled into a physical network positioned between a new, unknown diffuser and the output/image plane. The trained network collects the scattered light behind the random diffuser and reconstructs an image of the object all-optically.
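The forward model described above can be sketched numerically: a field leaving the object passes through a random phase diffuser, then through a cascade of phase-only layers, with free-space propagation between them modeled by the FFT-based angular spectrum method. The sketch below uses NumPy; the wavelength, pixel pitch, layer spacing, and layer phases are illustrative placeholders (in the actual work the layer phases are learned via deep learning, not random).

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex 2D field a distance z via the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # Spatial frequencies beyond 1/wavelength are evanescent; suppress them.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

rng = np.random.default_rng(0)
n = 64
wavelength = 0.75e-3   # ~0.4 THz, in meters (illustrative)
dx = 0.4e-3            # pixel pitch (illustrative)
z = 3e-3               # spacing between elements (illustrative)

# Object field distorted by an unknown random phase diffuser.
obj = np.zeros((n, n))
obj[24:40, 24:40] = 1.0
diffuser = np.exp(1j * 2 * np.pi * rng.random((n, n)))
field = angular_spectrum_propagate(obj * diffuser, wavelength, dx, z)

# Successive phase-only diffractive layers (random stand-ins here;
# in the paper these phase values are the result of training).
for _ in range(3):
    layer_phase = 2 * np.pi * rng.random((n, n))
    field = angular_spectrum_propagate(field * np.exp(1j * layer_phase),
                                       wavelength, dx, z)

output_intensity = np.abs(field) ** 2  # what a detector at the image plane records
```

With trained (rather than random) layer phases, the output intensity would approximate the undistorted object image; the point of the sketch is that the whole reconstruction is a fixed, passive optical transformation with no digital computation after fabrication.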

The research team experimentally validated this approach using terahertz waves. The all-optical image reconstruction achieved by these passive diffractive layers allowed the team to see objects through unknown random diffusers, offering an extremely low-power solution compared with existing deep-learning-based or iterative image-reconstruction methods that run on digital computers.

The researchers believe their method could be applied to other parts of the electromagnetic spectrum, including visible and far/mid-infrared wavelengths. The reported proof-of-concept results were obtained with a thin, random diffuser layer; the team believes the underlying methods can potentially be extended to see through volumetric diffusers such as fog.

This approach can enable significant advances in fields where imaging through diffusive media is of utmost importance. Those fields include biomedical imaging, astronomy, autonomous vehicles, robotics, and defense/security applications.

The authors acknowledge funding from the US National Science Foundation and Fujikura.

See the article:
Yi Luo, Yifan Zhao, Jingxi Li, Ege Çetintaş, Yair Rivenson, Mona Jarrahi, and Aydogan Ozcan, "Computational Imaging Without a Computer: Seeing Through Random Diffusers at the Speed of Light," eLight.