Three-dimensional images can now be created from a single two-dimensional image using a new computational imaging technique developed by a research team at the University of California, Berkeley. A hallmark of the new approach is that it requires no complex, expensive hardware setup; instead, it relies on a compact, inexpensive, lensless relative of the light field camera, which the research team has dubbed the DiffuserCam.
Typical light field cameras place an array of microlenses in front of the sensor to capture the angles of incoming light, which are then computationally transformed into a 3D image. This method of 3D imaging is limited by the loss of spatial information during capture and by the cost of the often customized microlens arrays. The Berkeley team set out to match the output of these light field cameras, but with cheaper components paired with better image-processing algorithms.
The result was the DiffuserCam, which combines just a diffuser and an image sensor. The diffuser is essentially a piece of plastic with random bumps; during the team’s research, even something as simple as Scotch tape functioned as an acceptable diffuser. On the computational side, compressed sensing was used to address the resolution loss that limits microlens arrays. One current restriction of the DiffuserCam is the need for an initial calibration with a moving light, since the size and shape of the bumps in a given diffuser are unknown. Despite this extra step, the team showed that their camera could reconstruct 100 million voxels (3D pixels) from a single 1.3-megapixel image. Improvements to eliminate the calibration step, increase image accuracy, and speed up reconstruction are in the works.
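The compressed-sensing idea behind this kind of lensless reconstruction can be illustrated with a toy sketch: a sparse scene is blurred by a known random point spread function (PSF), standing in for the calibrated diffuser, and an iterative soft-thresholding algorithm (ISTA) recovers the scene from the blurred measurement. This is an assumption-laden simplification for illustration only, not the Berkeley team's actual reconstruction code; the scene, PSF, and parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200                                   # number of scene voxels (1D toy scene)
x_true = np.zeros(n)
x_true[[30, 90, 150]] = [1.0, 0.6, 0.8]   # sparse scene: three point sources

# Random "diffuser" PSF (25 taps). In a real DiffuserCam this response
# is what the moving-light calibration step would measure.
psf = rng.normal(size=25)
A = np.zeros((n, n))                      # banded convolution matrix
for i in range(n):
    for j in range(max(0, i - 12), min(n, i + 13)):
        A[i, j] = psf[i - j + 12]

y = A @ x_true                            # simulated lensless measurement

# ISTA for the lasso problem: min 0.5||Ax - y||^2 + lam * ||x||_1
L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
t = 1.0 / L                               # step size
lam = 0.05                                # sparsity weight (arbitrary choice)

x = np.zeros(n)
for _ in range(3000):
    z = x - t * (A.T @ (A @ x - y))       # gradient step on the data term
    x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)  # soft threshold

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.3f}")
```

Because the sparsity prior does the work that a lens would normally do, the measurement can look like meaningless caustic speckle and still yield a faithful reconstruction; this is what lets the DiffuserCam trade optical hardware for computation.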
With the ability to create 3D images of both microscopic and macroscopic objects, the team is already thinking about applications for the DiffuserCam. Dr. Waller suggests, “…the camera could be useful for self-driving cars, where the 3D information can offer a sense of scale, or it could be used with machine learning algorithms to perform face detection, track people or automatically classify objects.” The DiffuserCam will hopefully soon be used to watch neurons firing within a mouse brain as part of a DARPA-funded program to create a cortical modem. While hundreds of neurons can already be imaged at a time today, the DiffuserCam may be able to image millions of neurons at once, helping researchers understand how the brain works on larger scales. Additionally, without the need for a microscope lens, the camera can be pointed directly at the mouse brain to watch neurons fire during an experiment.
The software for DiffuserCam has been made open source and is available from the Berkeley team.
Study in Optica: DiffuserCam: lensless single-exposure 3D imaging…
Via: The Optical Society