Tech Briefs

New technology captures spectral and spatial information of a scene in one snapshot while raising pixel counts and improving image quality.

The concept of an imaging system that captures both spatial and spectral information is well established. One example of such a system, which encodes both location and wavelength into an image, is the Fourier Transform Spectrometer (FTS).

Axial chromatic aberration (ACA) of a diffractive optic: a photon sieve with a focal length of 50 cm, illuminated by a white LED. (Credit: Will Dickinson)

The FTS works by capturing a 2D image containing both spatial dimensions while sweeping the moving mirror of a Michelson interferometer to capture the spectral dimension, yielding a 3D image cube. Because the FTS must sweep along the spectral dimension, it introduces an operational time lag. When imaging a scene that is constantly changing, such as a forest fire, this lag introduces noise that can make the resulting images difficult to process. Mechanical vibration of the instrument, referred to as pointing jitter, adds further noise. A system that could encode two spatial dimensions and one spectral dimension in a single snapshot would eliminate both the time-lag noise and the pointing jitter that the FTS introduces. The Fresnel Zone Light Field Spectral Imager (FZLFSI), referred to here as the Diffractive Plenoptic Camera (DPC), is such a system, capturing all three dimensions in one snapshot.

Unlike the FTS, the DPC captures both spatial and spectral information in a single exposure. It does so by exploiting chromatic aberration to create a camera that can refocus images over a broad range of wavelengths. The DPC uses a diffractive optic — known as a Fresnel Zone Plate (FZP) — as its main imaging optic. The FZP has the resolving power of a refractive lens of the same diameter, but its focal length depends on wavelength, which creates axial chromatic aberration (ACA).
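The wavelength dependence of an FZP's focal length can be sketched numerically. For a binary zone plate the first-order focal length follows f(λ) = r₁²/λ, where r₁ is the radius of the innermost zone. The design values below (r₁ chosen so that f = 50 cm at 550 nm, echoing the captioned photon sieve) are illustrative assumptions, not parameters from the article.

```python
import numpy as np

# First-order focal length of a binary Fresnel zone plate: f(lam) = r1**2 / lam.
# r1 is set so the design focal length is 50 cm at 550 nm (illustrative values).
lam_design = 550e-9                      # design wavelength (m)
f_design = 0.50                          # focal length at design wavelength (m)
r1 = np.sqrt(f_design * lam_design)      # innermost-zone radius (m)

def focal_length(lam):
    """First-order FZP focal length (m) at wavelength lam (m)."""
    return r1**2 / lam

for lam_nm in (450, 550, 650):
    f = focal_length(lam_nm * 1e-9)
    print(f"{lam_nm} nm -> f = {f * 100:.1f} cm")
```

Because f varies as 1/λ, shorter (bluer) wavelengths focus farther from the optic than longer (redder) ones — the opposite sign of the chromatic aberration of a refractive lens, and the effect the DPC exploits.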

While the ACA introduced by a diffractive optic makes it difficult to produce an in-focus picture with an FZP, the DPC turns this effect to its advantage, creating an imaging system that can refocus at different wavelengths. The ACA of diffractive optics has previously been used for high-resolution spectral imaging by translating the sensor array along the optical axis to capture images at different focal planes. The DPC exploits the ACA by combining an FZP with a plenoptic camera. The plenoptic camera was introduced in 1992, initially as a method of capturing 3D data to solve computer-vision problems; it was designed as a device that records the distribution of light rays in space, i.e., the simplified 4D plenoptic function, or radiance.

The concept of the plenoptic camera kept evolving until the first handheld plenoptic camera was built in 2005. It was this handheld design that was used in building the conventional DPC, which refocuses across a spectral range instead of a depth of field. The rendering algorithm used in both cases yields a final picture with a drastically lower pixel count than the raw image. The plenoptic camera's detector had a 4,096 × 4,096 pixel array, but the final rendered images were 300 × 300 pixels — a reduction from 16.7 MP to 0.09 MP. The reduction in the conventional DPC was even more drastic: its detector had a 5,120 × 5,120 pixel array, and the final image was 48 × 46 pixels, a reduction from 26.2 MP to 0.002 MP.
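The quoted reductions follow directly from the array dimensions, which a few lines of arithmetic confirm:

```python
# Pixel-count reductions quoted in the article, checked directly.
plenoptic_raw   = 4096 * 4096   # 16,777,216 px (~16.7 MP)
plenoptic_final = 300 * 300     # 90,000 px     (~0.09 MP)
dpc_raw         = 5120 * 5120   # 26,214,400 px (~26.2 MP)
dpc_final       = 48 * 46       # 2,208 px      (~0.002 MP)

print(f"plenoptic: {plenoptic_raw / 1e6:.1f} MP -> {plenoptic_final / 1e6:.2f} MP")
print(f"DPC:       {dpc_raw / 1e6:.1f} MP -> {dpc_final / 1e6:.3f} MP")
```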

The DPC image could be refocused to different wavelengths, which is not possible with a standard camera, but the final images had very low pixel counts. This led to an alternative method known as “full resolution light field rendering,” also called the focused plenoptic camera. This method, developed in 2008, successfully produced images that were refocused through an extended depth of field but with a higher final pixel count, and it was this method that was used in conjunction with the DPC to create the focused DPC.
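The pixel-count gap between the two rendering schemes can be illustrated with a toy model. The assumed layout — a raw sensor arranged as an N × N grid of microimages, each K × K pixels, with the shapes and patch size chosen arbitrarily here — is a sketch, not the article's actual geometry. Conventional rendering keeps one pixel per microimage, while full-resolution (focused) rendering tiles an M × M patch from each, multiplying the output pixel count by M².

```python
import numpy as np

# Toy contrast between conventional and full-resolution plenoptic rendering.
# Assumed layout: N x N grid of microimages, each K x K px; patch size M <= K.
N, K, M = 4, 8, 4
raw = np.arange(N * K * N * K, dtype=float).reshape(N * K, N * K)

def conventional_render(raw):
    """One output pixel per microimage: take each microimage's center pixel."""
    return raw[K // 2::K, K // 2::K]               # shape (N, N)

def full_resolution_render(raw):
    """Tile an M x M central patch from each microimage into the output."""
    out = np.empty((N * M, N * M))
    lo = (K - M) // 2
    for i in range(N):
        for j in range(N):
            patch = raw[i*K + lo:i*K + lo + M, j*K + lo:j*K + lo + M]
            out[i*M:(i+1)*M, j*M:(j+1)*M] = patch
    return out

print(conventional_render(raw).shape)      # (4, 4)
print(full_resolution_render(raw).shape)   # (16, 16): M**2 times more pixels
```

In a real focused plenoptic renderer the patch size is tied to the refocus plane (here, the wavelength being rendered), but the shape arithmetic above is what accounts for the higher final pixel count.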

This work was done by Carlos D. Diaz, Captain, USAF for the Air Force Institute of Technology. For more information, download the Technical Support Package (free white paper) here under the Photonics category. AFRL-0263