A computational camera uses a combination of optics and software to produce images that cannot be taken with traditional cameras. A variety of computational cameras have been demonstrated: some designed to achieve new imaging functionalities, and others to reduce the complexity of traditional imaging.

Figure: (a) The traditional camera model. (b) A computational camera uses optical coding followed by computational decoding to produce new types of images. (c) A programmable imaging system is a computational camera whose optics and software can be varied and controlled. (d) Optical coding can also be done via illumination, using a programmable flash.
In the computational camera, novel optics map rays in the light field of the scene to pixels on the detector in some unconventional fashion. The captured image is optically coded and may not be meaningful in its raw form. The computational module has a model of the optics, which it uses to decode the captured image into a new type of image that can benefit a vision system. That vision system may be either a human observing the image or a computer vision system that uses the image to interpret the scene it represents.
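As a minimal sketch of this coding-and-decoding idea (the scene and the coding matrix below are illustrative stand-ins, not a specific camera design), the optics can be modeled as a known linear operator applied to the scene, which the computational module then inverts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scene: a 1-D signal standing in for the light field.
scene = rng.random(16)

# The novel optics map scene rays to detector pixels; here the optical
# coding is modeled as a known, well-conditioned mixing matrix.
coding = np.eye(16) + 0.1 * rng.standard_normal((16, 16))

# The captured image is optically coded and not meaningful in raw form.
captured = coding @ scene

# The computational module has a model of the optics, so it can decode
# the captured image by solving the linear system.
decoded = np.linalg.solve(coding, captured)

print(np.allclose(decoded, scene))  # → True
```

In a real camera the coding operator comes from a calibrated optical model rather than a random matrix, and decoding is typically a regularized inverse rather than an exact solve.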

The coding methods used in today’s computational cameras can be broadly classified into six approaches:

1. Object Side Coding, which requires only optics externally attached to a traditional camera.
2. Pupil Plane Coding, in which an optical element is placed at, or close to, the pupil plane of a traditional lens.
3. Focal Plane Coding, in which an optical element is placed on, or close to, the image detector.
4. Illumination Coding, in which captured images are coded with illumination patterns produced by a spatially and/or temporally controllable flash.
5. Camera Clusters and Arrays, which spatially arrange a number of traditional cameras to capture different types of images.
6. Unconventional Imaging Systems, which are optical designs that cannot easily be described as modifications to, or collections of, traditional cameras.

Computational cameras produce images that are fundamentally different from the traditional linear perspective image. However, the hardware and software of each computational camera are typically designed to produce a particular type of image, and the nature of this image cannot be altered without significant redesign of the imaging system. A programmable imaging system, in contrast, uses an image-forming optical system whose radiometric and/or geometric properties can be varied by a controller. When the controller applies such a change to the optics, it also changes the decoding software in the computational module.
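The key point is that the optical setting and the decoder are switched together. A minimal sketch of that control loop (the modes and decoders here are hypothetical, not from any particular system):

```python
# Sketch of a programmable imaging system: the controller varies the
# optical setting and swaps in the matching decoder at the same time.

def decode_identity(image):
    # Traditional mode: the raw image is already the final image.
    return image

def decode_negate(image):
    # Stand-in decoder for some hypothetical radiometric coding.
    return [255 - p for p in image]

DECODERS = {
    "traditional": decode_identity,
    "radiometric_coding": decode_negate,
}

class Controller:
    def __init__(self):
        self.mode = "traditional"

    def set_mode(self, mode):
        # Changing the optics and the decoding software together.
        self.mode = mode

    def decode(self, captured):
        return DECODERS[self.mode](captured)

ctrl = Controller()
ctrl.set_mode("radiometric_coding")
print(ctrl.decode([0, 100, 255]))  # → [255, 155, 0]
```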

The result is a single imaging system that can emulate the functionalities of several specialized ones. Such a flexible camera has two major benefits. First, users are free to change the role of the camera based on their needs. Second, it allows one to explore the notion of a purposive camera that, as time progresses, automatically produces the visual information most pertinent to the task. To give its end user true flexibility, a programmable imaging system must have an open hardware and software architecture.

One motivation for developing computational cameras is to create new imaging functionalities that would be difficult, if not impossible, to achieve using the traditional camera model. The new functionality may come in the form of images with enhanced field of view, spectral resolution, dynamic range, temporal resolution, etc. The new functionality can also manifest in terms of flexibility — the ability to manipulate the optical settings of an image (focus, depth of field, viewpoint, resolution, lighting, etc.) after the image has been captured.

Another major benefit of computational imaging is that it enables the development of cameras with a higher performance-to-complexity ratio than traditional imaging. Camera complexity has yet to be defined in concrete terms, but one can formulate it as some function of size, weight, and cost. In imaging, it is generally accepted that higher performance comes at the cost of complexity. For instance, to increase the resolution of a camera, one must increase the number of elements in its lens; in traditional imaging, this is the only way to combat the aberrations that limit resolution. In contrast, computational imaging allows a designer to shift complexity from hardware to computation. For instance, high image resolution can be achieved by post-processing an image captured with very simple optics.
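One common way to realize this shift (a sketch under simplifying assumptions, not a specific published design) is to model the simple optics as a known blur and restore detail by deconvolution. Here the blur kernel, signal, and regularization constant are all illustrative:

```python
import numpy as np

# Shifting complexity from hardware to computation: simple optics are
# modeled as a known blur, and resolution is restored by deconvolution.
rng = np.random.default_rng(1)
sharp = rng.random(64)              # the scene detail we want to recover
kernel = np.zeros(64)
kernel[:5] = 1.0 / 5.0              # simple optics: a 5-tap box blur

# Image captured through the simple optics (circular convolution).
blurred = np.real(np.fft.ifft(np.fft.fft(sharp) * np.fft.fft(kernel)))

# Wiener-style deconvolution using the known optical model; the small
# constant regularizes frequencies the optics nearly suppress.
H = np.fft.fft(kernel)
wiener = np.conj(H) / (np.abs(H) ** 2 + 1e-9)
restored = np.real(np.fft.ifft(np.fft.fft(blurred) * wiener))

print(np.max(np.abs(restored - sharp)) < 1e-3)  # → True
```

With noisy measurements the regularization constant must be larger, and the recoverable resolution is then limited by how strongly the simple optics attenuate high spatial frequencies.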

The design of computational cameras may be viewed as choosing an appropriate operating point within a high-dimensional parameter space. Some of the parameters are photometric resolution, spatial resolution, temporal resolution, angular resolution, spectral resolution, field of view, and F-number. The space could include additional parameters related to the “cost” of the design, such as size, weight, and expense. In general, when making a final design choice to achieve a desired functionality, one is forced to trade off among the various parameters.

In the cases of omnidirectional imaging and integral imaging, resolution is traded off for a wider field of view and for viewpoint (or focus) control, respectively. Generally, the trade-off made with any given computational camera is straightforward to analyze and quantify.
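For integral imaging, the trade-off can be quantified with back-of-the-envelope arithmetic: the fixed pixel budget of the sensor is divided between spatial and angular samples. The sensor and lenslet numbers below are hypothetical:

```python
# Hypothetical integral-imaging (light-field) pixel budget: each lenslet
# covers a patch of sensor pixels, trading spatial for angular resolution.

sensor_w, sensor_h = 4000, 3000   # 12-megapixel sensor (assumed)
angular = 10                      # 10 x 10 views per lenslet (assumed)

spatial_w = sensor_w // angular
spatial_h = sensor_h // angular

print(spatial_w, spatial_h)       # → 400 300 (pixels per view)
print(angular * angular)          # → 100 (viewpoints gained)
```

So a 12-megapixel sensor yields 100 viewpoints, each with only 0.12 megapixels of spatial resolution, which is exactly the trade-off described above.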

This work was done by Shree K. Nayar of Columbia University for the Office of Naval Research. ONR-0026


This Brief includes a Technical Support Package (TSP). “Computational Cameras” (reference ONR-0026) is currently available for download from the TSP library.
