The gigapixel camera, named AWARE-2, was developed under the DARPA-funded Advanced Wide Field-of-View Architectures for Image Reconstruction and Exploitation (AWARE) program. In this imaging system, a single objective lens covers a 120-degree horizontal field of view that is sampled by 98 micro-cameras, each with a 14-megapixel sensor, yielding a final mosaic of more than one gigapixel.
Unlike conventional scanning-based gigapixel digital cameras, the current design exposes all of its micro-cameras simultaneously to capture the gigapixel image in a single shot, thereby avoiding the errors introduced by movement during the exposure time.
Members of this team from the University of Arizona and Duke University have now developed a new “scalable image formation pipeline” for the camera, capable of delivering everything from approximately one-megapixel live-view images at the system frame rate of 10 Hz to a full-resolution one-gigapixel image in about three minutes.
In this paper, the new image formation pipeline overcomes the challenge of processing the huge data set generated by the gigapixel camera by using a MapReduce-style algorithm that breaks the image formation process into two parts. “The map step transforms a list of key/value pairs that represent the intensity value for a given pixel on a given micro-camera into an intermediate list of key/value pairs which represent the intensity value for a given location in object space. This location corresponds directly to a pixel in the output image. The reduce step combines key/value pairs sharing the same key to form an estimate of the intensity that was present at that single location in object space”.
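The quoted two-step decomposition can be illustrated with a minimal, self-contained Python sketch. The `registration` lookup, which maps a micro-camera pixel to its object-space coordinate, is a hypothetical stand-in for the camera's calibration data, and the reduce step here simply averages overlapping samples:

```python
from collections import defaultdict

def map_step(camera_pixels, registration):
    """Map each (camera_id, pixel) intensity to an object-space key.

    `registration` is a hypothetical lookup giving, for each
    (camera_id, pixel) pair, the object-space location it samples.
    """
    intermediate = []
    for (cam, px), intensity in camera_pixels.items():
        obj_coord = registration[(cam, px)]
        intermediate.append((obj_coord, intensity))
    return intermediate

def reduce_step(intermediate):
    """Combine all intensities sharing an object-space key (here, by
    averaging) to estimate the intensity at that location."""
    groups = defaultdict(list)
    for key, intensity in intermediate:
        groups[key].append(intensity)
    return {key: sum(vals) / len(vals) for key, vals in groups.items()}

# Toy example: two micro-cameras whose fields of view overlap at (5, 5).
pixels = {("cam0", (10, 10)): 100, ("cam1", (0, 0)): 110}
reg = {("cam0", (10, 10)): (5, 5), ("cam1", (0, 0)): (5, 5)}
mosaic = reduce_step(map_step(pixels, reg))
```

In the overlap region, the two samples are averaged into a single output pixel, which is exactly why the key/value formulation maps so naturally onto parallel hardware: the map step is independent per pixel, and the reduce step is independent per output location.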
The image formation pipeline for the AWARE-2 camera was parallelized across a 9-node cluster of dual 6-core processors using the Message Passing Interface (MPI) framework. This reduced the total computation time to about 3 minutes for a full-resolution one-gigapixel image and to about 0.8 seconds for a one-megapixel live-view image.
A parametric model was used to predict the distortions in a micro-camera as a function of its radial position, and the relative illumination intensity per pixel in the final image was modeled and optimized in Zemax. The relative illumination model compensates for variations in illumination across the final image. Even with such rigorous modeling, some unwanted artifacts such as stray light remain in the final mosaic; these are corrected by a technique called flat-field measurement, in which each micro-camera images a uniform light source and its illumination profile is compensated, resulting in a uniformly stitched composite image.
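The flat-field step itself amounts to dividing each raw frame by the (normalized) image of the uniform source. A minimal sketch with made-up numbers for a vignetted one-row strip:

```python
def flat_field_correct(raw, flat):
    """Divide a raw image by a normalized flat-field frame.

    `flat` is the image of a uniform light source captured by the same
    micro-camera; dividing by it cancels the lens's illumination
    falloff while preserving the overall brightness scale.
    """
    mean_flat = sum(sum(row) for row in flat) / (len(flat) * len(flat[0]))
    return [[raw[i][j] * mean_flat / flat[i][j]
             for j in range(len(raw[0]))]
            for i in range(len(raw))]

# A vignetted 1x3 strip: the edges received only half the light.
raw = [[50, 100, 50]]
flat = [[50, 100, 50]]   # the same falloff appears in the flat frame
corrected = flat_field_correct(raw, flat)
```

Since the scene here is actually uniform, the corrected strip comes out flat, which is the property that lets the 98 per-camera tiles stitch into a composite without visible illumination seams.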
One of the advantages of such a multiscale optical system over a conventional camera, according to the authors, is that image formation is now divided between optical and electronic processing. Individual control over the micro-cameras allows adjustment of parameters such as exposure, gain, and focus. In particular, individually focusable micro-cameras give AWARE-2 what is called a “synthesized extended depth of field,” which is not possible in a conventional full-field camera.
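The idea behind a synthesized extended depth of field can be sketched as picking, for each region of the scene, the view from the micro-camera whose focus setting renders it sharpest. Sharpness is approximated below by local contrast (variance), a common heuristic rather than the paper's stated criterion, and the pixel values are invented for illustration:

```python
def sharpness(tile):
    """Approximate sharpness of a flat list of pixel values by their
    variance: an in-focus region has higher local contrast."""
    mean = sum(tile) / len(tile)
    return sum((v - mean) ** 2 for v in tile) / len(tile)

def extended_depth_of_field(views):
    """Given views of the same region from micro-cameras at different
    focus settings, keep the sharpest one for the composite."""
    return max(views.values(), key=sharpness)

views = {
    "near_focus": [10, 200, 15, 190],    # high contrast: in focus
    "far_focus":  [100, 110, 105, 102],  # blurred: low contrast
}
best = extended_depth_of_field(views)
```

Running this selection per region across the mosaic yields a composite in which both near and far objects appear in focus, even though no single micro-camera captured both.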
High dynamic range (HDR) photography is usually accomplished by taking a series of pictures over a wide range of exposure times and merging them into a single image. HDR imaging combines several shots of a given scene to overcome the limited exposure range of conventional single-shot photography, capturing detail all the way from shadows to highlights. The AWARE-2 camera can do this in a single shot by individually controlling the exposure times of its micro-cameras, an advantage not otherwise possible in gigapixel-resolution imaging.
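Single-shot HDR merging along these lines can be sketched as converting each well-exposed reading of a scene point to a common radiance scale (recorded value divided by exposure time) and averaging. The saturation and noise-floor thresholds below are illustrative assumptions, not values from the paper:

```python
def merge_exposures(shots):
    """Merge differently exposed measurements of the same scene point.

    `shots` maps exposure time (seconds) -> recorded value on a
    hypothetical 0-255 sensor. Each reading is converted to a radiance
    estimate (value / exposure time); near-saturated and near-dark
    readings are discarded as unreliable.
    """
    radiances = [value / t for t, value in shots.items()
                 if 5 < value < 250]       # keep well-exposed samples only
    return sum(radiances) / len(radiances)

# Overlapping micro-cameras exposed at 1 ms and 10 ms see the same point:
shots = {0.001: 20, 0.010: 200}   # both imply the same scene radiance
radiance = merge_exposures(shots)
```

Because the micro-cameras' fields of view overlap, a scene point sampled by neighboring cameras at different exposure settings yields multiple readings that can be merged this way, so the HDR bracket is captured in space rather than in time.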
Currently, the size of the camera is limited by its hefty electronics and cooling system. Future work should tackle this issue in order to achieve a handheld model.