An examination of recent trends in imaging reveals a movement toward systems that balance processing between optics and electronics. Imaging applications include conventional imaging to produce visually pleasing images, special purpose imaging whose output is also an image but with enhanced characteristics, and functional imaging to produce information about a scene from optical data. We identify three approaches to computational imaging that are capable of achieving these goals: wavefront encoding, multiplex imaging, and feature extraction.
© 2003 Optical Society of America
The revolution in electronics witnessed over the past few decades has had a significant impact on imaging systems. Perhaps most significant are enhanced post-detection processing and the widespread replacement of film by solid-state detectors. In fact, the potential for future advances is often referenced to Moore’s Law, i.e., that the number of transistors per area in an integrated circuit doubles every 18 months.
However, the fervor with which Moore’s Law is so often repeated can lead to a skewed view of the potential for further advances in imaging and imaging systems. For example, naively following Moore’s Law can lead one to believe that imaging can be enhanced simply by increasing the density of detector pixels or that all image processing problems can be solved using digital processing in post-detection. The comparatively slow advance of optical system design relative to electronics in the recent past serves only to reinforce these notions.
In contrast, we feel that advances in imaging can be made without relying solely on electronics. To break this reliance one has to change the typical notion of an imaging system as an optical front-end followed by a detector followed by post-detection digital signal processing. This framework makes it easy to overlook the fact that the input is an image, i.e., relationships exist between spatially separated regions. Consider that point-to-point imaging motivates the design of the optical front-end and pixel performance motivates the design of the detector. That a relationship exists between adjacent points in the input is not exploited until the collected data is processed in post-detection. As such, it is perhaps more correct to refer to these systems as pixel sensors and not image sensors.
But if computing is so cheap and getting cheaper, what’s wrong with pointing a digital camera at a scene, detecting the scene, and processing the output? One answer is cost. Although computing is cheap, electronic processing may not be the “best” solution to a problem. To track an object, is it necessary to capture full-frame digital images at video frame rates? If latency is an issue, as in security and defense applications, there are more efficient ways to track an object than processing a digital video stream in real-time.
A second answer is capability. If an imaging system is unable to capture certain information, no amount of processing will be able to recover it. For example, due to aliasing, spatial frequencies in an image that lie above half the sampling rate of a discrete detector (the Nyquist frequency) cannot be recovered unambiguously through processing. Multidimensional imagers, such as three-dimensional imagers, spatial-spectral imagers, and spatial-polarimetric imagers, by their nature dictate a change in imaging system design.
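The aliasing point can be made concrete with a short numerical sketch (a hypothetical one-dimensional example, not drawn from any particular detector): two sinusoids on opposite sides of the Nyquist frequency produce identical samples, so no post-detection processing can distinguish them.

```python
import numpy as np

fs = 8.0                       # samples per unit length
f_true = 5.0                   # cycles per unit length, above Nyquist (fs/2 = 4)
f_alias = abs(f_true - fs)     # 3.0: the frequency it masquerades as

n = np.arange(32)
x_high = np.cos(2 * np.pi * f_true * n / fs)
x_low = np.cos(2 * np.pi * f_alias * n / fs)

# The two sampled signals are identical: the detector cannot tell them apart.
print(np.allclose(x_high, x_low))  # True
```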
We feel a change is warranted in the standard notion of the relationship between the physical system for image capture and the computational system. Our goal is to develop a design philosophy that better balances the capability of the physical system against that of the electronic system from the perspective of computation (e.g., computation time and memory access) to achieve an objective. Further, this must be done subject to design constraints such as size, weight, power, and cost. On the surface it may appear that this philosophy simply allows a designer to approach more closely the limits set by physics in conventional imaging. However, its greatest potential is realizing capabilities and performance that are impossible to obtain using conventional imaging methods.
We call an imager designed using this approach an integrated computational imaging system (ICIS). An integrated imaging system is one whose design integrally incorporates optics, optoelectronics, and signal processing. System performance is controlled through concurrent design and joint optimization, as opposed to sequential design and independent optimization of the optics, focal plane optoelectronics, and post-detection algorithm.
We include computation in the nomenclature to emphasize that the burden of forming an image falls not only to the optics but also to the optoelectronics and the post-detection signal processing. A notable example of this is the Hubble telescope. Prior to the installation of its corrective optics, the spherical aberrations in the Hubble telescope were corrected using digital signal processing. Because the aberrations were known, it was possible to account for them in post-detection. It takes only a short leap in thinking to consider the deliberate introduction of aberrations. Once a designer recognizes the potential gained by considering the optics, electronics, and processing integrally as part of a design, many opportunities present themselves.
Our objective here is to highlight through history and example the developments in imaging that have shaped our thinking. To convey this to a broad audience we have chosen to present this work in a narrative format, not a quantitative one. More quantitative discussion can be found elsewhere in this special focus of Optics Express [1–7].
We begin with a brief history of conventional visible imaging and continue with a discussion of image formation of electro-magnetic radiation outside the visible spectrum. In our discussion of non-conventional visible imaging systems we acknowledge that the ICIS philosophy is not new. However, advances in technology have made some system implementations practical. This, in turn, has made it easier to recognize some of the emerging properties of ICIS. We highlight some of these developments in our discussion on new methodologies and technologies. However, more development of these is required before the ICIS approach to design can be made more generally applicable.
2. History of conventional visible imaging
Although biological imaging systems were developed millions of years ago, the oldest evidence of any human understanding of the process is only two thousand years old. For example, in the first century of the Common Era Pliny the Elder noted “emeralds are usually concave so that they may concentrate the visual rays. The Emperor Nero used to watch in an Emerald the gladiatorial combats.” At approximately the same time Seneca wrote, “Letters, however small and indistinct, are seen enlarged and more clearly through a globe of glass filled with water.”
In Opticae Thesaurus, written at the turn of the first millennium, Alhazen discusses the anatomy of the eye and its ability to focus an image on the retina. Two centuries later, in 1267, Bacon states in Perspectiva, “Great things can be performed by refracted vision. If the letters of a book, or any minute object, be viewed through a lesser segment of a sphere of glass or crystal, whose plane is laid upon them, they will appear far better and larger.”
Despite the long-held recognition that a lens could be used as a magnifier to aid vision, it is only during the time of Bacon, around 1280, that modern spectacles were introduced in Florence, Italy. Yet not until sufficient physical understanding had developed three centuries later were more complex optical instruments introduced like the microscope (invented in 1590 by Zaccharias and Hans Janssen), the telescope (invented independently in 1608 by Jan Lippershey and in 1609 by Galileo), and the binocular (also invented by Jan Lippershey in 1608). These instruments opened up the worlds of the very small and the very far, worlds previously inaccessible to human observers.
Another two centuries passed before Daguerre demonstrated in 1837 a means to record images. This achievement led to the development of photographic cameras. Motion picture cameras and projectors were developed after Lumiere recorded motion on film in 1895, followed by Edison’s demonstration in 1896. Thus, the notion that the optics and the recording medium of a camera are independent of one another is well over a century old and well ingrained in the psyche of optical engineers.
In the last century several methods were invented to record motion without using film. For example, Farnsworth’s invention of the dissector tube in 1927 provided a means to record and transmit images in real-time in an electronic format. The invention of the video recorder in 1951 by Ginsburg allowed these signals to be recorded on magnetic tape in analog form. The invention of the charge coupled device (CCD) in 1969 by Smith and Boyle provided a means to record images in a digital electronic format and became the cornerstone upon which digital imaging technology is based. We note that CCDs were developed initially as analog shift registers for storage and processing of analog electronic signals. Their optical sensitivity was almost an afterthought.
Issues related to aliasing notwithstanding, solid-state detectors were initially used as one-for-one replacements of film. But bolstered by advances in the electronics industry, the designers of detector technology soon realized the potential for processing in the focal plane. In fact, amacronics was proposed in 1991 as an electronic means to mimic the early-vision amacrine processing layer in vertebrate visual systems. However, few attempts have been made to design detectors with much consideration for the optics. A notable exception is the ring-wedge detector designed for use in the Fourier plane of a coherent optical processor.
However, advances in image recording have not been the only achievements in imaging over the last century and a half. An understanding of the imaging process also improved during this time period, primarily through an increased understanding of diffraction. The first person to control diffraction deliberately was Abbe, who in 1873 used his understanding of diffraction to improve the quality of images produced by Zeiss microscopes. This eventually led to optical transfer function synthesis as a tool in optical design. 
More recent advances in optical fabrication have made it possible to produce optical elements with arbitrary transmissions in amplitude, phase, and even polarization. It is these developments that spawned the field of optical processing. In the 1960s, a time when electronics could boast little processing advantage over optics, the potential of optical processing was especially appealing. Given a representation of an object in a computer, an optical Fourier template of the object could be manufactured using computer-generated holographic techniques. An imaging system with the Fourier template in its pupil plane was then capable of detecting all occurrences of the reference object in any scene presented to the imaging system. Studied extensively for pattern recognition, this is the celebrated Fourier plane optical correlator.
It is important for the reader to understand that our objective is not to repackage and resell optical correlators. Yet, even if we are not promoting optical correlators per se, we are suggesting that system designers reconsider the processing power of optics in imaging. The inability of Fourier optical processors to solve the pattern recognition problem does not lie in the optics but in the application. In most applications, particularly military ones, the optics was asked to aggregate information from an input that was spatially and spectrally complex, as well as spatially and temporally incoherent. The complexity of the input overwhelmed the ability of the optics to provide a quantitative output with sufficient resolution and fidelity to be useful. Fourier optical systems have met with much greater success more recently in security applications where it is possible to control the input. Further, the problem addressed is one of verification, not detection.
Not only has optical processing been tarnished by the many attempts to oversell the merits of optical correlation, it has also been eclipsed by advances in focal plane technology and post-detection processing. But the extremes represented by single-minded devotion to either Fourier optical processing or to Moore’s Law are both wrong. To underscore this point, we consider in the next section well-known examples of systems for imaging radiation outside the visible spectrum. In each case the systems exhibit a balance between physics and electronic processing.
3. Imaging outside the visible spectrum
In the previous section, we traced the history of conventional visible imaging systems. Yet, physics informs us that information is lost whenever any electro-magnetic field passes through an aperture. Since information loss is a function of wavelength, in this section we consider imaging in spectral regions other than the visible. In each example we note how data collection is affected by physics and show how the system is designed not only to ensure that desired information is not lost but also that it is preferentially marked, or encoded. The encoding is designed to facilitate post-detection extraction of information.
Our first example is X-ray imaging, invented in the late nineteenth century. The absence of focusing elements that worked in the X-ray regime meant that close physical proximity had to be maintained between source, object, and recording plane. Although X-ray images were recorded as shadows of objects, the ability to peer into the interior of opaque objects was so unique that scientists were willing to accept the obvious limitations on resolution, noise, and ambiguity. We consider X-ray imaging, i.e., shadow-casting a three-dimensional object into two dimensions, the first example of non-conventional imaging.
One approach to obtaining more precise spatial information from X-rays without the use of a lens is coded aperture imaging. In this system, a specially designed aperture array placed between the object and the detector produces a generalized pinhole imaging system. A properly designed aperture array produces multiple images formed by the individual apertures, from which it is possible to reconstruct an image of the source in post-detection. Although simple, the system is inefficient in terms of source energy utilization.
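The decoding principle can be illustrated with a one-dimensional toy (the mask here is built from the quadratic residues mod 7, a perfect difference set whose cyclic autocorrelation is two-valued; real coded apertures such as URAs are two-dimensional and larger): the detector record is the object circularly convolved with the mask, and correlating that record with the mask recovers the object exactly.

```python
import numpy as np

p = 7
mask = np.zeros(p)
mask[[1, 2, 4]] = 1          # quadratic residues mod 7: a perfect difference set

obj = np.array([0., 5., 0., 0., 2., 0., 1.])   # toy 1-D source distribution

def cconv(a, b):
    # cyclic convolution via the FFT
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def ccorr(a, b):
    # cyclic cross-correlation via the FFT
    return np.real(np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))))

# Each open aperture element casts a shifted copy of the object on the detector.
detector = cconv(obj, mask)

# The mask's cyclic autocorrelation is 2*delta + 1, so correlating the record
# with the mask yields 2*obj + sum(obj); subtract the constant and halve.
raw = ccorr(detector, mask)
recon = (raw - detector.sum() / mask.sum()) / 2.0

print(np.allclose(recon, obj))   # True
```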
Tomographic imaging is yet another means to overcome the physical limitations of X-ray imaging systems. In tomography, each measurement integrates the X-ray attenuation of an object along a line passing through it. To reconstruct a two-dimensional slice of a three-dimensional object one exploits the projection-slice theorem, filtering and back-projecting the data from line integrals formed at different angles. It is instructive to point out that post-detection computation is an integral part of tomographic imaging. It is not added as an afterthought to improve or exploit the image.
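The projection-slice theorem that underlies this reconstruction can be checked numerically in its simplest discrete form (a minimal sketch at angle zero, not a full filtered back-projection): the 1-D Fourier transform of a projection equals the corresponding central slice of the object's 2-D Fourier transform.

```python
import numpy as np

# A small 2-D object standing in for a tomographic phantom.
rng = np.random.default_rng(0)
img = rng.random((16, 16))

# Projection at angle 0: line integrals down each column.
proj = img.sum(axis=0)

# Projection-slice theorem: the 1-D FFT of the projection equals the
# ky = 0 central slice of the object's 2-D FFT.
slice_from_2d = np.fft.fft2(img)[0, :]
print(np.allclose(np.fft.fft(proj), slice_from_2d))  # True
```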
The development of electron microscopy, another example of non-conventional imaging, early in the 20th century provided a resolution capability that vastly exceeded that of optical microscopes. It was in the context of aberration correction in the electron optics that Gabor developed his ideas of wavefront recording and reconstruction. The conventional approach to correcting aberrations had been to develop multi-element and aspheric imaging elements, a technique that had been perfected by that time for visible optics. Since that approach was unavailable for electron imaging optics, Gabor reasoned that essential object information was still present in an intermediate plane, even if it was distorted. Thus, if one could faithfully record the complex wavefront, the original object information could be recovered using suitable post-detection processing. Since digital computers were unavailable, Gabor used analog optical techniques to reconstruct the image and, in doing so, gave rise to the incredibly rich field of holography.
Imaging systems in the radio frequency domain have very different constraints than ones in either the optical or X-ray domain. Although suitable focusing elements are not lacking, their size is determined by the wavelength of radiation. The simplest imaging systems employ a parabolic reflector to focus radio waves arriving from a specific direction onto a single detector. This forms a single point image of a spatial and angular distribution of sources. The aperture of the reflector, which can be several meters across, determines system resolution.
In the 1950s the holographic viewpoint surfaced as a method to reduce the size of a radio-frequency antenna required to form a high-resolution image of the ground. In place of a single large antenna, a small antenna translated over a distance was used to record radar pulses coherently on photographic film. The film record bore no resemblance whatsoever to the scattering objects on the ground. To reconstruct these, the film was processed using coherent analog optical processing, which provided a simple means for introducing the phase necessary to separate individual radar returns coherently. As such, the spatial resolution of the final image was equal to that of an antenna whose aperture is given by the distance traveled by the small antenna. Here again no simple alternative existed for image formation other than to invoke analytic relationships between measured quantities and desired object parameters, and to invert these relationships using computational techniques.
In this brief review of non-conventional imaging systems from different regimes of electromagnetic radiation, we wish to emphasize the diversity of techniques that have been developed to form spatial maps of object distributions. In most cases these novel techniques were necessitated either by a dearth of elements equivalent to a spherical lens or limitations on their physical realization. However, the techniques highlight trade-offs between flexibility, performance, and efficiency that may be instructive in our exploration of new directions for visible imaging systems.
4. Non-conventional approaches to visible imaging
Our goal is the development of a design philosophy capable of producing a system whose optics and electronics have been designed in concert with one another. In effect, we seek to exploit Maxwell’s Equations and Moore’s Law alike to ensure the effective transfer of information from a physical domain into an abstract representational one.
In recent years, a number of imaging systems have exploited non-conventional acquisition optics in conjunction with complementary post-detection processing. Several examples are based on modification of the wavefront phase at or near the pupil plane of an imaging system. Some notable examples include Matic and Goodman’s spatial filtering architecture, Dowski and Cathey’s extended depth imaging system and their range estimation system, and Dereniak and Descour’s computed-tomography imager. In each of these systems, the image formed at the detector appears severely blurred compared with the image that would be formed using conventional imaging optics. In some cases the primary motivation is to reduce complexity and cost in comparison to the conventional approach. In others, the modified imager actually forms a better estimate of the desired features than a conventional approach. In some cases it may be possible to surpass conventional imaging with respect to both cost and performance.
For example, the conventional approach to filtering a scene with a spatial bandpass filter is to capture a well-focused image and spatially process the image in post-detection with a digital filter. In contrast, Matic and Goodman intentionally introduced a phase deformation at the pupil plane to reduce the number of required digital filter weights. By searching jointly for tap weights and phase deformation parameters, Matic and Goodman found solutions that yielded significant savings in the complexity of the digital filter. If the number of weights is fixed, one can argue that exploiting phase in the pupil plane provides a better estimate of the desired image features than does a conventional system.
The systems described by Dowski and Cathey reduce system complexity while providing performance that cannot be matched by a conventional approach even when unlimited digital processing is available. The first system addresses the well-known tradeoff between light gathering and depth-of-field. Large numerical aperture imaging systems are capable of gathering large amounts of light with high resolution but have limited depth-of-field. Thus, only planes lying within a very narrow range of depths are in focus. This is a limitation in human night vision systems, where nearby navigation obstacles and faraway threats both need to be in focus. Narrow depth-of-field is also a limitation in microscopy.
If restricted to only a single frame of imagery, the conventional all-optical approach for overcoming this problem is to reduce the numerical aperture. However, this increases noise and reduces the diffraction-limited resolution in the recorded image. Attempts to maintain high numerical aperture by deblurring imagery via post-detection processing are confounded by the variation of the blur with range and by the ill-posed nature of the restoration problem for a single plane, which requires inversion of a transfer function that contains zeros.
In contrast, the extended-depth imaging system uses a cubic asphere to form a blurred image at the detector. However, as designed by Dowski and Cathey, the blur is unique and remains substantially unchanged over a large depth-of-field. Furthermore, spatial frequency analysis reveals a restoration problem that is well posed. The transfer function has no zeros. Thus, only a single, simple digital filter is required to remove the blur, which sharpens the image over an extended depth-of-field. It has been shown subsequently that the information content of the detected image over its depth-of-field is substantially higher than a conventional image when the wavefront is modified by the asphere.
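The key property, that a blur whose transfer function contains no zeros can be removed by a single fixed filter, can be illustrated with a hypothetical one-dimensional blur (a sketch of the principle only, not the actual cubic-phase design): a kernel whose first tap dominates has a transfer function bounded away from zero at every frequency, whereas a two-point average has an exact zero at the Nyquist bin, destroying that component irrecoverably.

```python
import numpy as np

N = 16
signal = np.zeros(N)
signal[[3, 4, 5, 10]] = [1., 2., 1., 3.]

def blur(x, taps):
    # circular blur and its transfer function
    h = np.zeros(N)
    h[:len(taps)] = taps
    H = np.fft.fft(h)
    return np.real(np.fft.ifft(np.fft.fft(x) * H)), H

# Dominant first tap: |H| >= 0.7 - (0.2 + 0.1) > 0 everywhere, so one
# inverse filter restores the signal exactly.
y, H = blur(signal, [0.7, 0.2, 0.1])
restored = np.real(np.fft.ifft(np.fft.fft(y) / H))
print(np.allclose(restored, signal))   # True

# A two-point average, by contrast, has an exact zero at the Nyquist bin.
_, H_box = blur(signal, [0.5, 0.5])
print(abs(H_box[N // 2]) < 1e-9)       # True
```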
Dowski and Cathey’s passive imaging system for producing spatial maps of range estimates also exhibits a balance between optical and electronic processing. In a conventional imaging system, when only a single frame of imagery is available, passive range estimation can be performed to a limited extent using local spatial frequency analysis. The best estimates are obtained when the system has a narrow depth-of-field. Under these conditions local spatial frequency content varies strongly with object distance. Recognizing this, Dowski and Cathey modified the pupil phase to create a blur that varied strongly with range. Further, the blur was tailored so that a sinusoid whose frequency corresponded to range was superimposed on the imagery. The problem of range estimation was thus reduced to frequency estimation, which enjoys several algorithmic solutions and well-understood performance bounds from the fields of estimation and information theory.
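A sketch of the resulting estimation problem, with an entirely hypothetical range-to-frequency mapping invented for illustration, shows how range readout reduces to locating a spectral peak:

```python
import numpy as np

# Hypothetical encoding: the optics superimposes on an image line a sinusoidal
# modulation whose spatial frequency is proportional to object range.
n = np.arange(256)
true_range = 7.0                      # arbitrary units
freq = 2.0 * true_range               # assumed range-to-frequency mapping
line = 1.0 + 0.2 * np.cos(2 * np.pi * freq * n / 256)

# Range estimation reduces to locating the spectral peak.
spectrum = np.abs(np.fft.rfft(line - line.mean()))
est_freq = np.argmax(spectrum)        # bin index = cycles per 256 samples
print(est_freq / 2.0)                 # 7.0
```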
In the computed-tomography imager, diffraction is used to introduce the redundant information necessary to jointly estimate spatial and spectral features of an image in a single frame of data. Most conventional approaches to spatial-spectral imaging depend upon multiframe acquisition using optical elements that have time varying spectral properties. Although conventional color detectors can perform single frame spectral analysis using the red, green, and blue channels produced by pixel-level color filters, in practice the number of spectral channels is limited. Dereniak and Descour modify the pupil function of the imaging system with a periodic phase to exploit the dispersive behavior of diffraction. The idea is, in fact, related to conventional Fourier spectroscopy. However, spectroscopic information is generated for an entire image by exploiting redundant spectral information contained in each diffracted order. The nature of the processing has been compared to tomographic restoration and allows for a tradeoff between spatial and spectral resolution.
In addition to these wavefront modified imaging systems, other systems have been explored recently that depart more dramatically from conventional image formation. In the late 1980s, several researchers investigated interferometric imaging [22–24]. For example, a rotational shearing interferometer (RSI) has been proposed and demonstrated. Whereas wavefront-modified imagers present to the detector a pattern that resembles an image, the RSI produces a pattern that exhibits no recognizable image plane features even though, as is the case in holography and synthetic aperture imaging, image information is present. In the RSI it is the spatial coherence function that is encoded in the detected pattern. As given by the van Cittert-Zernike theorem for an incoherent object, the relationship between the coherence function measured at the detector and the object intensity is a multi-dimensional Fourier transform. Advances in post-detection processing allow near real-time inversion of the transform and, as a consequence, near real-time estimation of the object’s spatial characteristics. Alternatively, dependent upon the analysis task, the coherence function features themselves can be treated as the desired estimates.
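In an idealized discrete form (a toy one-dimensional model that ignores sampling geometry and noise), the van Cittert-Zernike relationship and its post-detection inversion amount to a forward and inverse Fourier transform:

```python
import numpy as np

# A 1-D incoherent source intensity distribution.
intensity = np.zeros(64)
intensity[[20, 30, 31, 45]] = [1.0, 2.0, 2.0, 0.5]

# Van Cittert-Zernike, discrete toy form: the mutual coherence sampled across
# the aperture is the Fourier transform of the source intensity.
coherence = np.fft.fft(intensity)

# Post-detection "imaging": invert the transform to recover the intensity map.
recovered = np.real(np.fft.ifft(coherence))
print(np.allclose(recovered, intensity))   # True
```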
A final example of recent developments in integrated imaging demonstrates a broad interplay amongst optics, detectors, and post-detection processing. Japanese researchers have demonstrated a system called TOMBO that exploits multiple imaging systems to form a high-resolution image in post-detection. By using a microlens array, the architecture enables a thin imager whose light gathering properties cannot be matched by more conventional approaches. Each microlens forms an image on a small subregion of a large solid-state detector. Note that this system departs from the wavefront modification approaches described above in that conventional images are, in fact, incident on the detector. The reduction in the focal length, and thus the length of the optical train, reduces the size of the image relative to that formed by a conventional macrolens, but so long as the microlens has a low f-number, the image is formed with high resolution. However, this resolution is lost at the detector due to the large size of the detector pixels. Subsequent post-detection processing enables the formation of a high-resolution image. As in the case of RSI imaging, the computational burden is so large that the idea would have been dismissed as impractical just a few short years ago. The idea borrows much from research in microdithering and three-dimensional imaging. When applied on a large scale the approach enables image formation using multiple low-resolution imagers distributed over a large area.
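The reconstruction idea behind such multi-aperture imagers can be caricatured in one dimension (an idealized sketch that ignores optical blur, noise, and registration error): coarse channels offset from one another by known subpixel shifts are interleaved back onto the fine grid.

```python
import numpy as np

rng = np.random.default_rng(1)
highres = rng.random(32)              # 1-D "scene" on the fine grid

# Two imaging channels, each sampling the scene coarsely (every other fine
# pixel) but offset from one another by one fine-grid pixel.
chan0 = highres[0::2]
chan1 = highres[1::2]

# Post-detection reconstruction: interleave the channels onto the fine grid.
recon = np.empty(32)
recon[0::2] = chan0
recon[1::2] = chan1
print(np.allclose(recon, highres))    # True
```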
In the previous section we presented examples of imaging systems that break the paradigm of designing optics to capture the most visually pleasing image. The question is whether these examples are isolated or represent the early evolution of more general approaches to image acquisition and analysis. In our examples the nature and purpose of the pre-detection and post-detection processing differ from one system to the next; in conventional systems they do not.
In the conventional paradigm, imaging systems output pixels that are, effectively, irradiance estimates of spatially contiguous small patches in a scene. Once these estimates are formed, post-detection processing is used to estimate more abstract features. It is these abstract features that are used to accomplish the goal of the imaging mission.
From our examples we can identify three broad applications for imaging: conventional, special-purpose (e.g., extended depth), and functional, that is, the extraction of abstract information about a scene (e.g., range estimation or motion detection). The system architecture for implementing a specific application is dependent upon several factors, not the least of which are the system requirements (e.g., high resolution in a small volume) and system constraints (e.g., large detector pixels). These will determine how a designer employs the optics.
The optical architectures can also be categorized into three groups. The first is preferential marking or wavefront encoding. For example, in both range estimation and extended depth imaging the optics generated an intermediate representation at the detector with certain parameters preferentially marked, or encoded. The objective of post-detection processing was to filter or decode the preferentially marked parameters.
Our second category is multiplex imaging. In multiplex imaging systems, for example the thin high-resolution imager, the source-encoded motion detector, and the snapshot hyperspectral imager, the optics introduces redundant information that is used in post-detection processing to generate the desired output. The use of multiple channels implies that the post-detection processing typically involves a matrix inversion. Coded aperture imaging, tomography, and synthetic aperture imaging are all examples of multiplex imaging.
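The matrix-inversion character of multiplex post-detection processing can be sketched with a hypothetical linear measurement model y = Ax (the mixing matrix here is invented purely for illustration):

```python
import numpy as np

# Hypothetical multiplex measurement: each detector reading is a known
# linear combination of scene parameters, y = A x.
A = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])
x_true = np.array([3., 5., 2.])
y = A @ x_true

# Post-detection processing recovers the parameters by inverting the system;
# least squares also covers the noisy or overdetermined case.
x_est, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.allclose(x_est, x_true))     # True
```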
The third optical architecture is direct feature extraction. These architectures are the closest in spirit to optical correlators since they rely upon the ability of optics to project a scene onto a basis set of features. It is important, though, to distinguish between the subtleties of correlation and feature extraction. In feature extraction, estimates are made of coefficients or parameters (e.g., geometric moments or transform coefficients) that are subsequently used to make a decision. The output from a correlator contains significantly more aggregation and has therefore been used as a discriminator between classes of objects.
Our examination of recent trends has helped us to identify general applications and architectures and to distinguish between the two. We acknowledge that our list of examples is incomplete but our intent was to be illustrative and not definitive. Indeed, the myriad of applications and architectures makes it difficult to develop an all-encompassing theory for ICIS. Nonetheless, we do feel that as the field matures optical transfer function synthesis will have a major role to play.
Addressing the question we posed at the beginning of Sec. 5 about a new era in imaging will require more examples, examples with precise imaging tasks. It is unlikely that systems designed with only broad goals in mind will prove revolutionary. Given a specific task, the designer’s goal is to address it by means other than simply integrating high quality lenses with large arrays of detector pixels.
Although we have alluded to dynamic systems, our examples are all static. We feel this is a consequence of limited understanding and the need for the field to mature. However, once the practicability of a system has been demonstrated, we feel it can be made more flexible using feedback and adaptive elements. As usual, though, a trade-off exists between flexibility and efficiency. Although a general-purpose system is certainly more flexible than a special purpose system, it is also more complex and therefore less efficient in its utilization of resources like light energy, computational resources, or volume.
It is our hope that pursuit of an ICIS methodology will lead to the development of tools that allow a continuum of choices between flexible and efficient to be quantitatively explored. It is also our belief that the demand for these tools will increase as more system designers avail themselves of the increasing number of flexible and adaptive technologies in optics, optoelectronics, and electronics.
To conclude, we return once again to Moore’s Law. Reliance on the capabilities provided by Moore’s Law to improve imaging allows a designer to increase measurement precision along only a single axis, e.g., the number of pixels or the number of output bits per pixel. This increases data volume but not necessarily information. To increase information flux without overwhelming the system with data implies a change in the way data is captured and a change in the way it is processed. By presenting this discussion we hope to stimulate the imaging community and thereby make our predictions of a new era self-fulfilling.
We are indebted to several colleagues who have influenced the development of this work and our understanding of it. Most notably, we wish to thank Tom Cathey, Ed Dowski, and David Brady. Further, we wish to thank Jim Fienup, Vladimir Brajovic, Mark Neifeld, Dave Munson, Jody O’Sullivan, Zia Ur-Rahman, Bob Plemmons, and Dennis Healy. Finally, in any review of this nature, important references will no doubt be left out. We regret any oversight.
References and links
1. H. S. Pal and M. A. Neifeld, “Multispectral principal component imaging,” Opt. Express 11, 2118–2125 (2003), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-11-18-2118. [CrossRef] [PubMed]
2. A. Ashok and M. A. Neifeld, “Information-based analysis of simple incoherent imaging systems,” Opt. Express 11, 2153–2162 (2003), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-11-18-2153. [CrossRef] [PubMed]
3. U. Gopinathan, D. J. Brady, and N. P. Pitsianis, “Coded apertures for efficient pyroelectric motion tracking,” Opt. Express 11, 2142–2152 (2003), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-11-18-2142. [CrossRef] [PubMed]
4. P. Potuluri, M. Xu, and D. J. Brady, “Imaging with random 3D reference structures,” Opt. Express 11, 2134–2141 (2003), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-11-18-2134. [CrossRef] [PubMed]
5. Z. Xu, Z. Wang, M. E. Sullivan, D. J. Brady, S. H. Foulger, and A. Adibi, “Multimodal multiplex spectroscopy using photonic crystals,” Opt. Express 11, 2126–2133 (2003), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-11-18-2126. [CrossRef] [PubMed]
6. J. Tanida, R. Shogenji, Y. Kitamura, K. Yamada, M. Miyamoto, and S. Miyatake, “Color imaging with an integrated compound imaging system,” Opt. Express 11, 2109–2117 (2003), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-11-18-2109. [CrossRef] [PubMed]
7. K. Kubala and E. Dowski, “Reducing complexity in computational imaging systems,” Opt. Express 11, 2102–2108 (2003), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-11-18-2102. [CrossRef] [PubMed]
8. T. E. Jones, History of the Light Microscope, Chap. 1, http://spidey.sfusd.edu/schwww/sch773/zimmerman/c1.html (accessed 14 July 2003).
9. W. B. Veldkamp, “Wireless focal planes: On the road to amacronic sensors,” IEEE J. Quantum Electron. 29, 801–813 (1993). [CrossRef]
10. D. P. Casasent, “Hybrid Processors,” in Optical Information Processing, S.-H. Lee, ed., Topics in Applied Physics 48 (Springer-Verlag, Berlin, 1981), 181–233. [CrossRef]
11. H. H. Hopkins, “The frequency response of a defocused optical system,” Proc. Roy. Soc. A 231, 91–103 (1955). [CrossRef]
12. See, for example, Optical Security and Counterfeit Deterrence Techniques IV, R. L. van Renesse, ed., Proc. SPIE 4677 (2002) and papers contained therein.
13. R. H. Dicke, “Scatter-hole cameras for X rays and gamma rays,” Astrophys. J. 153, L101–L106 (1968). [CrossRef]
14. G. N. Hounsfield, “Computed Medical Imaging,” 1979 Nobel Lecture, http://www.nobel.se/medicine/laureates/1979/hounsfield-lecture.pdf (accessed 14 July 2003).
16. J. W. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw Hill, New York, 1996).
17. R. M. Matic and J. W. Goodman, “Optical preprocessing for increased system throughput,” J. Opt. Soc. Am. A 6, 428–440 (1989). [CrossRef]
21. J. van der Gracht and G. W. Euliss, “Information optimized extended depth-of-field imaging system,” in Visual Information Processing X, S. Park and Z. Rahman, eds., Proc. SPIE 4388, 103–112 (2001). [CrossRef]
22. K. Itoh and Y. Ohtsuka, “Fourier-transform spectral imaging: retrieval of source information from three-dimensional spatial coherence,” J. Opt. Soc. Am. A 3, 94–100 (1986). [CrossRef]
23. J. Rosen and A. Yariv, “General theorem of spatial coherence: application to three-dimensional imaging,” J. Opt. Soc. Am. A 13, 2091–2095 (1996). [CrossRef]
24. D. L. Marks, R. A. Stack, and D. J. Brady, “Three-Dimensional Coherence Imaging in the Fresnel Domain,” Appl. Opt. 38, 1332–1342 (1999). [CrossRef]
25. J. Tanida, T. Kumagai, K. Yamada, S. Miyatake, K. Ishida, T. Morimoto, N. Kondou, D. Miyazaki, and Y. Ichioka, “Thin Observation Module by Bound Optics (TOMBO): Concept and Experimental Verification,” Appl. Opt. 40, 1806–1813 (2001). [CrossRef]
26. R. C. Hardie, K. J. Barnard, J. G. Bognar, E. E. Armstrong, and E. A. Watson, “High-resolution image reconstruction from a sequence of rotated and translated frames and its application to an infrared imaging system,” Opt. Eng. 37, 247–260 (1998). [CrossRef]
28. M. A. Neifeld and W. C. Hasenplaugh, “High-resolution optical imaging using an array of low-resolution cameras,” presented at the Annual Meeting of the Optical Society of America, September 29–October 3, 2002.
30. D. J. Brady, “Multiplex sensors and the constant radiance theorem,” Opt. Lett. 27, 16–18 (2002). [CrossRef]