This paper describes a superposition compound eye imaging system for extending the depth-of-field (DOF) and the field-of-view (FOV) using a spherical array of erect imaging optics and deconvolution processing. This imaging system had a three-dimensionally space-invariant point spread function generated by the superposition optics. A sharp image with a deep DOF and a wide FOV could be reconstructed by deconvolution processing with a single filter from a single captured image. The properties of the proposed system were confirmed by ray-trace simulations.
© 2012 OSA
Aberrations degrade the optical transfer function (OTF) of imaging optics, resulting in decreased depth-of-field (DOF) and field-of-view (FOV) in conventional imaging systems. The framework of computational imaging, which is based on cooperative optical design and image processing, has been applied to solve this problem.
To increase the DOF using this framework, the optical system is designed to equalize the point spread functions (PSFs) along the optical axis with an optical modulation element, and then image processing retrieves a sharp image from the captured image by deconvolution filtering. A cubic phase plate, a spherically aberrated optical system, or a diffuser can be used for axial optical modulation [2–5]. Axial movement of the object and detector and a change of the lens focus during exposure can also be used to achieve PSF equalization along the optical axis [6, 7].
On the other hand, computational imaging has been applied to increase the FOV, which is limited by off-axis aberrations. Typically, a monocentric optical system and/or a lens array equalize the PSFs laterally, and then image processing is applied to rearrange a number of captured images to reconstruct a large image [8–12]. A multiscale gigapixel camera with an FOV of 120°-by-50° has been demonstrated recently using the same concept.
We have proposed a method to achieve both a deep DOF and a wide FOV by using computational superposition imaging. In this method, multiple images obtained under different optical conditions are superposed to equalize the PSFs both axially and laterally. Deconvolution filtering is then applied to produce an aberration-reduced image. We have verified the principle experimentally by mechanically scanning an aberrated imaging system.
In this paper, we present a computational superposition imaging technique based on spherical superposition compound eye optics. In the following two sections, computational superposition imaging and compound eye imaging are surveyed, respectively, and applied to single-shot imaging with an extended DOF and FOV. In this system, the mechanical scanning employed in Ref. can be eliminated to achieve single-shot computational superposition imaging. We describe the principle of the proposed system and show the results of ray-trace simulations performed to verify the effectiveness of the method.
2. Computational superposition imaging
2.1. Previous work
To realize deep-DOF and wide-FOV imaging, computational superposition imaging has been proposed. A schematic diagram of the method is shown in Fig. 1. While changing the focusing distance and the optical axis direction, multiple images of an object are acquired by imaging optics whose PSFs are three-dimensionally space-variant owing to defocus and aberrations. The multiple images are superposed to equalize the PSFs. This superposed image has a single blur kernel at every point in the image. The superposed image can be approximated as an image that is captured by an imaging system with a three-dimensionally space-invariant PSF. A sharp aberration-reduced image is produced from a single superposition image by deconvolution processing with a single filter. With this method, we can acquire an aberration-reduced image within a large three-dimensional space. In other words, a deep-DOF, wide-FOV imaging system may be constructed using optics that are allowed to have defocus and aberrations, with low computational cost.
The concept of computational superposition imaging can be directly implemented with mechanical scanning. The imaging conditions (focal distance and optical axis direction in this work) are scanned mechanically. The object is captured multiple times during scanning, and then the multiple captured images are superposed computationally to increase the DOF and FOV. This superposition imaging system requires multiple exposures with mechanical scanning and computational superposition of the captured images. To overcome these limitations, a scheme with optical scanning and superposition for achieving single-shot imaging is presented in the next subsection.
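The superposition principle above relies on the linearity of convolution: averaging images blurred by different kernels is equivalent to blurring once with the averaged kernel, which is why a single deconvolution filter suffices. The following minimal numpy sketch illustrates this with shift-invariant Gaussian kernels standing in for the defocused/aberrated PSFs (the real system's PSFs are space-variant; this is a simplified illustration, not the authors' code).

```python
import numpy as np

n = 64
rng = np.random.default_rng(1)
img = rng.random((n, n))

def wrap_gaussian(n, sigma):
    """Gaussian blur kernel with its peak wrapped to pixel (0, 0)."""
    x = np.fft.fftfreq(n) * n
    g = np.exp(-(x[:, None]**2 + x[None, :]**2) / (2.0 * sigma**2))
    return g / g.sum()

def circ_blur(img, psf):
    """Circular convolution via the FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))

# Images of the same object under three blur conditions (different PSF widths).
psfs = [wrap_gaussian(n, s) for s in (1.0, 2.0, 4.0)]
superposed = np.mean([circ_blur(img, p) for p in psfs], axis=0)

# By linearity of convolution, the superposition equals a single blur with
# the averaged kernel -- hence one deconvolution filter can restore it.
avg_psf = np.mean(psfs, axis=0)
equivalent = circ_blur(img, avg_psf)

assert np.allclose(superposed, equivalent)
```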
2.2. Implementation for single-shot imaging
The mechanical scanning and computational superposition mentioned above can both be emulated optically with a single-shot using a superposition compound eye. A schematic diagram of the optical implementation is shown in Fig. 2(a). The imaging optics is composed of an array of erect imaging optics on a spherical surface and a spherical image sensor. Each pair of elemental erect imaging optics has a different focusing distance and optical axis direction, as indicated in the figure. The images are superposed on the sensor surface by the optical superposition effect of the array of erect imaging optics. The image captured by the whole imaging optics can also be approximated as an image with a three-dimensionally space-invariant PSF. Therefore, computational superposition imaging with a single-shot may be implemented by this type of imaging optics.
The superposition imaging system in Fig. 2(a) can also be designed with a negative curvature, as shown in Fig. 2(b). The design with a positive curvature in Fig. 2(a) has a magnification smaller than unity, whereas the design with a negative curvature in Fig. 2(b) has a magnification larger than unity. In this paper, we demonstrate the implementation in Fig. 2(a) by simulations.
3. Compound eye imaging
3.1. Previous work
Compound eyes of insects are composed of a number of elemental micro-optics, and they have been classified into two types: apposition compound eyes and superposition compound eyes. The diameter of each elemental optic is about 10 μm to 80 μm [18–20]. The main difference between the two types is the relationship between the elemental optics and the photoreceptors. In the apposition type, the elemental optics are separated by partitions; therefore, rays through a single elemental optic reach a single photoreceptor. In the superposition type, the elemental optics, each of which produces an erect image, are not separated; therefore, rays through multiple elemental optics reach a single photoreceptor. Advantages of the superposition type over the apposition type are higher light efficiency and a higher cutoff frequency, because the overall imaging system behaves as a lens with a larger aperture than that of the elemental optics [21–23]. In comparison with the OTFs of the apposition type, the OTFs of the superposition type are degraded by spherical aberration; however, the degraded OTFs can be restored by postprocessing. The resolution in superposition compound eyes is limited not by diffraction of the elemental optics but by the spherical aberration of the whole imaging optics, which has a high numerical aperture (NA). Therefore, use of the superposition type in conjunction with postprocessing may give greater resolution than the apposition type. Our optical superposition method illustrated in Fig. 2 is inspired by the superposition compound eye.
In previous studies, compound eyes have been applied to high-performance imaging systems. The apposition type has been applied to high-resolution imaging with thin optics [25–27]. Single-shot multi-dimensional imaging with compact apposition compound eye optics has also been proposed [28, 29].
The superposition type has been applied to wide-FOV and high-resolution imaging with a cluster of simple erect optical elements, e.g., erect lenses and mirrors [16, 22]. These previous superposition compound eye imaging systems assume a single object distance. In Ref., rays causing defocus and spherical aberration are blocked by parallax barriers between elemental optics.
3.2. Application to computational superposition imaging
In contrast to the previous work described above, our imaging system exploits the rays that were suppressed in that work: such spherically aberrated rays can be used to extend the DOF, and the defocused and aberrated images can be restored by deconvolution filtering.
In this paper, a gradient index (GRIN) lens is assumed as the elemental erect imaging optics in the superposition compound eye. To construct monocentric imaging optics, a spherical image sensor is required. Some state-of-the-art technologies have realized spherical image sensors [30, 31]. Alternatively, a spherical array of planar sensors can approximate a spherical sensor [9,10].
4. Extending the DOF and FOV
In this section, we analyze spherical superposition compound eye imaging optics for extending the DOF and FOV. DOF extension with the imaging optics is illustrated in Fig. 3, together with definitions of the parameters. The imaging optics is composed of a spherical erect lens array and a spherical image sensor with a common center. Here, for system analysis, the array optics is assumed to consist of ideal aberration-free erect lenses without a diameter or length. This means that an infinite number of aberration-free, infinitesimally small erect lenses are arranged on the spherical surface. One lens is chosen arbitrarily to define the optical axis, as shown in the figure. A pair of lenses with the same angle with respect to the optical axis focuses at a certain distance, and different pairs with different angles focus at different distances. Therefore, the focusing distance can be scanned optically with the spherical superposition compound eye, and the captured images have axially space-invariant PSFs. A sharp, deep-DOF image can be produced by deconvolution processing of a single captured image.
Every lens defines an optical axis because the proposed spherical superposition compound eye is monocentric, and the PSF is averaged laterally. Therefore, the spherical superposition compound eye extends the FOV and DOF simultaneously. The proposed method ultimately enables us to realize an omni-directional imaging system with a deep-DOF.
The DOF of the proposed system is limited by the scanning range s of the focusing distances. The scanning range s is determined by the paraxial and marginal rays, as shown in Fig. 3. The marginal rays are limited by the virtual pupil in the figure. In practice, the virtual pupil is caused by vignetting of each GRIN lens and by parallax barriers between the lenses. The vignetting is determined by the partitions between the lenses, which prevent stray light from the neighboring lenses. The vignetting and parallax barriers restrict the FOV of each erect lens; they also govern the diameter D of the virtual pupil and the scanning range s. In this paper, the effect caused by the restricted FOV of each lens is emulated by a virtual pupil for simplicity.
If a point source is located at a distance zo ∈ (0, ∞) from the erect lens on the optical axis, the marginal rays image it at one distance from the erect lens and the paraxial rays image it at another; the scanning range s in Fig. 3 is spanned by these two image distances, as expressed by Eqs. (1)–(7). According to Eqs. (5)–(7), a smaller radius of curvature R of the lens array and a larger diameter D of the virtual pupil increase the scanning range s. This means that a smaller F-number Fi/# increases the scanning range s.
The effective DOF in the system is part of the scanning range s. In Ref., the DOF is about half of the scanning range of the object. The magnitude of the modulation transfer function (MTF) is inversely proportional to the scanning range of the object. A lower magnitude causes lower noise robustness in deconvolution filtering. In the next section, we describe simulations of the extent of the effective DOF of the proposed system.
The pitch of the erect lenses determines the pitch of discrete scans of the focusing distance and the optical axis direction. To approximate the GRIN lens array as an ideal lens array, more than one chief ray through the GRIN lenses must reach a single detector on the image sensor. Figure 4 shows this geometrical relationship. A lack of rays degrades the MTF of the GRIN lens array. The pitches of the erect lenses and detectors are pl and pd, respectively. With a planar approximation of the spherical surface, the condition can be roughly estimated as Eq. (9).
The imaging properties of the spherical superposition compound eye were verified with simulations using Zemax optical design software. These simulations focused on demonstrating DOF extension, because the FOV is obviously extended by the monocentric optics, as mentioned in the previous section. Imaging systems composed of arrays of ideal erect lenses and GRIN lenses were numerically analyzed and compared to show the effect of the discretization by the GRIN lenses.
The system parameters in the simulations are summarized in Table 1, where λ is the wavelength. The misfocus MTFs of the two models are shown for different diameters D of the virtual pupil to illustrate the impact of the F-number Fi/#. In this case, the radius of the spherical sensor is around 20 mm. Such a spherical sensor may be realized by extending some of the latest technologies available.
In the simulations, the refractive index of the GRIN lens was modeled as a radial gradient-index profile with the parameters listed in Table 2. The diameter of the GRIN lenses was chosen based on Eq. (9) with the parameters in Table 1, D = 20 mm, and pd = 8 μm. In this case, the maximum lens pitch, pl, was 88.6 μm.
The effect of the lens pitch on the MTFs was simulated. Figure 5 compares the MTFs of GRIN lens arrays with two different lens pitches. The sidelobe of the MTF with pl = 80 μm is smoother than that with pl = 200 μm. This is because the former satisfies the condition of Eq. (9), whereas the latter does not.
The results of ray-tracing with the spherical GRIN lens array with D = 20 mm are shown in Fig. 6. Figures 6(a) and (b) show an overview of the system and the rays passing through the GRIN lenses, respectively. The GRIN lenses are optically separated with partitions to prevent stray light. The imaging optics had spherical aberration, as shown in Fig. 6(c). The spherical aberration was used for DOF extension, as mentioned in Section 3.2. The spatial frequency range shown in Fig. 7 is fixed to twice the maximum sampling frequency of the sensor (2/(2pd) = 125 cycles/mm). The range of Ψ in this simulation was determined so that the misfocus MTFs of D = 20 mm do not have zero values below the maximum sampling frequency (1/(2pd) = 62.5 cycles/mm). This range is defined as the effective DOF in this paper. The misfocus MTF of an ideal single lens with a diameter of 20 mm is also shown in Fig. 7(a) as a baseline. These MTFs showed drastic variations with Ψ and had multiple zero values. The defocused images captured by this imaging optics could not be deconvolved with a single filter.
Figures 7(b)–(d) show the misfocus MTFs of ideal erect lens arrays with D = 5 mm, D = 10 mm, and D = 20 mm, respectively. Figures 7(e)–(g) show those of GRIN lens arrays with D = 5 mm, D = 10 mm, and D = 20 mm, respectively. The results of both models show that a larger D increased the depth-invariance of the misfocus MTFs and decreased their magnitude. A larger D gives a smaller Fi/#, as shown in Eq. (8). Therefore, the depth-invariance of the misfocus MTFs of the proposed system increases as Fi/# decreases, whereas their magnitude decreases. This tradeoff between depth-invariance and magnitude can be controlled by Fi/#.
In the simulations, the effective DOF of the proposed system was roughly eight-times shorter than the scanning range s. In contrast, the object scanning method, which changes the object distance during the exposure, achieves the same effect over half of the scanning range. Therefore, the effective DOF of the proposed method is four-times shorter than that of the object scanning method. This is because the former scans the focusing distance with partial (i.e., marginal) rays to avoid mechanical scanning, whereas the latter scans the object and uses all rays.
DOF extension was demonstrated with computationally generated images using the MTFs of the ideal single lens and the GRIN lens array with D = 20 mm. The two MTFs are shown in Figs. 7(a) and 7(g), respectively. In this demonstration, the object distance was scanned from Ψ = −90 to Ψ = +90. The object was the Lenna image. The pixel count of each image was 128 × 128. The pixel pitch, pd, was 8 μm, as in the above ray-trace simulation. Figures 8(a) and (b) show images captured by the ideal single lens and GRIN lens array, respectively. The images captured by the GRIN lens array were similarly defocused through the range of misfocus, whereas the in-focus and defocused images captured by the ideal lens were obviously different. Figures 8(c) and (d) show deconvolution results of the captured images without and with additional Gaussian noise whose signal-to-noise ratio (SNR) was 40 dB, respectively. The Wiener filter was applied in the deconvolution processing. The filter was calculated using the MTF at Ψ = 0. Sharp images with a larger DOF compared with the single lens imaging were reconstructed well, even from captured images with noise, by the proposed scheme.
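The Wiener deconvolution step can be sketched in a few lines of numpy. This is an illustrative stand-in, not the authors' pipeline: the Gaussian kernel plays the role of the system MTF at Ψ = 0, and the noise level and noise-to-signal ratio are hypothetical tuning values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128

def wrap_gaussian(n, sigma):
    """Gaussian blur kernel with its peak wrapped to pixel (0, 0)."""
    x = np.fft.fftfreq(n) * n
    g = np.exp(-(x[:, None]**2 + x[None, :]**2) / (2.0 * sigma**2))
    return g / g.sum()

def circ_blur(img, psf):
    """Circular convolution via the FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))

# Smooth synthetic object (stand-in for the test image).
obj = circ_blur(rng.random((n, n)), wrap_gaussian(n, 1.5))

# Single blur kernel standing in for the system PSF at one misfocus setting.
psf = wrap_gaussian(n, 2.0)
H = np.fft.fft2(psf)

captured = circ_blur(obj, psf) + rng.normal(0.0, 3e-4, (n, n))  # mild noise

# Wiener filter built from the single transfer function H.
nsr = 1e-4  # assumed noise-to-signal ratio (a tuning guess)
W = np.conj(H) / (np.abs(H)**2 + nsr)
restored = np.real(np.fft.ifft2(np.fft.fft2(captured) * W))
```

Because the superposed system has a nearly depth-invariant MTF, this single filter W can be applied at every misfocus setting, which is the key computational saving of the method.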
The captured images of the ideal single lens and the deconvolved images of the GRIN lens array were evaluated using the peak signal-to-noise ratio (PSNR), as shown in Fig. 9. The deconvolved images at misfocus distances of the GRIN lens array had better PSNRs than those captured with the ideal single lens, even when 40 dB noise was added to the captured images. A further advantage of the superposition compound eye is the high light efficiency or high measurement SNR, as mentioned in Section 3.1.
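For reference, the PSNR metric used in this comparison is the standard definition; a minimal sketch for images scaled to a peak value of 1.0 is shown below (the test arrays are arbitrary examples, not the paper's data).

```python
import numpy as np

def psnr(reference, test, peak=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, peak]."""
    mse = np.mean((reference - test) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak**2 / mse)

# A uniform error of 0.1 on a unit-peak image gives a PSNR of about 20 dB.
a = np.zeros((8, 8))
b = np.full((8, 8), 0.1)
print(round(psnr(a, b), 3))
```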
In this study, we showed the principle and performance of spherical superposition compound eye optics for computational DOF and FOV extension. This system captures an optically superposed image of an object with different focusing distances and optical axes by using a spherical array of erect imaging optics. The PSFs of the captured image are three-dimensionally space-invariant. The original sharp image with a deep DOF and a wide FOV can be reproduced by deconvolution processing with a single filter from the single captured image. The system model and simulations were presented.
The misfocus MTFs of arrays of ideal erect lenses and GRIN lenses were analyzed with ray-tracing to verify the DOF extension. The depth-invariance of the proposed system was larger than that of a conventional, non-computational imaging system. The range of DOF extension was almost one-fourth of that of the object scanning method in Ref. However, our method also extends the FOV simultaneously with a single-shot.
The proposed method realizes a single-shot, deep-DOF, wide-FOV, high NA imaging system composed of scalable optics. It may be useful for applications such as surveillance, machine vision, and so on. One of the issues to be addressed next is the physical implementation of the proposed system. In this paper, we assumed GRIN lenses with diameters of a few tens of micrometers, almost the same as those of the elemental optics of insects’ compound eyes. State-of-the-art technologies and a reflective optical design may enable us to implement a practical system that satisfies the requirements [33–35].
The authors thank Prof. Yasuhiro Awatsuji at Kyoto Institute of Technology for his technical support in this project.
References and links
1. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, 1996).
5. O. Cossairt, C. Zhou, and S. K. Nayar, “Diffusion coding photography for extended depth of field,” ACM Trans. Graph. (Proc. ACM SIGGRAPH) (2010).
6. G. Häusler, “A method to increase the depth of focus by two step image processing,” Opt. Commun. 6, 38–42 (1972). [CrossRef]
7. S. Kuthirummal, H. Nagahara, C. Zhou, and S. K. Nayar, “Flexible depth of field photography,” IEEE Trans. Pattern Anal. Mach. Intell. 33, 58–71 (2011). [CrossRef]
9. D. L. Marks and D. J. Brady, “Gigagon: A monocentric lens design imaging 40 gigapixels,” in “Imaging Systems,” (Optical Society of America, 2010), p. ITuC2.
10. O. Cossairt, D. Miau, and S. K. Nayar, “Gigapixel computational imaging,” in “IEEE International Conference on Computational Photography (ICCP),” (2011).
11. G. Druart, N. Guérineau, R. Haïdar, S. Thétas, J. Taboury, S. Rommeluère, J. Primot, and M. Fendler, “Demonstration of an infrared microcamera inspired by Xenos peckii vision,” Appl. Opt. 48, 3368–3374 (2009). [CrossRef] [PubMed]
14. R. Horisaki, T. Nakamura, and J. Tanida, “Superposition imaging for three-dimensionally space-invariant point spread functions,” Appl. Phys. Express 4, 112501 (2011). [CrossRef]
15. T. Nakamura, R. Horisaki, and J. Tanida, “Experimental verification of computational superposition imaging for compensating defocus and off-axis aberrated images,” in “Computational Optical Sensing and Imaging,” (Optical Society of America, 2012), p. CM2B.4.
16. S. Hiura, A. Mohan, and R. Raskar, “Krill-eye: Superposition compound eye for wide-angle imaging via GRIN lenses,” IPSJ Transactions on Computer Vision and Applications 2, 186–199 (2010). [CrossRef]
17. D. E. Nilsson, “A new type of imaging optics in compound eyes,” Nature 332, 76–78 (1988). [CrossRef]
18. E. J. Warrant and P. D. McIntyre, “Limitations to resolution in superposition eyes,” J. Comp. Physiol., A 167, 785–803 (1990). [CrossRef]
19. M. F. Land, F. A. Burton, and V. B. Meyer-Rochow, “The optical geometry of euphausiid eyes,” J. Comp. Physiol., A 130, 49–62 (1979). [CrossRef]
21. J. W. Duparré and F. C. Wippermann, “Micro-optical artificial compound eyes,” Bioinspiration Biomimetics 1, R1 (2006). [CrossRef]
22. K. Stollberg, A. Brückner, J. Duparré, P. Dannberg, A. Bräuer, and A. Tünnermann, “The Gabor superlens as an alternative wafer-level camera approach inspired by superposition compound eyes of nocturnal insects,” Opt. Express 17, 15747–15759 (2009). [CrossRef] [PubMed]
24. M. F. Land and D.-E. Nilsson, Animal Eyes (Oxford University Press, USA, 2002).
25. J. Tanida, T. Kumagai, K. Yamada, S. Miyatake, K. Ishida, T. Morimoto, N. Kondou, D. Miyazaki, and Y. Ichioka, “Thin observation module by bound optics (TOMBO): Concept and experimental verification,” Appl. Opt. 40, 1806–1813 (2001). [CrossRef]
27. A. Brückner, J. Duparré, R. Leitel, P. Dannberg, A. Bräuer, and A. Tünnermann, “Thin wafer-level camera lenses inspired by insect compound eyes,” Opt. Express 18, 24379–24394 (2010). [CrossRef] [PubMed]
28. R. Horisaki, S. Irie, Y. Ogura, and J. Tanida, “Three-dimensional information acquisition using a compound imaging system,” Opt. Rev. 14, 347–350 (2007). [CrossRef]
29. R. Horisaki, K. Choi, J. Hahn, J. Tanida, and D. J. Brady, “Generalized sampling using a compound-eye imaging system for multi-dimensional object acquisition,” Opt. Express 18, 19367–19378 (2010). [CrossRef] [PubMed]
30. R. Dinyari, S.-B. Rim, K. Huang, P. B. Catrysse, and P. Peumans, “Curving monolithic silicon for nonplanar focal plane array applications,” Appl. Phys. Lett. 92, 091114 (2008). [CrossRef]
31. D. Dumas, M. Fendler, F. Berger, B. Cloix, C. Pornin, N. Baier, G. Druart, J. Primot, and E. le Coarer, “Infrared camera based on a curved retina,” Opt. Lett. 37, 653–655 (2012). [CrossRef] [PubMed]
32. “Zemax,” http://www.zemax.com/.
35. S. Maekawa, K. Nitta, and O. Matoba, “Transmissive optical imaging device with micromirror array,” in “Proceedings of the SPIE,” (2006), p. 63920E. [CrossRef]