In recent years, compound eye imaging systems have attracted great attention due to fascinating optical features such as a large field of view (FOV), small volume and high acuity to moving objects. However, fabricating such a complete system remains a major challenge due to the mismatch between the spherical compound eye imaging element and the planar imaging sensor. In this work, we demonstrate a hemispherical compound eye camera (SCECam) that mimics the eye of the fruit fly. The SCECam consists of three sub-systems: a hemispherical compound eye, an optical relay system and a commercial CMOS imaging sensor. By introducing an intermediate optical relay system, the curved focal plane behind the compound eye can be transformed and projected onto the planar focal plane of the imaging sensor. In this way, the SCECam realizes a large FOV (up to 122.4°) with 4400 ommatidia, which makes it possible to detect and locate fast-moving objects at very high speed. It is calculated that the recognition speed of the SCECam is two to three orders of magnitude higher than that of conventional methods such as the Canny and LoG edge-detection methods.
© 2017 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
To perceive light, nature has evolved different types of eyes. Besides single-lens eyes (such as the human eye), compound eyes also evolved, mainly in invertebrates, some 540 million years ago. Unlike human eyes, compound eyes consist of hundreds to thousands of integrated optical units, called ommatidia, arranged on a convex curved surface. Each ommatidium consists of a facet lens, a crystalline cone, and photoreceptor cells with a wave-guiding rhabdom. The facet lens collects incident light within a narrow range of angular acceptance, the crystalline cone guides the focused light into the wave-guiding rhabdom, and the guided light is subsequently detected by the photoreceptor cells. In general, the more ommatidia, the better the imaging resolution. In comparison with single-lens eyes, compound eyes tend to perceive shapes and outlines rather than crisp details. Nonetheless, arthropods' compound eyes offer several remarkable characteristics and play an important role in recognizing the necessities of life: food, danger, mates and shelter. For example, insects cannot easily be sneaked up on thanks to the wide FOV of their compound eyes. No matter which direction a predator approaches from, an insect can see and position it in its environment, recognize what it is, and take appropriate action at the appropriate time. Some insects can even see well into the ultraviolet and use polarized light for navigation. These features are particularly beneficial for target detection, location, recognition, and collision-free navigation of terrestrial and aerospace vehicles. Because of these remarkable features, compound eyes have been attracting a great deal of research interest, and many efforts have been made to develop various types of compound eye imaging systems.
Due to the limitations of manufacturing processes and of planar imaging sensors, most work in the past two decades focused on the development of planar compound eyes [5–7]. In recent years, more efforts have been devoted to the fabrication and realization of curved compound eyes. In 2012, Liu et al. reported a femtosecond-laser-based microfabrication and thermomechanical bending method to fabricate a bio-inspired omnidirectional and gapless micro-lens array on curvilinear surfaces. In 2015, Kuo et al. successfully fabricated a hemispherical micro-lens array using soft lithography and a thermopressing method, integrated it with a charge-coupled device to form a biomimetic visual system, and studied the ability of this system to detect moving objects. In 2016, a soft UV imprint process was developed by Chen et al. for the fabrication of a hexagonally arranged micro-lens array with a high filling factor. In the same year, a highly efficient strategy to create hemispherical micro-lens arrays based on femtosecond laser processing and thermal embossing was demonstrated by Deng et al. In addition, some artificial compound eyes with special functions were also fabricated [12–15]. However, the above-mentioned works mainly focused on the realization of the compound eye element itself rather than a complete compound eye imaging system.
With the development of flexible optoelectronics, a nature-inspired compound-eye system featuring a panoramic FOV and negligible distortion and aberration was developed by Song et al. more recently. In their work, an array of 16 × 16 convex micro-lenses was integrated with a thin silicon photodiode array on a convex curved surface to achieve a FOV of 160° and to eliminate optical aberration. At about the same time, Floreano et al. demonstrated an artificial compound eye, named CurvACE, by bending a rectangular array of 42 columns of 15 artificial ommatidia into a curved surface with a radius of 6.4 mm along its longer direction, forming a 180° FOV in the horizontal plane. However, these two types of compound eye cameras simply consist of a micro-lens array and a small-scale photodetector array; thus they can only obtain low-resolution images and cannot meet the requirements of applications that demand relatively high-resolution images. Especially for the fast location and recognition of objects over a wide FOV, a compound eye imaging system with high resolution is needed.
In this work, we developed a new hemispherical compound eye camera system, named SCECam, which is able to capture images over a wide-angle FOV as well as to locate and recognize objects at high speed. Figure 1 shows the developed SCECam; its physical size is 40 mm × 40 mm × 80 mm and its FOV is 360° × 122.4°. Different from previous work, an optical relay system is introduced into the SCECam and deployed between the hemispherical compound eye and the imaging sensor to transform the curved focal plane into a planar focal plane for image acquisition. In this way, the SCECam was designed to mimic the natural apposition compound eye, which consists of a light-refracting facet lens, a crystalline cone, and photoreceptor cells with a wave-guiding rhabdom. As a result, the SCECam has three sub-systems: a hemispherical micro-lens array resting on a spherical glass shell, an optical image relay system and a planar CMOS imaging sensor with electronic driving and processing circuits. The hemispherical micro-lens array has a macrobase with a diameter of 40 mm and approximately 4400 ommatidia, each with a diameter of about 500 μm. The fabricated SCECam is able to locate objects over a wide FOV and to recognize objects at high speed. These features give the SCECam great potential for a broad range of applications, including surveillance imaging, fast target detection and recognition [18–20], collision-free navigation of terrestrial and aerospace vehicles [21–24], and so on.
2.1. Hemispherical micro-lens array
The hemispherical micro-lens array was fabricated by means of a soft lithography method followed by a thermal embossing process. The details of the fabrication process are shown in Fig. 2. The whole process includes three main steps. The first step is to form a closely hexagonally packed micro-lens array in a photoresist film deposited on a flat quartz substrate, as shown in Figs. 2(a)-2(d). In this step, a thin photoresist film (positive photoresist AZ9260, AZ Electronic Materials) with a thickness of about 50 μm was first deposited on a quartz substrate by spin coating. Then, a photolithography process was employed to form a pattern of a cylinder array in the photoresist. After that, the sample was placed on a hotplate for a thermal reflow process, so that a micro-lens array with a convex shape forms under the force of surface tension. The annealing temperature for the thermal reflow process was raised in gradient steps of 5 °C per 10 min from 90 °C to 130 °C. The next step is to replicate a reversed form of the micro-lens array into a PDMS thin film and then transfer it into PMMA to again form a micro-lens array with a convex shape, as shown in Figs. 2(e)-2(f). The final step is to place the PMMA micro-lens array on a glass dome and then thermally emboss it to form a hemispherical compound eye. During this step, the PMMA film was heated to 95 °C, which is close to its glass transition temperature of 105 °C. At this temperature, PMMA starts to soften, so that the glass dome can be embossed into it under a certain pressure. As a result, a hemispherical compound eye was formed on the glass dome, as shown in Fig. 2(g).
2.2. Prototype of SCECam
A prototype SCECam was formed by integrating the fabricated hemispherical micro-lens array, an optical imaging relay system and a commercial CMOS imaging sensor with its associated electronic circuits. Figure 3(a) shows an exploded view of the prototype SCECam. As can be seen clearly, the main difference between the SCECam and previously reported compound eye imaging systems such as TOMBO and CurvACE [5, 17] is the presence of the optical imaging relay system. TOMBO and CurvACE mainly consist of a micro-lens array and imaging sensors, with no optical relay system involved. In that case, a large FOV is not easy to achieve and the optical aberration cannot easily be corrected.
For natural compound eyes, each ommatidium consists of a facet lens, a crystalline cone, and a wave-guiding rhabdom. The facet lens focuses the incident light, the crystalline cone helps guide the focused light into the wave-guiding rhabdom, and eventually the guided light arrives at the photoreceptor. Similarly, in the SCECam, the optical relay system plays the same role as the crystalline cone and the wave-guiding rhabdom in natural compound eyes, conveying the focused light to the imaging sensor. The optical relay system is designed to have an MTF of more than 0.35 at the Nyquist frequency and more than 0.65 at half the Nyquist frequency. The optical relay device not only corrects the field curvature and vignetting of the camera system but also offsets the distortion of the hemispherical micro-lens array. The FOV of this optical relay system is 120° and its relative aperture is 1/3. The whole optical structure of the SCECam is shown in Fig. 3(b).
A commercial CMOS imaging sensor, a Sony IMX264 (number of pixels N = 2448 × 2048; pixel size 3.45 μm × 3.45 μm), is used as the imaging sensor of the prototype SCECam. The image obtained by the SCECam consists of about 4400 sub-images, and each sub-image covers about 20 × 20 pixels. The integrated SCECam has a frame rate of 35 fps.
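As a quick consistency check (a minimal sketch using only the numbers quoted above), the roughly 4400 sub-images of about 20 × 20 pixels each account for about a third of the sensor's pixels, the remainder being the gaps of the hexagonal layout and the unused sensor margins:

```python
# Pixel budget: ~4400 sub-images of ~20x20 pixels each on the
# 2448 x 2048 Sony IMX264 sensor (values quoted in the text).
sensor_px = 2448 * 2048          # total pixels on the CMOS sensor
n_subimages = 4400               # approximate number of ommatidia
px_per_subimage = 20 * 20        # pixels allotted to each sub-image

used = n_subimages * px_per_subimage
fill = used / sensor_px
print(f"pixels used by sub-images: {used} ({fill:.1%} of sensor)")
```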
3. Results and discussion
Figure 4(a) shows a photograph of the fabricated hemispherical micro-lens array, and Fig. 4(b) shows an SEM image of the micro-lens array taken with a JSM 6390 SEM. The average diameter and sag height of the micro-lenses were measured to be 500 μm and 50 μm, respectively. The deviations of the aperture and the sag height of the micro-lenses at different positions are about ± 9.6 μm and ± 2.3 μm, i.e. about ± 1.9% and ± 4.6% of the designed values, respectively. This means that the fabricated micro-lens array has good size uniformity. The focal length of the micro-lens is calculated to be 1.28 mm by using the following equation,
f = (r² + h²) / [2h(n − 1)],
where r is the aperture radius of the micro-lens, h is its sag height, and n is the refractive index of PMMA.
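The quoted focal length can be checked numerically from the measured aperture and sag height using the standard plano-convex spherical-cap relation f = R/(n − 1) with R = (r² + h²)/(2h). In the sketch below the PMMA refractive index n ≈ 1.49 is our assumption (the text does not state the value used); with it the formula gives ≈1.33 mm, close to the reported 1.28 mm, the small difference depending on the index used:

```python
# Focal length of a plano-convex microlens cap from its aperture radius r
# and sag height h, via f = R/(n - 1) with R = (r**2 + h**2)/(2*h).
# n ~ 1.49 for PMMA is an assumed value; it yields ~1.33 mm, close to
# the 1.28 mm reported in the text.
r = 250e-6   # aperture radius (500 um diameter / 2)
h = 50e-6    # sag height
n = 1.49     # refractive index of PMMA (assumption)

R = (r**2 + h**2) / (2 * h)   # radius of curvature of the spherical cap
f = R / (n - 1)
print(f"R = {R * 1e6:.0f} um, f = {f * 1e3:.2f} mm")
```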
These micro-lenses are located on a hemispherical dome and are distributed in a hexagonal arrangement. The filling factor of the micro-lens array is larger than 92%, similar to that of most natural compound eyes. The size and the mechanical properties of the hemispherical micro-lens array are critically important for the SCECam system. The acceptance angle ∆φ and the inter-ommatidial angle ∆Φ determine the FOV of the SCECam: ∆φ denotes the FOV of each micro-lens, and ∆Φ is the angle between the optical axes of two neighboring ommatidia. Specifically, the total FOV of the fabricated hemispherical micro-lens array is about 122.4°, the acceptance angle of each ommatidium is ∆φ = 2.4°, and the inter-ommatidial angle is ∆Φ = 1.7°. Since ∆φ is larger than ∆Φ, the FOVs of adjacent ommatidia overlap.
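The quoted angles directly imply the amount of overlap and the angular sampling density; a minimal sketch (the derived quantities are our illustration, computed only from the values stated above):

```python
# Implications of the quoted angles: the acceptance angle (2.4 deg) exceeds
# the inter-ommatidial angle (1.7 deg), so the sub-FOVs of neighbouring
# ommatidia share an angular band; 122.4 deg / 1.7 deg also gives the
# approximate number of ommatidia along one meridian of the eye.
delta_phi = 2.4     # acceptance angle of one ommatidium, degrees
delta_PHI = 1.7     # inter-ommatidial angle, degrees
total_fov = 122.4   # total FOV, degrees

overlap = delta_phi - delta_PHI   # angular overlap between neighbours
n_across = total_fov / delta_PHI  # ommatidia along one meridian
print(f"overlap ~ {overlap:.1f} deg, ~{round(n_across)} ommatidia across")
```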
To demonstrate the imaging ability of the fabricated hemispherical micro-lens array, an optical testing setup was established as shown in Fig. 5(a), in which the hemispherical micro-lens array was placed between the object and an optical microscope. The object was a letter "a" with a size of around 2 mm displayed on the screen of a mobile phone. Figure 5(b) shows the images formed by the micro-lens array viewed at field angles of 0°, 30° and 60°, from top to bottom. As can be seen, the image becomes blurred at the edge due to the mismatch between the hemispherical compound eye and the planar imaging sensor of the optical microscope. For this reason, an optical relay system must be employed to overcome this problem.
To form a real compound eye imaging system, i.e. the SCECam, the fabricated hemispherical micro-lens array, an optical relay system and a commercial CMOS imaging sensor were integrated. The parameters of the SCECam are listed in Table 1. Since the total FOV of the SCECam is sampled by the individual ommatidia of the hemispherical compound eye, the original image obtained by the SCECam on the CMOS imaging sensor is a hexagonal sub-image array: each ommatidium contributes a sub-image by receiving the light from a sub-FOV.
Since the hemispherical compound eye has a curved focal plane, an optical relay system is necessary to transform the curved focal plane into a planar focal plane suited to the planar detector. As a result, the image obtained by the SCECam consists of thousands of sub-images, and a proper algorithm was developed to reconstruct the complete image of the object. This image acquisition and reconstruction process is illustrated in Fig. 6(a). As shown, each micro-lens in the hemispherical compound eye forms a small inverted image of part of the object, here a '+' symbol. All of these tiny inverted images combine to form a spherical compound eye image, which is then transformed into a planar compound eye image by the optical relay system and detected by the CMOS imaging sensor. A simple image reconstruction method was developed to restore the image of the '+' symbol. The method consists of two steps: first, the central position of each sub-image is determined; second, appropriate pixels from every sub-image are extracted and combined to form the reconstructed image.
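The two reconstruction steps (locate sub-image centers, then extract and combine central pixels) can be sketched as follows. This is a simplified illustration, not the authors' code: the hexagonal lattice is replaced by a square one, and the center positions are assumed to be already known:

```python
import numpy as np

def reconstruct(raw, centers, win=2):
    """Mosaic a compound-eye frame from the central pixels of each sub-image.

    raw     -- 2-D array, the full sensor frame
    centers -- list of (row, col, out_row, out_col): the detected centre of
               each sub-image and its position in the output mosaic
    win     -- side of the square extraction window (2 -> 2x2 pixels)
    """
    out_rows = max(c[2] for c in centers) + 1
    out_cols = max(c[3] for c in centers) + 1
    out = np.zeros((out_rows * win, out_cols * win), dtype=raw.dtype)
    half = win // 2
    for r, c, i, j in centers:
        patch = raw[r - half:r - half + win, c - half:c - half + win]
        out[i * win:(i + 1) * win, j * win:(j + 1) * win] = patch
    return out

# Toy example: a 40x40 frame sampled by a 4x4 square lattice of sub-images
# spaced 10 pixels apart (illustrative geometry only).
frame = np.arange(1600, dtype=float).reshape(40, 40)
centers = [(5 + 10 * i, 5 + 10 * j, i, j) for i in range(4) for j in range(4)]
mosaic = reconstruct(frame, centers)
print(mosaic.shape)   # (8, 8)
```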
Figure 6(b) shows the imaging experiment used to evaluate the imaging characteristics of the prototype SCECam. In this experiment, three different symbols, i.e. a cross, a pentagram and a triangle, were placed 20 mm away from the SCECam. To restore the full information captured by the SCECam, the final image was rendered on the surface of a sphere. The three recorded compound eye images are shown in Fig. 6(c). As can be seen, each compound eye image is made up of thousands of inverted sub-images. Since the FOVs of adjacent ommatidia overlap, each compound eye image is a hybrid of images from different angles of view. To obtain a complete image, 2 × 2 pixels were extracted from each sub-image to reconstruct the image. Figure 6(d) shows the retrieved images of the three symbols, also rendered on the surface of a sphere for comparison.
Since the FOVs of neighboring ommatidia partly overlap, the SCECam can be used for fast target detection and location [26–28]; moreover, it can be used to estimate the distance of an object from the SCECam. The location of an object can be estimated by determining into which ommatidium's FOV the image of the object falls. For instance, one can easily tell the direction of an object by simply identifying which ommatidium is responsible for imaging it, as shown in Fig. 7(a), because each ommatidium points in a distinct direction. The precision of the location depends on the number of ommatidia: the more ommatidia in the hemispherical compound eye, the more precise the location. Figure 7(b) shows the experimental setup used to demonstrate this. As shown, one object (a triangle) is located in the direction with an angular position of 40°, while the other object (a rhombus) is at −40°. The two objects are at the same distance from the camera, so they are imaged by the same number of ommatidia but at different positions. By checking the images rendered on a sphere, one can easily obtain the exact angular positions of the objects. Figure 7(c) shows the images captured by the SCECam for objects located at different distances. The distance of an object can be estimated from two observations: the size of its image and the number of ommatidia that can view it. In general, for the same object, the closer it is to the camera, the larger its image; on the other hand, the closer the object, the fewer the ommatidia that can view it. This is because the FOVs of adjacent ommatidia overlap, and the overlap increases with distance from the camera. This behavior is quite different from that of a traditional camera.
For traditional cameras, if the size of an object is known, one can tell how far it is from the camera based on the size of its image. However, objects of different sizes located at different distances can form images of the same size, and a traditional camera has no way to differentiate these cases. The SCECam, in contrast, can differentiate them by simply checking how many ommatidia are involved in the imaging process: although the image sizes may be the same, the numbers of ommatidia that view the objects are different.
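The size–distance ambiguity of a traditional camera follows directly from the pinhole projection model; a minimal sketch (the focal length below is an arbitrary illustrative value, not a parameter of the SCECam):

```python
# Pinhole-camera size/distance ambiguity: image size = f * s / d, so every
# object with the same size-to-distance ratio s/d yields the same image
# size. This is the ambiguity the ommatidia count of the SCECam resolves.
f = 0.05  # focal length in metres (arbitrary illustrative value)
image_sizes = [f * s / d for s, d in [(0.01, 0.1), (0.02, 0.2), (0.04, 0.4)]]
print([f"{x * 1e3:.1f} mm" for x in image_sizes])  # all three identical
```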
In addition, the SCECam can be applied to fast object recognition thanks to its large FOV. Conventional object-recognition methods are based on edge detection, computing a first-order derivative with the Canny edge detector or a second-order derivative with the Laplacian method. These computations normally take quite a long time because of the large image size. A compound eye imaging system, however, works with small images and uses a different detection mechanism: each ommatidium sees only part of the object, so each sub-image occupies only a small amount of memory, and, unlike in traditional methods, only neighboring sub-images are compared to find the differences between them for object recognition. Figure 8(a) compares the running time of different edge-detection methods with that of our SCECam-based method. As can be seen, the running time of the SCECam method is two to three orders of magnitude shorter than that of conventional edge-detection methods. Figures 8(b) and 8(c) show the edge-detection results of the SCECam method and the conventional Canny method. Both methods were implemented in MATLAB on a computer with a Core-i7 CPU; the running times of the SCECam method and the traditional method were 8.8 × 10−4 s and 1.3 s, respectively, i.e. the SCECam method is three orders of magnitude faster than the Canny edge-detection method. This fast object-detection ability gives the SCECam great potential in areas including collision-free navigation of UAVs and robots, intruder detection and identification, missile guidance and so on.
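The neighbor-comparison idea described above can be sketched as follows. This is our simplified illustration, not the authors' implementation: the hexagonal ommatidia grid is replaced by a square one, and the detection threshold is an assumed value:

```python
import numpy as np

def detect_by_neighbor_diff(subimages, thresh=10.0):
    """Flag sub-images that differ markedly from their right/down neighbours.

    subimages -- 4-D array (rows, cols, h, w): grid of small sub-images
    Returns a boolean (rows, cols) map of candidate target locations.
    Square-grid sketch of the neighbour-comparison idea; the threshold
    and grid layout are illustrative assumptions.
    """
    rows, cols = subimages.shape[:2]
    hit = np.zeros((rows, cols), dtype=bool)
    for i in range(rows):
        for j in range(cols):
            for di, dj in ((0, 1), (1, 0)):   # right and down neighbours
                ni, nj = i + di, j + dj
                if ni < rows and nj < cols:
                    diff = np.abs(subimages[i, j].astype(float)
                                  - subimages[ni, nj].astype(float)).mean()
                    if diff > thresh:
                        hit[i, j] = hit[ni, nj] = True
    return hit

# Toy frame: a uniform 8x8 grid of 20x20 sub-images with one bright target.
grid = np.full((8, 8, 20, 20), 50, dtype=np.uint8)
grid[3, 4] += 100                 # the target appears in one sub-image
print(np.argwhere(detect_by_neighbor_diff(grid)))
```

Only the flagged sub-images and their immediate neighbors need further processing, which is why the per-frame cost stays small compared with whole-image edge detection.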
In summary, a prototype hemispherical compound eye imaging system, the SCECam, was designed and fabricated in this work. By introducing an optical relay system, the curved focal plane formed by the hemispherical compound eye is transformed and projected onto a planar focal plane for image acquisition. The resulting SCECam consists of about 4400 ommatidia and has a large FOV of 122.4°, which makes it suitable for collision-free navigation of terrestrial and aerospace vehicles and for target detection, location and recognition. For demonstration, experiments on fast object location and recognition were conducted and validated.
However, more work is needed to further optimize the whole system as well as the algorithm for fast-moving object detection and location. Other important directions for future research include improving the FOV and resolution and further extending the applications. In fact, the proposed SCECam is quite flexible and scalable: one can design a SCECam with different micro-lens sizes and numbers to meet the requirements of different applications, although the main technical challenge will be the proper design of the optical relay system. Given the advantages of compound eye imaging systems, the SCECam is well suited to applications such as collision-free navigation of terrestrial and aerospace vehicles, fast target detection and recognition, surveillance imaging and so on. Among these, obstacle avoidance for small UAVs is probably the most promising application.
National Natural Science Foundation of China (NSFC) (61475156 and 61361166004).
References and links
1. M. F. Land and R. D. Fernald, “The Evolution of Eyes,” Annu. Rev. Neurosci. 15, 1–29 (1992). [PubMed]
2. K. Moses, “Evolutionary biology: Fly eyes get the whole picture,” Nature 443(7112), 638–639 (2006). [PubMed]
3. L. P. Lee and R. Szema, “Inspirations from biological optics for advanced photonic systems,” Science 310(5751), 1148–1150 (2005). [PubMed]
4. K. H. Jeong, J. Kim, and L. P. Lee, “Biologically inspired artificial compound eyes,” Science 312(5773), 557–561 (2006). [PubMed]
5. J. Tanida, T. Kumagai, K. Yamada, S. Miyatake, K. Ishida, T. Morimoto, N. Kondou, D. Miyazaki, and Y. Ichioka, “Thin Observation Module by Bound Optics (TOMBO): Concept and Experimental Verification,” Appl. Opt. 40(11), 1806–1813 (2001). [PubMed]
6. J. Duparré, P. Dannberg, P. Schreiber, A. Bräuer, and A. Tünnermann, “Artificial apposition compound eye fabricated by micro-optics technology,” Appl. Opt. 43(22), 4303–4310 (2004). [PubMed]
7. K. Venkataraman, D. Lelescu, J. Duparre, A. McMahon, G. Molina, P. Chatterjee, R. Mullis, and S. Nayar, “PiCam: An Ultra-Thin High Performance Monolithic Camera Array,” ACM Trans. Graph. 32, 166 (2013).
8. H. W. Liu, F. Chen, Q. Yang, P. B. Qu, S. G. He, X. H. Wang, J. H. Si, and X. Hou, “Fabrication of bioinspired omnidirectional and gapless microlens array for wide field-of-view detections,” Appl. Phys. Lett. 100, 133701 (2012).
9. W. K. Kuo, G. F. Kuo, S. Y. Lin, and H. H. Yu, “Fabrication and characterization of artificial miniaturized insect compound eyes for imaging,” Bioinspir. Biomim. 10(5), 056010 (2015). [PubMed]
10. J. Chen, J. Cheng, D. Zhang, and S.-C. Chen, “Precision UV imprinting system for parallel fabrication of large-area micro-lens arrays on non-planar surfaces,” Precis. Eng. 44, 70–74 (2016).
11. Z. Deng, F. Chen, Q. Yang, H. Bian, G. Du, J. Yong, C. Shan, and X. Hou, “Dragonfly-Eye-Inspired Artificial Compound Eyes with Sophisticated Imaging,” Adv. Funct. Mater. 26, 1995–2001 (2016).
12. J. Chen, H. H. Lee, D. Wang, S. Di, and S. C. Chen, “Hybrid imprinting process to fabricate a multi-layer compound eye for multispectral imaging,” Opt. Express 25(4), 4180–4189 (2017). [PubMed]
13. J. Huang, X. Wang, and Z. L. Wang, “Bio-inspired fabrication of antireflection nanostructures by replicating fly eyes,” Nanotechnology 19(2), 025602 (2008). [PubMed]
14. J. W. Leem and J. S. Yu, “Artificial inverted compound eye structured polymer films with light-harvesting and self-cleaning functions for encapsulated III–V solar cell applications,” RSC Advances 5, 60804–60813 (2015).
15. L. Wang, H. Liu, W. Jiang, R. Li, F. Li, Z. Yang, L. Yin, Y. Shi, and B. Chen, “Capillary number encouraged the construction of smart biomimetic eyes,” J. Mater. Chem. C Mater. Opt. Electron. Devices 3, 5896–5902 (2015).
16. Y. M. Song, Y. Xie, V. Malyarchuk, J. Xiao, I. Jung, K. J. Choi, Z. Liu, H. Park, C. Lu, R. H. Kim, R. Li, K. B. Crozier, Y. Huang, and J. A. Rogers, “Digital cameras with designs inspired by the arthropod eye,” Nature 497(7447), 95–99 (2013). [PubMed]
17. D. Floreano, R. Pericet-Camara, S. Viollet, F. Ruffier, A. Brückner, R. Leitel, W. Buss, M. Menouni, F. Expert, R. Juston, M. K. Dobrzynski, G. L’Eplattenier, F. Recktenwald, H. A. Mallot, and N. Franceschini, “Miniature curved artificial compound eyes,” Proc. Natl. Acad. Sci. U.S.A. 110(23), 9267–9272 (2013). [PubMed]
18. A. Kapustjansky, L. Chittka, and J. Spaethe, “Bees use three-dimensional information to improve target detection,” Naturwissenschaften 97(2), 229–233 (2010). [PubMed]
19. M. Giurfa, G. Zaccardi, and M. Vorobyev, “How bees detect coloured targets using different regions of their compound eyes,” J. Comp. Physiol. A Neuroethol. Sens. Neural Behav. Physiol. 185, 591–600 (1999).
20. K. Nordström, P. D. Barnett, and D. C. O’Carroll, “Insect detection of small targets moving in visual clutter,” PLoS Biol. 4(3), e54 (2006). [PubMed]
21. J. Plett, A. Bahl, M. Buss, K. Kühnlenz, and A. Borst, “Bio-inspired visual ego-rotation sensor for MAVs,” Biol. Cybern. 106(1), 51–63 (2012). [PubMed]
22. E. Baird and M. Dacke, “Visual flight control in naturalistic and artificial environments,” J. Comp. Physiol. A Neuroethol. Sens. Neural Behav. Physiol. 198(12), 869–876 (2012). [PubMed]
23. N. Linander, M. Dacke, and E. Baird, “Bumblebees measure optic flow for position and speed control flexibly within the frontal visual field,” J. Exp. Biol. 218(Pt 7), 1051–1059 (2015). [PubMed]
24. T. Reber, A. Vähäkainu, E. Baird, M. Weckström, E. Warrant, and M. Dacke, “Effect of light intensity on flight control and temporal properties of photoreceptors in bumblebees,” J. Exp. Biol. 218(Pt 9), 1339–1346 (2015). [PubMed]
25. M. J. Wang, T. S. Wang, H. H. Shen, J. L. Zhao, Z. Y. Zhang, J. L. Du, and W. X. Yu, “Subtle control on hierarchic reflow for the simple and massive fabrication of biomimetic compound eye arrays in polymers for imaging at a large field of view,” J. Mater. Chem. C Mater. Opt. Electron. Devices 4, 108–112 (2016).
26. K. Nordström, “Neural specializations for small target detection in insects,” Curr. Opin. Neurobiol. 22(2), 272–278 (2012). [PubMed]
27. B. R. Geurten, K. Nordström, J. D. Sprayberry, D. M. Bolzon, and D. C. O’Carroll, “Neural mechanisms underlying target detection in a dragonfly centrifugal neuron,” J. Exp. Biol. 210(Pt 18), 3277–3284 (2007). [PubMed]
28. S. D. Wiederman, P. A. Shoemaker, and D. C. O’Carroll, “A model for the detection of moving targets in visual clutter inspired by insect physiology,” PLoS One 3(7), e2784 (2008). [PubMed]