Inspired by the natural phenomenon of hyperacuity, redundant sampling combined with knowledge of the imaging system's impulse response is used to extract highly accurate position information from a low-resolving artificial apposition compound eye. On this basis, the implementation of precise position detection for simple objects such as point sources and edges is described.
© 2006 Optical Society of America
Inspired by the principles of insect vision, artificial compound eye cameras push the limits of miniaturization of imaging systems [1, 2]. For example, the artificial apposition compound eye consists of a microlens array on a planar glass substrate with a thickness of less than 500 µm. In combination with an optoelectronic sensor array, this forms an ultra-thin imaging camera with the potential for a large field of view (FOV) at low volume and weight. However, because of the reduced system size, the diameter of each lenslet is small and the number of image details that the system can transfer decreases. As a result, the image resolution is limited to about ten thousand pixels.
Confronted with the same problem, nature has developed a strategy to obtain information with sub-pixel accuracy. A number of insects are able to detect motion within a fraction of their photoreceptor diameter. This so-called hyperacuity is caused by dense sampling of the object space combined with image segmentation, information pooling and parallel signal processing [5–7]. Although hyperacuity enables insects to perceive motion with an amplitude smaller than the resolution limit of their compound eyes, it does not enable them to resolve adjacent image details beyond this limit.
In machine and robot vision, a low-resolution optical sensor for position detection is advantageous in terms of speed, computational load and energy consumption. Furthermore, there often exists some a priori knowledge about the object to be located in the image. By adopting features of natural hyperacuity, position detection with sub-pixel accuracy can be achieved with low-resolution imaging systems. Sensors of this kind demonstrated so far use either a scanning regime or the overlapping FOVs of three adjacent optical channels to increase the sampling density in object space [10–13]. Because of the scanning drive, a robust mechanical construction and high system complexity pose the main obstacles to the miniaturization of these sensors. Another disadvantage is that hyperacuity in scanning sensors is limited to one dimension. Sensors with overlapping FOVs suffer from alignment problems and from rapidly increasing volume when individual modules are combined to achieve a two-dimensional FOV.
This article demonstrates that several of these drawbacks can be overcome by using artificial apposition compound eyes in combination with hyperacuity methods. There is no need for scanning because many thousands of optical channels image in parallel. The increase in accuracy is achieved across the whole two-dimensional FOV of the camera by letting the FOVs of adjacent optical channels overlap. Furthermore, since artificial apposition compound eyes are imaging systems, several objects can be tracked at once, provided that their images are separated. At the same time, object recognition is also feasible.
In section 2 we present a linear model of the imaging process in artificial apposition compound eyes, which is later used to derive an analytical and unique relationship between the measured powers and the position of objects (section 3). Finally, the experimental verification of locating objects with increased accuracy is presented in section 4.
2. Imaging model for artificial apposition compound eyes
Artificial apposition compound eyes consist of a microlens array (lens diameter D, focal length f, pitch pL) replicated on top of a thin glass substrate (Fig. 1). In contrast to their natural counterparts, they are fabricated on a planar rather than a curved basis, owing to today's microelectronics fabrication technology. An optoelectronic sensor array with a different pitch pK is placed in the focal plane of the microlenses to pick up the image. Furthermore, the size of the sensor pixels is narrowed by a pinhole array on the substrate backside in order to increase resolution. The pitch difference Δp = pL − pK gives the individual optical channels different viewing directions. The short focal length of the lenslets leads to a nearly unlimited depth of field, meaning that for object distances larger than ten times the focal length (approx. 2 mm here) the image remains sharp within the focal plane.
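The role of the pitch difference can be illustrated with a short numerical sketch. All parameter values below (f, dp, n) are hypothetical and chosen only to give dimensions of the same order as those discussed in the text:

```python
import math

# Hypothetical parameters (not taken from this paper):
f = 200e-6    # microlens focal length [m]
dp = 1.8e-6   # pitch difference Delta-p = pL - pK [m]
n = 51        # channels along one row

# The pitch difference tilts the viewing direction of channel k
# (counted from the central channel) by arctan(k * dp / f).
view_deg = [math.degrees(math.atan(k * dp / f)) for k in range(-(n // 2), n // 2 + 1)]
fov_deg = view_deg[-1] - view_deg[0]  # overall field of view along this row
```

With these assumed numbers the row of channels spans a FOV of roughly 25°, illustrating how a micrometer-scale pitch difference distributes the viewing directions across a wide angular range.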
As shown in Fig. 1, each channel contributes one image point by collecting light from a finite angle Δφ given by the full width at half maximum (FWHM) of the Airy diffraction pattern convolved with the pinhole aperture of diameter d. The result of this convolution, projected into object space via the focal length f, gives the angular sensitivity function (ASF, Fig. 2). The ASF describes the efficiency of the intensity transfer for an object point as a function of its angular distance φ from the optical axis of the channel.
We now use a linear imaging model of a single channel, assuming incoherent illumination and space invariance. For an extended object, the intensity distribution in the focal plane I(x, y) can then be written as the convolution of the intensity of the geometrical image O with the impulse response R of the lens

I(x, y) = ∫∫ O(ξ, η) R(x − ξ, y − η) dξ dη.    (1)
R(x − ξ, y − η) is the response of the optical system at (x, y) in the image plane to an impulse at (ξ, η) in the object plane. Solving the convolution integral analytically results in an intricate expression for the ASF. For this reason it is appropriate to use a Gaussian approximation that was derived for natural compound eyes [15]

ASF(φ) = exp(−4 ln 2 · φ²/Δφ²).    (2)

This approximation is valid as long as the pinhole diameter and the FWHM of the Airy pattern are nearly of the same size. The FWHM of the Gaussian ASF is then given by

Δφ = √(Δρ² + Δδ²) = √((d/f)² + (λ/D)²).    (3)
The so-called acceptance angle Δφ has a geometrical contribution Δρ = d/f, which is the pinhole diameter projected into object space, and a second contribution Δδ = λ/D, determined by diffraction at the aperture of the microlens. Equation (3) reveals an important trade-off for artificial apposition compound eyes: since Δφ approximates the smallest resolvable feature size, increasing the resolution for a given F-number means that the pinhole diameter d has to be decreased, but then the sensitivity drops with d².
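This trade-off can be made concrete with a small sketch. The parameter values are hypothetical, and the quadrature sum of the two contributions follows the Gaussian-ASF model described above:

```python
import math

lam = 550e-9   # wavelength [m]           (hypothetical value)
D = 85e-6      # microlens diameter [m]   (hypothetical value)
f = 200e-6     # focal length [m]         (hypothetical value)

def acceptance_angle(d):
    """Acceptance angle: geometrical part d/f combined in quadrature
    with the diffraction part lambda/D (Gaussian-ASF model)."""
    return math.hypot(d / f, lam / D)

# Halving the pinhole diameter sharpens the acceptance angle ...
d1, d2 = 3e-6, 1.5e-6
a1, a2 = acceptance_angle(d1), acceptance_angle(d2)
# ... but the collected power scales with the pinhole area, i.e. with d**2,
# so the sensitivity drops by a factor of four.
power_ratio = (d2 / d1) ** 2
```

Note that the diffraction term λ/D puts a floor under the acceptance angle: shrinking the pinhole improves resolution only until Δρ falls below Δδ, after which sensitivity is sacrificed for almost no gain.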
3. Implementation of high accuracy position detection
It has been pointed out above that the acceptance angle approximates the smallest resolvable angle between two image details, i.e. the angular cut-off frequency of the lens is νCO ≈ 1/Δφ. Following the sampling theorem, the sampling frequency νS = 1/Δϕ has to be at least twice the optical cut-off to exploit the resolution potential of the eye. In the case of infinitely narrow (i.e. δ-shaped) sampling points, no information is gained by using a higher sampling frequency. However, in artificial apposition compound eyes the object space is sampled by the ASFs of the different optical channels, which have a finite angular width. In this case the overlap between the sampling profiles enables the extraction of sub-pixel information, provided the profile of the ASF is known.
For example, a point source at an angular distance φP from the optical axis of one channel causes different intensities within the focal planes of adjacent channels (Fig. 3(a)). The individual intensities depend on (I) the distance to the optical axis, (II) the absolute irradiance of the source and (III) other unknown parameters, e.g. the transmission of the optical system. Modeling the point source as a δ-distribution, the intensity in the k'th channel is proportional to the value of its ASF at the angular distance φk of the source from its optical axis

Ik = c · ASF(φk).    (4)
The constant c incorporates the unknown parameters; they cancel out in the next step, in which the ratio between the intensities in adjacent channels is taken.
At this point, the ratio of the intensities equals the ratio of the measured powers α in two adjacent pinholes because the pinhole area is constant. We substitute Eq. (2) into Eq. (4) and take into account the offset between adjacent optical axes (see Fig. 3(a)). In Eq. (5) this is done, for demonstration, for the center channel (index 0) and its adjacent channel along the x axis (index 1)
The angles φ0 and φ1 correspond to the angular position of the point source with respect to the optical axis of the center channel and of its neighbor, respectively. Within the projected image plane, the distance between the position of the point source and the center of one channel is given by

x0 = f tan φ0,   x1 = f tan φ1    (6)
for the two adjacent channels used here (see Fig. 3(b) for reference). Now, we apply a linear approximation for the tangent because φ is small for each channel
wherein s is defined by
Thus an analytical and unique relationship between the measured powers and the position of the source within the FOV of one channel is derived. The ratio of the powers in one direction (αx) gives the first coordinate (x0) of the point-source position relative to the center channel:
The second ratio, taken in the perpendicular direction, is needed to find the other coordinate y0 from an equation analogous to Eq. (11). The radial position r0 within the projected image plane, and thus the angular distance φP to the optical axis of one channel, are calculated from these with the help of Eqs. (6) and (8).
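The inversion from power ratio to position can be sketched numerically. The sketch below assumes the Gaussian ASF described above and uses hypothetical parameter values; the function `locate` plays the role of Eq. (11) under these assumptions, and the y coordinate follows in exactly the same way from the perpendicular ratio:

```python
import math

# Hypothetical parameters (not taken from this paper):
FWHM = 0.012   # acceptance angle Delta-phi [rad]
f = 200e-6     # microlens focal length [m]
s = 3e-6       # offset between adjacent optical axes in the projected image plane [m]

def asf(x):
    """Gaussian ASF, written over the projected image-plane
    coordinate x ~ f*phi (small-angle approximation)."""
    return math.exp(-4 * math.log(2) * (x / (f * FWHM)) ** 2)

def locate(alpha_x):
    """Invert the power ratio alpha_x = P1/P0 of two adjacent channels
    to the source coordinate x0 relative to the center channel."""
    return s / 2 + (f * FWHM) ** 2 * math.log(alpha_x) / (8 * math.log(2) * s)

# Forward simulation: a source at x0 produces powers proportional to the
# ASF in both channels; the ratio cancels irradiance and transmission.
x0_true = 0.7e-6
alpha = asf(x0_true - s) / asf(x0_true)
x0_est = locate(alpha)
```

Because the ratio of two Gaussians is log-linear in position, the logarithm of the measured power ratio maps to the coordinate in closed form, which is what makes the relationship both analytical and unique within one channel's FOV.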
For the detection of an edge position there are two crucial coordinates: (I) the orientation angle ϑK with respect to the pixel matrix and (II) the normal distance rk between the optical axis of the k'th channel and the edge (Fig. 4). We will mainly deal with the second one, since the orientation angle can be found with sufficient accuracy by a standard edge-detection filter (e.g. of Sobel type), even in low-resolution images.
Because the edge extends infinitely compared to the diameter of the radially symmetric ASF, the intensity in the k'th channel is proportional to the one-dimensional convolution of the impulse response R with the edge profile in the direction normal to the edge (see Fig. 4)
In Eq. (12), Θ is the Heaviside step function describing the geometrical image of an edge with total irradiance Q0 on a constant background B. Using Eq. (2) with φ ≈ r/f for small angles yields the intensity distribution in the k'th channel
wherein the definition of the error function erf(x) is used

erf(x) = (2/√π) ∫₀ˣ exp(−t²) dt.    (14)
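Under the Gaussian-ASF assumption, the step-edge response can be sketched as follows. The parameters are hypothetical, and the prefactor converting the FWHM into the erf argument follows from that Gaussian form:

```python
import math

FWHM = 0.012   # acceptance angle [rad]  (hypothetical value)
f = 200e-6     # focal length [m]        (hypothetical value)

def edge_intensity(r_k, Q0=1.0, B=0.1, rising=True):
    """Heaviside edge convolved with a Gaussian ASF of projected FWHM
    f*FWHM in the image plane: the result is an error-function profile.
    r_k is the normal distance of channel k's axis to the edge."""
    w = f * FWHM                                    # projected FWHM [m]
    arg = 2.0 * math.sqrt(math.log(2.0)) * r_k / w  # FWHM -> erf scaling
    sign = 1.0 if rising else -1.0
    return B + 0.5 * Q0 * (1.0 + sign * math.erf(arg))
```

Far on the dark side a channel sees only the background B, far on the bright side B + Q0, and a channel whose axis lies exactly on the edge measures B + Q0/2.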
In this case, calculating the intensity ratio of two adjacent channels would not help much because of the complicated structure of the error function as well as the unknown background illumination. Therefore, the ratio is calculated for the derivatives of the measured powers in adjacent channels along x (ᾶx) and y (ᾶy). The resulting relationship is similar to that for the point source (Eq. (11)), except for an additional dependence on the orientation angle ϑK:
The ± stands for a rising or a falling edge, respectively. An analogous relation follows for adjacent channels on the y axis, with the cosine replaced by a sine function.
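The edge inversion can be sketched in the same way as for the point source: the derivative of the erf-shaped response with respect to position is again a Gaussian, so the point-source formula applies with the channel offset projected onto the edge normal, s·cos ϑK. All values are hypothetical, and the sketch mirrors the structure of Eq. (15) rather than reproducing it verbatim:

```python
import math

FWHM, f, s = 0.012, 200e-6, 3e-6   # hypothetical parameters

def d_power(r):
    """Derivative of the erf-shaped edge response w.r.t. position:
    a Gaussian of projected FWHM f*FWHM (up to a constant factor)."""
    return math.exp(-4.0 * math.log(2.0) * (r / (f * FWHM)) ** 2)

def locate_edge(ratio, theta):
    """Normal distance r0 of a rising edge from the center channel's axis,
    from the ratio of the derivatives of the powers in two channels
    adjacent along x; the effective offset normal to the edge is s*cos(theta)."""
    s_eff = s * math.cos(theta)
    return s_eff / 2 + (f * FWHM) ** 2 * math.log(ratio) / (8 * math.log(2) * s_eff)

# Forward simulation with a known edge position and orientation:
theta = math.radians(20.0)   # edge orientation angle
r0_true = 0.5e-6
ratio = d_power(r0_true - s * math.cos(theta)) / d_power(r0_true)
r0_est = locate_edge(ratio, theta)
```

Taking the ratio of derivatives rather than of the raw powers removes both the constant background B and the unknown total irradiance Q0, which is exactly why this route is preferred for edges.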
Other objects like rectangles or triangles can also be treated as long as they consist of edges that can be distinguished in the image and therefore segmented.
4. Experimental verification and results
To verify the proposed methods we used the experimental setup shown in Fig. 5. The artificial apposition compound eye and a microscope objective, needed to relay the pinhole layer onto a CCD, are mounted on a rotary stage. During the measurement, the stage is rotated stepwise about the z axis, simulating object movement through the FOV.
For each step of the stage orientation (ϕref) the power within the pinholes is measured. The position of the point source or edge is then calculated from the powers of adjacent pinholes using Eq. (11) or Eq. (15), respectively. Afterwards, the measured angular distance between two positions (Δϕm) is compared with the change of the stage orientation angle (Δϕref), which gives the error, or acuity (δa), of the method (see Fig. 6(a)). For each measurement the artificial compound eye is tilted with respect to the path of the point source or edge. The path of the object is therefore linear but, in general, not parallel to the x or y axis of the image plane.
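This procedure can be emulated in a noise-free simulation: step a point source through known angles, recover each position from the power ratio, and compare the recovered angular steps with the reference steps. The parameters are hypothetical and the Gaussian-ratio localization sketched in section 3 is assumed:

```python
import math

FWHM, f, s = 0.012, 200e-6, 3e-6   # hypothetical parameters

def asf(x):
    """Gaussian ASF over the projected image-plane coordinate."""
    return math.exp(-4 * math.log(2) * (x / (f * FWHM)) ** 2)

def locate(alpha):
    """Recover the source coordinate from the adjacent-channel power ratio."""
    return s / 2 + (f * FWHM) ** 2 * math.log(alpha) / (8 * math.log(2) * s)

step = math.radians(0.01)                 # reference stage step (hypothetical)
phis = [k * step for k in range(10)]      # stage orientations phi_ref
xs = []
for phi in phis:
    x = f * math.tan(phi)                 # projected image-plane coordinate
    alpha = asf(x - s) / asf(x)           # simulated measured power ratio
    xs.append(locate(alpha))

# acuity: deviation of each recovered angular step from the reference step
errors = [abs((xs[k + 1] - xs[k]) / f - step) for k in range(len(xs) - 1)]
```

In this idealized setting the residual errors stem only from the small-angle approximation of the tangent; the experimental errors discussed below are dominated by pinhole non-uniformity and noise instead.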
The accuracies measured with different artificial apposition compound eyes for either a point source or an edge are listed in Tab. 1. For comparison, these accuracies are also given as a percentage of the resolution limit of the artificial apposition compound eye.
The listed values represent the best and worst cases from several measurements. The large variation of these values is mainly due to systematic deviations from the imaging model used, caused by deviations in the shape and size of the pinholes. In this case, neighboring channels exhibit different responses to the same stimulus. It has to be emphasized, however, that the measured accuracy was reproducible, within the limits determined by the signal-to-noise ratio (SNR), when the same artificial compound eye was used. As seen in Fig. 7, the maximum measured accuracy is proportional to the SNR. It should be noted that the accuracies of Tab. 1 are only valid in a finite part of the FOV of each channel with an extent of less than 1°. However, these zones were found to overlap between adjacent channels (Fig. 6(b)). Hence, the increased accuracy is achieved across the whole FOV of the artificial apposition compound eye, which extends over 25°×25°. Although the systematic deviation increases at the outer borders of the overall FOV, owing to the space variance of the ASF caused by off-axis aberrations, the effect can be neglected for small FOVs. Alternatively, the off-axis aberrations can be corrected by using a chirped array of ellipsoidal microlenses [16].
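The SNR limit can be illustrated by adding noise to the simulated channel powers. The estimator is the same Gaussian-ratio sketch as above; the noise model (additive Gaussian noise relative to a normalized peak signal) and all parameter values are assumptions for illustration only:

```python
import math
import random

FWHM, f, s = 0.012, 200e-6, 3e-6   # hypothetical parameters

def asf(x):
    """Gaussian ASF over the projected image-plane coordinate."""
    return math.exp(-4 * math.log(2) * (x / (f * FWHM)) ** 2)

def locate(alpha):
    """Recover the source coordinate from the adjacent-channel power ratio."""
    return s / 2 + (f * FWHM) ** 2 * math.log(alpha) / (8 * math.log(2) * s)

def position_error(snr, trials=2000, x0=0.5e-6, seed=1):
    """RMS position error when both channel powers carry additive Gaussian
    noise of standard deviation 1/snr (peak signal normalized to 1)."""
    rng = random.Random(seed)
    sq = 0.0
    for _ in range(trials):
        p0 = asf(x0) + rng.gauss(0.0, 1.0 / snr)
        p1 = asf(x0 - s) + rng.gauss(0.0, 1.0 / snr)
        if p0 > 0 and p1 > 0:             # skip unphysical noise draws
            sq += (locate(p1 / p0) - x0) ** 2
    return math.sqrt(sq / trials)

err_low, err_high = position_error(snr=50), position_error(snr=500)
```

As expected, the recovered position degrades as the SNR drops, since the weakly illuminated neighbor channel contributes an increasingly noisy power ratio.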
5. Conclusion and outlook
The experimentally demonstrated methods of hyperacuity provide a new approach for accessing highly accurate information with an artificial apposition compound eye even though the number of image pixels is small. To achieve this, the overlap between the FOVs of adjacent optical channels, together with knowledge about the imaging process, is used to derive an analytical and unique relationship between the measured powers and the position of edges or point objects. For example, position information was extracted from an image containing 50×50 pixels with a fidelity equal to that of 500×500 effective pixels. Accuracies up to a factor of 50 better than the smallest resolvable feature size have been achieved. Furthermore, the measured position is independent of the absolute irradiance of the source, the object distance and the background illumination. However, the highest achievable accuracy is limited by the SNR in the image. The experimental verification showed that, although the zone of high accuracy within each channel is limited, the zones of each adjacent pair out of the total of up to 80×60 channels overlap. Therefore, the high accuracy is obtained across the whole two-dimensional FOV of 25°×25°. In order to minimize the variation of the results, either a better homogeneity of the artificial apposition compound eye parameters has to be achieved, or a calibration of the imaging model for the individual lens may be applied. A third option is a method that is largely independent of the precise imaging model.
Three major conclusions about the performance of the position detection follow from the experimental investigations: (I) A large overlap between adjacent ASFs causes a dense sampling of the object space, which improves hyperacuity but leads to a small overall FOV for a given number of channels. (II) Increasing the sensitivity also improves the maximum accuracy that can be obtained. (III) The SNR sets the final limit on the maximum accuracy. Future work on the design of a new kind of artificial compound eye inspired by the neural superposition eye will address these points.
A long-term task is to combine the principles of artificial apposition compound eyes with on-chip parallel analog pre-processing (e.g. by smart pixels). This would lead to ultra-thin imaging sensors with the ability for hyperacuity without the need for off-chip processing by a PC.
References and links
1. J. Duparré, P. Dannberg, P. Schreiber, A. Bräuer, and A. Tünnermann, “Artificial apposition compound eye fabricated by micro-optics technology,” Appl. Opt. 43, 4303–4310 (2004).
3. R. Völkel, M. Eisner, and K. J. Weible, “Miniaturized imaging systems,” Microelectron. Eng. 67–68, 461–472 (2003).
5. M. J. Wilcox and D. C. Thelen, Jr., “A retina with parallel input and pulsed output, extracting high-resolution information,” IEEE Trans. Neural Netw. 10, 574–583 (1999).
6. S. B. Laughlin, “Form and function in retinal processing,” Trends Neurosci. 10, 478–483 (1987).
7. J. S. Sanders and C. E. Halford, “Design and analysis of apposition compound eye optical sensors,” Opt. Eng. 34, 222–235 (1995).
9. R. A. Young, “Bridging the gap between vision and commercial applications,” in Human Vision, Visual Processing and Digital Display VI, B. E. Rogowitz and J. P. Allebach, eds., Proc. SPIE 2411, 2–14 (1995).
10. S. Viollet and N. Franceschini, “Visual servo system based on a biologically-inspired scanning sensor,” in Sensor Fusion and Decentralized Control in Robotic Systems II, G. T. McKee and P. S. Schenker, eds., Proc. SPIE 3839, 144–155 (1999).
11. K. Hoshino, F. Mura, and I. Shimoyama, “A one-chip scanning retina with an integrated micromechanical scanning actuator,” J. Microelectromech. Syst. 10, 492–497 (2001).
12. M. S. Currin, P. Schonbaum, C. E. Halford, and R. G. Driggers, “Musca domestica inspired machine vision system with hyperacuity,” Opt. Eng. 34, 607–611 (1995).
13. D. T. Riley, W. M. Harman, E. Tomberlin, S. F. Barrett, M. Wilcox, and C. H. G. Wright, “Musca domestica inspired machine vision system with hyperacuity,” in Smart Structures and Materials 2005: Smart Sensor Technology and Measurement Systems, E. Udd and D. Inaudi, eds., Proc. SPIE 5758, 304–320 (2005).
15. A. W. Snyder, “Acuity of compound eyes: physical limitations and design,” J. Comp. Physiol. A 116, 161–182 (1977).
16. J. Duparré, F. Wippermann, P. Dannberg, and A. Reimann, “Chirped arrays of refractive ellipsoidal microlenses for aberration correction under oblique incidence,” Opt. Express 13, 10539–10551 (2005).