
Target orientation detection based on a neural network with a bionic bee-like compound eye


Abstract

The compound eyes of insects have many excellent characteristics. Directional navigation is one of their important capabilities: a compound eye can quickly and accurately determine the orientation of an object. Bionic curved compound eyes therefore have great potential for detecting the orientation of a target. However, the relationship between the orientation of a target and the image obtained by a curved compound eye over a wide field of view (FOV) is strongly nonlinear, and no effective model has yet been established for detecting the target orientation. In this paper, a method for detecting the orientation of a target is proposed, which combines a virtual cylinder target with a neural network. To verify the feasibility of the method, a fiber-optic compound eye is developed that is inspired by the structure of the bee’s compound eye and fully utilizes the transmission characteristics and flexibility of optical fibers. A verification experiment shows that the proposed method is able to realize quantitative detection of orientations using a prototype of the fiber-optic compound eye. The average errors between the ground truth and the predicted values of the horizontal and elevation angles of a target are 0.5951° and 0.6748°, respectively. This approach has great potential for target tracking, obstacle avoidance by unmanned aerial vehicles, and directional navigation control.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

As the visual organs of most insects, compound eyes are particularly sensitive to potential threats, such as predators, from all orientations in the environment [1–3]. Compound eyes have great advantages in detecting the orientation of a target because of their exceptionally wide FOV, high sensitivity to motion, and compact structure [4,5]. Although the structure of a bionic curved compound eye is more complicated than that of a planar compound eye [6–13], it can take full advantage of the compound eye's strengths, especially its wide FOV and its suitability for target detection [14–20]. However, it is difficult for a curved compound eye to obtain accurate spatial information [21,22], and this in turn makes it difficult to detect the orientation of a target. Thus, accurately detecting the orientation of a target using a curved compound eye poses challenges in terms of both structure and detection methods.

Investigations of target orientation detection using existing bionic compound eyes have shown that such eyes have the potential for at least qualitative target orientation detection. Song et al. [23] used an apposition camera to capture targets at different angles and found that the positions of the targets on the captured image differed significantly. A hemispherical compound eye camera, SCECam, designed by Shi et al. [24], has as its main components an optical relay system and 4000 microlenses; a qualitative experimental study of target orientation detection has been performed using this device. Hornsey and co-workers [25–27] achieved object tracking with the DragonflEYE compound-eye image sensor using vanishing-point and $Z$-transform image center calibration techniques. These methods do not require calibration of single sub-eyes and have high precision. Moreover, the device uses fiber bundles to solve the problem of curved compound eye imaging, but it is difficult to fabricate. The above studies have provided qualitative analyses of target orientation detection. However, to the best of the authors’ knowledge, the relationship between the orientation of a target in space and the acquired image has yet to be established; that is, no suitable detection method has been proposed, nor has a quantitative analysis been carried out.

In the work reported here, to realize target orientation detection using a curved compound eye, a fiber-optic compound eye device that fully utilizes the transmission characteristics and flexibility of optical fibers is developed, based on the structure of biological compound eyes. A method for detecting the orientation of a target is proposed that combines a virtual cylinder target with a neural network. This method calibrates the compound eye as a whole, without separately calibrating each sub-eye. The entire calibration process is carried out in the same world coordinate system, and accurate knowledge of the relative positions and axial directions of the sub-eyes is not required. These advantages effectively reduce calibration error and make the calibration process more concise, thereby helping to improve the accuracy of target orientation detection.

The structure of this paper is as follows. Section 2 introduces the conceptual design and prototyping of the fiber-optic compound eye. Section 3 describes the principle of the calibration method combining a virtual cylinder target and a neural network. Section 4 then describes an experimental study and its results. Section 5 summarizes the paper.

2. Fiber-optic compound eye

2.1 Conceptual design

In a biological compound eye, each sub-eye is composed of a cornea, a crystalline cone, and a rhabdom, and acts as an independent photosensitive unit [28]. Figure 1(a) shows the compound eye of a bee as observed under a microscope. It can be seen that the bee's compound eye is composed of independent sub-units, each of which contains its own light-collecting and light-transmitting structures. Inspired by this structure, a fiber-optic compound eye was designed that closely mimics the compound eye of the bee. It has independent sub-units, and each sub-unit has an independent transmission channel. Given that the imaging surface of a bionic compound eye with these structural characteristics is curved while commercially available image sensors are planar, optical fibers are used as the relay system in the design. The propagation of light in these fibers occurs through total internal reflection, which maintains good transmission characteristics over a long distance, and the fibers have good flexibility.

Fig. 1. Bionic fiber-optic compound eye.

Figure 1(b) shows the conceptual design of the fiber-optic compound eye. In this design, the fibers are used not only as transmission elements but also as the light-collecting elements. Since the FOV of a single optical fiber is narrow, the fibers need to be densely arranged over the entire spherical surface to avoid gaps in the collected spatial information. The other ends of the fibers are gathered together at the bottom of the device in a close arrangement using fiber-optic close-packing technology [29–31]. The arrangement of the fibers at the bottom of the device is in one-to-one correspondence with their positions on the spherical surface, so that no disorder is introduced into the collected spatial information. The image sensor can be attached directly to the fibers for image acquisition, which is convenient for subsequent processing.

To make the arrangement of the sub-eyes on the curved compound eye denser and more even, the icosahedral subdivision method [32,33] is used to determine the exact positions of the sub-eyes on the curved surface. Figure 2 shows the procedure for calculating the positions of the sub-eyes using this method. First, an icosahedron concentric with the sphere is inscribed in it. Each regular triangular face is then divided into a number of small equilateral triangles at a division frequency $f$, and the vertices of the small equilateral triangles are denoted $(u_{i},v_{i},w_{i})$. These vertices are then converted into coordinates $(U_{i},V_{i},W_{i})$ on the spherical surface by Eq. (1):

$$\left\{ \begin{array}{lr} U_{i} =u_{i} \sin72^{{\circ}}, & \\ V_{i} = v_{i} + u_{i} \cos72^{{\circ}}, & \\ W_{i} = f / 2 + w_{i} / b, & \\ \end{array} \right.$$
where $b=(1+\sqrt {5})/2$ is the golden ratio.

Fig. 2. Icosahedral subdivision. (a) A concentric icosahedron is placed within and touching the spherical surface. (b) An equilateral triangle is divided into multiple small equilateral triangles with a division frequency $f$ ($f=8$). (c) The coordinates of the equilateral triangle are converted to coordinates on the sphere. (d) The distribution of points on the sphere is optimized. (e) Comparison of pre-optimization and post-optimization.

After these vertices are transformed onto the sphere, their distribution is uneven. Their positions therefore need to be fine-tuned to obtain a uniform distribution over the spherical surface.
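As an illustration of this subdivision step, the following is a minimal Python sketch of the mapping in Eq. (1) followed by radial projection onto the spherical shell. It assumes the triangular-grid coordinates $(u_{i},v_{i},w_{i})$ have already been generated; the function name and the NumPy-based implementation are illustrative, not the authors' code.

import numpy as np

PHI = (1 + np.sqrt(5)) / 2  # golden ratio, b in Eq. (1)

def grid_to_sphere(u, v, w, f, radius=50.0):
    """Map icosahedral-grid vertices (u, v, w) to points on a sphere.

    u, v, w : NumPy arrays of equal shape holding the grid coordinates.
    Applies Eq. (1) and then projects the result radially onto a sphere
    of the given radius (50 mm for the prototype shell).
    """
    U = u * np.sin(np.radians(72.0))
    V = v + u * np.cos(np.radians(72.0))
    W = f / 2.0 + w / PHI
    p = np.stack([U, V, W], axis=-1).astype(float)
    return radius * p / np.linalg.norm(p, axis=-1, keepdims=True)

The fine-tuning of the projected points mentioned above would then be applied as a separate optimization step.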

2.2 Prototyping

Figure 3(a) shows the prototype of the fiber-optic compound eye and the receiving surface. Both the spherical shell and the receiving surface are rapidly fabricated by 3D printing. The fiber core material of the optical fiber in the prototype is PMMA, and the cladding material is fluororesin. Each fiber has a diameter of 1.5 mm, a numerical aperture of 0.5, an attenuation of less than 0.16 dB/m, and a minimum bend radius of 30 mm. The end faces of the fibers in the compound eye are polished to make it easier to capture a target. During the installation of the optical fibers, each ommatidium is installed according to the specified sequence number in Fig. 3(b). The image is then formed on the receiving surface and recorded by the camera.

Fig. 3. Prototype of fiber-optic compound eye and receiving surface.

The dimensions of the prototype are 100 mm $\times$ 100 mm $\times$ 100 mm. The prototype has a total of 181 sub-eyes, and the radius of the spherical shell is 50 mm. The FOV of the prototype reaches about $120^{\circ } \times 120^{\circ }$. The acceptance angle of each ommatidium is $\alpha = 19^{\circ }$, which is determined by the numerical aperture of the fiber. To ensure the completeness of the acquired information, the inter-ommatidium angle is $\beta = 7.94^{\circ }$. There is therefore sufficient overlap between the FOVs of adjacent ommatidia, which ensures that a target can be detected by multiple sub-eyes, as shown in Fig. 4.
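A simple geometric check confirms this: the acceptance cones of adjacent sub-eyes overlap whenever $\alpha > \beta$, and here

$$\alpha - \beta = 19^{\circ} - 7.94^{\circ} = 11.06^{\circ} > 0.$$

Moreover, since the half-acceptance angle $\alpha/2 = 9.5^{\circ }$ exceeds the inter-ommatidium angle $\beta = 7.94^{\circ }$, a direction aligned with one sub-eye axis also lies within the acceptance cones of its six nearest neighbors on the triangular grid, which is consistent with a target typically being captured by at least seven sub-eyes (Section 3).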

Fig. 4. Schematic diagram of FOV of adjacent sub-eyes.

3. Calibration principle

According to the imaging characteristics of the fiber-optic compound eye, the orientation of the target can be accurately calculated. Light sources at different positions in space ($A$ and $B$) are captured by different sub-eyes, and light spots are then formed at different positions on the receiving surface, as shown in Fig. 5. Typically, a light source is captured by at least seven sub-eyes. An analysis of the distribution of the light spots shows that the relationship between a light source in space and the light spots on the image is nonlinear, and it is therefore difficult to establish an accurate geometric model. To deal with this problem, a neural network is introduced. Neural networks have strong nonlinear mapping ability and robustness, and such a network should be able to determine the nonlinear relationship between a light source in space and the light spots on the image [34,35].

Fig. 5. Virtual cylinder target schematic (associated with Visualization 1).

3.1 Virtual cylinder model

In this paper, a virtual cylinder model is used to calibrate the fiber-optic compound eye [36]. Figure 5 shows the principle of virtual cylinder calibration. The origin of the $O$-$XYZ$ coordinate system is established at the center of the spherical shell of the compound eye, with the $Z$ axis coinciding with the axis of the virtual cylinder. The $Y$ axis passes through the center of the virtual cylinder, perpendicular to its axis. Suppose that $P$ and $A$ are points in the same orientation in space; both are captured by the compound eye and form the same light spots, as shown on the right side of Fig. 5. The compound eye is calibrated with a virtual cylinder target of radius $R$, on which the horizontal angle $\theta$ and elevation angle $\phi$ of any point $P$ are known; the orientation of $A$ can then be calculated. The elevation angle $\phi$ in the output set used during training is computed using the equation

$$\phi = \arctan(h/R),$$
where $R$ is the radius of the virtual cylinder, and $h$ is the height of the light source in space.
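For concreteness, the following is a minimal sketch in Python (with illustrative names; not the authors' code) of how one calibration pose could be converted into the $(\theta, \phi)$ label used as the network output: the horizontal angle is read directly from the turntable, and the elevation angle follows Eq. (2).

import numpy as np

def cylinder_label(theta_deg, h_mm, R_mm=500.0):
    """Return the (theta, phi) label, in degrees, for one calibration pose.

    theta_deg : turntable (horizontal) angle at which the LED column sits.
    h_mm      : height of the lit LED above the horizontal mid-plane.
    R_mm      : radius of the virtual cylinder (500 mm in the experiment).
    """
    phi_deg = np.degrees(np.arctan2(h_mm, R_mm))  # Eq. (2): phi = arctan(h/R)
    return theta_deg, phi_deg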

3.2 Data preprocessing

During the calibration process, the number of light spots on each acquired image is not less than seven. The centroids $(u, v)$ (pixel coordinates) of the seven light spots with the highest gray values in each image are used as the input of the training set for the neural network. First, the noise in the image is removed by median filtering, so that it does not affect the accuracy of the extracted spot centers. Then, the seven light spots with the highest gray values in each image are selected as the regions of interest. Finally, the centroid coordinates of each spot are extracted using the gray-scale centroid method. The centroid coordinates of the seven light spots extracted from each image constitute one sample of the input layer of the neural network, and the centroid coordinates of the light spots in all images together constitute the input layer of the neural network.

The output layer of the training set consists of the orientation $(\theta ,\phi )$ of the light source corresponding to each image. Therefore, the positions of the light source on the virtual cylindrical target and the coordinates of the light-spot centroids extracted from all the collected images together constitute the training set of the neural network. To eliminate the influence of differing scales among the variables and to improve the accuracy of the training, the input and output data are normalized.
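The preprocessing described above can be sketched as follows; this is a hedged Python/OpenCV illustration, and the threshold value, kernel size, and function name are assumptions rather than the authors' settings.

import cv2
import numpy as np

def extract_spot_centroids(img_gray, n_spots=7, thresh=50):
    """Return the (u, v) centroids of the n brightest light spots in one
    8-bit grayscale image.

    Steps: median filtering, segmentation of candidate spots, selection of
    the brightest spots, and gray-scale (intensity-weighted) centroids.
    """
    img = cv2.medianBlur(img_gray, 5)                    # suppress noise
    _, mask = cv2.threshold(img, thresh, 255, cv2.THRESH_BINARY)
    n_labels, labels = cv2.connectedComponents(mask)
    spots = []
    for k in range(1, n_labels):                         # label 0 is background
        ys, xs = np.nonzero(labels == k)
        w = img[ys, xs].astype(float)
        u = (xs * w).sum() / w.sum()                     # gray-scale centroid
        v = (ys * w).sum() / w.sum()
        spots.append((float(w.max()), u, v))
    spots.sort(reverse=True)                             # brightest spots first
    return np.array([(u, v) for _, u, v in spots[:n_spots]])

The 14 resulting values per image (u and v for seven spots) would then be min–max normalized together with the $(\theta ,\phi )$ labels before training.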

3.3 Neural network architecture

In order to establish a nonlinear mapping relationship between the light spots in the image and the corresponding light sources, a BP neural network is introduced to fit the relationship. The centroids $(u, v)$ (pixel coordinates) of the seven light spots with the highest gray values in each image are used as the input set for the neural network. The coordinates $(\theta , \phi )$ of the corresponding light source on the virtual cylinder are used as the output set. The training of the BP neural network and the detection of the target orientation with the trained network are shown in the second and third parts of Visualization 1. Assume that the numbers of neurons in the input and output layers are $M$ and $N$, respectively, and that the neural network is composed of $L$ layers of neurons. Let the connection weight from the $j$th neuron of the $(l-1)$th layer to the $i$th neuron of the $l$th layer be $w_{ij}^{(l)}$, and let $b_{i}^{(l)}$ be the bias of the $i$th neuron of the $l$th layer. Then, the output of the $i$th neuron of the $l$th layer is

$$y_{i}^{(l)}=f(\mathrm{net}_{i}^{(l)}),$$
where
$$\mathrm{net}_{i}^{(l)}=\sum_{j=1}^{s_{l-1}}w_{ij}^{(l)}y_{j}^{(l-1)}+b_{i}^{(l)}$$
is the input of the $i$th neuron in the $l$th layer, $f(\cdot )$ is the activation function of the neuron, and $s_{l}$ is the number of neurons in the $l$th layer.

During backward propagation, the weights and biases are adjusted according to the error function to minimize the error between the desired output and the actual output of the network. For $M$ training samples, the error function is

$$E=\frac{1}{M}\sum_{i=1}^{M}E(i),$$
where
$$E(i)=\frac{1}{2}\sum_{k=1}^{N}[y_{k}(i)-o_{k}(i)]^{2}$$
is the training error of a single sample, $y_{k}(i)$ is the expected output of the $i$th training data, and $o_{k}(i)$ is the actual output.

When the weights $w_{ij}^{(l)}$ of each layer of the BP neural network have been adjusted optimally, the relationship between the orientation of the target and the image obtained by the curved compound eye is established by the network. The trained network can then be used to calculate the orientation of a target in space. By inputting a test image into the trained network, the spatial orientation $(\theta , \phi )$ on the virtual cylinder surface can be predicted, where $\theta$ is the horizontal angle and $\phi$ is the elevation angle. Therefore, according to the virtual cylinder model and the BP neural network, the fiber-optic compound eye can be calibrated to achieve orientation detection of targets in space.
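To make Eqs. (3)–(6) concrete, the following is a minimal NumPy sketch of the forward pass and the training error. The use of tanh as the activation $f(\cdot)$ and the function names are illustrative assumptions; the network in this work was actually trained in MATLAB, as described in Section 4.2.

import numpy as np

def forward(x, weights, biases):
    """Forward pass through an L-layer BP network, per Eqs. (3) and (4).

    x       : input vector (here, 14 values: u and v of 7 spot centroids).
    weights : list of weight matrices W^(l), one per layer.
    biases  : list of bias vectors b^(l), one per layer.
    tanh plays the role of f(.); inputs and targets are assumed to be
    normalized as described in Section 3.2.
    """
    y = np.asarray(x, dtype=float)
    for W, b in zip(weights, biases):
        y = np.tanh(W @ y + b)          # net = W y + b, then y = f(net)
    return y

def training_error(Y_pred, Y_true):
    """Error function of Eqs. (5) and (6): the mean over the M samples of
    half the summed squared output error."""
    Y_pred, Y_true = np.asarray(Y_pred), np.asarray(Y_true)
    return float(np.mean(0.5 * np.sum((Y_true - Y_pred) ** 2, axis=1)))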

4. Experiments and results

4.1 Establishment of calibration system

The system for calibrating the fiber-optic compound eye using the virtual cylinder target is shown in Fig. 6(a). The calibration system consists of the fiber-optic compound eye, a light-emitting diode (LED) target, a camera, a turntable, and a positioning support device for determining the position of the prototype. The midpoint of the LED target, the center of the compound eye prototype, and the center of the camera are on the same horizontal line. The coordinate system of Fig. 6(a) is established with the center of the prototype spherical shell as the coordinate origin. During the calibration process, the turntable is rotated and the LED lights are illuminated in sequence to construct a virtual cylinder target.

Fig. 6. Experimental device for the calibration experiment (associated with the first part of Visualization 1).

4.2 Calibration

Figure 6(b) shows the experimental device for the calibration experiment. The camera used in the experiment is based on a CMV12000 CMOS sensor. The sensor size is 22.5 mm $\times$ 16.9 mm, the resolution is 4096 px $\times$ 3072 px, and the pixel size is 5.5 $\mu$m $\times$ 5.5 $\mu$m. Owing to the limitations imposed by the length of the LED target, an FOV of $120^{\circ } \times 41^{\circ }$ is calibrated in the experiment to test the feasibility of the calibration method. The LED target is fixed 500 mm in front of the compound eye. The target has 39 LED lights, with 10 mm spacing between adjacent lights. Thus, the height of the LED target ranges from $-190$ mm to 190 mm, which provides an elevation angle from $-20.5^{\circ }$ to $20.5^{\circ }$. During the calibration process, the rotation range of the turntable is from $-60^{\circ }$ to $60^{\circ }$, and the turntable is rotated at $2^{\circ }$ intervals.

In the experiment, the position of the light source is changed 2379 times on the virtual cylindrical surface, so a total of 2379 ($61 \times 39$) images are recorded. Figure 7 shows images taken at different positions during the calibration experiment; it can be seen that the light spots are distributed in different regions.

Fig. 7. Light spot images of the light source at different locations.

The BP neural network is then used to establish the relationship between the light source in space and the light spots on the image. An analysis of all the images shows that the number of light spots in each image is at least seven. In the image processing step, the centroids of the light spots are extracted using the gray-scale centroid method. The centroids of seven of the light spots in each image are selected as the input set of the BP neural network, and the corresponding light source positions are taken as the output set. A three-layer BP neural network is created in MATLAB for training, in which the tansig function is selected as the hidden-layer transfer function. To achieve rapid convergence of the BP neural network, the Levenberg–Marquardt optimization algorithm is used.
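For readers without MATLAB, an analogous training setup can be sketched in Python with scikit-learn. This is a hedged stand-in, not the authors' implementation: scikit-learn offers no Levenberg–Marquardt solver, so L-BFGS is used instead, and the hidden-layer size is illustrative.

from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler

# X: array of shape (2379, 14), centroids of the 7 brightest spots per image
# Y: array of shape (2379, 2), corresponding (theta, phi) labels on the cylinder
x_scaler = MinMaxScaler(feature_range=(-1, 1))
y_scaler = MinMaxScaler(feature_range=(-1, 1))

net = MLPRegressor(hidden_layer_sizes=(20,),  # single hidden layer; size is a guess
                   activation='tanh',         # counterpart of MATLAB's tansig
                   solver='lbfgs',            # stand-in for Levenberg-Marquardt
                   max_iter=5000)

def train(X, Y):
    net.fit(x_scaler.fit_transform(X), y_scaler.fit_transform(Y))

def predict(X_new):
    return y_scaler.inverse_transform(net.predict(x_scaler.transform(X_new)))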

4.3 Results

The centroids of the light spots in the acquired images and the positions of the light source in space are used as the training samples of the BP neural network to calibrate the prototype of the fiber-optic compound eye. Figure 8(a) shows the residual error of the horizontal angle at each position after training; the average residual error is $0.8478^{\circ }$. The residual error of the elevation angle at each position after training is shown in Fig. 8(b); the average residual error is $0.4810^{\circ }$. It can be seen from Fig. 8 that, for both the horizontal and elevation angles, the residual error increases markedly for targets at the edge of the FOV. The sub-eyes at the edge of the fiber-optic compound eye form a set of meniscus-shaped light spots on the image, which makes it difficult to accurately determine the seven central light spots and reduces the accuracy of the extracted centroid coordinates. Therefore, the sub-eyes at the edge affect the detection accuracy over the entire FOV of the prototype.

Fig. 8. Residual errors of the neural network training.

After the calibration of the fiber-optic compound eye is completed, the trained neural network is used to predict a validation set to determine the performance of the system. To verify the detection accuracy over the entire FOV, validation angles are selected from the central, middle, and edge areas of the FOV. The validation set selected in this way guarantees comprehensiveness and accuracy in terms of the FOV. In addition, the distance between some targets in the validation set and the fiber-optic compound eye is greater than 500 mm (the radius of the virtual cylindrical target), while for others it is less than 500 mm. The validation set is collected according to these requirements on angle and distance to ensure the comprehensiveness of the verification process. After the centroids of the light spots in the acquired images have been extracted, the results predicted by the BP neural network are as shown in Fig. 9. There are noticeable errors between the ground truth and the predicted values of the elevation angle for some targets, and the accuracy of the horizontal angle is better than that of the elevation angle. However, the predicted curves of the horizontal angle $\theta$ and the elevation angle $\phi$ are substantially consistent with the ground-truth curves, which indicates that the orientation of the light source can be accurately determined from the prediction results.

Fig. 9. Ground truth (dashed line) and prediction (solid line) of the spatial orientation ($\theta ,\phi$) of the light sources in space.

The data for ten randomly selected verification points are shown in Table 1, which contains the ground truth, prediction, distance, and error for each point. It can be seen from Table 1 that the horizontal and elevation angles predicted by the neural network are close to the ground truth; the average errors between the ground truth and the predicted values of $\theta$ and $\phi$ are $0.5951^{\circ }$ and $0.6748^{\circ }$, respectively. An analysis of the verification points by angle shows that the prediction accuracy for targets in the central area of the FOV is better than at the edge, and that the horizontal angle is predicted more accurately. With respect to distance, the error increases as the target distance grows beyond 500 mm, whereas the error is small when the distance is less than 500 mm. Nevertheless, the table shows that the error remains around $1^{\circ }$ for both the horizontal and elevation angles. The fiber-optic compound eye system therefore detects the target orientation accurately enough for tasks such as predator avoidance.

Table 1. Comparison of ground truth and prediction.

4.4 Discussion

For a bionic curved compound eye, target orientation detection relies on suitable detection methods. In this work, the proposed detection method and a fiber-optic compound eye have been used for quantitative, rather than simply qualitative, analysis of spatial target orientation detection. The average errors of the prediction results show that the accuracy of the horizontal angle prediction is higher than that of the elevation angle. One reason is that the horizontal angle can be read directly from the turntable, while the elevation angle must be converted from the height of the target. In addition, the virtual cylindrical target is designed on the basis of the structural characteristics of curved bionic compound eyes. In the horizontal direction, the cylinder and the spherical shell of the fiber-optic compound eye are concentric, which improves the accuracy of optical information collection. In the elevation direction, however, the virtual cylindrical target is equivalent to a traditional planar target, which distorts the collected optical information. As a result, the prediction accuracy of the horizontal angle is higher than that of the elevation angle.

It can be seen from Table 1 that distance has only a slight impact on the accuracy of the target orientation, but the error increases with increasing distance. The target distance therefore also affects the training of the neural network and can reduce the prediction accuracy, so accounting for distance is an important aspect of improving the accuracy of the system in future work. For these distance-related problems, calibrating the target at multiple distances may be a better approach. The experimental results suggest that the method can quantitatively detect the orientation of the target. There is still scope, however, for improvement in the accuracy of detection, for example by increasing the density and number of sub-eyes. These problems could also be mitigated by replacing the virtual cylinder target with a spherical target.

5. Conclusion

In this paper, based on the structural characteristics of the bee’s compound eye and taking account of the fact that the imaging surface of a bionic compound eye is curved while commercially available image sensors are planar, a prototype of a fiber-optic compound eye is designed and fabricated using optical fibers as the relay system. The use of fibers not only solves the problem of incompatibility between the curved imaging surface and flat sensors, but also ensures that no crosstalk occurs among the sub-eyes. The fiber-optic compound eye uses the proposed orientation detection method to realize quantitative detection of the orientation of a target in space. The proposed method establishes a nonlinear mapping relationship between the light spots captured by the curved compound eye and the spatial orientation of a target. It calibrates the curved compound eye as a whole instead of calibrating each of the sub-eyes separately. Consequently, the method is scalable to compound eye systems with different numbers of sub-eyes. The results show that the average errors of the horizontal and elevation angles are $0.5951^{\circ }$ and $0.6748^{\circ }$, respectively. The method proposed in this paper can achieve quantitative target orientation detection and has great potential for applications such as obstacle avoidance by unmanned aerial vehicles, target tracking, and directional navigation control.

Funding

National Natural Science Foundation of China (51775164, 11872167, 51675156, 61935008, 51705122); Fundamental Research Funds for the Central Universities (JZ2017HGPA0165, PA2017GDQT0024); Natural Science Foundation of Anhui Province (1908085J15).

Disclosures

The authors declare no conflicts of interest.

References

1. M. V. Srinivasan, M. Poteser, and K. Kral, “Motion detection in insect orientation and navigation,” Vision Res. 39(16), 2749–2766 (1999). [CrossRef]  

2. A. W. Snyder, D. G. Stavenga, and S. B. Laughlin, “Spatial information capacity of compound eyes,” J. Comp. Physiol., A 116(2), 183–207 (1977). [CrossRef]  

3. K. Moses, “Evolutionary biology: fly eyes get the whole picture,” Nature 443(7112), 638–639 (2006). [CrossRef]  

4. G. A. Horridge, “Invertebrate vision,” Nature 283(5746), 510 (1980). [CrossRef]  

5. R. Dudley, The biomechanics of insect flight: form, function, evolution (Princeton University, 2002).

6. J. Tanida and K. Yamada, “Tombo: thin observation module by bound optics,” in The 15th Annual Meeting of the IEEE Lasers and Electro-Optics Society, vol. 1, (IEEE, 2002), pp. 233–234.

7. J. Tanida, T. Kumagai, K. Yamada, S. Miyatake, K. Ishida, T. Morimoto, N. Kondou, D. Miyazaki, and Y. Ichioka, “Thin observation module by bound optics (tombo): concept and experimental verification,” Appl. Opt. 40(11), 1806–1813 (2001). [CrossRef]  

8. H. Deng, X. Gao, M. Ma, Y. Li, H. Li, J. Zhang, and X. Zhong, “Catadioptric planar compound eye with large field of view,” Opt. Express 26(10), 12455–12468 (2018). [CrossRef]  

9. J. Duparré, P. Dannberg, P. Schreiber, A. Bräuer, and A. Tünnermann, “Artificial apposition compound eye fabricated by micro-optics technology,” Appl. Opt. 43(22), 4303–4310 (2004). [CrossRef]  

10. S. Wu, T. Jiang, G. Zhang, B. Schoenemann, F. Neri, M. Zhu, C. Bu, J. Han, and K.-D. Kuhnert, “Artificial compound eye: a survey of the state-of-the-art,” Artif. Intell. Rev. 48(4), 573–603 (2017). [CrossRef]  

11. K.-H. Jeong, J. Kim, and L. P. Lee, “Biologically inspired artificial compound eyes,” Science 312(5773), 557–561 (2006). [CrossRef]  

12. J. Duparré and F. Wippermann, “Micro-optical artificial compound eyes,” Bioinspiration Biomimetics 1(1), R1–R16 (2006). [CrossRef]  

13. Y. Cheng, J. Cao, Y. Zhang, and Q. Hao, “Review of state-of-the-art artificial compound eye imaging systems,” Bioinspiration & Biomimetics 14(3), 031002 (2019). [CrossRef]  

14. M. Ma, F. Guo, Z. Cao, and K. Wang, “Development of an artificial compound eye system for three-dimensional object detection,” Appl. Opt. 53(6), 1166–1172 (2014). [CrossRef]  

15. Y. Zhang, J. Du, L. Shi, X. Dong, X. Wei, and C. Du, “Artificial compound-eye imaging system with a large field of view based on a convex solid substrate,” in Holography, Diffractive Optics, and Applications IV, vol. 7848, (International Society for Optics and Photonics, 2010), p. 78480U.

16. H. Zhang, Z. Lu, R.-t. Wang, F.-y. Li, H. Liu, and Q. Sun, “Study on curved compound eye imaging system,” Optics and Precision Engineering 14, 346–350 (2006).

17. D. Floreano, R. Pericet-Camara, S. Viollet, F. Ruffier, A. Brückner, R. Leitel, W. Buss, M. Menouni, F. Expert, R. Juston, M. K. Dobrzynski, G. L’Eplattenier, F. Recktenwald, H. A. Mallot, and N. Franceschini, “Miniature curved artificial compound eyes,” Proc. Natl. Acad. Sci. 110(23), 9267–9272 (2013). [CrossRef]  

18. A. Cao, J. Wang, H. Pang, M. Zhang, L. Shi, Q. Deng, and S. Hu, “Design and fabrication of a multifocal bionic compound eye for imaging,” Bioinspiration Biomimetics 13(2), 026012 (2018). [CrossRef]  

19. G. J. Lee, C. Choi, D.-H. Kim, and Y. M. Song, “Bioinspired artificial eyes: Optic components, digital cameras, and visual prostheses,” Adv. Funct. Mater. 28(24), 1705202 (2018). [CrossRef]  

20. Y. Wang, C. Shi, C. Liu, X. Yu, H. Xu, T. Wang, Y. Qiao, and W. Yu, “Fabrication and characterization of a polymeric curved compound eye,” J. Micromech. Microeng. 29(5), 055008 (2019). [CrossRef]  

21. J. Duparré, D. Radtke, and A. Tünnermann, “Spherical artificial compound eye captures real images,” in MOEMS and Miniaturized Systems VI, vol. 6466, (International Society for Optics and Photonics, 2007), p. 64660K.

22. L. Li, Y. Hao, J. Xu, F. Liu, and J. Lu, “The design and positioning method of a flexible zoom artificial compound eye,” Micromachines 9(7), 319 (2018). [CrossRef]  

23. Y. M. Song, Y. Xie, V. Malyarchuk, J. Xiao, I. Jung, K.-J. Choi, Z. Liu, H. Park, C. Lu, R.-H. Kim, R. Li, K. B. Crozier, Y. Huang, and J. A. Rogers, “Digital cameras with designs inspired by the arthropod eye,” Nature 497(7447), 95–99 (2013). [CrossRef]  

24. C. Shi, Y. Wang, C. Liu, T. Wang, H. Zhang, W. Liao, Z. Xu, and W. Yu, “Scecam: a spherical compound eye camera for fast location and recognition of objects at a large field of view,” Opt. Express 25(26), 32333–32345 (2017). [CrossRef]  

25. R. Hornsey, P. Thomas, W. Wong, S. Pepic, K. Yip, and R. Krishnasamy, “Electronic compound-eye image sensor: construction and calibration,” in Sensors and Camera Systems for Scientific, Industrial, and Digital Photography Applications V, vol. 5301, (International Society for Optics and Photonics, 2004), pp. 13–25.

26. R. Krishnasamy, W. Wong, E. Shen, S. Pepic, R. Hornsey, and P. J. Thomas, “High precision target tracking with a compound-eye image sensor,” in Canadian Conference on Electrical and Computer Engineering 2004 (IEEE Cat. No. 04CH37513), vol. 4, (IEEE, 2004) 2319–2323.

27. R. Krishnasamy, P. Thomas, S. Pepic, W. Wong, and R. I. Hornsey, “Calibration techniques for object tracking using a compound eye image sensor,” in Unmanned/Unattended Sensors and Sensor Networks, vol. 5611, (International Society for Optics and Photonics, 2004), pp. 42–53.

28. M. F. Land and D.-E. Nilsson, Animal eyes (University of Oxford, 2012).

29. A. F. Gmitro and D. Aziz, “Confocal microscopy through a fiber-optic imaging bundle,” Opt. Lett. 18(8), 565–567 (1993). [CrossRef]  

30. J. A. Udovich, N. D. Kirkpatrick, A. Kano, A. Tanbakuchi, U. Utzinger, and A. F. Gmitro, “Spectral background and transmission characteristics of fiber optic imaging bundles,” Appl. Opt. 47(25), 4560–4568 (2008). [CrossRef]  

31. Y.-C. Tseng, P. Han, H.-C. Hsu, and C.-M. Tsai, “A flexible fov capsule endoscope design based on compound lens,” J. Disp. Technol. 12, 1 (2016). [CrossRef]  

32. H. Kenner, Geodesic math and how to use it (University of California, 1976).

33. H. S. Son, D. L. Marks, J. Hahn, J. Kim, and D. J. Brady, “Design of a spherical focal surface using close-packed relay optics,” Opt. Express 19(17), 16132–16138 (2011). [CrossRef]  

34. Q. Memon and S. Khan, “Camera calibration and three-dimensional world reconstruction of stereo-vision using neural networks,” Int. J. Syst. Sci. 32(9), 1155–1159 (2001). [CrossRef]  

35. L. N. Smith and M. L. Smith, “Automatic machine vision calibration using statistical and neural network methods,” Image Vis. Comput. 23(10), 887–899 (2005). [CrossRef]  

36. H. Jian, J. He, X. Jin, X. Chen, and K. Wang, “Automatic geometric calibration and three-dimensional detecting with an artificial compound eye,” Appl. Opt. 56(5), 1296–1301 (2017). [CrossRef]  

Supplementary Material (1)

Visualization 1: This video shows the calibration experiment of the bionic compound eye designed in this paper, the training of the neural network, and the detection of the target direction. Some of the content in the article is shown more clearly in the video.


