
Long-working-distance 3D measurement with a bionic curved compound-eye camera

Open Access

Abstract

The bionic curved compound-eye camera is a bio-inspired multi-aperture camera that can be designed with overlapping fields of view (FOV) between adjacent ommatidia, making 3D measurement possible. In this work, we demonstrate 3D measurement with a working distance of up to 3.2 m using a curved compound-eye camera. Because the compound-eye camera contains hundreds of ommatidia, traditional calibration boards with fixed-pitch pattern arrays are not applicable. A batch calibration method based on the CALTag calibration board was therefore designed for the compound-eye camera. Next, the 3D measurement principle is described and a 3D measurement algorithm for the compound-eye camera is developed. Finally, a 3D measurement experiment on objects placed at different distances and directions from the compound-eye camera was performed. The experimental results show that the working range for 3D measurement covers the whole FOV of 98° and the working distance can be as long as 3.2 m. Moreover, a complete depth map reconstructed from a raw image captured by the compound-eye camera is demonstrated as well. The long-working-distance, large-FOV 3D measurement capability demonstrated in this work has great potential in applications such as unmanned aerial vehicle (UAV) obstacle avoidance and robot navigation.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Recently, various compound eyes have been proposed and demonstrated [1–5]. Because there is an overlap of the field of view (FOV) between adjacent ommatidia, compound eyes can be used to obtain 3D information about objects. 3D imaging of objects has been realized by a planar image-capturing system called TOMBO [6–8]. Unlike planar compound eyes, bio-inspired compound eyes have a curved surface and contain hundreds to thousands of ommatidia, so an ultra-large FOV is available [9–12]. However, most bio-inspired compound eyes can only detect light intensity, so their applications in 3D scenes have mainly focused on shape detection, trajectory tracking and orientation detection [13–17]. 3D imaging of objects by bio-inspired compound eyes has not been reported so far. In addition, the working distance of 3D detection by bio-inspired compound eyes is normally less than 1 m, which makes it hard to apply in many practical scenarios [13–17].

Camera calibration is an essential step in 3D detection, but the calibration of a curved compound eye is rather difficult due to the large number of ommatidia. In 2014, Mengchao Ma et al. proposed a biplane-based calibration method for the curved compound eye [13]. In their method, an LCD screen was used to display the calibration board, but it was too small to cover the entire FOV of the compound eye. In 2017, Huijie Jian et al. proposed a bicylinder-based calibration method [14]. An LED bar controlled by a high-precision turntable was used to mimic a large, high-precision curved calibration board. The bicylinder method is better suited to large-FOV calibration than the biplane method. In 2019, Yelong Zheng et al. proposed a light-field-based calibration method for the curved compound eye [15]. The measured depth data were directly fitted to the light intensity detected by the compound eye, so the measurement result is easily affected by the background light of the surrounding environment. In summary, these intensity-based methods can be used to calibrate curved compound eyes, but their calibration accuracy is sensitive to the working environment. Moreover, they depend heavily on precise alignment setups and are impractical for the bionic curved compound-eye camera (BCCEC).

In this work, we demonstrate 3D measurement with a working distance of 3.2 m by a compound-eye camera. A batch calibration method for the compound-eye camera was designed, where a fiducial marker pattern called CALTag with coding information added was used as the target. The added coding information makes the calibration process more robust and efficient. Then, the measurement process of the compound-eye camera was described and a 3D measurement algorithm for the compound-eye camera was developed as well, which can be used to reconstruct a complete depth map. During this process, the calibrated physical parameters were used, and the spatial coordinates of the pixels sampled during image reconstruction were calculated based on the stereo disparity.

2. 3D measurement principle of BCCEC

2.1 Bionic curved compound eye camera

The compound-eye camera used in this work consists of three subsystems: a hemispherical lens array, an optical relay system and a planar imaging sensor. As shown in Fig. 1(a), the ommatidia are arranged on the curved spherical shell in an approximate hexagonal form. The optical relay system projects the curved focal plane onto a planar focal plane, making the image compatible with the planar detector. The compound-eye camera is shown in Fig. 1(b) and its parameters are listed in Table 1. More details about the compound-eye camera can be found elsewhere [18].


Fig. 1. Arrangement of ommatidia on the curved spherical shell (a) and a photograph of the compound-eye camera (b).


Table 1. Parameters of the compound-eye camera.

2.2 Calibration method for BCCEC

Zhang's calibration method [19] can be used to calculate the actual parameters of the camera and requires a planar calibration board. However, two criteria must be met to apply a traditional calibration board with a fixed-pitch pattern array to the compound-eye camera. One is that the calibration board must be imaged by adjacent ommatidia simultaneously so that the spatial coordinates of the object points can be expressed in the same coordinate system. The other is that the spatial coordinates of the object points can only be determined when the entire calibration target is imaged. However, the FOV of each ommatidium is normally small and the FOV overlap is only about 50% in our compound-eye camera, so it is difficult for adjacent ommatidia to image the entire calibration board at the same time. To overcome this problem, a fiducial marker pattern called CALTag, which carries coding information, was used as the target. The CALTag calibration board adds coding information to the traditional chessboard so that markers can still be identified even if the board is only partially imaged [20]. As a result, the entire calibration board does not need to be imaged, and the correspondence between spatial coordinates and pixel coordinates can be established in multiple ommatidia at the same time. With the CALTag calibration board, Zhang's calibration method [19] can still be used to calibrate the compound-eye camera.

The compound-eye camera is considered to be a collection of multiple adjacent ommatidia. To facilitate addressing of the ommatidia, all ommatidia were numbered. Since there is no crosstalk between them, the sub-images in the raw image can be used in place of the ommatidia for numbering. All sub-images were numbered sequentially in a counterclockwise direction from the inner circle to the outer circle, as shown in Fig. 2. After that, a look-up table was constructed manually from the numbering information of adjacent ommatidia, in which each ommatidium node stores the numbers of its six surrounding ommatidia; this table can be used to address all ommatidia adjacent to any given ommatidium. In the calibration process, all ommatidia are calibrated by Zhang's calibration method [19], all adjacent ommatidia are stereo calibrated with the stereo-calibration routine in the Camera Calibration Toolbox for MATLAB [21], and finally the intrinsic parameters of 127 ommatidia and the extrinsic parameters between 342 pairs of adjacent ommatidia are obtained.
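As an illustration of this batch calibration flow, the following minimal Python sketch uses OpenCV's calibrateCamera and stereoCalibrate in place of the MATLAB toolbox routines used in this work; the per-exposure corner correspondences produced by the CALTag detection are assumed to be already available, and all function and variable names are illustrative.

```python
# Minimal sketch of the batch calibration flow (not the authors' code).
# Assumes CALTag detection has already produced, for every ommatidium and
# every exposure, the recognized corners' board coordinates (Nx3) and the
# matching pixel coordinates (Nx2) in the flipped sub-image.
import numpy as np
import cv2


def calibrate_ommatidium(object_pts, image_pts, image_size):
    """Zhang-style intrinsic calibration of one ommatidium from its sub-images."""
    obj = [np.asarray(p, dtype=np.float32) for p in object_pts]
    img = [np.asarray(p, dtype=np.float32) for p in image_pts]
    rms, K, dist, _, _ = cv2.calibrateCamera(obj, img, image_size, None, None)
    return rms, K, dist


def stereo_calibrate_pair(object_pts, pts_a, pts_b, Ka, da, Kb, db, image_size):
    """Extrinsics (R, T) between two adjacent, already calibrated ommatidia."""
    obj = [np.asarray(p, dtype=np.float32) for p in object_pts]
    a = [np.asarray(p, dtype=np.float32) for p in pts_a]
    b = [np.asarray(p, dtype=np.float32) for p in pts_b]
    # keep the per-ommatidium intrinsics fixed, only solve for R and T
    _, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
        obj, a, b, Ka, da, Kb, db, image_size, flags=cv2.CALIB_FIX_INTRINSIC)
    return R, T
```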


Fig. 2. The arrangement and numbering of sub-images.


2.3 3D measurement algorithm based on BCCEC

3D imaging means that a complete depth map can be obtained with the compound-eye camera. To achieve this, the spatial coordinates of the pixels sampled during image reconstruction need to be calculated and their Z components extracted. The details of the 3D imaging process with the compound-eye camera are described below.

In order to perform depth measurement, at least one pair of ommatidia with overlapping FOVs must be used. For our BCCEC, there are two basic pairs of neighboring ommatidia, marked with red and green arrows respectively in Fig. 3(a). The angle formed by the optical axes of the neighboring ommatidia differs between the two pairs: 7° for the red pair and 12.13° for the green pair. Therefore, there are two critical object distances corresponding to the distances at which the red or green pair of ommatidia start to overlap. According to the schematic diagram of neighboring overlapped ommatidia shown in Fig. 3(b), the critical object distance dc can be calculated by the following equation,

$${d_c} = \frac{{d \cdot \sin \frac{\beta }{2}}}{{\tan \alpha }} = \frac{{[R \cdot \cos \frac{\beta }{2} + R \cdot \sin \frac{\beta }{2} \cdot \cos (\alpha - \frac{\beta }{2})/\sin (\alpha - \frac{\beta }{2})] \cdot \sin \frac{\beta }{2}}}{{\tan \alpha }}$$
where R = 68 mm is the radius of the curved shell, β is the angle formed by the optical axes of the red or green pair of ommatidia (7° or 12.13°), and α = 7° is half of the FOV of each ommatidium. As a result, the two critical object distances were calculated to be 68 mm and 437 mm, respectively. This means that depth measurement can only be performed when the working distance is larger than 68 mm, and up to 4 ommatidia can be used in the measurement process when the working distance is larger than 437 mm.
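For reference, the equation above can be evaluated numerically; the short sketch below (illustrative code, not part of the measurement software) reproduces the two critical object distances from the parameter values quoted in the text.

```python
# Sketch: numerical check of the critical object distance formula above,
# using the parameter values quoted in the text.
import math


def critical_distance(R_mm, beta_deg, alpha_deg):
    """Object distance at which the FOVs of two neighboring ommatidia start to overlap."""
    b = math.radians(beta_deg) / 2   # half the angle between the two optical axes
    a = math.radians(alpha_deg)      # half the FOV of a single ommatidium
    d = R_mm * math.cos(b) + R_mm * math.sin(b) * math.cos(a - b) / math.sin(a - b)
    return d * math.sin(b) / math.tan(a)


print(critical_distance(68.0, 7.0, 7.0))     # red pair:   ~68 mm
print(critical_distance(68.0, 12.13, 7.0))   # green pair: ~437 mm
```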


Fig. 3. Schematic diagram for two basic pairs of neighboring ommatidia marked with different colored arrows (a) and side view of the overlapped ommatidia (b).


Figure 4(a) shows the sub-images formed by the 4 ommatidia with overlapping FOVs. These ommatidia are labeled nk with k = 1,…, 4 and their combination is defined as a measurement unit. Since the compound-eye camera contains different measurement units, the measurement unit to which a sampled pixel belongs needs to be identified according to the pixel coordinates.


Fig. 4. The schematic relationship for sub-images formed by 4 ommatidia with overlapping FOVs (a), the schematic relationship for a cluster of sub-images formed by the corresponding ommatidia (b).


In a measurement unit, the overlapping FOV is located at the edges of n1 and n4, and close to the centers of n2 and n3. Since the sampled pixels are all located near the centers of the sub-images, it is first assumed that the sampled pixels are in n2. After that, the other three ommatidia can be determined based on the pixel distances.

Figure 4(b) shows the schematic relationship for a cluster of sub-images formed by the corresponding ommatidia. Point O is the center of the central sub-image and points O1 - O6 are the centers of the six surrounding sub-images. Assume that the coordinate of the sampled pixel in the central sub-image is (x, y). Because the sub-images are all inverted, the pixel coordinates need to be transformed as follows.

$$x^{\prime} = 2{x_0} - x$$
$$y^{\prime} = 2{y_0} - y$$
where (x0, y0) is the pixel coordinate of point O, and (x′, y′) is the coordinate of the sampled pixel in the flipped sub-image. The ommatidia adjacent to the central ommatidium are determined according to the look-up table, and the distance between the sampled pixel and the centers of the sub-images of the adjacent ommatidia can be calculated by,
$${d_j} = \sqrt {{{({x^{\prime} - {x_j}} )}^2} + {{({y^{\prime} - {y_j}} )}^2}}$$
where (xj, yj) with j = 1,…, 6 are the coordinates of points Oj. The ommatidium whose sub-image center gives the minimum dj is the closest to point P. This is illustrated in Fig. 4(b) by color: the central sub-image is divided into areas of different colors, and the sub-image closest to an area is drawn in the same color as that area. In the example shown, the ommatidium closest to point P is n3. According to the look-up table, among the six ommatidia adjacent to n2, the two ommatidia also adjacent to n3 are n1 and n4. At this point, the measurement unit to which the sampled pixel belongs has been determined.
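This addressing procedure can be summarized by the following sketch, assuming the sub-image centers and the manually built neighbor look-up table from the calibration stage are available; the helper names are hypothetical.

```python
# Sketch of addressing the measurement unit for a sampled pixel. The sub-image
# centers and the manually built neighbor look-up table come from the calibration
# stage; all names here are illustrative.
import numpy as np


def flip_pixel(p, center):
    """Undo the sub-image inversion: (x', y') = (2*x0 - x, 2*y0 - y)."""
    return 2 * np.asarray(center, dtype=float) - np.asarray(p, dtype=float)


def find_measurement_unit(pixel, n2, centers, neighbor_table):
    """Given a sampled pixel assumed to lie in ommatidium n2, return the unit (n1..n4)."""
    p = flip_pixel(pixel, centers[n2])
    neighbors = neighbor_table[n2]                    # the six surrounding ommatidia
    dists = [np.linalg.norm(p - np.asarray(centers[j], dtype=float)) for j in neighbors]
    n3 = neighbors[int(np.argmin(dists))]             # closest neighboring sub-image
    # n1 and n4 are the two ommatidia adjacent to both n2 and n3
    n1, n4 = [k for k in neighbor_table[n3] if k in neighbors][:2]
    return n1, n2, n3, n4
```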

Next, the spatial coordinates corresponding to the sampled pixels are measured by the measurement units. If the pixel coordinates of the same point in all four sub-images were known, bundle adjustment [22] could be used for the measurement. However, only the coordinates in n2 are known at this stage. To avoid additional calculations, three binocular pairs composed of n2 and each of the other three ommatidia are used to perform the measurements, and the sub-camera coordinate system of n2 is used as the reference coordinate system for triangulation during the measurement process. Then, the multiple measurements are averaged, as shown below.

$$P = \frac{1}{n}\sum\nolimits_{i = 1}^{n} {{P_i}}$$
where Pi with i = 1,…, n are the multiple measurements, P is the average measurement and n is the number of binocular pairs, which is 2 or 3 in this system. Since the compound-eye camera contains different measurement units, the measurement results need to be transformed into the same coordinate system, as shown below.
$$P^{\prime} = R^{\prime}P + T^{\prime}$$
where R′ is the rotation matrix between the coordinate systems, and T′ is the translation vector between them. Finally, 3D imaging is realized by calculating the spatial coordinates of all pixels sampled in the compound-eye image reconstruction process.
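A minimal sketch of the per-unit measurement step is given below. It assumes the pixel coordinates have already been matched across the sub-images and undistorted, and it uses OpenCV's triangulatePoints as a stand-in for the triangulation used in this work; all names are illustrative.

```python
# Minimal sketch of the per-unit measurement step (illustrative names). Pixel
# coordinates are assumed to be already matched across sub-images and undistorted;
# cv2.triangulatePoints stands in for the triangulation used in this work.
import numpy as np
import cv2


def triangulate_pair(K_ref, K_other, R, T, pt_ref, pt_other):
    """Triangulate one point in the n2 (reference) sub-camera coordinate system."""
    P_ref = K_ref @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P_oth = K_other @ np.hstack([R, np.asarray(T, dtype=float).reshape(3, 1)])
    X = cv2.triangulatePoints(P_ref, P_oth,
                              np.asarray(pt_ref, dtype=float).reshape(2, 1),
                              np.asarray(pt_other, dtype=float).reshape(2, 1))
    return (X[:3] / X[3]).ravel()


def measure_point(pair_args):
    """Average the 2 or 3 binocular triangulations of one measurement unit."""
    return np.mean([triangulate_pair(*args) for args in pair_args], axis=0)


def to_reference_frame(P, R_prime, T_prime):
    """Transform a measurement into the common reference coordinate system."""
    return R_prime @ P + T_prime
```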

3. Experiments and results

3.1 Calibration experiment

Considering the particularity of the calibration of the compound-eye camera, a CALTag calibration board as large as an A0 sheet (841 mm × 1189 mm) was used, so that the board could be imaged by as many ommatidia as possible at the same time. As shown in Fig. 5(a), 15 × 22 different markers are printed on the calibration board and each marker has a size of 50.8 mm × 50.8 mm. Although the calibration board is large, only about 10 sub-images containing at least 50 markers can be obtained in a single exposure, as shown in Fig. 5(b). Therefore, the relative position between the calibration board and the compound-eye camera was finely tuned according to the designed working distance of the camera and the size and number of the markers, so that sub-images with no less than 50 markers were captured by all ommatidia. In order to improve the calibration accuracy, the imaging process was repeated 15 times with pose adjustment of the calibration board so that at least 15 valid sub-images were obtained for every ommatidium. As a result, a total of 200 images were taken by the compound-eye camera during the calibration process.


Fig. 5. The CALTag calibration board (a) and the raw image obtained by a single exposure of the compound-eye camera during the calibration process (b).


Next, all sub-images were extracted from the raw images so that each ommatidium has 200 corresponding sub-images, numbered according to the exposure sequence. Then, all sub-images were flipped and subjected to CALTag corner detection to obtain the pixel coordinates of the corner points. At the same time, the spatial coordinates of the corner points were determined according to the coding information of the recognized markers.

Afterwards, all ommatidia were traversed and calibrated. During the calibration, the tangential distortion cannot be ignored, because the optical axes of the ommatidia are not exactly perpendicular to the detector surface. According to the object-image correspondences established in different exposures, the intrinsic parameters of 127 ommatidia were obtained. Then, each ommatidium was traversed according to the numbering rule shown in Fig. 2. For each traversed ommatidium, only the left neighboring ommatidia (those with a smaller x coordinate) were stereo calibrated, so that the same pair is not calibrated twice. During the calibration, appropriate exposures were selected from the 200 exposures depending on whether their sub-images contain enough corner points, and stereo calibration was performed. The extrinsic parameters were then stored in a structure array indexed by the ommatidium number for the later 3D measurement procedure. Finally, the extrinsic parameters between 342 pairs of adjacent ommatidia were obtained.
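The traversal that stereo-calibrates each adjacent pair exactly once can be sketched as follows, using the neighbor look-up table and the sub-image centers from Section 2.2; the data structures shown are illustrative.

```python
# Sketch of the traversal that stereo-calibrates each adjacent pair exactly once,
# using the neighbor look-up table and the sub-image centers (illustrative structures).
def adjacent_pairs(neighbor_table, centers):
    """Return every adjacent pair (i, j) once, taking only left-hand neighbors of i."""
    pairs = []
    for i, neighbors in neighbor_table.items():
        for j in neighbors:
            if centers[j][0] < centers[i][0]:   # only neighbors with a smaller x coordinate
                pairs.append((i, j))
    return pairs
```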

Figure 6 shows the reprojection errors calculated from the calibration results of the ommatidia located at different circles. Among them, the reprojection errors of inner ommatidia are mostly less than 0.25 pixels, while the reprojection errors of outer ommatidia are concentrated around 0.3 pixels. This is thought to be caused by poorer imaging quality of ommatidia located at the edge. Although the calibration accuracy was slightly different, all ommatidia were effectively calibrated.


Fig. 6. The reprojection errors calculated from the calibration results.


3.2 3D measurement results and discussions

After the calibration, a 3D measurement experiment using the compound-eye camera was performed. In the experiment, the compound-eye camera was fixed first, and the position of the laser rangefinder was adjusted so that the laser spot was projected onto the center of the ommatidium. After that, the laser rangefinder was fixed as well, and the linear distance l0 between the rangefinder and the ommatidium was recorded, as shown in Fig. 7(a). Then, the laser was blocked by a piece of black paper, the linear distance l1 between the rangefinder and the laser spot was recorded by the rangefinder, and the image of the laser spot was captured by the compound-eye camera, as shown in Fig. 7(b). Next, the centroid coordinate of the spot was extracted and the spatial coordinate (x, y, z) was measured. The measurement error can be calculated by,

$$p = \left|{\frac{{\sqrt {{x^2} + {y^2} + {z^2}} }}{{{l_0} - {l_1}}} - 1} \right|\times 100\%$$
The measurement range of the laser rangefinder was 40 m, and its measurement accuracy was ±2 mm. Starting from 0.8 m, the black paper was moved away from the compound-eye camera in 0.4 m increments up to a maximum distance of 4.0 m, and the experimental data were recorded repeatedly. In order to verify the measurement performance of the compound-eye camera over the whole FOV, the relative position between the laser rangefinder and the compound-eye camera was tuned so that the laser spot could be imaged by the central ommatidium 127, the middle ommatidium 25 and the marginal ommatidium 71, respectively, as shown in Fig. 8.
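For clarity, the relative-error metric defined above can be expressed as a short computation; the sketch below uses illustrative values only.

```python
# Sketch of the relative-error metric defined above (values are illustrative).
import math


def relative_error_percent(xyz_mm, l0_mm, l1_mm):
    """Compare the measured range with the rangefinder reference distance l0 - l1."""
    measured = math.sqrt(sum(c * c for c in xyz_mm))
    return abs(measured / (l0_mm - l1_mm) - 1.0) * 100.0


# e.g. relative_error_percent((120.0, -45.0, 3180.0), 3600.0, 400.0)
```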


Fig. 7. The schematic diagram of the measurement process (a) and an image of the spot (b).


Fig. 8. The sketch of 3D measurement experiment by using the compound-eye camera.


Figure 9 shows the experimental results of five repeated measurements at different FOVs and different object distances. The results show that the relative measurement errors in the different FOVs gradually increase with distance. This is because, in a measurement process based on the parallax principle, the measurement error is positively related to the object distance [23]. In order to quantitatively describe the repeated measurement results, the limit errors at different FOVs and different object distances were calculated, as shown in Fig. 10, where blue represents the average results of the repeated measurements and red represents the limit errors. As the distance increases, the measurement error and limit error in the different FOVs all gradually increase. When the object distance is 3.2 m, the measurement error of the central FOV is (4.48 ± 0.28) %, that of the middle FOV is (4.15 ± 0.21) %, and that of the marginal FOV is (4.11 ± 0.27) %. Although the measurement accuracy differs slightly between FOVs, the results show that the working range for 3D measurement can cover the whole FOV and the working distance can reach at least 3.2 m with a measurement error of no more than 5%.


Fig. 9. The experimental results of five repeated measurements for the central FOV (a), the middle FOV (b) and the marginal FOV (c).


Fig. 10. The limit errors of repeated measurements for the central FOV (a), the middle FOV (b) and the marginal FOV (c).


It is worth pointing out that 3D measurement by the compound-eye camera here in fact means that multiple binocular pairs measure the same target at the same time. As a result, the measurement error should be smaller than that of the traditional binocular method. To prove this point, the average measurements and the binocular measurements were compared, as shown in Fig. 11, where the average measurements are the average values of the three sets of binocular measurements.


Fig. 11. The average measurements vs. the binocular measurements for the central FOV (a), the middle FOV (b) and the marginal FOV (c).


In the different FOVs, the errors of both the average measurements and the binocular measurements increase with distance. When the object distance is small, the accuracy of the average measurement is close to that of the binocular measurement, but at larger object distances the accuracy of the average measurement is significantly higher. This is because averaging compensates for random errors and thereby improves the measurement accuracy; since the binocular measurement error is already small at close distances, the compensation effect is not obvious there. In short, 3D measurement by the compound-eye camera is more accurate and more robust than the traditional binocular method. It should also be noted that the measurement distance is mainly limited by the structure of the compound-eye camera; averaging only provides a greater tolerance for measurement errors and cannot extend the measurement distance.

3.3 3D imaging results and discussions

In addition, a 3D imaging experiment using the compound-eye camera was also performed. In the experiment, the plane calibration board was placed obliquely at a distance of about 1 m and photographed with the compound-eye camera. The experimental result is shown in Fig. 12.


Fig. 12. The raw image (a), the reconstructed image (b), and the reconstructed depth map (c).


Since the object appears only in the central 19 sub-images and the remaining sub-images are blank and therefore cannot be used for 3D measurement in this scene, only the part of the raw image containing these 19 sub-images and the whole object is shown in Fig. 12(a); the reconstructed image is shown in Fig. 12(b). The spatial coordinates of the pixels sampled during image reconstruction were calculated based on the 3D imaging algorithm described above. The reconstructed image was then rendered according to the depth information, as shown in Fig. 12(c). The color of the depth map changes gradually from near to far, which truly reflects the depth information of the scene. Since the spatial coordinates of the sampled pixels were measured by different measurement units and each unit introduces a different measurement error, the depth map inevitably has a blocky appearance.
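As an illustration, rendering such a depth map from the per-pixel Z components can be done with a standard colormap; the sketch below is a generic example, not the rendering code used for Fig. 12(c).

```python
# Generic sketch of rendering a depth map from the Z components of the sampled
# pixels (a 2D array depth_mm); not the rendering code used for Fig. 12(c).
import matplotlib.pyplot as plt


def render_depth_map(depth_mm):
    """Color-code the per-pixel depth from near to far and display it."""
    plt.imshow(depth_mm, cmap="viridis")        # colormap choice is arbitrary
    plt.colorbar(label="depth (mm)")
    plt.axis("off")
    plt.show()
```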

4. Conclusions

In summary, 3D measurement by the compound-eye camera has been demonstrated in this work, and our results show that a working distance of up to 3.2 m has been realized. Moreover, the working range covers a large FOV of 98°. A batch calibration method based on the CALTag calibration board was designed for the compound-eye camera, which is practical and flexible for curved compound eyes. Next, the 3D measurement principle of the compound-eye camera was given and a 3D measurement algorithm was developed, which can be used to reconstruct a complete depth map. Finally, the experimental results show that the working range for 3D measurement covers the whole FOV of 98° and the working distance can reach at least 3.2 m with a measurement error of no more than 5%. The 3D measurement method with the compound-eye camera demonstrated in this work has great potential in areas such as unmanned aerial vehicle (UAV) obstacle avoidance and robot navigation.

Funding

National Natural Science Foundation of China (61975231).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. K. Venkataraman, D. Lelescu, J. Duparre, A. McMahon, G. Molina, P. Chatterjee, R. Mullis, and S. Nayar, “PiCam: An Ultra-Thin High Performance Monolithic Camera Array,” ACM Trans. Graph. 32(6), 1–13 (2013). [CrossRef]  

2. B. Dai, L. Zhang, C. Zhao, H. Bachman, R. Becker, J. Mai, Z. Jiao, W. Li, L. Zheng, X. Wan, T. J. Huang, S. Zhuang, and D. Zhang, “Biomimetic apposition compound eye fabricated using microfluidic-assisted 3D printing,” Nat. Commun. 12(1), 6458 (2021). [CrossRef]  

3. D. Floreano, R. Pericet-Camara, S. Viollet, F. Ruffier, A. Brueckner, R. Leitel, W. Buss, M. Menouni, F. Expert, R. Juston, M. K. Dobrzynski, G. L’Eplattenier, F. Recktenwald, H. A. Mallot, and N. Franceschini, “Miniature curved artificial compound eyes,” Proc. Natl. Acad. Sci. U. S. A. 110(23), 9267–9272 (2013). [CrossRef]  

4. H. Zhang, L. Li, D. McCray, S. Scheiding, N. Naples, A. Gebhardt, S. Risse, R. Eberhardt, A. Tünnermann, and A. Yi, “Development of a low cost high precision three-layer 3D artificial compound eye,” Opt. Express 21(19), 22232–22245 (2013). [CrossRef]  

5. C. Shi, Y. Wang, C. Liu, T. Wang, H. Zhang, W. Liao, Z. Xu, and W. Yu, “SCECam: a spherical compound eye camera for fast location and recognition of objects at a large field of view,” Opt. Express 25(26), 32333–32345 (2017). [CrossRef]  

6. J. Tanida and K. Yamada, “TOMBO: Thin observation module by bound optics,” in The 15th Annual Meeting of the IEEE Lasers and Electro-Optics Society, 233–234 (2002).

7. Y. Kitamura, R. Shogenji, K. Yamada, S. Miyatake, M. Miyamoto, T. Morimoto, Y. Masaki, N. Kondou, D. Miyazaki, J. Tanida, and Y. Ichioka, “Reconstruction of a high-resolution image on a compound-eye image-capturing system,” Appl. Opt. 43(8), 1719–1727 (2004). [CrossRef]  

8. R. Horisaki, S. Irie, Y. Ogura, and J. Tanida, “Three-dimensional information acquisition using a compound imaging system,” Opt. Rev. 14(5), 347–350 (2007). [CrossRef]  

9. Y. M. Song, Y. Xie, V. Malyarchuk, J. Xiao, I. Jung, K.-J. Choi, Z. Liu, H. Park, C. Lu, R.-H. Kim, R. Li, K. B. Crozier, Y. Huang, and J. A. Rogers, “Digital cameras with designs inspired by the arthropod eye,” Nature 497(7447), 95–99 (2013). [CrossRef]  

10. L. Li and A. Y. Yi, “Development of a 3D artificial compound eye,” Opt. Express 18(17), 18125–18137 (2010). [CrossRef]  

11. Y. Wang, C. Shi, C. Liu, X. Yu, H. Xu, T. Wang, Y. Qiao, and W. Yu, “Fabrication and characterization of a polymeric curved compound eye,” J. Micromech. Microeng. 29(5), 055008 (2019). [CrossRef]  

12. W.-K. Kuo, G.-F. Kuo, S.-Y. Lin, and H. H. Yu, “Fabrication and characterization of artificial miniaturized insect compound eyes for imaging,” Bioinspiration Biomimetics 10(5), 056010 (2015). [CrossRef]  

13. M. Ma, F. Guo, Z. Cao, and K. Wang, “Development of an artificial compound eye system for three-dimensional object detection,” Appl. Opt. 53(6), 1166–1172 (2014). [CrossRef]  

14. H. Jian, J. He, X. Jin, X. Chen, and K. Wang, “Automatic geometric calibration and three-dimensional detecting with an artificial compound eye,” Appl. Opt. 56(5), 1296–1301 (2017). [CrossRef]  

15. Y. Zheng, L. Song, J. Huang, H. Zhang, and F. Fang, “Detection of the three-dimensional trajectory of an object based on a curved bionic compound eye,” Opt. Lett. 44(17), 4143–4146 (2019). [CrossRef]  

16. R. Krishnasamy, W. Wong, E. Shen, S. Pepic, R. Hornsey, and P.J. Thomas, “High precision target tracking with a compound-eye image sensor,” in Canadian Conference on Electrical and Computer Engineering, 4, 2319–2323 (2004).

17. M. Ma, H. Li, X. Gao, W. Si, H. Deng, J. Zhang, X. Zhong, and K. Wang, “Target orientation detection based on a neural network with a bionic bee-like compound eye,” Opt. Express 28(8), 10794–10805 (2020). [CrossRef]  

18. H. Xu, Y. Zhang, D. Wu, G. Zhang, Z. Wang, X. Feng, B. Hu, and W. Yu, “Biomimetic curved compound-eye camera with a high resolution for the detection of distant moving objects,” Opt. Lett. 45(24), 6863–6866 (2020). [CrossRef]  

19. Z. Zhang, “A Flexible New Technique for Camera Calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).

20. B. Atcheson, F. Heide, and W. Heidrich, “CALTag: High Precision Fiducial Markers for Camera Calibration,” in Proceedings of the Vision, Modeling, and Visualization Workshop, 41–48 (2010).

21. J. Y. Bouguet, “Camera Calibration Toolbox for Matlab,” CaltechDATA: Version 1.0, 4 May 2022, https://doi.org/10.22002/D1.20164.

22. B. Triggs, P. Mclauchlan, R. Hartley, and A. Fitzgibbon, “Bundle Adjustment–A Modern Synthesis,” in International workshop on vision algorithms, 298–372 (1999).

23. T. Luhmann, S. Robson, S. Kyle, and J. Boehm, Close-Range Photogrammetry and 3D Imaging (De Gruyter, 2014), pp. 315–319.
