## Abstract

Stereo cameras have been widely used for three-dimensional (3D) photogrammetry, and stereo calibration is a crucial process to estimate the intrinsic and extrinsic parameters. This paper proposes a stereo calibration method with an absolute phase target using horizontal and vertical phase-shifting fringes. The one-to-one mapping from world points to image points can be recovered by referring to the absolute phase and then used to calibrate the stereo cameras. Compared with traditional methods that only use feature points within the overlapping field-of-view (FOV), the proposed method can use all feature points within both the overlapping and non-overlapping FOVs. In addition, since phase is more robust against camera defocusing than intensity, the target images can be captured regardless of the depth-of-field (DOF). With the advantages of whole-field capability and defocusing tolerability, the target placement becomes very flexible. Both simulation and experimental results demonstrate the robustness and accuracy of the proposed method.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction

With their non-contact, fast-speed, high-accuracy and high-resolution capabilities, optical vision systems play an important role in 3D measurement [1]. Generally, 3D imaging methods include optical interferometry, time-of-flight, stereo vision and structured light [2]. Among these, stereo vision has been widely used in many scenarios such as automatic production, robot navigation and motion tracking. Camera calibration is a critical process to estimate the intrinsic and extrinsic parameters of stereo cameras, which relate the 2D image to the 3D world. Consequently, many studies have focused on stereo calibration, which usually involves two major parts: intrinsic calibration of a single camera and extrinsic calibration of the stereo cameras.

Existing intrinsic calibration methods are usually based on different types of targets such as 3D [3], 2D [4] and 1D [5] targets. Among these, 3D targets can achieve accurate calibration results, but they are difficult to fabricate with high precision and are bulky. 2D targets such as chessboards [6] and circles [7] have been the most popular due to their flexibility and accuracy. Zhang’s method [4] is a well-known 2D target based technique which makes camera calibration an easy task. 1D targets usually have a small number of collinear feature points and need to obey certain motion constraints. It should be noted that all these targets are marked with abundant distinct features, which are used as input data for the calibration process. Therefore, feature detection accuracy directly affects the calibration results. Some self-calibration methods [8] with no specific targets provide more flexibility, but they require many feature correspondences and complex computations. In recent years, active phase targets with several different patterns have been proposed to enhance feature detection accuracy, such as sinusoidal phase-shifting fringes [9–14], circular phase-shifting fringes [15,16], and crossed fringes [17,18]. Compared with traditional passive targets, active phase targets have two main advantages [13]: high-accuracy feature detection and robustness against defocusing, which enable the accurate calibration of an out-of-focus camera.

Similarly, extrinsic calibration methods have been developed for stereo cameras with different targets such as 3D [19], 2D [20], 1D [21] and sphere [22] targets. Some researchers also calibrated stereo cameras using a spot laser [23] or a line laser [24]; however, only a few feature points are available per frame, so a large number of images is required. No matter which target is used, it must be placed within the DOF of the stereo cameras so that clear target images can be recorded. Moreover, these targets must also be placed within the overlapping FOV so that they can be simultaneously captured by both cameras, which makes target placement restrictive and less flexible. Therefore, these traditional targets cannot be used in applications where there is no overlapping FOV among multiple cameras. To this end, other methods have been developed to calibrate non-overlapping cameras [25]. Liu et al. [26] used a spot laser to connect non-overlapping cameras by projecting a laser beam across their FOVs. Xie et al. [27] designed a special calibration target consisting of two short 1D bars with equally spaced light spots and one long linking pole. Dong et al. [28] combined multiple cameras using arbitrarily distributed encoded targets. Xu et al. [29] used two flat mirrors to reflect the phase target, so that the cameras could capture the target images. Wei et al. [30] mounted two lasers on a manipulator and projected the line-structured light into the FOVs of multiple cameras. Yang et al. [31] designed an apparatus including two fixed chessboard targets to calibrate non-overlapping cameras. Despite their success, these methods still have limitations, as they depend on either large targets or auxiliary devices along with complex processes.

To overcome the above problems, this paper presents an efficient and convenient stereo calibration method with an absolute phase target. Horizontal and vertical phase-shifting fringes, generated on a planar liquid crystal display (LCD) monitor, are used as the phase target. Two wrapped phase maps are calculated from the target images using the three-step phase-shifting algorithm, and unwrapped into absolute phase maps with the two-frequency phase-shifting algorithm. The mapping between image points and world points is then established by referring to the absolute phase, and stereo calibration is performed based on Zhang’s method. Since phase is robust against camera defocusing, the stereo cameras can capture the target images regardless of the DOF. Moreover, feature points inside both the overlapping and non-overlapping FOVs can be used for stereo calibration. Both simulations and experiments confirm the performance of the proposed method.

## 2. Camera model

#### 2.1 Single camera model

This paper uses the usual pinhole camera model [4]. Let a 3D world point be ${\boldsymbol P} = {[X,Y,Z]^T}$, and its corresponding 2D image point be ${\boldsymbol p} = {[u,v]^T}$. Their homogeneous vectors are denoted by $\tilde{{\boldsymbol P}} = {[X,Y,Z,1]^T}$ and $\tilde{{\boldsymbol p}} = {[u,v,1]^T}$, respectively. The relationship between $\tilde{{\boldsymbol P}}$ and $\tilde{{\boldsymbol p}}$ can be described as:

$$s\tilde{{\boldsymbol p}} = {\boldsymbol K}[{\boldsymbol R}\;\;{\boldsymbol t}]\tilde{{\boldsymbol P}},\quad {\boldsymbol K} = \left[ {\begin{array}{ccc} {{f_u}}&\gamma &{{u_0}}\\ 0&{{f_v}}&{{v_0}}\\ 0&0&1 \end{array}} \right]$$

where *s* is an arbitrary scale factor; ${\boldsymbol K}$ denotes the intrinsic matrix that includes the focal length $[{f_u},{f_v}]$, the principal point $[{u_0},{v_0}]$ and the skew factor *γ*; ${\boldsymbol R}$ and ${\boldsymbol t}$ respectively denote the rotation matrix and translation vector from the world coordinate system to the camera coordinate system, and [${\boldsymbol R}$, ${\boldsymbol t}$] is called the extrinsic matrix. In practice, lens distortion is common and should be taken into consideration. Radial and tangential distortions are sufficient to represent the lens distortion [3], which can be described as:

$$\left\{ \begin{array}{l} {u_d} = u + u({k_1}{r^2} + {k_2}{r^4}) + 2{p_1}uv + {p_2}({r^2} + 2{u^2})\\ {v_d} = v + v({k_1}{r^2} + {k_2}{r^4}) + {p_1}({r^2} + 2{v^2}) + 2{p_2}uv \end{array} \right.$$

where $(u,v)$ and $({u_d},{v_d})$ are the ideal and distorted normalized image coordinates, and ${r^2} = {u^2} + {v^2}$; $[{k_1},{k_2}]$ are the coefficients of radial distortion; $[{p_1},{p_2}]$ are the coefficients of tangential distortion. The intrinsic matrix ${\boldsymbol K}$ and the distortion coefficients $[{k_1},{k_2},{p_1},{p_2}]$ are constant parameters, while the extrinsic matrix [${\boldsymbol R}$, ${\boldsymbol t}$] contains variable parameters that change with the camera pose. These constant and variable parameters can be estimated by single camera calibration.
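The projection pipeline above can be sketched in a few lines of Python. This is an illustrative implementation of the standard pinhole-plus-Brown-distortion model, not the authors' code; all names and numbers below are ours:

```python
import numpy as np

def project_point(P, K, R, t, dist):
    """Project a 3D world point with the pinhole model plus
    radial/tangential (Brown) distortion on normalized coordinates."""
    k1, k2, p1, p2 = dist
    Pc = R @ P + t                        # world -> camera frame
    x, y = Pc[0] / Pc[2], Pc[1] / Pc[2]   # normalized image coordinates
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    # apply the intrinsic matrix K (focal lengths, skew, principal point)
    u = K[0, 0] * xd + K[0, 1] * yd + K[0, 2]
    v = K[1, 1] * yd + K[1, 2]
    return np.array([u, v])

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 500.0])
p = project_point(np.array([10.0, 20.0, 0.0]), K, R, t, (0, 0, 0, 0))
# with zero distortion: u = 800*10/500 + 320 = 336, v = 800*20/500 + 240 = 272
```

Calibration inverts this sketch: given many 2D-3D correspondences, the constant and variable parameters are estimated by minimizing the reprojection error.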

#### 2.2 Stereo camera model

The stereo vision system consisting of two cameras is employed as an example to describe the stereo camera model. As shown in Fig. 1, ${O_w}\hbox{-}{X_w}{Y_w}{Z_w}$, ${O_l}\hbox{-}{X_l}{Y_l}{Z_l}$ and ${O_r}\hbox{-}{X_r}{Y_r}{Z_r}$ denote the world, left camera and right camera coordinate systems, respectively. The relations between the two camera coordinates and the world coordinate can be described as:

$$\left\{ \begin{array}{l} {{\boldsymbol P}_l} = {{\boldsymbol R}_{wl}}{\boldsymbol P} + {{\boldsymbol t}_{wl}}\\ {{\boldsymbol P}_r} = {{\boldsymbol R}_{wr}}{\boldsymbol P} + {{\boldsymbol t}_{wr}} \end{array} \right.$$

where ${\boldsymbol P}$, ${{\boldsymbol P}_l}$ and ${{\boldsymbol P}_r}$ denote the coordinates of the same point in ${O_w}\hbox{-}{X_w}{Y_w}{Z_w}$, ${O_l}\hbox{-}{X_l}{Y_l}{Z_l}$ and ${O_r}\hbox{-}{X_r}{Y_r}{Z_r}$, respectively; ${{\boldsymbol R}_{wl}}$ and ${{\boldsymbol R}_{wr}}$ denote the rotation matrices from the world to the two cameras; ${{\boldsymbol t}_{wl}}$ and ${{\boldsymbol t}_{wr}}$ denote the translation vectors from the world to the two cameras. Combining the two equations, the transformation between the two cameras can be expressed as:

$$\left\{ \begin{array}{l} {{\boldsymbol R}_{lr}} = {{\boldsymbol R}_{wr}}{\boldsymbol R}_{wl}^{ - 1}\\ {{\boldsymbol t}_{lr}} = {{\boldsymbol t}_{wr}} - {{\boldsymbol R}_{wr}}{\boldsymbol R}_{wl}^{ - 1}{{\boldsymbol t}_{wl}} \end{array} \right.$$

where ${{\boldsymbol R}_{lr}}$ and ${{\boldsymbol t}_{lr}}$ denote the rotation matrix and translation vector from the left camera to the right camera.

## 3. Principle

#### 3.1 Phase target

Phase-shifting algorithms are widely adopted in optical metrology because of their high accuracy and robustness [34]. In general, the fringe patterns are modulated by the shape information of the measured objects, and the shapes can be accurately retrieved from the carried phase. Similarly, this paper encodes feature points into two phase maps, which are carried by horizontal and vertical phase-shifting fringes. In the following, the three-step phase-shifting algorithm, which requires the fewest fringe patterns, is used to explain the proposed method. The three fringe images can be mathematically described as:

$$\left\{ \begin{array}{l} {I_1}(x,y) = I^{\prime}(x,y) + I^{\prime\prime}(x,y)\cos [\phi (x,y) - 2\pi /3]\\ {I_2}(x,y) = I^{\prime}(x,y) + I^{\prime\prime}(x,y)\cos [\phi (x,y)]\\ {I_3}(x,y) = I^{\prime}(x,y) + I^{\prime\prime}(x,y)\cos [\phi (x,y) + 2\pi /3] \end{array} \right.$$

where $I^{\prime}(x,y)$ denotes the average intensity; $I^{\prime\prime}(x,y)$ denotes the intensity modulation; and $\phi (x,y)$ denotes the wrapped phase to be solved for. Solving the three equations leads to:

$$\phi (x,y) = {\tan ^{ - 1}}\left[ {\frac{{\sqrt 3 ({I_1} - {I_3})}}{{2{I_2} - {I_1} - {I_3}}}} \right]$$

The wrapped phase $\phi (x,y)$ has $2\pi$ discontinuities, which can be removed by determining the fringe order $k(x,y)$ with the two-frequency phase-shifting algorithm [35]:

$$k(x,y) = \textrm{Round}\left[ {\frac{{(f/{f_r}){\phi _r}(x,y) - \phi (x,y)}}{{2\pi }}} \right]$$

where $f$ denotes the frequency of $\phi (x,y)$, and ${f_r}$ denotes the frequency of the reference phase ${\phi _r}(x,y)$. Note that the fringe period of ${\phi _r}(x,y)$ should cover the entire fringe range. Once $k(x,y)$ is determined, $\phi (x,y)$ can be unwrapped to the absolute phase $\Phi (x,y)$ as:

$$\Phi (x,y) = \phi (x,y) + 2\pi k(x,y)$$
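The phase retrieval and unwrapping steps above can be sketched with a short 1D illustration. The ramp phase, field size and function names below are ours, not the paper's:

```python
import numpy as np

def wrapped_phase(I1, I2, I3):
    """Three-step phase-shifting (shifts -2*pi/3, 0, +2*pi/3):
    wrapped phase mapped into [0, 2*pi)."""
    phi = np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)
    return np.mod(phi, 2.0 * np.pi)

def unwrap_two_freq(phi, phi_r, f, f_r=1.0):
    """Two-frequency temporal unwrapping: the low-frequency phase phi_r
    scales up to predict the fringe order k of the high-frequency phase."""
    k = np.round(((f / f_r) * phi_r - phi) / (2.0 * np.pi))
    return phi + 2.0 * np.pi * k

# 1D demo: a ramp phase with f = 8 fringe periods across the field
x = np.linspace(0.0, 1.0, 500, endpoint=False)
Phi_true = 2.0 * np.pi * 8.0 * x
I1, I2, I3 = (0.5 + 0.5 * np.cos(Phi_true + d)
              for d in (-2.0 * np.pi / 3.0, 0.0, 2.0 * np.pi / 3.0))
phi = wrapped_phase(I1, I2, I3)
Phi = unwrap_two_freq(phi, np.mod(2.0 * np.pi * x, 2.0 * np.pi), f=8.0)
```

Here the reference phase spans exactly one period over the field, so a single rounding step recovers the fringe order everywhere.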

#### 3.2 Stereo calibration

Figure 2 illustrates the schematic diagram of stereo calibration. Phase-shifting fringes are sequentially displayed on the LCD monitor, which serves as the active phase target. The origin ${O_w}$ of the world coordinate system is located at the top-left point of the LCD, and the ${Z_w}$-axis is perpendicular to the LCD plane, so all points on the LCD plane have $Z = 0$. The feature points of the two cameras are mapped into this world coordinate system. In contrast to traditional stereo calibration methods that extract feature points only from the overlapping FOV, the proposed method can make use of all the feature points that the two cameras capture. More concretely, feature points inside the FOV of the left camera are used for left camera calibration, and feature points inside the FOV of the right camera are used for right camera calibration. Using the world coordinate system as an intermediary, the extrinsic parameters of the two cameras can be calculated. Therefore, the key question is how to define and detect these feature points, which is introduced in detail below.

Figure 3 shows the framework of feature detection, and the procedures are summarized as:

- (1) *Calibration image acquisition*: Horizontal and vertical phase-shifting fringes are sequentially displayed on an LCD monitor. Meanwhile, the stereo cameras capture their images from several different viewpoints.
- (2) *Absolute phase calculation*: Based on the three-step phase-shifting algorithm, two wrapped phase maps ${\phi _u}$ and ${\phi _v}$, in the range of [0, 2π], are calculated from the images of the phase-shifting fringes. To uniquely define each pixel, ${\phi _u}$ and ${\phi _v}$ are unwrapped to recover two absolute phase maps ${\Phi _u}$ and ${\Phi _v}$ using the two-frequency phase-shifting algorithm.
- (3) *Feature point detection*: Without loss of generality, this paper selects the pixels with ${\Phi _u} = 2\pi m$ and ${\Phi _v} = 2\pi n$ as the feature points, where *m* and *n* are integers. Firstly, candidate pixels are obtained if they satisfy $|{{\Phi _u} - 2\pi m} |< \delta \;\& \;|{{\Phi _v} - 2\pi n} |< \delta$, where *δ* denotes a small threshold. Secondly, the candidate with the minimum value of $|{{\Phi _u} - 2\pi m} |+ |{{\Phi _v} - 2\pi n} |$ is regarded as the rough location of the feature point. Finally, the detection precision can be enhanced by a windowed least-squares linear-fitting algorithm based on the following relations:$$\left\{ \begin{array}{l} u = {a_1}{\Phi _u} + {b_1}{\Phi _v} + {c_1}\\ v = {a_2}{\Phi _u} + {b_2}{\Phi _v} + {c_2} \end{array} \right.$$where ${a_1},{b_1},{c_1},{a_2},{b_2},{c_2}$ are fitting coefficients. Setting ${\Phi _u} = 2\pi m$ and ${\Phi _v} = 2\pi n$ then yields the feature points with sub-pixel precision.
- (4) *Feature point mapping*: Generally, the pixel pitch of the LCD monitor is uniform and known; let *q* denote the pixel pitch. For each feature point, the carried phases can be converted into world coordinates as:$$\left[ \begin{array}{l} X\\ Y \end{array} \right] = \frac{{qP}}{{2\pi }}\left[ \begin{array}{l} {\Phi _u}\\ {\Phi _v} \end{array} \right]$$where *P* denotes the number of pixels per fringe period on the LCD monitor. Once the one-to-one mapping between the image coordinates and the world coordinates of the feature points is obtained, the intrinsic and extrinsic parameters of the stereo cameras can be calibrated.
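Step (3) can be sketched as follows. This is a simplified illustration (a global search stands in for the threshold-based candidate step, and the feature is assumed to lie away from the image borders); the names are ours, not the paper's:

```python
import numpy as np

def subpixel_feature(Phi_u, Phi_v, m, n, win=5):
    """Locate the feature point with Phi_u = 2*pi*m, Phi_v = 2*pi*n:
    rough integer-pixel search, then a windowed least-squares linear fit
    of (u, v) against the two absolute phases."""
    tu, tv = 2.0 * np.pi * m, 2.0 * np.pi * n
    # rough location: pixel minimizing |Phi_u - tu| + |Phi_v - tv|
    cost = np.abs(Phi_u - tu) + np.abs(Phi_v - tv)
    r0, c0 = np.unravel_index(np.argmin(cost), cost.shape)
    # windowed linear fit: u = a1*Phi_u + b1*Phi_v + c1 (same for v)
    rows, cols = np.mgrid[r0 - win:r0 + win + 1, c0 - win:c0 + win + 1]
    A = np.column_stack([Phi_u[rows, cols].ravel(),
                         Phi_v[rows, cols].ravel(),
                         np.ones(rows.size)])
    coef_u, *_ = np.linalg.lstsq(A, cols.ravel().astype(float), rcond=None)
    coef_v, *_ = np.linalg.lstsq(A, rows.ravel().astype(float), rcond=None)
    # evaluate the fit at the target phases for sub-pixel coordinates
    return coef_u @ [tu, tv, 1.0], coef_v @ [tu, tv, 1.0]

# Synthetic absolute phase maps: fringe period of 60 pixels in each direction
rows, cols = np.mgrid[0:200, 0:200]
Phi_u = 2.0 * np.pi * cols / 60.0      # increases along u (columns)
Phi_v = 2.0 * np.pi * rows / 60.0      # increases along v (rows)
u, v = subpixel_feature(Phi_u, Phi_v, m=1, n=1)
```

Step (4) then converts each detected feature's phases to world coordinates, $X = qP{\Phi _u}/2\pi$ and $Y = qP{\Phi _v}/2\pi$, producing the 2D-3D correspondences fed to Zhang's method.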

## 4. Simulation

Simulations have been carried out to explore the performance of the proposed method with respect to Gaussian noise and Gaussian blur. Two identical cameras are placed in parallel to constitute the stereo vision system. Their intrinsic and extrinsic parameters are simulated as: the focal lengths are ${f_u} = {f_v} = 800$ pixels; the principal points are ${u_0} = 320$ pixels and ${v_0} = 240$ pixels; the skew factor and distortion coefficients are zero; the rotation angles are ${\theta _x} = {\theta _y} = {\theta _z} = 0^\circ$; and the translation distances are ${t_x} = 500$ pixels, ${t_y} = 0$ and ${t_z} = 0$. The phase target contains two groups of orthogonal phase-shifting fringes, one horizontal and the other vertical, and their periods are both $P = 60$ pixels. The stereo cameras then capture the images of the phase target from three different camera poses, and some simulated images are shown in Fig. 4.

Firstly, Gaussian noise with zero mean and different standard deviations is added to the simulated images. The standard deviation is varied from 0 to 20 with an interval of 1 for different noise levels. These noised images are then used to calibrate the stereo cameras, and the estimated parameters are compared with the simulated parameters. For each standard deviation, we repeat the calibration process 10 times and compute the average of the absolute errors. Figure 5 shows the absolute errors of the camera parameters at different noise levels. As the standard deviation increases, the absolute errors of the intrinsic and extrinsic parameters both trend upward. Even when the standard deviation reaches 20, the absolute errors remain relatively small. These simulation results show the strong anti-noise ability of the proposed method.

As is well known, camera defocusing can be modeled by convolving the clear image with a Gaussian filter [15]. Secondly, we use several Gaussian filters with the same size of 25×25 pixels and different standard deviations to blur the simulated images. The standard deviation is varied from 0 to 20 with an interval of 1 for different blur levels. These blurred images are then used to calibrate the stereo cameras. Figure 6 shows the absolute errors of the camera parameters at different blur levels. It is obvious that the image blurring has very little influence on both the intrinsic and extrinsic parameters. These simulation results show the excellent performance of the proposed method when dealing with image blurring.
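The blur tolerance has a simple explanation: convolving a sinusoidal fringe with a symmetric kernel attenuates its modulation but leaves its phase intact away from the image borders. A short 1D sketch (all names and values are ours) makes this concrete:

```python
import numpy as np

def gaussian_blur_1d(signal, sigma):
    """Simple defocus model: convolve with a normalized Gaussian kernel."""
    radius = int(3 * sigma)
    xk = np.arange(-radius, radius + 1)
    kernel = np.exp(-xk ** 2 / (2.0 * sigma ** 2))
    kernel /= kernel.sum()
    return np.convolve(signal, kernel, mode='same')

x = np.arange(600, dtype=float)
phase_true = 2.0 * np.pi * x / 60.0          # fringe period: 60 pixels
fringes = [0.5 + 0.5 * np.cos(phase_true + d)
           for d in (-2.0 * np.pi / 3.0, 0.0, 2.0 * np.pi / 3.0)]
blurred = [gaussian_blur_1d(I, sigma=10.0) for I in fringes]
# three-step phase retrieval on the heavily blurred fringes
phi = np.arctan2(np.sqrt(3.0) * (blurred[0] - blurred[2]),
                 2.0 * blurred[1] - blurred[0] - blurred[2])
# away from the borders the wrapped phase still matches the truth
mid = slice(100, 500)
err = np.angle(np.exp(1j * (phi[mid] - phase_true[mid])))
```

Even with a standard deviation of 10 pixels, the interior phase error stays at floating-point level, consistent with the simulation results above.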

Furthermore, different camera parameters are also simulated to verify the performance of the proposed method. In this simulation, the focal lengths ${f_u}$ and ${f_v}$ are varied from 600 to 1000 pixels with a step of 50, while the other parameters, including the principal points, rotation angles and translation distances, are kept the same as in the former simulation. The images of the phase target are again captured from three different camera poses and used for stereo calibration. Figure 7 shows the absolute errors of the camera parameters for different focal lengths. As can be seen, the absolute errors stay at a very low level. Similarly, we also tested the effects of varying other parameters such as the principal points, rotation angles and translation distances, and found that the absolute errors are also very low. These simulation results confirm that the proposed method can achieve high accuracy for different camera parameters.

## 5. Experiment

To further verify the proposed method, an experiment platform including two identical cameras (Point Grey Chameleon3) and a tablet (iPad MR7G2CH/A) was built. The two cameras have a resolution of 1280×1024 pixels. The lenses (Kowa LM16JCM) mounted on the cameras have a focal length of 16 mm. The tablet, with a resolution of 2048×1536 pixels and a pixel pitch of 0.098 mm, was used to display the fringe patterns and served as the calibration target. The horizontal and vertical phase-shifting fringes both have a period of 50 pixels and are used as the absolute phase target to encode the feature points. The additional fringes for phase unwrapping have a period of 2000 pixels. Two calibration experiments were then conducted with different configurations of the two cameras.

#### 5.1 Overlapping cameras calibration

In the first experiment, the two cameras comprise a stereo vision system with a certain overlapping FOV. Two groups of fringe images were captured by the stereo cameras from 10 different camera poses. For the first group, the tablet was placed within the DOFs of the two cameras, so in-focus images were captured. Figure 8 shows the in-focus images and their phase maps. The phase distributions of Fig. 8(b) and Fig. 8(f) are close, and so are those of Fig. 8(d) and Fig. 8(h), which indicates that the two cameras have a large overlapping FOV at the current shooting distance. For the second group, the tablet was placed outside the DOFs of the two cameras, so defocused images were captured. Figure 9 shows the defocused images and their phase maps. The phase distributions of Fig. 9(b) and Fig. 9(f) are close, whereas those of Fig. 9(d) and Fig. 9(h) have little overlap, which indicates that the two cameras have a small overlapping FOV at the current shooting distance.

Then calibration was performed with the two groups of fringe images, respectively. Figure 10 shows the target poses in the left camera coordinate system. As can be seen, the shooting distance of the first group was about 30 cm, and that of the second group was about 17 cm. Table 1 shows the calibrated intrinsic parameters, and Table 2 shows the calibrated extrinsic parameters. The intrinsic parameters estimated from the in-focus and defocused images are very close to each other: the differences of the focal lengths are less than 0.2%, and the differences of the principal points are less than 0.9%. The tangential distortions of the two cameras are very small, and retaining the radial distortion coefficients *k*_{1} and *k*_{2} is enough to express the nonlinear distortion. The *k*_{1} and *k*_{2} are relatively small with only slight variation. The extrinsic parameters estimated from the in-focus and defocused images are also very close. These experimental results confirm that camera defocusing has very little influence on the proposed method.

These calibration results are then used for the 3D reconstruction of chessboard corners. The chessboard has a square size of 4.9 mm and 10×10 corners. Ten pairs of chessboard images are captured in focus by the stereo cameras from different camera poses. The corners are extracted by the standard toolbox [33], and their 3D world coordinates are reconstructed using the in-focus and defocused calibration results, separately. Figure 11 shows the reconstruction results of the chessboard corners in the left camera coordinate system. The size of each square can then be measured by computing the spatial distance between adjacent corners. Table 3 shows the mean reconstruction errors of the square size. Clearly, the reconstruction errors using the defocused calibration results are slightly higher than those using the in-focus calibration results, and both are lower than 8 µm. This experiment demonstrates the accuracy of the proposed method.
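The corner reconstruction behind this test can be illustrated with a plain linear (DLT) triangulation; this is a generic sketch, not the toolbox's routine, and the camera matrices and point below are hypothetical:

```python
import numpy as np

def triangulate(P1, P2, p1, p2):
    """Linear (DLT) triangulation of one point from two 3x4 projection
    matrices and its pixel coordinates in both views."""
    A = np.vstack([p1[0] * P1[2] - P1[0],
                   p1[1] * P1[2] - P1[1],
                   p2[0] * P2[2] - P2[0],
                   p2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null vector of A (homogeneous point)
    return X[:3] / X[3]

def proj(P, X):
    """Project a 3D point through a 3x4 projection matrix."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# hypothetical calibrated pair: identical intrinsics, 100-unit baseline
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([np.eye(3), np.array([[-100.0], [0.0], [0.0]])])
X_true = np.array([5.0, 3.0, 400.0])
X_hat = triangulate(P_left, P_right,
                    proj(P_left, X_true), proj(P_right, X_true))
```

With the corners reconstructed this way in the left camera frame, each square size is the Euclidean distance between adjacent corners, compared against the known 4.9 mm.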

#### 5.2 Non-overlapping cameras calibration

In the second experiment, the two cameras were adjusted to have non-overlapping FOVs. Two groups of fringe images were captured to calibrate the two cameras: the first group in focus and the second group defocused. Figure 12 shows the in-focus images and their phase maps. The phase distributions of Fig. 12(b) and Fig. 12(f) are close, but those of Fig. 12(d) and Fig. 12(h) have no overlap, which indicates that the two cameras have non-overlapping FOVs at the current shooting distance. Figure 13 shows the defocused images and their phase maps; similarly, the phase distributions of Fig. 13(b) and Fig. 13(f) are close while those of Fig. 13(d) and Fig. 13(h) have no overlap, confirming the non-overlapping configuration.

Then calibration was performed with the two groups of fringe images, respectively. Table 4 shows the calibrated intrinsic parameters, and Table 5 shows the calibrated extrinsic parameters. The intrinsic parameters estimated from the in-focus and defocused images are very close to each other: the differences of the focal lengths are less than 0.1%, and the differences of the principal points are less than 1.2%. The radial distortion coefficients *k*_{1} and *k*_{2} are relatively small with little variation. The extrinsic parameters estimated from the in-focus and defocused images are also very close. This experiment confirms that the proposed method can be used to calibrate cameras with non-overlapping FOVs.

## 6. Conclusion

This paper presents an efficient and convenient stereo calibration method based on an absolute phase target. The method has the following advantages. Firstly, it is suitable for out-of-focus stereo calibration because phase is robust against camera defocusing. Secondly, feature points are extracted from both the overlapping and non-overlapping FOVs, far more than in traditional methods, so accurate calibration results can be achieved. Thirdly, the target placement is flexible due to the defocusing tolerability and whole-field capability. With these advantages, the proposed method is promising for high-precision stereo measurement applications.

## Funding

National Natural Science Foundation of China (NSFC) (51605130, 61603360); Natural Science Foundation of Hubei Province (2018CFB656); Open Fund of the Key Laboratory for Metallurgical Equipment and Control of Ministry of Education in Wuhan University of Science and Technology (2018B03, 2018B06).

## References

**1. **X. Su and Q. Zhang, “Dynamic 3-D shape measurement method: A review,” Opt. Lasers Eng. **48**(2), 191–204 (2010). [CrossRef]

**2. **C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Lasers Eng. **109**, 23–59 (2018). [CrossRef]

**3. **J. Heikkila, “Geometric camera calibration using circular control points,” IEEE Trans. Pattern Anal. Mach. Intell. **22**(10), 1066–1077 (2000). [CrossRef]

**4. **Z. Y. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. **22**(11), 1330–1334 (2000). [CrossRef]

**5. **Z. Zhang, “Camera calibration with one-dimensional objects,” IEEE Trans. Pattern Anal. Mach. Intell. **26**(7), 892–899 (2004). [CrossRef]

**6. **Z. Liu, Q. Wu, X. Chen, and Y. Yin, “High-accuracy calibration of low-cost camera using image disturbance factor,” Opt. Express **24**(21), 24321–24336 (2016). [CrossRef]

**7. **B. Li and S. Zhang, “Flexible calibration method for microscopic structured light system using telecentric lens,” Opt. Express **23**(20), 25795–25803 (2015). [CrossRef]

**8. **J. Jin and X. Li, “Efficient camera self-calibration method based on the absolute dual quadric,” J. Opt. Soc. Am. A **30**(3), 287–292 (2013). [CrossRef]

**9. **L. Huang, Q. Zhang, and A. Asundi, “Camera calibration with active phase target: improvement on feature detection and optimization,” Opt. Lett. **38**(9), 1446–1448 (2013). [CrossRef]

**10. **Y. Xu, F. Gao, H. Ren, Z. Zhang, and X. Jiang, “An Iterative Distortion Compensation Algorithm for Camera Calibration Based on Phase Target,” Sensors **17**(6), 1188 (2017). [CrossRef]

**11. **W. Zhao, X. Su, and W. Chen, “Whole-field high precision point to point calibration method,” Opt. Lasers Eng. **111**, 71–79 (2018). [CrossRef]

**12. **M. Ma, X. Chen, and K. Wang, “Camera calibration by using fringe patterns and 2D phase-difference pulse detection,” Optik **125**(2), 671–674 (2014). [CrossRef]

**13. **C. Schmalz, F. Forster, and E. Angelopoulou, “Camera calibration: active versus passive targets,” Opt. Eng. **50**(11), 113601 (2011). [CrossRef]

**14. **T. Bell, J. Xu, and S. Zhang, “Method for out-of-focus camera calibration,” Appl. Opt. **55**(9), 2346–2352 (2016). [CrossRef]

**15. **Y. Wang, B. Cai, K. Wang, and X. Chen, “Out-of-focus color camera calibration with one normal-sized color-coded pattern,” Opt. Lasers Eng. **98**, 17–22 (2017). [CrossRef]

**16. **Y. Wang, X. Chen, J. Tao, K. Wang, and M. Ma, “Accurate feature detection for out-of-focus camera calibration,” Appl. Opt. **55**(28), 7964–7971 (2016). [CrossRef]

**17. **R. Juarez-Salazar, F. Guerrero-Sanchez, C. Robledo-Sanchez, and J. Gonzalez-Garcia, “Camera calibration by multiplexed phase encoding of coordinate information,” Appl. Opt. **54**(15), 4895–4906 (2015). [CrossRef]

**18. **Y. Liu and X. Su, “Camera calibration with planar crossed fringe patterns,” Optik **123**(2), 171–175 (2012). [CrossRef]

**19. **J. H. Kim and B. K. Koo, “Convenient calibration method for unsynchronized camera networks using an inaccurate small reference object,” Opt. Express **20**(23), 25292–25310 (2012). [CrossRef]

**20. **S. Gai, F. Da, and X. Dai, “A novel dual-camera calibration method for 3D optical measurement,” Opt. Lasers Eng. **104**, 126–134 (2018). [CrossRef]

**21. **L. Wang, W. Wang, C. Shen, and F. Duan, “A convex relaxation optimization algorithm for multi-camera calibration with 1D objects,” Neurocomputing **215**, 82–89 (2016). [CrossRef]

**22. **J. Yu and F. Da, “Bi-tangent line based approach for multi-camera calibration using spheres,” J. Opt. Soc. Am. A **35**(2), 221–229 (2018). [CrossRef]

**23. **Z. Liu, Y. Yin, S. Liu, and X. Chen, “Extrinsic parameter calibration of stereo vision sensors using spot laser projector,” Appl. Opt. **55**(25), 7098–7105 (2016). [CrossRef]

**24. **J. A. M. Rodríguez and F. C. Mejía Alanís, “Binocular self-calibration performed via adaptive genetic algorithm based on laser line imaging,” J. Mod. Opt. **63**(13), 1219–1232 (2016). [CrossRef]

**25. **R. Xia, M. Hu, J. Zhao, S. Chen, Y. Chen, and S. P. Fu, “Global calibration of non-overlapping cameras: State of the art,” Optik **158**, 951–961 (2018). [CrossRef]

**26. **Z. Liu, X. Wei, and G. Zhang, “External parameter calibration of widely distributed vision sensors with non-overlapping fields of view,” Opt. Lasers Eng. **51**(6), 643–650 (2013). [CrossRef]

**27. **M. Xie, Z. Wei, G. Zhang, and X. Wei, “A flexible technique for calibrating relative position and orientation of two cameras with no-overlapping FOV,” Measurement **46**(1), 34–44 (2013). [CrossRef]

**28. **S. Dong, X. Shao, X. Kang, F. Yang, and X. He, “Extrinsic calibration of a non-overlapping camera network based on close-range photogrammetry,” Appl. Opt. **55**(23), 6363–6370 (2016). [CrossRef]

**29. **Y. Xu, G. Feng, Z. Zhang, and X. Jiang, “A calibration method for non-overlapping cameras based on mirrored absolute phase target,” Int. J. Adv. Manuf. Technol. 1–7 (2018). [CrossRef]

**30. **Z. Wei, W. Zou, G. Zhang, and K. Zhao, “Extrinsic parameters calibration of multi-camera with non-overlapping fields of view using laser scanning,” Opt. Express **27**(12), 16719–16737 (2019). [CrossRef]

**31. **T. Yang, Q. Zhao, X. Wang, and D. Huang, “Accurate calibration approach for non-overlapping multi-camera system,” Opt. Laser Technol. **110**, 78–86 (2019). [CrossRef]

**32. **J. A. M. Rodríguez, “Microscope self-calibration based on micro laser line imaging and soft computing algorithms,” Opt. Lasers Eng. **105**, 75–85 (2018). [CrossRef]

**33. **J.-Y. Bouguet, “Camera Calibration Toolbox for Matlab,” http://www.vision.caltech.edu/bouguetj/calib_doc.

**34. **X. Chen, Y. Wang, Y. Wang, M. Ma, and C. Zeng, “Quantized phase coding and connected region labeling for absolute phase retrieval,” Opt. Express **24**(25), 28613–28624 (2016). [CrossRef]

**35. **J. S. Hyun and S. Zhang, “Enhanced two-frequency phase-shifting method,” Appl. Opt. **55**(16), 4395–4401 (2016). [CrossRef]