Abstract

Stereo cameras have been widely used for three-dimensional (3D) photogrammetry, and stereo calibration is a crucial process for estimating their intrinsic and extrinsic parameters. This paper proposes a stereo calibration method based on an absolute phase target that uses horizontal and vertical phase-shifting fringes. The one-to-one mapping from world points to image points can be recovered by referring to the absolute phase and then used to calibrate the stereo cameras. Compared with traditional methods that only use feature points within the overlapping field-of-view (FOV), the proposed method can use all feature points within both the overlapping and non-overlapping FOVs. Moreover, since phase is more robust against camera defocusing than intensity, the target images can be captured regardless of the depth-of-field (DOF). With the advantages of whole-field capability and defocusing tolerance, the target placement becomes very flexible. Both simulation and experimental results demonstrate the robustness and accuracy of the proposed method.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

With their non-contact, high-speed, high-accuracy, and high-resolution capabilities, optical vision systems play an important role in 3D measurement [1]. Generally, 3D imaging methods include optical interferometry, time-of-flight, stereo vision, and structured light [2]. Among these, stereo vision has been widely used in many scenarios such as automated production, robot navigation, and motion tracking. Camera calibration is a critical process for estimating the intrinsic and extrinsic parameters of stereo cameras, which relate the 2D image to the 3D world. Consequently, many studies have focused on stereo calibration, which usually involves two major parts: intrinsic calibration of a single camera and extrinsic calibration of the stereo pair.

Existing intrinsic calibration methods are usually based on different types of targets, such as 3D [3], 2D [4], and 1D [5] targets. Among these, 3D targets can achieve accurate calibration results, but they are difficult to fabricate with high precision and are bulky. 2D targets such as chessboards [6] and circle grids [7] are the most popular due to their flexibility and accuracy. Zhang’s method [4] is a well-known 2D-target-based technique that makes camera calibration an easy task. 1D targets usually carry a small number of collinear feature points and must obey certain motion constraints. It should be noted that all these targets are marked with abundant distinct features, which serve as input data for the calibration process; feature detection accuracy therefore directly affects the calibration results. Some self-calibration methods [8] with no specific targets provide more flexibility, but they require many feature correspondences and complex computations. In recent years, active phase targets with several different patterns have been proposed to enhance feature detection accuracy, such as sinusoidal phase-shifting fringes [9–14], circular phase-shifting fringes [15,16], and crossed fringes [17,18]. Compared with traditional passive targets, active phase targets have two main advantages [13]: high-accuracy feature detection and robustness against defocusing, which enable the accurate calibration of an out-of-focus camera.

Similarly, extrinsic calibration methods have been developed for stereo cameras with different targets, such as 3D [19], 2D [20], 1D [21], and sphere [22] targets. Some researchers have also calibrated stereo cameras using a spot laser [23] or a line laser [24]; however, only a few feature points are available per frame, so a large number of images are required. No matter which target is used, it should be placed within the DOF of the stereo cameras so that clear target images can be recorded. Moreover, these targets should also be placed within the overlapping FOV so that they can be simultaneously captured by both cameras, which makes the target placement more restricted and less flexible. Therefore, these traditional targets cannot be used in applications where multiple cameras share no overlapping FOV. To this end, other methods have been developed to calibrate non-overlapping cameras [25]. Liu et al. [26] used a spot laser to connect non-overlapping cameras by projecting a laser beam across their FOVs. Xie et al. [27] designed a special calibration target consisting of two short 1D bars with equally spaced light spots and one long linking pole. Dong et al. [28] combined multiple cameras using arbitrarily distributed encoded targets. Xu et al. [29] used two flat mirrors to reflect a phase target so that the cameras could capture the target images. Wei et al. [30] mounted two lasers on a manipulator and projected line-structured light into the FOVs of multiple cameras. Yang et al. [31] designed an apparatus including two fixed chessboard targets to calibrate non-overlapping cameras. Though successful, these methods still have limitations, as they depend on large targets or auxiliary devices along with complex procedures.

To overcome the above problems, this paper presents an efficient and convenient stereo calibration method based on an absolute phase target. Horizontal and vertical phase-shifting fringes, generated on a planar liquid crystal display (LCD) monitor, are used as the phase target. Two wrapped phase maps are calculated from the target images using a three-step phase-shifting algorithm and unwrapped into absolute phase maps with a two-frequency phase-shifting algorithm. The mapping between image points and world points is then established by referring to the absolute phase, and stereo calibration is performed based on Zhang’s method. Since phase is robust against camera defocusing, the stereo cameras can capture the target images regardless of the DOF. Moreover, feature points inside both the overlapping and non-overlapping FOVs can be used for stereo calibration. Both simulations and experiments confirm the performance of the proposed method.

2. Camera model

2.1 Single camera model

This paper uses the usual pinhole camera model [4]. Let a 3D world point be ${\boldsymbol P} = {[X,Y,Z]^T}$, and its corresponding 2D image point be ${\boldsymbol p} = {[u,v]^T}$. Their homogeneous vectors are denoted by $\tilde{{\boldsymbol P}} = {[X,Y,Z,1]^T}$ and $\tilde{{\boldsymbol p}} = {[u,v,1]^T}$, respectively. The relationship between $\tilde{{\boldsymbol P}}$ and $\tilde{{\boldsymbol p}}$ can be described as:

$$s\tilde{{\boldsymbol p}} = {\boldsymbol K}\left[ {\begin{array}{ll} {\boldsymbol R}&{\boldsymbol t} \end{array}} \right]\tilde{{\boldsymbol P}}$$
$${\boldsymbol K} = \left[ {\begin{array}{lll} {{f_u}}&\gamma &{{u_0}}\\ 0&{{f_v}}&{{v_0}}\\ 0&0&1 \end{array}} \right],\quad {\boldsymbol R} = \mathrm{rodrigues}\left[ {\begin{array}{l} {{\theta_x}}\\ {{\theta_y}}\\ {{\theta_z}} \end{array}} \right],\quad {\boldsymbol t} = \left[ \begin{array}{l} {{t_x}}\\ {{t_y}}\\ {{t_z}} \end{array} \right]$$
where s is an arbitrary scale factor; K denotes the intrinsic matrix, which includes the focal lengths $[{f_u},{f_v}]$, the principal point $[{u_0},{v_0}]$, and the skew factor γ; R and t denote the rotation matrix and translation vector from the world coordinate system to the camera coordinate system, and [R, t] is often called the extrinsic matrix. In practice, lens distortion is common and should be taken into consideration. Radial and tangential distortions are sufficient to represent the lens distortion [3], which can be described as:
$$\left\{ \begin{array}{l} {{\tilde{u}}_d} = \tilde{u}(1 + {k_1}{r^2} + {k_2}{r^4}) + 2{p_1}\tilde{u}\tilde{v} + {p_2}({r^2} + 2{{\tilde{u}}^2})\\ {{\tilde{v}}_d} = \tilde{v}(1 + {k_1}{r^2} + {k_2}{r^4}) + 2{p_2}\tilde{u}\tilde{v} + {p_1}({r^2} + 2{{\tilde{v}}^2}) \end{array} \right.$$
$$\left[ \begin{array}{l} {{\tilde{u}}_d}\\ {{\tilde{v}}_d} \end{array} \right] = \left[ \begin{array}{l} {u_d}\\ {v_d} \end{array} \right] - \left[ \begin{array}{l} {u_0}\\ {v_0} \end{array} \right],\quad \left[ \begin{array}{l} {\tilde{u}}\\ {\tilde{v}} \end{array} \right] = \left[ \begin{array}{l} u\\ v \end{array} \right] - \left[ \begin{array}{l} {u_0}\\ {v_0} \end{array} \right],\quad {r^2} = {\tilde{u}^2} + {\tilde{v}^2}$$
where $[u,v]$ denotes the undistorted image point; $[{u_d},{v_d}]$ denotes the distorted image point; [k1, k2] are the radial distortion coefficients; and [p1, p2] are the tangential distortion coefficients. The intrinsic matrix K and the distortion coefficients [k1, k2, p1, p2] are constant parameters, while the extrinsic parameters [R, t] vary with the camera pose. Both can be estimated by single-camera calibration.
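To make the model concrete, the following is a minimal NumPy sketch of the forward projection under this model; the function name and inputs are illustrative assumptions, and the distortion is applied to pixel coordinates centered at the principal point, as in the equations above.

```python
import numpy as np

def project_points(P_w, K, R, t, dist):
    """Project 3D world points to distorted pixel coordinates (pinhole model)."""
    k1, k2, p1, p2 = dist                        # radial and tangential coefficients
    P_c = P_w @ R.T + t                          # world frame -> camera frame
    ph = (K @ P_c.T).T                           # homogeneous image coordinates
    u, v = ph[:, 0] / ph[:, 2], ph[:, 1] / ph[:, 2]
    uc, vc = u - K[0, 2], v - K[1, 2]            # center at the principal point
    r2 = uc**2 + vc**2
    radial = 1.0 + k1 * r2 + k2 * r2**2
    ud = uc * radial + 2 * p1 * uc * vc + p2 * (r2 + 2 * uc**2)
    vd = vc * radial + 2 * p2 * uc * vc + p1 * (r2 + 2 * vc**2)
    return np.stack([ud + K[0, 2], vd + K[1, 2]], axis=1)
```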

2.2 Stereo camera model

A stereo vision system consisting of two cameras is used as an example to describe the stereo camera model. As shown in Fig. 1, Ow-XwYwZw, Ol-XlYlZl, and Or-XrYrZr denote the world, left camera, and right camera coordinate systems, respectively. The relations between the two camera coordinate systems and the world coordinate system can be described as:

$$\left\{ \begin{array}{l} {{\boldsymbol P}_l} = {{\boldsymbol R}_{wl}}{{\boldsymbol P}_w} + {{\boldsymbol t}_{wl}}\\ {{\boldsymbol P}_r} = {{\boldsymbol R}_{wr}}{{\boldsymbol P}_w} + {{\boldsymbol t}_{wr}} \end{array} \right.$$
where ${{\boldsymbol P}_w}$, ${{\boldsymbol P}_l}$, and ${{\boldsymbol P}_r}$ denote the same point expressed in Ow-XwYwZw, Ol-XlYlZl, and Or-XrYrZr, respectively; ${{\boldsymbol R}_{wl}}$ and ${{\boldsymbol R}_{wr}}$ denote the rotation matrices from the world to the two cameras; ${{\boldsymbol t}_{wl}}$ and ${{\boldsymbol t}_{wr}}$ denote the corresponding translation vectors. Combining the two equations, the transformation between the two cameras can be expressed as:
$${{\boldsymbol P}_r} = {{\boldsymbol R}_{lr}}{{\boldsymbol P}_l} + {{\boldsymbol t}_{lr}} = {{\boldsymbol R}_{wr}}{\boldsymbol R}_{wl}^T{{\boldsymbol P}_l} + ({{\boldsymbol t}_{wr}} - {{\boldsymbol R}_{wr}}{\boldsymbol R}_{wl}^T{{\boldsymbol t}_{wl}})$$
where ${{\boldsymbol R}_{lr}}$ and ${{\boldsymbol t}_{lr}}$ denote the rotation matrix and translation vector between the two cameras; their estimation is often referred to as the extrinsic calibration of stereo cameras. Several algorithms have been developed to find the optimal solution of this equation, such as gradient methods and genetic methods [32]; this paper uses the standard gradient-based toolbox [33] to calculate ${{\boldsymbol R}_{lr}}$ and ${{\boldsymbol t}_{lr}}$.
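As an illustrative sketch (the paper itself uses the toolbox [33] for the optimization), the composition above can be written directly; cv2.Rodrigues converts the axis-angle vectors [θx, θy, θz] into rotation matrices, matching the rodrigues operator above.

```python
import numpy as np
import cv2

def stereo_extrinsics(rvec_wl, t_wl, rvec_wr, t_wr):
    """Compose the left-to-right transform from the per-camera extrinsics."""
    R_wl, _ = cv2.Rodrigues(np.asarray(rvec_wl, dtype=float))  # axis-angle -> matrix
    R_wr, _ = cv2.Rodrigues(np.asarray(rvec_wr, dtype=float))
    R_lr = R_wr @ R_wl.T                                       # rotation between cameras
    t_lr = np.asarray(t_wr, dtype=float) - R_lr @ np.asarray(t_wl, dtype=float)
    return R_lr, t_lr
```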

 

Fig. 1. Transformations between the world coordinate and two camera coordinates.


3. Principle

3.1 Phase target

Phase-shifting algorithms are extensively adopted in optical metrology because of their high accuracy and robustness [34]. In general, the fringe patterns are modulated by the shape information of the measured objects, whose shapes can then be accurately retrieved from the carried phase. Similarly, this paper encodes feature points into two phase maps, which are carried by horizontal and vertical phase-shifting fringes. In the following, the three-step phase-shifting algorithm, which requires the fewest fringe patterns, is used to explain the proposed method. The three fringe images can be mathematically described as:

$${I_{1}}(x,y) = I^{\prime}(x,y) + I^{\prime\prime}(x,y)\cos [{\phi (x,y) - 2\pi /3} ]$$
$${I_2}(x,y) = I^{\prime}(x,y) + I^{\prime\prime}(x,y)\cos [{\phi (x,y)} ]$$
$${I_3}(x,y) = I^{\prime}(x,y) + I^{\prime\prime}(x,y)\cos [{\phi (x,y) + 2\pi /3} ]$$
where $I^{\prime}(x,y)$ denotes the average intensity; $I^{\prime\prime}(x,y)$ denotes the intensity modulation; and $\phi (x,y)$ denotes the wrapped phase to be solved for. Solving the three equations leads to:
$$\phi (x,y) = {\tan ^{ - 1}}\left( {\sqrt 3 \frac{{{I_1} - {I_3}}}{{2{I_2} - {I_1} - {I_3}}}} \right)$$
Because of the arctangent function, the above equation produces a wrapped phase ranging from 0 to 2π with 2π phase jumps. To recover the absolute phase map $\Phi (x,y)$, this paper employs a two-frequency phase-shifting method that uses an additional phase map ${\phi _r}(x,y)$ with a lower frequency [35]. The fringe order $k(x,y)$ of $\phi (x,y)$ can then be determined as:
$$k(x,y) = \textrm{round}\left[ {\frac{{(f/{f_r}){\phi_r}(x,y) - \phi (x,y)}}{{2\pi }}} \right]$$
where round[·] returns the closest integer; f denotes the frequency of $\phi (x,y)$; and fr denotes the frequency of ${\phi _r}(x,y)$. Note that one fringe period of ${\phi _r}(x,y)$ should cover the entire pattern. Once $k(x,y)$ is determined, $\phi (x,y)$ can be unwrapped to obtain $\Phi (x,y)$:
$$\Phi (x,y) = \phi (x,y) + 2\pi k(x,y)$$
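A minimal NumPy sketch of the wrapped-phase computation and the two-frequency unwrapping above; arctan2 is the quadrant-aware form of the arctangent in the equation, and the result is shifted into [0, 2π) to match the convention used here.

```python
import numpy as np

def wrapped_phase(I1, I2, I3):
    """Three-step phase shifting; returns the wrapped phase in [0, 2*pi)."""
    phi = np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)
    return np.mod(phi, 2.0 * np.pi)

def unwrap_two_frequency(phi, phi_r, f, f_r):
    """Two-frequency temporal phase unwrapping.

    phi   : high-frequency wrapped phase (frequency f)
    phi_r : low-frequency phase (frequency f_r) whose single period spans the target
    """
    k = np.round(((f / f_r) * phi_r - phi) / (2.0 * np.pi))  # fringe order
    return phi + 2.0 * np.pi * k                             # absolute phase
```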

3.2 Stereo calibration

Figure 2 illustrates the schematic diagram of stereo calibration. Phase-shifting fringes are sequentially displayed on the LCD monitor, which serves as the active phase target. The origin Ow of the world coordinate system is located at the top-left point of the LCD, and the Z-axis is perpendicular to the LCD plane, so all points on the LCD plane have Z = 0. The feature points of the two cameras are mapped into this world coordinate system. In contrast to traditional stereo calibration methods that extract feature points only from the overlapping FOV, the proposed method can make use of all feature points that either camera can capture. More concretely, feature points inside the FOV of the left camera are used for left camera calibration, and feature points inside the FOV of the right camera are used for right camera calibration. Using the world coordinate system as an intermediary, the extrinsic parameters of the two cameras can be calculated. The key issue is therefore how to define and detect these feature points, which is introduced in detail below.

 

Fig. 2. Schematic diagram of stereo calibration.


Figure 3 shows the framework of feature detection; the procedure is summarized as follows:

  • (1) Calibration image acquisition: Horizontal and vertical phase-shifting fringes are sequentially displayed on an LCD monitor. Meanwhile, stereo cameras capture their images from several different viewpoints.
  • (2) Absolute phase calculation: Based on the three-step phase-shifting algorithm, two wrapped phase maps ${\phi _u}$ and ${\phi _v}$, in the range [0, 2π], are calculated from the images of the phase-shifting fringes. To uniquely define each pixel, ${\phi _u}$ and ${\phi _v}$ are unwrapped into two absolute phase maps ${\Phi _u}$ and ${\Phi _v}$ using the two-frequency phase-shifting algorithm.
  • (3) Feature point detection: Without loss of generality, this paper selects the pixels with ${\Phi _u} = 2\pi m$ and ${\Phi _v} = 2\pi n$ as the feature points, where m and n are integers. Firstly, candidate pixels are those satisfying $|{{\Phi _u} - 2\pi m} |< \delta \;\& |{{\Phi _v} - 2\pi n} |< \delta$, where δ denotes a small threshold. Secondly, the candidate with the minimum value of $|{{\Phi _u} - 2\pi m} |+ |{{\Phi _v} - 2\pi n} |$ is selected as the rough location of the feature point. Finally, the detection precision is enhanced by a windowed least-squares linear fit based on the following relations:
    $$\left\{ \begin{array}{l} u = {a_1}{\Phi _u} + {b_1}{\Phi _v} + {c_1}\\ v = {a_2}{\Phi _u} + {b_2}{\Phi _v} + {c_2} \end{array} \right.$$
    where ${a_1},{b_1},{c_1},{a_2},{b_2},{c_2}$ are fitting coefficients. Setting ${\Phi _u} = 2\pi m$ and ${\Phi _v} = 2\pi n$ then yields the feature points with sub-pixel precision.
  • (4) Feature point mapping: Generally, the pixel pitch of the LCD monitor is uniform and known; let q denote this pitch. For each feature point, the carried phases can be easily converted into world coordinates as:
    $$\left[ \begin{array}{l} X\\ Y \end{array} \right] = \frac{qP}{2\pi}\left[ \begin{array}{l} {\Phi _u}\\ {\Phi _v} \end{array} \right]$$
    where P denotes the number of pixels per fringe period on the LCD monitor. Once the one-to-one mapping between the image coordinates and the world coordinates of the feature points is obtained, the intrinsic and extrinsic parameters of the stereo cameras can be calibrated. A code sketch of steps (3) and (4) is given after this list.
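The following NumPy sketch illustrates steps (3) and (4); the threshold delta, the fitting window half-size win, and the omitted image-border handling are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def detect_feature(Phi_u, Phi_v, m, n, delta=0.2, win=5):
    """Locate the feature point where Phi_u = 2*pi*m and Phi_v = 2*pi*n (step (3))."""
    du, dv = Phi_u - 2 * np.pi * m, Phi_v - 2 * np.pi * n
    cand = (np.abs(du) < delta) & (np.abs(dv) < delta)
    if not cand.any():
        return None
    cost = np.where(cand, np.abs(du) + np.abs(dv), np.inf)
    r0, c0 = np.unravel_index(np.argmin(cost), cost.shape)      # rough pixel location
    rs, cs = np.mgrid[r0 - win:r0 + win + 1, c0 - win:c0 + win + 1]
    # Windowed least-squares fit of u = a1*Phi_u + b1*Phi_v + c1 (and likewise for v).
    A = np.column_stack([Phi_u[rs, cs].ravel(), Phi_v[rs, cs].ravel(),
                         np.ones(rs.size)])
    coef_u, *_ = np.linalg.lstsq(A, cs.ravel().astype(float), rcond=None)
    coef_v, *_ = np.linalg.lstsq(A, rs.ravel().astype(float), rcond=None)
    target = np.array([2 * np.pi * m, 2 * np.pi * n, 1.0])
    return target @ coef_u, target @ coef_v                     # sub-pixel (u, v)

def phase_to_world(Phi_u, Phi_v, q, P):
    """Map carried phases to world coordinates on the LCD plane (step (4)); Z = 0."""
    scale = q * P / (2.0 * np.pi)
    return scale * Phi_u, scale * Phi_v
```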

4. Simulation

Simulations have been carried out to explore the performance of the proposed method with respect to Gaussian noise and Gaussian blur. Two identical cameras are placed in parallel to constitute the stereo vision system. Their intrinsic and extrinsic parameters are simulated as:

$${{\boldsymbol K}_l} = {{\boldsymbol K}_r} = \left[ {\begin{array}{ccc} {800}&0&{320}\\ 0&{800}&{240}\\ 0&0&1 \end{array}} \right],\quad {{\boldsymbol R}_{lr}} = \left[ {\begin{array}{ccc} 1&0&0\\ 0&1&0\\ 0&0&1 \end{array}} \right],\quad {{\boldsymbol t}_{lr}} = \left[ {\begin{array}{c} {500}\\ 0\\ 0 \end{array}} \right]$$
To be specific, the focal lengths are fu = fv = 800 pixels; the principal point is u0 = 320 pixels and v0 = 240 pixels; the skew factor and distortion coefficients are zero; the rotation angles are θx = 0°, θy = 0°, and θz = 0°; and the translation distances are tx = 500 pixels, ty = 0, and tz = 0. The phase target contains two groups of orthogonal phase-shifting fringes, one horizontal and one vertical, each with a period of P = 60 pixels. The stereo cameras then capture images of the phase target from three different poses; some simulated images are shown in Fig. 4.
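As a sketch of how such a phase target can be synthesized (assuming 8-bit fringes with I′ = I″ = 127.5 and a hypothetical 640×480 target; the virtual imaging of the target from the three camera poses is omitted):

```python
import numpy as np

def fringe_images(width, height, period, horizontal):
    """Three phase-shifted sinusoidal fringes with shifts of -2*pi/3, 0, +2*pi/3."""
    axis = np.arange(height)[:, None] if horizontal else np.arange(width)[None, :]
    phase = 2.0 * np.pi * axis / period
    ones = np.ones((height, width))                  # broadcast to a full image
    return [127.5 + 127.5 * np.cos(phase + d) * ones
            for d in (-2.0 * np.pi / 3.0, 0.0, 2.0 * np.pi / 3.0)]

horizontal_set = fringe_images(640, 480, period=60, horizontal=True)
vertical_set = fringe_images(640, 480, period=60, horizontal=False)
```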

 

Fig. 3. Framework of feature detection.


 

Fig. 4. Simulated images.


Firstly, Gaussian noise with zero mean and different standard deviations is added to the simulated images. The standard deviation is varied from 0 to 20 in steps of 1 to produce different noise levels. These noisy images are then used to calibrate the stereo cameras, and the estimated parameters are compared with the simulated ones. For each standard deviation, the calibration process is repeated 10 times and the absolute errors are averaged. Figure 5 shows the absolute errors of the camera parameters at different noise levels. As the standard deviation increases, the absolute errors of both the intrinsic and extrinsic parameters show an upward trend. Even when the standard deviation reaches 20, the absolute errors remain relatively small. These simulation results demonstrate the strong noise robustness of the proposed method.
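A short sketch of this noise perturbation (the calibration pipeline itself is omitted); clipping back to the 8-bit range is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(image, sigma):
    """Add zero-mean Gaussian noise with standard deviation sigma (gray levels)."""
    noisy = image.astype(float) + rng.normal(0.0, sigma, image.shape)
    return np.clip(noisy, 0.0, 255.0)

# Sweep the simulated noise levels: sigma = 0, 1, ..., 20, ten trials each.
fringe = np.tile(127.5 + 127.5 * np.cos(2.0 * np.pi * np.arange(640) / 60), (480, 1))
noisy_sets = [[add_noise(fringe, s) for _ in range(10)] for s in range(21)]
```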

 

Fig. 5. Absolute errors with different noise levels. (a) Focal lengths; (b) Principal points; (c) Rotation angles; (d) Translation distances.


As is known, camera defocusing can be modeled by convolving the clear image with a Gaussian filter [15]. Secondly, several Gaussian filters with the same size of 25×25 pixels and different standard deviations are used to blur the simulated images. The standard deviation is varied from 0 to 20 in steps of 1 to produce different blur levels, and the blurred images are then used to calibrate the stereo cameras. Figure 6 shows the absolute errors of the camera parameters at different blur levels. Evidently, image blurring has very little influence on both the intrinsic and extrinsic parameters. These simulation results show the excellent performance of the proposed method in dealing with image blurring.
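The blur perturbation can be sketched with OpenCV's Gaussian filter; the 25×25 kernel size matches the simulation, and the zero-sigma guard is an assumption (OpenCV would otherwise derive sigma from the kernel size):

```python
import numpy as np
import cv2

def defocus(image, sigma, ksize=25):
    """Model defocus by convolving with a ksize x ksize Gaussian kernel [15]."""
    if sigma == 0:
        return image.astype(float)               # no blur at the zero level
    return cv2.GaussianBlur(image.astype(float), (ksize, ksize), sigma)

fringe = np.tile(127.5 + 127.5 * np.cos(2.0 * np.pi * np.arange(640) / 60), (480, 1))
blurred = [defocus(fringe, s) for s in range(21)]  # blur levels 0..20
```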

 

Fig. 6. Absolute errors with different blur levels. (a) Focal lengths; (b) Principal points; (c) Rotation angles; (d) Translation distances.


Furthermore, different camera parameters are simulated to verify the performance of the proposed method. In this simulation, the focal lengths fu and fv are varied from 600 to 1000 in steps of 50, while the other parameters, including the principal points, rotation angles, and translation distances, remain the same as in the previous simulation. Images of the phase target are again captured from three different camera poses and used for stereo calibration. Figure 7 shows the absolute errors of the camera parameters for different focal lengths; as can be seen, the absolute errors remain at a very low level. Similar tests varying the other parameters, such as the principal points, rotation angles, and translation distances, also yield very low absolute errors. These simulation results confirm that the proposed method achieves high accuracy across different camera parameters.

 

Fig. 7. Absolute errors with different focal lengths. (a) Focal lengths; (b) Principal points; (c) Rotation angles; (d) Translation distances.


5. Experiment

To further verify the proposed method, an experimental platform including two identical cameras (Point Grey Chameleon3) and a tablet (iPad MR7G2CH/A) was built. The two cameras have a resolution of 1280×1024 pixels, and the lenses (Kowa LM16JCM) mounted on them have a focal length of 16 mm. The tablet, with a resolution of 2048×1536 pixels and a pixel pitch of 0.098 mm, was used to display the fringe patterns and served as the calibration target. Horizontal and vertical phase-shifting fringes, both with a period of 50 pixels, are used as the absolute phase target to encode the feature points; the additional fringes for phase unwrapping have a period of 2000 pixels. Two calibration experiments were then conducted with different configurations of the two cameras.

5.1 Overlapping cameras calibration

In the first experiment, the two cameras form a stereo vision system with a certain overlapping FOV. Two groups of fringe images were captured by the stereo cameras from 10 different camera poses. For the first group, the tablet was placed within the DOFs of the two cameras, so in-focus images were captured. Figure 8 shows the in-focus images and their phase maps. The phase distributions of Fig. 8(b) and Fig. 8(f) are close, and those of Fig. 8(d) and Fig. 8(h) are also close, which indicates that the two cameras have a large overlapping FOV at this shooting distance. For the second group, the tablet was placed outside the DOFs of the two cameras, so defocused images were captured. Figure 9 shows the defocused images and their phase maps. The phase distributions of Fig. 9(b) and Fig. 9(f) are close, whereas those of Fig. 9(d) and Fig. 9(h) have little overlap, which indicates that the two cameras have a small overlapping FOV at this shooting distance.

 

Fig. 8. In-focus images for overlapping cameras calibration. (a) Horizontal fringe captured by left camera and (b) its phase map; (c) Vertical fringe captured by left camera and (d) its phase map; (e) Horizontal fringe captured by right camera and (f) its phase map; (g) Vertical fringe captured by right camera and (h) its phase map.


 

Fig. 9. Defocused images for overlapping cameras calibration. (a) Horizontal fringe captured by left camera and (b) its phase map; (c) Vertical fringe captured by left camera and (d) its phase map; (e) Horizontal fringe captured by right camera and (f) its phase map; (g) Vertical fringe captured by right camera and (h) its phase map.


Calibration was then performed with the two groups of fringe images separately. Figure 10 shows the target poses in the left camera coordinate system: the shooting distance of the first group was about 30 cm, and that of the second group was about 17 cm. Table 1 shows the calibrated intrinsic parameters, and Table 2 shows the calibrated extrinsic parameters. The intrinsic parameters estimated from the in-focus and defocused images are very close to each other: the differences in focal lengths are less than 0.2%, and the differences in principal points are less than 0.9%. The tangential distortions of the two cameras are very small, so the radial distortion coefficients k1 and k2 are sufficient to express the nonlinear distortion; k1 and k2 are relatively small with only slight variation. The extrinsic parameters estimated from the in-focus and defocused images are also very close. These experimental results confirm that camera defocusing has very little influence on the proposed method.

 

Fig. 10. Target poses. (a) In-focus; (b) Defocus.



Table 1. Calibrated intrinsic parameters of first experiment.


Table 2. Calibrated extrinsic parameters of first experiment.

The calibration results are then used for the 3D reconstruction of chessboard corners. The chessboard has a square size of 4.9 mm and 10×10 corners. Ten pairs of chessboard images are captured in focus by the stereo cameras from different camera poses. The corners are extracted with the standard toolbox [33], and their 3D coordinates are reconstructed using the in-focus and defocused calibration results separately. Figure 11 shows the reconstructed chessboard corners in the left camera coordinate system. The size of each square is then measured by computing the spatial distance between adjacent corners. Table 3 shows the mean reconstruction errors of the square size. The reconstruction errors using the defocused calibration results are slightly higher than those using the in-focus calibration results, and both are lower than 8 µm. This experiment demonstrates the accuracy of the proposed method.
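For reference, a sketch of this reconstruction step using OpenCV (an illustrative alternative to the toolbox [33] used in the paper); corners are assumed to be ordered row-major in a 10×10 grid, and distances are returned in the calibration units (mm):

```python
import numpy as np
import cv2

def measure_squares(pts_l, pts_r, K_l, d_l, K_r, d_r, R_lr, t_lr):
    """Triangulate 10x10 chessboard corners and return adjacent-corner distances."""
    # Undistort to normalized coordinates so the projection matrices are [I|0], [R|t].
    nl = cv2.undistortPoints(pts_l.reshape(-1, 1, 2).astype(np.float64), K_l, d_l)
    nr = cv2.undistortPoints(pts_r.reshape(-1, 1, 2).astype(np.float64), K_r, d_r)
    P_l = np.hstack([np.eye(3), np.zeros((3, 1))])
    P_r = np.hstack([R_lr, t_lr.reshape(3, 1)])
    X_h = cv2.triangulatePoints(P_l, P_r, nl.reshape(-1, 2).T, nr.reshape(-1, 2).T)
    X = (X_h[:3] / X_h[3]).T.reshape(10, 10, 3)           # left-camera frame
    horiz = np.linalg.norm(np.diff(X, axis=1), axis=2)    # spacing along rows
    vert = np.linalg.norm(np.diff(X, axis=0), axis=2)     # spacing along columns
    return np.concatenate([horiz.ravel(), vert.ravel()])  # compare with 4.9 mm
```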

 

Fig. 11. Chessboard corners reconstructed using (a) In-focus calibration results; and (b) Defocus calibration results.



Table 3. Mean reconstruction errors of square size using in-focus/defocus calibration results (unit: µm).

5.2 Non-overlapping cameras calibration

In the second experiment, the two cameras were adjusted to have non-overlapping FOVs. Two groups of fringe images were captured to calibrate the two cameras: the first group in focus, and the second group defocused. Figure 12 shows the in-focus images and their phase maps. The phase distributions of Fig. 12(b) and Fig. 12(f) are close, but those of Fig. 12(d) and Fig. 12(h) have no overlap, which indicates that the two cameras have non-overlapping FOVs at this shooting distance. Figure 13 shows the defocused images and their phase maps. The phase distributions of Fig. 13(b) and Fig. 13(f) are close, whereas those of Fig. 13(d) and Fig. 13(h) have no overlap, which indicates that the two cameras also have non-overlapping FOVs at this shooting distance.

 

Fig. 12. In-focus images for non-overlapping cameras calibration. (a) Horizontal fringe captured by left camera and (b) its phase map; (c) Vertical fringe captured by left camera and (d) its phase map; (e) Horizontal fringe captured by right camera and (f) its phase map; (g) Vertical fringe captured by right camera and (h) its phase map.


 

Fig. 13. Defocused images for non-overlapping cameras calibration. (a) Horizontal fringe captured by left camera and (b) its phase map; (c) Vertical fringe captured by left camera and (d) its phase map; (e) Horizontal fringe captured by right camera and (f) its phase map; (g) Vertical fringe captured by right camera and (h) its phase map.


Calibration was then performed with the two groups of fringe images separately. Table 4 shows the calibrated intrinsic parameters, and Table 5 shows the calibrated extrinsic parameters. The intrinsic parameters estimated from the in-focus and defocused images are very close to each other: the differences in focal lengths are less than 0.1%, and the differences in principal points are less than 1.2%. The radial distortion coefficients k1 and k2 are relatively small with only small variation. The extrinsic parameters estimated from the in-focus and defocused images are also very close. This experiment confirms that the proposed method can calibrate cameras with non-overlapping FOVs.


Table 4. Calibrated intrinsic parameters of second experiment.


Table 5. Calibrated extrinsic parameters of second experiment.

6. Conclusion

This paper presents an efficient and convenient stereo calibration method based on an absolute phase target. The method has the following advantages. Firstly, it is suitable for out-of-focus stereo calibration because phase is robust against camera defocusing. Secondly, feature points are extracted from both the overlapping and non-overlapping FOVs, yielding many more points than traditional methods and thus accurate calibration results. Thirdly, the target placement is flexible due to the method's defocusing tolerance and whole-field capability. Because of these advantages, the proposed method is promising for high-precision stereo measurement applications.

Funding

National Natural Science Foundation of China (NSFC) (51605130, 61603360); Natural Science Foundation of Hubei Province (2018CFB656); Open Fund of the Key Laboratory for Metallurgical Equipment and Control of Ministry of Education in Wuhan University of Science and Technology (2018B03, 2018B06).

References

1. X. Su and Q. Zhang, “Dynamic 3-D shape measurement method: A review,” Opt. Lasers Eng. 48(2), 191–204 (2010).

2. C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Lasers Eng. 109, 23–59 (2018).

3. J. Heikkila, “Geometric camera calibration using circular control points,” IEEE Trans. Pattern Anal. Mach. Intell. 22(10), 1066–1077 (2000).

4. Z. Y. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).

5. Z. Zhang, “Camera calibration with one-dimensional objects,” IEEE Trans. Pattern Anal. Mach. Intell. 26(7), 892–899 (2004).

6. Z. Liu, Q. Wu, X. Chen, and Y. Yin, “High-accuracy calibration of low-cost camera using image disturbance factor,” Opt. Express 24(21), 24321–24336 (2016).

7. B. Li and S. Zhang, “Flexible calibration method for microscopic structured light system using telecentric lens,” Opt. Express 23(20), 25795–25803 (2015).

8. J. Jin and X. Li, “Efficient camera self-calibration method based on the absolute dual quadric,” J. Opt. Soc. Am. A 30(3), 287–292 (2013).

9. L. Huang, Q. Zhang, and A. Asundi, “Camera calibration with active phase target: improvement on feature detection and optimization,” Opt. Lett. 38(9), 1446–1448 (2013).

10. Y. Xu, F. Gao, H. Ren, Z. Zhang, and X. Jiang, “An Iterative Distortion Compensation Algorithm for Camera Calibration Based on Phase Target,” Sensors 17(6), 1188 (2017).

11. W. Zhao, X. Su, and W. Chen, “Whole-field high precision point to point calibration method,” Opt. Lasers Eng. 111, 71–79 (2018).

12. M. Ma, X. Chen, and K. Wang, “Camera calibration by using fringe patterns and 2D phase-difference pulse detection,” Optik 125(2), 671–674 (2014).

13. C. Schmalz, F. Forster, and E. Angelopoulou, “Camera calibration: active versus passive targets,” Opt. Eng. 50(11), 113601 (2011).

14. T. Bell, J. Xu, and S. Zhang, “Method for out-of-focus camera calibration,” Appl. Opt. 55(9), 2346–2352 (2016).

15. Y. Wang, B. Cai, K. Wang, and X. Chen, “Out-of-focus color camera calibration with one normal-sized color-coded pattern,” Opt. Lasers Eng. 98, 17–22 (2017).

16. Y. Wang, X. Chen, J. Tao, K. Wang, and M. Ma, “Accurate feature detection for out-of-focus camera calibration,” Appl. Opt. 55(28), 7964–7971 (2016).

17. R. Juarez-Salazar, F. Guerrero-Sanchez, C. Robledo-Sanchez, and J. Gonzalez-Garcia, “Camera calibration by multiplexed phase encoding of coordinate information,” Appl. Opt. 54(15), 4895–4906 (2015).

18. Y. Liu and X. Su, “Camera calibration with planar crossed fringe patterns,” Optik 123(2), 171–175 (2012).

19. J. H. Kim and B. K. Koo, “Convenient calibration method for unsynchronized camera networks using an inaccurate small reference object,” Opt. Express 20(23), 25292–25310 (2012).

20. S. Gai, F. Da, and X. Dai, “A novel dual-camera calibration method for 3D optical measurement,” Opt. Lasers Eng. 104, 126–134 (2018).

21. L. Wang, W. Wang, C. Shen, and F. Duan, “A convex relaxation optimization algorithm for multi-camera calibration with 1D objects,” Neurocomputing 215, 82–89 (2016).

22. J. Yu and F. Da, “Bi-tangent line based approach for multi-camera calibration using spheres,” J. Opt. Soc. Am. A 35(2), 221–229 (2018).

23. Z. Liu, Y. Yin, S. Liu, and X. Chen, “Extrinsic parameter calibration of stereo vision sensors using spot laser projector,” Appl. Opt. 55(25), 7098–7105 (2016).

24. J. A. M. Rodríguez and F. C. Mejía Alanís, “Binocular self-calibration performed via adaptive genetic algorithm based on laser line imaging,” J. Mod. Opt. 63(13), 1219–1232 (2016).

25. R. Xia, M. Hu, J. Zhao, S. Chen, Y. Chen, and S. P. Fu, “Global calibration of non-overlapping cameras: State of the art,” Optik 158, 951–961 (2018).

26. Z. Liu, X. Wei, and G. Zhang, “External parameter calibration of widely distributed vision sensors with non-overlapping fields of view,” Opt. Lasers Eng. 51(6), 643–650 (2013).

27. M. Xie, Z. Wei, G. Zhang, and X. Wei, “A flexible technique for calibrating relative position and orientation of two cameras with no-overlapping FOV,” Measurement 46(1), 34–44 (2013).

28. S. Dong, X. Shao, X. Kang, F. Yang, and X. He, “Extrinsic calibration of a non-overlapping camera network based on close-range photogrammetry,” Appl. Opt. 55(23), 6363–6370 (2016).

29. Y. Xu, G. Feng, Z. Zhang, and X. Jiang, “A calibration method for non-overlapping cameras based on mirrored absolute phase target,” Int. J. Adv. Manuf. Technol. 1–7 (2018).

30. Z. Wei, W. Zou, G. Zhang, and K. Zhao, “Extrinsic parameters calibration of multi-camera with non-overlapping fields of view using laser scanning,” Opt. Express 27(12), 16719–16737 (2019).

31. T. Yang, Q. Zhao, X. Wang, and D. Huang, “Accurate calibration approach for non-overlapping multi-camera system,” Opt. Laser Technol. 110, 78–86 (2019).

32. J. A. M. Rodríguez, “Microscope self-calibration based on micro laser line imaging and soft computing algorithms,” Opt. Lasers Eng. 105, 75–85 (2018).

33. J.-Y. Bouguet, “Camera Calibration Toolbox for Matlab,” http://www.vision.caltech.edu/bouguetj/calib_doc.

34. X. Chen, Y. Wang, Y. Wang, M. Ma, and C. Zeng, “Quantized phase coding and connected region labeling for absolute phase retrieval,” Opt. Express 24(25), 28613–28624 (2016).

35. J. S. Hyun and S. Zhang, “Enhanced two-frequency phase-shifting method,” Appl. Opt. 55(16), 4395–4401 (2016).

[Crossref]

R. Xia, M. Hu, J. Zhao, S. Chen, Y. Chen, and S. P. Fu, “Global calibration of non-overlapping cameras: State of the art,” Optik 158, 951–961 (2018).
[Crossref]

Sensors (1)

Y. Xu, F. Gao, H. Ren, Z. Zhang, and X. Jiang, “An Iterative Distortion Compensation Algorithm for Camera Calibration Based on Phase Target,” Sensors 17(6), 1188 (2017).
[Crossref]

Other (2)

Y. Xu, G. Feng, Z. Zhang, and X. Jiang, “A calibration method for non-overlapping cameras based on mirrored absolute phase target,” Int. J. Adv. Manuf. Technol.1–7 (2018).
[Crossref]

J.-Y. Bouguet, “Camera Calibration Toolbox for Matlab,” http://www.vision.caltech.edu/bouguetj/calib_doc .


Figures (13)

Fig. 1. Transformations between the world coordinate system and the two camera coordinate systems.
Fig. 2. Schematic diagram of stereo calibration.
Fig. 3. Framework of feature detection.
Fig. 4. Simulated images.
Fig. 5. Absolute errors with different noise levels. (a) Focal lengths; (b) principal points; (c) rotation angles; (d) translation distances.
Fig. 6. Absolute errors with different blur levels. (a) Focal lengths; (b) principal points; (c) rotation angles; (d) translation distances.
Fig. 7. Absolute errors with different focal lengths. (a) Focal lengths; (b) principal points; (c) rotation angles; (d) translation distances.
Fig. 8. In-focus images for overlapping-camera calibration. (a) Horizontal fringe captured by the left camera and (b) its phase map; (c) vertical fringe captured by the left camera and (d) its phase map; (e) horizontal fringe captured by the right camera and (f) its phase map; (g) vertical fringe captured by the right camera and (h) its phase map.
Fig. 9. Defocused images for overlapping-camera calibration. (a) Horizontal fringe captured by the left camera and (b) its phase map; (c) vertical fringe captured by the left camera and (d) its phase map; (e) horizontal fringe captured by the right camera and (f) its phase map; (g) vertical fringe captured by the right camera and (h) its phase map.
Fig. 10. Target poses. (a) In-focus; (b) defocused.
Fig. 11. Chessboard corners reconstructed using (a) the in-focus calibration results and (b) the defocus calibration results.
Fig. 12. In-focus images for non-overlapping-camera calibration. (a) Horizontal fringe captured by the left camera and (b) its phase map; (c) vertical fringe captured by the left camera and (d) its phase map; (e) horizontal fringe captured by the right camera and (f) its phase map; (g) vertical fringe captured by the right camera and (h) its phase map.
Fig. 13. Defocused images for non-overlapping-camera calibration. (a) Horizontal fringe captured by the left camera and (b) its phase map; (c) vertical fringe captured by the left camera and (d) its phase map; (e) horizontal fringe captured by the right camera and (f) its phase map; (g) vertical fringe captured by the right camera and (h) its phase map.

Tables (5)

Table 1. Calibrated intrinsic parameters of the first experiment.
Table 2. Calibrated extrinsic parameters of the first experiment.
Table 3. Mean reconstruction errors of the square size using the in-focus/defocus calibration results (unit: µm).
Table 4. Calibrated intrinsic parameters of the second experiment.
Table 5. Calibrated extrinsic parameters of the second experiment.

Equations (15)


$$s\,\tilde{p} = K\,[\,R \;\; t\,]\,\tilde{P} \tag{1}$$

$$K = \begin{bmatrix} f_u & \gamma & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix}, \quad R = \operatorname{rodrigues}\!\left(\left[\,\theta_x \;\; \theta_y \;\; \theta_z\,\right]\right), \quad t = \left[\,t_x \;\; t_y \;\; t_z\,\right]^{T} \tag{2}$$

$$\begin{cases} \tilde{u}_d = \tilde{u}\left(1 + k_1 r^2 + k_2 r^4\right) + 2 p_1 \tilde{u}\tilde{v} + p_2\left(r^2 + 2\tilde{u}^2\right) \\ \tilde{v}_d = \tilde{v}\left(1 + k_1 r^2 + k_2 r^4\right) + 2 p_2 \tilde{u}\tilde{v} + p_1\left(r^2 + 2\tilde{v}^2\right) \end{cases} \tag{3}$$

$$\begin{bmatrix} \tilde{u}_d \\ \tilde{v}_d \end{bmatrix} = \begin{bmatrix} u_d \\ v_d \end{bmatrix} - \begin{bmatrix} u_0 \\ v_0 \end{bmatrix}, \quad \begin{bmatrix} \tilde{u} \\ \tilde{v} \end{bmatrix} = \begin{bmatrix} u \\ v \end{bmatrix} - \begin{bmatrix} u_0 \\ v_0 \end{bmatrix}, \quad r^2 = \tilde{u}^2 + \tilde{v}^2 \tag{4}$$

$$\begin{cases} P_l = R_{wl} P_w + t_{wl} \\ P_r = R_{wr} P_w + t_{wr} \end{cases} \tag{5}$$

$$P_r = R_{lr} P_l + t_{lr} = R_{wr} R_{wl}^{T} P_l + \left(t_{wr} - R_{wr} R_{wl}^{T}\, t_{wl}\right) \tag{6}$$
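For concreteness, the camera model of Eqs. (1)–(6) can be exercised with a short NumPy sketch. This is a minimal illustration, not the implementation used in the paper; the function names are assumptions, and the distortion is applied in centered pixel coordinates exactly as Eqs. (3)–(4) prescribe.

    import numpy as np

    def project_with_distortion(P_w, K, R, t, k1, k2, p1, p2):
        """Pinhole projection, Eqs. (1)-(2), followed by the lens distortion of
        Eqs. (3)-(4). Illustrative sketch; names are not taken from the paper."""
        P_c = R @ P_w + t                              # world -> camera frame
        x, y = P_c[0] / P_c[2], P_c[1] / P_c[2]        # normalized image coordinates
        u = K[0, 0] * x + K[0, 1] * y + K[0, 2]        # ideal pixel; skew gamma = K[0, 1]
        v = K[1, 1] * y + K[1, 2]
        ut, vt = u - K[0, 2], v - K[1, 2]              # centered coordinates, Eq. (4)
        r2 = ut * ut + vt * vt
        rad = 1.0 + k1 * r2 + k2 * r2 * r2             # radial factor
        ud = ut * rad + 2.0 * p1 * ut * vt + p2 * (r2 + 2.0 * ut * ut)  # Eq. (3)
        vd = vt * rad + 2.0 * p2 * ut * vt + p1 * (r2 + 2.0 * vt * vt)
        return np.array([ud + K[0, 2], vd + K[1, 2]])  # distorted pixel (u_d, v_d)

    def relative_pose(R_wl, t_wl, R_wr, t_wr):
        """Right-camera pose relative to the left camera, Eq. (6)."""
        R_lr = R_wr @ R_wl.T
        t_lr = t_wr - R_lr @ t_wl
        return R_lr, t_lr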
$$I_1(x, y) = I'(x, y) + I''(x, y)\cos\!\left[\phi(x, y) - 2\pi/3\right] \tag{7}$$

$$I_2(x, y) = I'(x, y) + I''(x, y)\cos\!\left[\phi(x, y)\right] \tag{8}$$

$$I_3(x, y) = I'(x, y) + I''(x, y)\cos\!\left[\phi(x, y) + 2\pi/3\right] \tag{9}$$

$$\phi(x, y) = \tan^{-1}\!\left(\frac{\sqrt{3}\,(I_1 - I_3)}{2 I_2 - I_1 - I_3}\right) \tag{10}$$

$$k(x, y) = \operatorname{round}\!\left[\frac{(f / f_r)\,\phi_r(x, y) - \phi(x, y)}{2\pi}\right] \tag{11}$$

$$\Phi(x, y) = \phi(x, y) + 2\pi k(x, y) \tag{12}$$
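The phase-recovery chain of Eqs. (7)–(12) is compact enough to state directly in code. The following is a minimal sketch, assuming the three captured fringe images are floating-point NumPy arrays of equal shape; np.arctan2 replaces the plain arctangent of Eq. (10) so the quadrant is resolved over the full wrapped range.

    import numpy as np

    def wrapped_phase(I1, I2, I3):
        """Three-step phase retrieval, Eq. (10); result is wrapped to (-pi, pi]."""
        return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)

    def absolute_phase(phi, phi_r, f, f_r):
        """Temporal unwrapping against a low-frequency reference, Eqs. (11)-(12).
        phi   : wrapped phase of the working fringe with frequency f.
        phi_r : continuous phase of the reference fringe with frequency f_r
                (a single-period pattern needs no unwrapping of its own)."""
        k = np.round(((f / f_r) * phi_r - phi) / (2.0 * np.pi))  # fringe order, Eq. (11)
        return phi + 2.0 * np.pi * k                             # absolute phase, Eq. (12)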
$$\begin{cases} u = a_1 \Phi_u + b_1 \Phi_v + c_1 \\ v = a_2 \Phi_u + b_2 \Phi_v + c_2 \end{cases} \tag{13}$$

$$\begin{bmatrix} X \\ Y \end{bmatrix} = \frac{q P}{2\pi} \begin{bmatrix} \Phi_u \\ \Phi_v \end{bmatrix} \tag{14}$$
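Equations (13)–(14) turn absolute phase into feature coordinates: Eq. (13) is a local affine model relating the two phase maps to pixel coordinates, and Eq. (14) scales a phase pair into metric target coordinates. In the sketch below, fitting Eq. (13) by least squares over a small window to locate the image point at a prescribed phase pair is one plausible reading, not necessarily the exact procedure of the paper, and q and P are taken as the screen pixel pitch and the fringe period in screen pixels.

    import numpy as np

    def image_point_at_phase(pix_u, pix_v, Phi_u, Phi_v, phi_u0, phi_v0):
        """Fit the affine model of Eq. (13) over a small window and evaluate it at
        the target phase pair to obtain a sub-pixel image point (assumed usage)."""
        A = np.column_stack([Phi_u.ravel(), Phi_v.ravel(), np.ones(Phi_u.size)])
        cu, *_ = np.linalg.lstsq(A, pix_u.ravel(), rcond=None)  # a1, b1, c1
        cv, *_ = np.linalg.lstsq(A, pix_v.ravel(), rcond=None)  # a2, b2, c2
        query = np.array([phi_u0, phi_v0, 1.0])
        return float(query @ cu), float(query @ cv)

    def world_xy_from_phase(phi_u0, phi_v0, pixel_pitch, period):
        """Metric target coordinates from a phase pair, Eq. (14)."""
        scale = pixel_pitch * period / (2.0 * np.pi)
        return scale * phi_u0, scale * phi_v0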
$$K_l = K_r = \begin{bmatrix} 800 & 0 & 320 \\ 0 & 800 & 240 \\ 0 & 0 & 1 \end{bmatrix}, \quad R_{lr} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad t_{lr} = \begin{bmatrix} 500 \\ 0 \\ 0 \end{bmatrix} \tag{15}$$
