
Improved separated-parameter calibration method for binocular vision measurements with a large field of view

Open Access

Abstract

Large field-of-view (FOV) calibration is indispensable for ensuring the accuracy of vision measurement systems for large aviation components. We propose an improved separated-parameter calibration method for large-FOV binocular vision measurements with high flexibility and accuracy. First, the camera parameters are separately calibrated according to the sub-area features of the image. Subsequently, a stereoscopic calibration object is devised based on the spatial-calibration accuracy. The mean error of the proposed method is experimentally obtained as 0.13 mm for a FOV of 2.0 m × 1.5 m. Its feasibility and effectiveness for field measurement are validated by workshop calibration.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

The quality of connection and assembly of large force-bearing composite components governs the characteristics of airplanes [1]. To improve the assembly quality, it is important to monitor the assembly conditions, which requires the reconstruction of the surfaces of the assembled components. Owing to its advantages such as non-contact operation, high measurement accuracy, and good stability, the measurement method based on binocular vision has been extensively applied to inspect the dimensions of large aviation components [2–4].

The high-accuracy calibration of binocular cameras significantly impacts the measurement results [5]. Conventional calibration methods aim to establish the correspondence between two-dimensional (2D) information and standard three-dimensional (3D) information by capturing images of a calibration object at different angles. The size of the calibration object should be approximately equal to the field of view (FOV) to ensure calibration accuracy [6,7]. For example, Zhang’s calibration method [7], which is widely used to accurately calibrate vision systems, acquires images of a 2D calibration object at 20-30 different positions for a FOV of approximately 170 mm × 250 mm. However, for a measurement space exceeding 1.5 m, large calibration objects with high accuracy are required, which are expensive to manufacture and difficult to operate in a workshop. Force-bearing aviation components made of composite materials exhibit features such as large geometrical size and wide variations in the depth of field (DOF). To measure the geometrical parameters of force-bearing components accurately and efficiently, it is important to develop an accurate calibration method for binocular-vision measurement systems in industrial fields.

Several studies have focused on calibration methods for a large FOV [8–10]. M. Brückner et al. proposed a calibration method for multiple cameras that utilized a robot arm and separated the intrinsic and extrinsic parameters [11]. The robot arm was employed to locate the spatial positions and poses of the cameras. This calibration method eliminated the dependence on calibration objects. However, the FOV was limited by the range of the robot arm, and the calibration accuracy was poor owing to the low positioning accuracy of the robot arm. Roviramás et al. designed a calibration object with uniformly distributed features based on Zhang’s calibration, and the baseline of the binocular cameras was utilized to optimize the calibration results [12]. This method covered a large FOV; however, placing and operating the designed calibration object were very difficult. Abedi et al. presented a method for the group geometrical calibration of multi-camera imaging systems [13]. A pyramid with small triangular patterns of opposite colors was designed, and image rectification was performed based on an ideal circle line. This method is useful for circular multi-camera systems, but it cannot be applied to large-field calibration for industrial applications. Wei et al. presented a flexible calibration method for binocular vision sensors using a planar target with several parallel lines [14]. The structural parameters of the binocular vision sensors were estimated according to the vanishing-feature constraints and spacing constraints of the parallel lines. This method can be used for measurements with a FOV of 300 mm × 300 mm. Yang et al. proposed a calibration method that utilized a coordinate measuring machine (CMM) to drive a shining target point to build virtual stereo targets [15], which covered a large FOV of 1500 mm × 700 mm. However, it is difficult to apply this method in industrial fields owing to the use of the CMM as the locating tool. Jia et al. proposed an improved camera calibration method based on perpendicularity compensation for binocular stereo vision systems [16]. The accuracy of this calibration method approached 99.91% in a large FOV. Because a large-sized controlling platform is implemented in this method, it is currently difficult to apply in industrial fields.

The existing large-FOV calibration methods are primarily based on two strategies. The first strategy is to simplify the structure of the calibration objects. The overall size of the calibration object must be large to cover the FOV; therefore, it is inconvenient to move the calibration object in a workshop. The second strategy calibrates the vision measurement system according to spatial features such as parallel lines. However, the calibration accuracy is limited. For the high-accuracy measurement of large aviation parts, measurement convenience and accuracy should be simultaneously guaranteed. To this end, we propose an improved separated-parameter calibration method for binocular vision measurement with a large FOV to facilitate the measurement of large components. According to the imaging mechanism, image distortion differs at different spatial positions. The camera parameters are therefore separately calibrated by using the sub-area features of images. Meanwhile, to accurately determine the initial values of the camera parameters, the measurement accuracy in the imaging space is examined to support the design of the calibration object.

The rest of the paper is organized as follows. Section 2 explains the measurement principle. Section 3 describes the proposed improved separated-parameter calibration method. Section 4 presents the design of the stereoscopic calibration object. Section 5 presents the laboratory and field experiments that verify the effectiveness of the proposed method. Section 6 concludes this paper.

2. Measurement principle

A schematic of the binocular stereo vision system is shown in Fig. 1. The measured point P is captured synchronously by the binocular cameras and reconstructed based on triangulation. The transformation of measured points from 2D information to 3D information follows the perspective projection model, as expressed in Eq. (1) [17]. The world coordinate system ${O_w}{X_w}{Y_w}{Z_w}$ represents the actual 3D coordinates of the measured object in space. The camera coordinate system ${O_c}{X_c}{Y_c}{Z_c}$ indicates the inherent 3D coordinates in the camera. The image points are represented in the physical image coordinate system ${O_o}xy$ and the corresponding 2D pixel coordinate system ${O_p}uv$ on the 2D image.

$$s\left[ {\begin{array}{c} u\\ v\\ 1 \end{array}} \right] = \underbrace{\left[ {\begin{array}{cccc} {{\alpha_x}}&0&{{u_0}}&0\\ 0&{{\alpha_y}}&{{v_0}}&0\\ 0&0&1&0 \end{array}} \right]}_{{{\textbf M}_1}}\underbrace{\left[ {\begin{array}{cc} {\textbf R}&{\textbf T}\\ {{{\textbf 0}^T}}&1 \end{array}} \right]}_{{{\textbf M}_2}}\left[ {\begin{array}{c} {{X_w}}\\ {{Y_w}}\\ {{Z_w}}\\ 1 \end{array}} \right] = {{\textbf M}_1}{{\textbf M}_2}\left[ {\begin{array}{c} {{X_w}}\\ {{Y_w}}\\ {{Z_w}}\\ 1 \end{array}} \right] = {\textbf M}\left[ {\begin{array}{c} {{X_w}}\\ {{Y_w}}\\ {{Z_w}}\\ 1 \end{array}} \right],$$
where ${\alpha _x}$ and ${\alpha _y}$ represent the equivalent focal lengths in the x and y directions, respectively. ${u_0}$ and ${v_0}$ represent the pixel coordinates of the principal point ${O_o}$. ${{\textbf M}_{\textbf 1}}$ and ${{\textbf M}_{\textbf 2}}$ are the intrinsic and extrinsic parameter matrices of the camera, respectively. s is an unknown scale factor, and ${\textbf M}$ is defined as the projection matrix.

Fig. 1. Schematic of binocular stereo vision system.
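To make the projection model in Eq. (1) concrete, the following Python sketch builds the intrinsic matrix ${{\textbf M}_1}$ and the extrinsic matrix ${{\textbf M}_2}$ and projects a world point to pixel coordinates. It is a minimal illustration: all numeric values (focal lengths, principal point, pose) are hypothetical placeholders, not the calibration results reported in this paper.

```python
import numpy as np

# Hypothetical intrinsic parameters (pixels); not this paper's results.
alpha_x, alpha_y = 3600.0, 3600.0     # equivalent focal lengths
u0, v0 = 2048.0, 1536.0               # principal point

M1 = np.array([[alpha_x, 0.0, u0, 0.0],
               [0.0, alpha_y, v0, 0.0],
               [0.0, 0.0, 1.0, 0.0]])              # intrinsic matrix M1 (3x4)

R = np.eye(3)                                      # hypothetical rotation R
T = np.array([[0.0], [0.0], [2500.0]])             # hypothetical translation T (mm)
M2 = np.vstack([np.hstack([R, T]), [0, 0, 0, 1]])  # extrinsic matrix M2 (4x4)

Pw = np.array([100.0, 50.0, 0.0, 1.0])  # homogeneous world point (X_w, Y_w, Z_w, 1)
suv = M1 @ M2 @ Pw                      # [s*u, s*v, s] as in Eq. (1)
u, v = suv[:2] / suv[2]                 # divide out the scale factor s
print(u, v)
```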

Owing to the machining error and the optical-system assembly error between the camera and lens, there are different types of errors between the actual image point and the ideal image point, which are nonlinear [18]. The main types of camera distortion are radial distortion, centrifugal distortion, and thin prism distortion. For the measurement of aviation parts, the FOV is large, and the effect of thin prism and centrifugal distortions is much smaller than that of radial distortion [19]. Thus, only radial distortion is considered in this study:

$$\left\{ {\begin{array}{c} {{\delta^r}_x({x,y} )= x^{\prime}({{k_1}{r^2} + {k_2}{r^4} + {k_3}{r^6} + \cdots } )}\\ {{\delta^r}_y({x,y} )= y^{\prime}({{k_1}{r^2} + {k_2}{r^4} + {k_3}{r^6} + \cdots } )} \end{array}} \right.,$$
where ${r^2} = {x^{\prime 2}} + {y^{\prime 2}}$. ${k_1}$, ${k_2}$, and ${k_3}$ represent the radial distortion coefficients of different orders. ${\delta ^r}_x({x,y} )$ and ${\delta ^r}_y({x,y} )$ are the radial distortions in the x and y directions, respectively.
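As a brief illustration of Eq. (2), the sketch below evaluates the radial distortion offsets at an ideal image point; the coefficient values are hypothetical and chosen only to show that the offset grows with distance from the principal point.

```python
import numpy as np

def radial_distortion(xp, yp, k1=-1.2e-9, k2=0.0, k3=0.0):
    """Radial distortion offsets of Eq. (2) at an ideal image point (x', y')."""
    r2 = xp**2 + yp**2                          # r^2 = x'^2 + y'^2
    factor = k1 * r2 + k2 * r2**2 + k3 * r2**3  # k1*r^2 + k2*r^4 + k3*r^6
    return xp * factor, yp * factor             # (delta_x, delta_y)

# The offset is small near the principal point and grows toward the corners.
print(radial_distortion(100.0, 100.0))    # near the image center
print(radial_distortion(1800.0, 1400.0))  # near an image corner
```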

3. Improved separated-parameter calibration with a large FOV

Radial distortion increases gradually outward from the center of the image. Thus, the maximum distortion is observed at the four corners of the image, and there is minimal distortion near the principal point. For the field measurement of large-sized parts, it is difficult to manufacture large standard objects with high accuracy. Therefore, to ensure the accuracy of workshop measurements, the image-acquisition area is partitioned according to the imaging characteristics of the distortion distribution. The camera parameter matrix and distortion coefficients are separately calculated based on the local characteristics of the captured image to realize high-accuracy calibration in a workshop with a large FOV.

In this study, an improved separated-parameter calibration method is proposed for large-FOV measurement. A schematic of the calibration system is shown in Fig. 2. The initial value of the camera parameter matrix is quickly obtained by using a small 3D calibration target placed in the central area of the FOV, where the distortion is small. In addition, 2D calibration targets of collinear points are set at the four corners of the image, where the distortion is maximal, and the distortion coefficient is calculated based on the collinearity constraint. Finally, taking the minimum reprojection error as the objective function, the parameter matrix and distortion coefficient are optimized to calibrate the binocular cameras. With this method, the camera parameters are separately calibrated based on the local characteristics of the large FOV. This avoids the need to manufacture high-accuracy, large-sized calibration targets and increases the flexibility and reliability of the calibration method in large-field measurements.

Fig. 2. Schematic of the calibration system for large FOV.

3.1 Solution for initial intrinsic and extrinsic parameters

The initial camera parameters are calibrated by using the 3D calibration object shown in Fig. 3. The world coordinates of the center of a spatial calibration target point are defined as $({{X_w},{Y_w},{Z_w}} )$, which can be acquired by using the high-accuracy 3D calibration object. The pixel coordinates of the center of the corresponding image point are defined as $({u,v} )$. The initial parameter matrix of the camera is solved using a single image. This process is described as follows.

Fig. 3. Layout of the calibration object.

The perspective projection model containing the intrinsic and extrinsic parameter matrices is shown in Eq. (1). The specific solution is obtained as follows.

The projection matrix is ${\textbf M} = \left[ {\begin{array}{cccc} {{m_{11}}}&{{m_{12}}}&{{m_{13}}}&{{m_{14}}}\\ {{m_{21}}}&{{m_{22}}}&{{m_{23}}}&{{m_{24}}}\\ {{m_{31}}}&{{m_{32}}}&{{m_{33}}}&{{m_{34}}} \end{array}} \right]$, $s = {z_i}$, and $({{x_i},{y_i},{z_i}} )$ are the coordinates of the point in the camera coordinate system. Equation (3) is constructed according to the coordinate transformation relationship:

$$\left[ {\begin{array}{c} {s{u_i}}\\ {s{v_i}}\\ s \end{array}} \right] = {\textbf M}\left[ {\begin{array}{c} {{X_{wi}}}\\ {{Y_{wi}}}\\ {{Z_{wi}}}\\ 1 \end{array}} \right] = \left[ {\begin{array}{c} {{m_{11}}{X_{wi}} + {m_{12}}{Y_{wi}} + {m_{13}}{Z_{wi}} + {m_{14}}}\\ {{m_{21}}{X_{wi}} + {m_{22}}{Y_{wi}} + {m_{23}}{Z_{wi}} + {m_{24}}}\\ {{m_{31}}{X_{wi}} + {m_{32}}{Y_{wi}} + {m_{33}}{Z_{wi}} + {m_{34}}} \end{array}} \right],$$
Here, $({{X_{wi}},{Y_{wi}},{Z_{wi}}} )$ represent the world coordinates of the center of the ith spatial target point, and $({{u_i},{v_i}} )$ are the pixel coordinates of the corresponding image center. When the matrix ${\textbf M}$ is multiplied by any non-zero coefficient, the relationship between the world and image coordinates of the space points is not affected. This relation can be expressed as follows:
$${\textbf M} = \left[ {\begin{array}{cccc} {{m_{11}}}&{{m_{12}}}&{{m_{13}}}&{{m_{14}}}\\ {{m_{21}}}&{{m_{22}}}&{{m_{23}}}&{{m_{24}}}\\ {{m_{31}}}&{{m_{32}}}&{{m_{33}}}&{{m_{34}}} \end{array}} \right] = {m_{34}}{\textbf M^{\prime}} = {m_{34}}\left[ {\begin{array}{cccc} {{{m^{\prime}}_{11}}}&{{{m^{\prime}}_{12}}}&{{{m^{\prime}}_{13}}}&{{{m^{\prime}}_{14}}}\\ {{{m^{\prime}}_{21}}}&{{{m^{\prime}}_{22}}}&{{{m^{\prime}}_{23}}}&{{{m^{\prime}}_{24}}}\\ {{{m^{\prime}}_{31}}}&{{{m^{\prime}}_{32}}}&{{{m^{\prime}}_{33}}}&1 \end{array}} \right].$$
Thus, Eq. (3) can be reduced to
$${\textbf Am^{\prime}} = {\textbf U^{\prime}}\textrm{ = }{[{{u_1},{v_1},{u_2},{v_2}, \cdot{\cdot} \cdot ,{u_{n - 1}},{v_{n - 1}},{u_n},{v_n}} ]^T},$$
where ${\textbf m^{\prime}}\textrm{ = }{[{{{m^{\prime}}_{11}},{{m^{\prime}}_{12}},{{m^{\prime}}_{13}},{{m^{\prime}}_{14}},{{m^{\prime}}_{21}},{{m^{\prime}}_{22}},{{m^{\prime}}_{23}},{{m^{\prime}}_{24}},{{m^{\prime}}_{31}},{{m^{\prime}}_{32}},{{m^{\prime}}_{33}}} ]^{^T}}$ contains eleven unknown coefficients, and ${\textbf A}$ is the coefficient matrix. The 2D pixel coordinates $({{u_i},{v_i}} )$ and the 3D world coordinates $({{X_{wi}},{Y_{wi}},{Z_{wi}}} )$ are known quantities. Equation (5) can be solved for $n \ge 6$. As the equations represent an over-determined system, the least-squares method is adopted:
$${\textbf m^{\prime} = }{({{{\textbf A}^T}{\textbf A}} )^{ - 1}}{{\textbf A}^T}{\textbf U^{\prime}}.$$
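A minimal sketch of this direct linear solution is given below, under the assumption that at least six suitably distributed target points are available; `solve_projection` is a hypothetical helper name, and the normal-equation solve of Eq. (6) is delegated to a numerically safer least-squares routine.

```python
import numpy as np

def solve_projection(world_pts, pixel_pts):
    """Solve the normalized projection matrix M' of Eq. (4) from n >= 6 points.

    world_pts: (n, 3) world coordinates; pixel_pts: (n, 2) pixel coordinates.
    Each point contributes two rows of the coefficient matrix A in Eq. (5).
    """
    rows, rhs = [], []
    for (X, Y, Z), (u, v) in zip(world_pts, pixel_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
        rhs.append(u)
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
        rhs.append(v)
    A, U = np.asarray(rows, float), np.asarray(rhs, float)
    m, *_ = np.linalg.lstsq(A, U, rcond=None)  # least-squares m' as in Eq. (6)
    return np.append(m, 1.0).reshape(3, 4)     # M' with m'_34 normalized to 1
```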
Subsequently, the projection matrix ${{\textbf M}_{{\textbf 3} \times {\textbf 4}}}$ can be decomposed into the camera parameter matrices ${{\textbf M}_{\textbf 1}}$ and ${{\textbf M}_{\textbf 2}}$.
$${m_{34}}\left[ {\begin{array}{cc} {{\textbf m}_1^T}&{{{m^{\prime}}_{14}}}\\ {{\textbf m}_2^T}&{{{m^{\prime}}_{24}}}\\ {{\textbf m}_3^T}&1 \end{array}} \right] = \left[ {\begin{array}{cccc} {{\alpha_x}}&0&{{u_0}}&0\\ 0&{{\alpha_y}}&{{v_0}}&0\\ 0&0&1&0 \end{array}} \right]\left[ {\begin{array}{cc} {{\textbf r}_1^T}&{{t_1}}\\ {{\textbf r}_2^T}&{{t_2}}\\ {{\textbf r}_3^T}&{{t_3}}\\ {{{\textbf 0}^T}}&1 \end{array}} \right] = \left[ {\begin{array}{cc} {{\alpha_x}{\textbf r}_1^T + {u_0}{\textbf r}_3^T}&{{\alpha_x}{t_1} + {u_0}{t_3}}\\ {{\alpha_y}{\textbf r}_2^T + {v_0}{\textbf r}_3^T}&{{\alpha_y}{t_2} + {v_0}{t_3}}\\ {{\textbf r}_3^T}&{{t_3}} \end{array}} \right].$$
Here, ${\textbf m}_{\textbf i}^{\textbf T}({i = 1,2,3} )$ is a row vector containing ${m^{\prime}_{ij}}({j = 1,2,3} )$, and ${\textbf r}_{\textbf i}^{\textbf T}$ is the $i$th row of the rotation matrix ${\textbf R}$. ${t_1}$, ${t_2}$, and ${t_3}$ are the three components of the translation vector ${\textbf T}$. It can be deduced from Eq. (7) that ${m_{34}}{{\textbf m}_{\textbf 3}} = {{\textbf r}_{\textbf 3}}$. Because the rotation matrix ${\textbf R}$ is an orthogonal matrix, each of its row vectors is a unit orthogonal vector, i.e.,
$${\textbf r}_i^T{{\textbf r}_j} = \left\{ {\begin{array}{c} {0\begin{array}{cc} ,&{i \ne j} \end{array}}\\ {1\begin{array}{cc} ,&{i = j} \end{array}} \end{array}} \right..$$
Thus, $|{{{\textbf r}_{\textbf 3}}} |= 1$, which implies that ${m_{34}}|{{\textbf m}_{\textbf 3}}|= 1$, i.e., ${m_{34}} = \frac{1}{{|{{\textbf m}_{\textbf 3}}|}}$. Finally, the twelve elements of the camera parameter matrix can be obtained. According to Eq. (9), the intrinsic parameter matrix ${{\textbf M}_{\textbf 1}}$ and the extrinsic parameter matrix ${{\textbf M}_{\textbf 2}}$ are separated and considered as the initial values for the overall optimization.
$$\begin{aligned}{\textbf M} &= {{\textbf M}_1}{{\textbf M}_2}\\ & = \left[ {\begin{array}{cccc} {m_{34}^2|{{{\textbf m}_1} \times {{\textbf m}_3}} |}&0&{m_{34}^2{\textbf m}_1^T{{\textbf m}_3}}&0\\ 0&{m_{34}^2|{{{\textbf m}_2} \times {{\textbf m}_3}} |}&{m_{34}^2{\textbf m}_2^T{{\textbf m}_3}}&0\\ 0&0&1&0 \end{array}} \right]\left[ {\begin{array}{cc} {\frac{{{m_{34}}}}{{{\alpha_x}}}({{{\textbf m}_1} - {u_0}{{\textbf m}_3}} )}&{\frac{{{m_{34}}}}{{{\alpha_x}}}({{{m^{\prime}}_{14}} - {u_0}} )}\\ {\frac{{{m_{34}}}}{{{\alpha_y}}}({{{\textbf m}_2} - {v_0}{{\textbf m}_3}} )}&{\frac{{{m_{34}}}}{{{\alpha_y}}}({{{m^{\prime}}_{24}} - {v_0}} )}\\ {{m_{34}}{{\textbf m}_3}}&{{m_{34}}}\\ {{{\textbf 0}^T}}&1 \end{array}} \right].\end{aligned}$$
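The parameter separation of Eqs. (7)-(9) can be sketched as follows; `decompose_projection` is a hypothetical helper that takes the normalized matrix M' from the previous sketch and recovers the intrinsic and extrinsic parameters used as initial values.

```python
import numpy as np

def decompose_projection(Mp):
    """Separate M' (3x4, m'_34 = 1) into intrinsics and extrinsics, cf. Eq. (9)."""
    m1, m2, m3 = Mp[0, :3], Mp[1, :3], Mp[2, :3]
    m34 = 1.0 / np.linalg.norm(m3)                  # from m34 * |m3| = |r3| = 1
    u0 = m34**2 * np.dot(m1, m3)                    # principal point
    v0 = m34**2 * np.dot(m2, m3)
    ax = m34**2 * np.linalg.norm(np.cross(m1, m3))  # equivalent focal lengths
    ay = m34**2 * np.linalg.norm(np.cross(m2, m3))
    R = np.vstack([m34 * (m1 - u0 * m3) / ax,       # rows r1, r2, r3 of R
                   m34 * (m2 - v0 * m3) / ay,
                   m34 * m3])
    T = np.array([m34 * (Mp[0, 3] - u0) / ax,       # components t1, t2, t3
                  m34 * (Mp[1, 3] - v0) / ay,
                  m34])
    return (ax, ay, u0, v0), (R, T)
```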

3.2 Initial solution of distortion coefficient

For large-FOV measurements, the imaging distortion of the camera is not negligible. According to the pattern of radial distortion, the most severely distorted regions are the four corners of the image. To improve the accuracy and reliability of the distortion coefficients, a large-FOV distortion-coefficient solution method that applies collinearity constraints in the four corner areas is proposed. As shown in Fig. 4, the image is divided into five regions: A, B, C, D, and E, where the radial distortion of region E is the smallest. The initial values of the camera calibration parameters can be calculated by using target points in this region. The radial distortion of the remaining four regions near the edges of the image is relatively large. Four sets of collinear constraint points are arranged in the four corner regions, with four collinear points selected in each region.

Fig. 4. Schematic for obtaining the solution of radial distortion parameter.

The linear cross ratio is defined as ${C_R}$, which can be expressed by Eq. (10) and Eq. (11). The world coordinates of the four collinear points are given as $({{X_a},{Y_a},{Z_a}} )$, $({{X_b},{Y_b},{Z_b}} )$, $({{X_c},{Y_c},{Z_c}} )$ and $({{X_d},{Y_d},{Z_d}} )$. The corresponding image coordinates are $({{x_a},{y_a}} )$, $({{x_b},{y_b}} )$, $({{x_c},{y_c}} )$ and $({{x_d},{y_d}} )$, respectively.

$$\frac{{({{X_a} - {X_c}} )({{X_b} - {X_d}} )}}{{({{X_b} - {X_c}} )({{X_a} - {X_d}} )}}\textrm{ = }\frac{{({{Y_a} - {Y_c}} )({{Y_b} - {Y_d}} )}}{{({{Y_b} - {Y_c}} )({{Y_a} - {Y_d}} )}}\textrm{ = }{C_R},$$
$$\frac{{({{x_a} - {x_c}} )({{x_b} - {x_d}} )}}{{({{x_b} - {x_c}} )({{x_a} - {x_d}} )}}\textrm{ = }\frac{{({{y_a} - {y_c}} )({{y_b} - {y_d}} )}}{{({{y_b} - {y_c}} )({{y_a} - {y_d}} )}} = {C_R}.$$
Owing to the distortion, the actual imaging coordinates of the collinear points are given as $({{{x^{\prime}}_a},{{y^{\prime}}_a}} )$, $({{{x^{\prime}}_b},{{y^{\prime}}_b}} )$, $({{{x^{\prime}}_c},{{y^{\prime}}_c}} )$ and $({{{x^{\prime}}_d},{{y^{\prime}}_d}} )$. In order to simplify the calculation, the first-order radial distortion is considered as the approximate reference value of the image distortion in this study, which is expressed as follows:
$$\left\{ {\begin{array}{c} {x = x^{\prime} + {\delta^r}_x({x,y} )= x^{\prime}({1 + {k_x}{r^2}} )}\\ {y = y^{\prime} + {\delta^r}_y({x,y} )= y^{\prime}({1 + {k_y}{r^2}} )} \end{array}} \right.,$$
Here, ${r^2} = {x^{\prime 2}} + {y^{\prime 2}}$. ${k_x}$ and ${k_y}$ represent the distortion coefficients in the x and y directions, respectively. ${\delta ^r}_x({x,y} )$ and ${\delta ^r}_y({x,y} )$ are the first-order radial distortions in the x and y directions, respectively. The first-order radial distortion k is expressed as $k = \sqrt {{k_x}^2 + {k_y}^2}$, and the sign of k is the same as that of ${k_x}$ and ${k_y}$. Considering the first-order radial distortion and the principle of cross-ratio invariance of linear projection [20], the following equations for the distortion can be constructed:
$$\left\{ {\begin{array}{c} {f({{k_x}} )= ({{A_x} - {C_R}{B_x}} )k_x^2 + ({{C_x} - {C_R}{D_x}} ){k_x} + ({{E_x} - {C_R}{F_x}} )= 0}\\ {f({{k_y}} )= ({{A_y} - {C_R}{B_y}} )k_y^2 + ({{C_y} - {C_R}{D_y}} ){k_y} + ({{E_y} - {C_R}{F_y}} )= 0} \end{array}} \right.,$$
Taking $f({{k_x}} )$ as an example, the coefficients can be deduced as follows:
$$\left\{ {\begin{array}{@{}c@{}} {{A_x} = {{x^{\prime}}_a}{{x^{\prime}}_b}{r_a}^2{r_b}^2 - {{x^{\prime}}_a}{{x^{\prime}}_d}{r_a}^2{r_d}^2 - {{x^{\prime}}_c}{{x^{\prime}}_b}{r_c}^2{r_b}^2 + {{x^{\prime}}_c}{{x^{\prime}}_d}{r_c}^2{r_d}^2}\\ {{B_x} = {{x^{\prime}}_a}{{x^{\prime}}_b}{r_a}^2{r_b}^2 - {{x^{\prime}}_b}{{x^{\prime}}_d}{r_b}^2{r_d}^2 - {{x^{\prime}}_c}{{x^{\prime}}_a}{r_c}^2{r_a}^2 + {{x^{\prime}}_c}{{x^{\prime}}_d}{r_c}^2{r_d}^2}\\ {{C_x} = {{x^{\prime}}_a}{{x^{\prime}}_b}{r_b}^2 - {{x^{\prime}}_a}{{x^{\prime}}_d}{r_d}^2 + {{x^{\prime}}_a}{{x^{\prime}}_b}{r_a}^2 - {{x^{\prime}}_a}{{x^{\prime}}_d}{r_a}^2 - {{x^{\prime}}_c}{{x^{\prime}}_b}{r_b}^2 + {{x^{\prime}}_c}{{x^{\prime}}_d}{r_d}^2 - {{x^{\prime}}_c}{{x^{\prime}}_b}{r_c}^2 + {{x^{\prime}}_c}{{x^{\prime}}_d}{r_c}^2}\\ {{D_x} = {{x^{\prime}}_a}{{x^{\prime}}_b}{r_a}^2 - {{x^{\prime}}_b}{{x^{\prime}}_d}{r_d}^2 + {{x^{\prime}}_a}{{x^{\prime}}_b}{r_b}^2 - {{x^{\prime}}_b}{{x^{\prime}}_d}{r_b}^2 - {{x^{\prime}}_c}{{x^{\prime}}_a}{r_a}^2 + {{x^{\prime}}_c}{{x^{\prime}}_d}{r_d}^2 - {{x^{\prime}}_c}{{x^{\prime}}_a}{r_c}^2 + {{x^{\prime}}_c}{{x^{\prime}}_d}{r_c}^2}\\ {{E_x} = {{x^{\prime}}_a}{{x^{\prime}}_b} - {{x^{\prime}}_a}{{x^{\prime}}_d} - {{x^{\prime}}_c}{{x^{\prime}}_b} + {{x^{\prime}}_c}{{x^{\prime}}_d}}\\ {{F_x} = {{x^{\prime}}_a}{{x^{\prime}}_b} - {{x^{\prime}}_b}{{x^{\prime}}_d} - {{x^{\prime}}_c}{{x^{\prime}}_a} + {{x^{\prime}}_c}{{x^{\prime}}_d}} \end{array}} \right.,$$
where ${r_i}^2 = {x^{\prime}_i}^2 + {y^{\prime}_i}^2$ ($i = a,b,c,d$), and ${C_R}$ is calculated by Eq. (10). Similarly, the coefficients in $f({{k_y}} )$ can be obtained. Thus, the radial distortion coefficients of the four corner regions can be obtained. Their mean value is considered as the initial value of the radial distortion in the large-FOV calibration, as expressed by Eq. (15).
$${k_0} = \frac{{\sqrt {{k_{ax}}^2 + {k_{ay}}^2} + \sqrt {{k_{bx}}^2 + {k_{by}}^2} + \sqrt {{k_{cx}}^2 + {k_{cy}}^2} + \sqrt {{k_{dx}}^2 + {k_{dy}}^2} }}{4},$$
where ${k_{ax}}$, ${k_{bx}}$, ${k_{cx}}$, and ${k_{dx}}$ are the radial distortion coefficients in the $x$ direction for regions A, B, C, and D of the image, respectively. The distortion coefficient ${k_0}$ and the calibration parameter matrix ${\textbf M}$ serve as the initial values for the overall parameter optimization.
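A sketch of this corner-region solution is given below. Rather than transcribing the closed-form quadratic coefficients of Eq. (14), it finds the root of the cross-ratio constraint numerically, which is equivalent for the first-order model of Eq. (12); the root bracket is an assumption about the magnitude of the distortion, and `solve_kx` is a hypothetical helper name.

```python
import numpy as np
from scipy.optimize import brentq

def cross_ratio(p):
    a, b, c, d = p
    return (a - c) * (b - d) / ((b - c) * (a - d))

def solve_kx(Xw, xd, yd, bracket=(-1e-8, 1e-8)):
    """First-order k_x from four collinear points, cf. Eqs. (10)-(13).

    Xw: world x-coordinates of the four collinear points;
    xd, yd: their distorted image coordinates (arrays of length 4).
    The bracket assumes a small distortion magnitude and must enclose the root.
    """
    CR = cross_ratio(Xw)            # Eq. (10): cross ratio in the world frame
    r2 = xd**2 + yd**2
    def f(k):                       # Eq. (11) must hold after undistortion
        return cross_ratio(xd * (1.0 + k * r2)) - CR
    return brentq(f, *bracket)      # root of f(k_x) = 0, cf. Eq. (13)

# k_y follows identically from the y-coordinates; averaging the corner
# regions A-D as in Eq. (15) then gives the initial coefficient k_0.
```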

3.3 Parameter optimization

To further improve the calibration accuracy, the parameter matrix and distortion coefficients should be optimized. Based on the principle of the minimization of reprojection error, the nonlinear global optimization objective function is established as follows:

$${\boldsymbol{min}} \;\;\mathop \sum \limits_{i = 1}^n \mathop \sum \limits_{j = 1}^n {||{{\rho_{ij}} - \hat{\rho }({{{\textbf M}_1}{\textbf ,}{{\textbf M}_2}{\textbf ,W,}{{\textbf k}_0}} )} ||^2},$$
where i and j represent the indices of the target points in the horizontal and vertical directions of the image, respectively. ${\rho _{ij}}$ is the 2D pixel coordinate extracted from the actual image. $\hat{\rho }$ is the theoretical 2D image coordinate obtained using the intrinsic and extrinsic parameters of the camera, and ${\textbf W}$ is the corresponding world coordinate of the 3D point. ${{\textbf k}_{\textbf 0}}$ is the radial distortion coefficient. In this study, the objective function is solved by using the Levenberg-Marquardt (LM) optimization method, and the camera calibration parameters and distortion coefficient are obtained as the optimal solution of the calibration.
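A minimal sketch of this refinement step, assuming the extrinsics are parameterized as a rotation vector plus translation and that a single first-order coefficient ${k_0}$ is applied in normalized image coordinates (a simplification of Eqs. (1) and (12)); the parameter layout and function names are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(params, W):
    """Project world points W (n x 3); params = [ax, ay, u0, v0, k0, rvec(3), t(3)]."""
    ax, ay, u0, v0, k0 = params[:5]
    R = Rotation.from_rotvec(params[5:8]).as_matrix()
    t = params[8:11]
    Pc = W @ R.T + t                              # world -> camera coordinates
    x, y = Pc[:, 0] / Pc[:, 2], Pc[:, 1] / Pc[:, 2]
    r2 = x**2 + y**2
    x, y = x * (1 + k0 * r2), y * (1 + k0 * r2)   # first-order radial distortion
    return np.column_stack([ax * x + u0, ay * y + v0])

def refine(params0, W, pixels):
    """Minimize the reprojection error of Eq. (16) with Levenberg-Marquardt."""
    residuals = lambda p: (project(p, W) - pixels).ravel()
    return least_squares(residuals, params0, method='lm').x
```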

4. Design of stereoscopic calibration object

To ensure the accuracy and stability of the calibration, the 3D target points should exhibit high spatial position accuracy and repetition accuracy. Meanwhile, the target points should not be occluded, so that the binocular cameras can capture all effective target points. In addition, to improve the calibration speed and accuracy, the target points captured by the left and right cameras should exhibit stable features so that they can be matched quickly and without confusion under different image-acquisition angles. To fulfill these requirements, a stereoscopic calibration object was designed in this study. As the number and spatial distribution of the target points govern the reliability of the calibration object, the number of target points, DOF, and spatial distribution on the calibration target were determined by performing accuracy tests. Finally, the calibration object with 3D target points was designed based on the results of this analysis.

4.1 Analysis of structural parameter for stereoscopic calibration object

To determine the number and distribution of target points, the influence of these structural parameters on the calibration accuracy is discussed in this section. As shown in Fig. 5(a), a calibration control field is established using a laser tracker (Leica AT960, measurement error < ± (15 µm + 6 µm/m)) in the measurement space. The FOV, DOF, front DOF, and back DOF are 640 mm × 480 mm, 250 mm, 120 mm, and 130 mm, respectively.

Fig. 5. Diagram of experimental system.

The 3D coordinates of the target points are obtained using the laser tracker. To ensure the consistency of coordinate collection, the 2D coordinates of the target points are obtained by using visual target balls (radius: $19.05_{\textrm{ - }0.0127}^{\textrm{ + }0.0000}$ mm, position accuracy of the retro-reflective dot at the center of the sphere: 12.7 µm), which are similar in size to the target balls used by the laser tracker (radius: 19.05 mm ± 2.54 µm, centering of optics: < ±3.05 µm, ball shape: ≤3.05 µm). To verify the effect of the number and distribution of target points on the calibration, the measurement space is divided into three calibration planes: the focal plane, front-depth plane, and back-depth plane. Each calibration plane is arranged into 8 rows and 7 columns. The 3D and 2D coordinates of the target points were obtained as follows: (1) The 2D coordinates of the seven target points in one row were synchronously captured by the binocular vision system. Then, each visual target was replaced with a laser tracker target, and the corresponding 3D coordinates of these seven points were measured by the laser tracker. The relative positions of these targets were determined by seven drift nests, which were evenly distributed in one row and fixed with hot-melt adhesive. The measurement system is shown in Fig. 5. (2) Each row of each plane was then measured separately. The binocular vision system captured twenty-four images to obtain the 2D coordinates of the target points, and the laser tracker performed 168 measurements to obtain their 3D coordinates. Finally, the 168 target points in the three calibration planes constitute a complete calibration field, as shown in Fig. 6(a). Furthermore, the binocular vision system was calibrated by selecting different target points in this measurement space to analyze the effect of the number and distribution of target points on the calibration accuracy. To ensure the reliability of the analysis, the lengths of a one-dimensional (1D) standard ruler with two different nominal lengths (475.0201 mm and 350.0156 mm) were measured at 10 different positions to determine the measurement accuracy. The position distribution of the target ruler is shown in Fig. 6(b). The measurement of the standard ruler was repeated three times. The mean value of the absolute errors of the distance measurements of the standard ruler is defined as the measurement error, which is used for the qualitative analysis of the calibration accuracy.
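The accuracy metric used throughout this analysis is easy to state in code: the mean absolute difference between the reconstructed ruler length and its nominal length over all placements. The sketch below assumes the two ruler endpoints have already been triangulated; the array layout is hypothetical.

```python
import numpy as np

def ruler_error(endpoints, nominal_length):
    """Mean absolute length error of the 1D standard ruler.

    endpoints: (n_placements, 2, 3) triangulated 3D coordinates of the two
    ruler target points for each placement; nominal_length: calibrated length.
    """
    lengths = np.linalg.norm(endpoints[:, 0] - endpoints[:, 1], axis=1)
    return np.mean(np.abs(lengths - nominal_length))
```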

Fig. 6. Schematic of the (a) calibration field and (b) spatial distribution of the target ruler.

Under the condition of a large FOV, the number of target points for calibration, image-depth distribution, and spatial distribution significantly influence the calibration results. We analyze the calibration accuracy by considering these three factors.

  • (1) The number of points: we selected 6, 10, 20, 30, 40, 50, 60, 70, 80, 90, and 100 points to calibrate the cameras for examining the impact of the number of points on the calibration accuracy. Because the calibration method requires at least six pairs of world coordinates of target points and the corresponding pixel coordinates, the minimum number of target points is six. The selected target points should cover the calibration field as much as possible and be evenly distributed within the measurement space. The target points were randomly selected three times to calibrate the binocular system. The absolute measurement errors of the binocular vision system calibrated with different numbers of target points are shown in Fig. 7(a). When the number of target points reaches 20-30, the calibration error significantly decreases, and when it reaches 60, the measurement result becomes stable. Owing to the perturbation of random selection, the calibration error slightly increases when the number reaches 40-50; however, for more than 20 points, the measurement errors are relatively stable. Therefore, to ensure both measurement accuracy and calculation efficiency, it is sufficient to select about 20-30 target points for the initial calibration of the camera parameters.
  • (2) Image-depth distribution: the effect of the DOF on the calibration accuracy was analyzed by using the target points of the individual calibration planes and all target points in the calibration field. Thirty target points on each calibration plane were randomly selected to calibrate the binocular cameras. The measurement errors are shown in Fig. 7(b). The accuracy and stability of the measurement results obtained from the calibration in the focal plane are better than those of the other calibration planes (random planes 1 and 2). The calibration accuracy and stability are highest when the entire field is covered. Therefore, the method involving a 3D calibration reference is more advantageous than that involving a 2D calibration plane.
  • (3) Spatial distribution: according to the radial distortion model, the image distortion varies at different positions of the FOV. The location of the calibration target points in the measured space affects the calibration results. The established calibration field is divided into sub-areas (Zone A, Zone B, Zone C, Zone D, and Zone E), as shown in Fig. 6(a). In each region, 30 target points are selected to calibrate the camera for determining the calibration accuracy with different distributions. The measurement errors are shown in Fig. 7(c). The measurement accuracy of the calibration in the middle of the FOV (Zone E) is higher than that in the four corners of the FOV. Thus, the camera parameters can be preliminarily calculated based on calibration in the middle of the FOV, and the distortion parameters can be assessed by calibration in the four corners of the FOV. Further, Fig. 7(c) verifies the feasibility and rationality of the proposed separation of the camera-parameter and distortion regions.

Fig. 7. Analysis of calibration accuracy in the calibration control field. (a) Error analysis for different number of calibration points; (b) Error analysis for different calibration planes; (c) Error analysis for different distributions.

4.2 Design of the calibration object and coordinate extraction

According to the above qualitative analysis of the calibration accuracy in the measurement space, the structural design of the stereoscopic calibration object should meet the following requirements: 1) the characteristic elements should be distributed in 3D space to ensure the global measurement accuracy; 2) the number of characteristic elements should be near 30, which provides close to the best calibration accuracy; 3) to ensure the accuracy of the initial calibration parameters, the size of the device should be approximately equal to that of the central region of the calibration FOV; 4) for convenience and reliability, the stereoscopic calibration object should exhibit a stable design and be easy to operate.

A suitable feature element is the key to ensuring the coordinate-acquisition accuracy of the calibration object. A silicon carbide ceramic ball has many advantages, such as high accuracy, light weight, high strength, a low coefficient of thermal expansion, and a non-magnetic nature. Further, it can be used in conjunction with metal elements. Therefore, 25 standard silicon carbide ceramic balls were utilized as the feature elements of the stereoscopic calibration object. The centers of these ceramic balls were defined as the target points for camera calibration. According to the structural requirements, the stereoscopic calibration object was composed of standard ceramic balls, adapters, support rods, ceramic-ball bases, and flush bolts, as shown in Fig. 8. To avoid the occlusion of calibration features and ensure adequate spatial calibration information, a 5 × 5 ladder spatial structure was adopted. The size of the calibration field surrounded by the ceramic balls was 320 mm × 360 mm, and the calibration depth was 320 mm. All these specifications met the calibration requirements of the central FOV. The ceramic balls and the supporting rods were connected through detachable adapters, and each adapter and supporting rod were connected by a precision screw, which could be easily assembled and dismantled. This ensured the repetitive positioning accuracy of the device. The supporting rods and the base were connected through countersunk head bolts to ensure the structural stability of the calibration object.

Fig. 8. Schematic of 3D stereoscopic calibration object.

When the stereoscopic calibration object is fixed, the central positions of the 25 ceramic balls are relatively fixed. As shown in Fig. 9(a), the 3D coordinates of each spherical center are fitted by measuring four contact points on the ceramic ball using a coordinate measuring machine (Zeiss Prismo Navigator, measurement error < 2 µm). Thus, the 3D coordinates of the center of each ceramic ball $({{X_w},{Y_w},{Z_w}} )$ are accurately obtained. The image of the calibration object captured by the camera is preprocessed, and the sub-pixel central coordinates of the ceramic balls are obtained using the ellipse fitting method. Thus, the image coordinates $({u,v} )$ of the target points are acquired, as shown in Figs. 9(b) and 9(c).
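A minimal sketch of this 2D extraction step, assuming an OpenCV 4 interface: the image of the calibration object is thresholded, the elliptical projections of the ceramic balls are found as contours, and each contour is fitted with an ellipse whose center gives the sub-pixel target coordinates. The threshold choice and area filter are assumptions that would need tuning to the actual images.

```python
import cv2
import numpy as np

def ball_centers(image_gray, min_area=100.0):
    """Sub-pixel centers (u, v) of the ball projections via ellipse fitting."""
    _, mask = cv2.threshold(image_gray, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    centers = []
    for c in contours:
        # cv2.fitEllipse needs at least 5 points; drop small noise blobs.
        if len(c) >= 5 and cv2.contourArea(c) >= min_area:
            (cx, cy), _axes, _angle = cv2.fitEllipse(c)
            centers.append((cx, cy))
    return np.asarray(centers)
```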

Fig. 9. Coordinate extraction of calibration object. (a) Extraction of 3D world coordinates; (b) Extraction of 2D pixel coordinates from the left image; (c) Extraction of 2D pixel coordinates from the right image.

4.3 Reliability verification of stereoscopic calibration object

For measurements in industrial fields, aviation parts have complex features, and the image-acquisition locations are not fixed. Thus, we established a binocular measurement system to verify the reliability of the designed calibration object. Five images of the calibration object were captured at five random positions in the central region of the field, as shown in Fig. 10. The centers of the target balls of the calibration object were extracted, and the distances of the balls from the center of the central ball were reconstructed and set as the evaluation criterion of the repeatability errors. Twenty-four distances were acquired, as shown in Table 1.

Fig. 10. Schematic of distance estimation and positioning.

Table 1. Reconstruction errors of positioning distances

The experimental results demonstrate that the recognition rate of the centers of the standard balls is 100%. Here, the distances reconstructed at position 1 are set as the reference values. The difference between the distances reconstructed at positions 2, 3, 4, and 5 and the reference distances is defined as the absolute error. The average absolute error of the reconstructed distances is 0.0447 mm, and the average relative error is 0.0323%. The repetitive accuracy is up to 99.97%.

5. Experimental analysis

5.1 Experimental validation of calibration accuracy

To validate the feasibility of the proposed method, a binocular vision-measurement system was built in the laboratory. As shown in Fig. 11, the measurement system consisted of two high-resolution cameras (VC-12MC with a resolution of 4096 × 3072, Vieworks, Korea) with a FOV of 2.0 m × 1.5 m and a nominal focal length of 20 mm. The calibration object was arranged in the center of the FOV, while the collinear points were arranged in the four corners of the FOV. The 3D coordinates of the target points had already been calibrated precisely, and the relative positions of the stereo calibration object and the collinear points remained unchanged during the calibration process. The calibration parameters of the binocular cameras obtained with the method proposed in Section 3 are shown in Table 2. The radial distortion coefficients of the image were calculated with the four respective sets of collinear points, and each position was calculated three times to ensure the reliability of the distortion coefficients.

Fig. 11. Measurement system and calibration object.

Table 2. Calibration parameters of binocular camera

The focal lengths of the cameras can be obtained from Table 2 as follows:

$${f_{lx}} = {\alpha _x}dx = 5.5 \times {10^{\textrm{ - }3}} \times 3625.0 = 19.9\textrm{ }mm\textrm{;}\,{f_{ly}} = {\alpha _y}dy = 5.5 \times {10^{\textrm{ - }3}} \times 3622.9 = 19.9\textrm{ }mm\textrm{;}$$
$${f_{rx}} = {\alpha _x}dx = 5.5 \times {10^{\textrm{ - }3}} \times 3628.0 = 20.0\textrm{ }mm\textrm{;}\,{f_{ry}} = {\alpha _y}dy = 5.5 \times {10^{\textrm{ - }3}} \times 3626.5 = 19.9\textrm{ }mm.$$
The focal lengths of the binocular cameras are in the range of 19.9 mm - 20.0 mm. To verify the calibration accuracy, the lengths of a one-dimensional (1D) standard ruler with three different nominal lengths (475.0188 mm, 600.0187 mm, and 801.9120 mm) were measured by the calibrated binocular cameras, as shown in Fig. 12(a). The standard ruler was commercially obtained (Brunson, 803-MCP, tube length tolerance ± 0.003 mm). The reconstruction results of the 1D standard ruler are shown in Fig. 12(b), and the length measurement errors are shown in Table 3.

Fig. 12. Measurement result for 1D standard ruler. (a) Measurement of 1D standard ruler in laboratory; (b) Reconstruction results for the 1D standard ruler.

Table 3. Length measurement errors for 1D standard ruler

Table 3 shows that the maximum measurement error of the proposed vision-measurement system is 0.225 mm for a FOV of 2.0 m × 1.5 m. Further, the minimum measurement error is 0.052 mm, and the average measurement error is 0.115 mm. The maximum relative error is 0.028%, and the average relative error is 0.019%. The proposed method thus fulfills the requirements for large-FOV measurement. Moreover, only one image of the calibration object is captured in this method to realize high-accuracy camera calibration, which greatly improves the measurement efficiency.

5.2 Field validation of calibration accuracy

To validate the feasibility of the proposed method for workshop calibration, the same binocular vision-measurement system with a FOV of 2.0 m × 1.5 m was built in an aviation assembly shop. The system is shown in Fig. 13. The initial parameter matrices of the binocular cameras are shown in Table 4. The camera calibration parameters after global optimization are shown in Table 5.

Fig. 13. Binocular vision calibration in the workshop.

Table 4. Initial parameters of camera calibration

Table 5. Camera calibration parameters after global optimization

The focal lengths of the cameras can be obtained from Table 4 as follows:

$${f_{lx}} = {\alpha _x}dx = 5.5 \times {10^{\textrm{ - }3}} \times 3691.2 = 20.3\textrm{ }mm\textrm{;}\,{f_{ly}} = {\alpha _y}dy = 5.5 \times {10^{\textrm{ - }3}} \times 3695.5 = 20.3\textrm{ }mm\textrm{;}$$
$${f_{rx}} = {\alpha _x}dx = 5.5 \times {10^{\textrm{ - }3}} \times 3638.2 = 20.0\textrm{ }mm\textrm{; }{f_{ry}} = {\alpha _y}dy = 5.5 \times {10^{\textrm{ - }3}} \times 3628.4 = 20.0\textrm{ }mm.$$
The focal lengths of the binocular cameras are in the range of 20.0 mm - 20.3 mm. To test the measurement accuracy and evaluate the calibration error, sixteen coordinate measurement points were arranged in the measurement space. These points were measured using the laser tracker and the binocular vision measurement system. The measured coordinates are shown in Table 6.

Considering the measurement results obtained by the laser tracker as the reference, the absolute measurement errors of binocular vision system in the three coordinate directions are shown in Fig. 14.

Fig. 14. Measurement error in the three coordinate directions.

Table 6 and Fig. 14 show that the maximum measurement error of the proposed vision-measurement system in a single coordinate direction is 0.35 mm for a FOV of 2.0 m × 1.5 m. Further, the minimum measurement error is 0.01 mm, and the average measurement error is 0.130 mm. Approximately 97.92% of the coordinate measurement errors are less than 0.3 mm. In particular, the average error in the z direction is only 0.116 mm. The proposed method thus effectively improves on the accuracy of traditional calibration methods in the DOF direction as well as the global measurement accuracy for a large FOV. Further, it fulfills the measurement requirements for large aviation parts in complex environments, and the feasibility of the camera calibration method is verified simultaneously.

Table 6. Measurement results of coordinates

The results of the proposed separated-parameter calibration method and Zhang’s calibration method [7] (the size of the calibration object was 600 mm × 800 mm, and fifteen images of the calibration object placed at different positions in the measurement space were captured by the left and right cameras) are compared in Table 7. The standard ruler was commercially obtained (Brunson, 803-MCP, tube length tolerance ± 0.003 mm); its length was 600.0197 mm. The standard ruler was placed at 10 distinct positions according to the distribution diagram in Fig. 6(b). Compared with Zhang's calibration method, the average absolute error of the proposed method decreased from 0.3487 mm to 0.1103 mm, and the relative error decreased from 0.058% to 0.018%.

Table 7. Comparison of calibration results

Moreover, several key points were placed on the surface of an aviation part and measured by the binocular vision system with a FOV of 2.0 m × 1.5 m. The system was calibrated by the proposed separated-parameter calibration method (SPCM) and Zhang’s calibration method (ZCM), respectively. The distribution of these key points is shown in Fig. 15(a). The 3D coordinates of the points measured by the laser tracker (Leica AT901, measurement error < ± (15 µm + 6 µm/m)) are defined as the standard coordinates. The distances from points 2-9 to point 1, measured by the calibrated binocular vision system, were taken as the evaluation criterion. The absolute and relative errors of the measured distances are shown in Fig. 15(b).

Fig. 15. Comparison experiment in the workshop. (a) Distribution of measured points; (b) Error analysis for different calibration method.

The experimental results demonstrate that the maximum relative error of the proposed separated-parameter calibration method is 0.028%, the minimum relative error is 0.001%, and the average relative error is 0.012%. The maximum relative error of Zhang’s calibration method is 0.273%, the minimum relative error is 0.051%, and the average relative error is 0.136%. Compared with Zhang's calibration method, the proposed method achieves higher calibration accuracy for field measurement. Moreover, the proposed method can calibrate a binocular vision system with a large FOV by capturing only one image of the calibration object, which greatly improves the calibration efficiency and accuracy, especially for field measurements with a large FOV.

6. Conclusions

We proposed an improved calibration method based on parameter separation, which is primarily intended for calibrating a binocular vision system with a large FOV in complex industrial environments. The space to be measured was partitioned according to the image characteristics of radial distortion. Subsequently, the initial parameter matrix was obtained based on the designed calibration target, which was placed in the center of the field where the distortion is smaller. The initial distortion coefficients were calculated based on four groups of collinear points placed in the corners of the field where the distortion is maximal. The calibration parameters were optimized by minimizing the reprojection error using the LM optimization method. The proposed method can quickly calibrate binocular cameras with a large FOV by using only one image. The influence of the number and distribution of target points on the calibration accuracy was qualitatively assessed. Based on this analysis, a stereoscopic calibration object with standard ceramic target balls was designed. The accuracy of the measuring method was verified in the laboratory and in an aviation measurement field with a FOV of 2.0 m × 1.5 m, where the average error of a measurement point was obtained as 0.115 mm and 0.130 mm, respectively. The measurement accuracy of the proposed method is significantly improved compared with that of the traditional calibration method, and the proposed method meets the requirements of field measurement with high speed and accuracy. Therefore, we believe that the proposed method exhibits immense potential for aviation measurement in large-sized fields. In future research, the distortion model could be further improved. Moreover, the accuracy evaluation in the application field should be further studied to improve the accuracy and reliability of field measurements.

Funding

Key Technologies Research and Development Program (2018YFA0703304); National Natural Science Foundation of China (51905077); China Postdoctoral Science Foundation (2019M651110); Liaoning Revitalization Talents Program (XLYC1801008, XLYC1807086).

Disclosures

The authors declare no conflicts of interest.

References

1. F. Mas, J. Ríos, J. L. Menéndez, and A. Gómez, “A process-oriented approach to modeling the conceptual design of aircraft assembly lines,” Int. J. Adv. Manuf. Technol. 67(1-4), 771–784 (2013).

2. C. Li, C. Zhou, C. Miao, Y. Yan, and J. Yu, “Binocular vision profilometry for large-sized rough optical elements using binarized band-limited pseudo-random patterns,” Opt. Express 27(8), 10890–10899 (2019).

3. G. Xu, L. Sun, X. Li, J. Su, Z. Hao, and X. Lu, “Global calibration and equation reconstruction methods of a three dimensional curve generated from a laser plane in vision measurement,” Opt. Express 22(18), 22043–22055 (2014).

4. Z. Liu, X. Li, F. Li, and G. Zhang, “Flexible dynamic measurement method of three-dimensional surface profilometry based on multiple vision sensors,” Opt. Express 23(1), 384–400 (2015).

5. Y. I. Abdel-Aziz, H. M. Karara, and M. Hauck, “Direct linear transformation from comparator coordinates into object space coordinates in close-range photogrammetry,” Photogramm. Eng. Rem. S. 81(2), 103–107 (2015).

6. M. E. Loaiza, A. B. Raposo, and M. Gattass, “Multi-camera calibration based on an invariant pattern,” Comput. Graph. 35(2), 198–207 (2011).

7. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).

8. Z. Wei, W. Zou, G. Zhang, and K. Zhao, “Extrinsic parameters calibration of multi-camera with non-overlapping fields of view using laser scanning,” Opt. Express 27(12), 16719–16737 (2019).

9. X. Pan and Z. Liu, “High-accuracy calibration of line-structured light vision sensor by correction of image deviation,” Opt. Express 27(4), 4364–4385 (2019).

10. O. Burggraaff, N. Schmidt, J. Zamorano, K. Pauly, S. Pascual, C. Tapia, E. Spyrakos, and F. Snik, “Standardized spectral and radiometric calibration of consumer cameras,” Opt. Express 27(14), 19075–19101 (2019).

11. M. Brückner, F. Bajramovic, and J. Denzler, “Intrinsic and extrinsic active self-calibration of multi-camera systems,” Mach. Vision Appl. 25(2), 389–403 (2014).

12. F. Roviramás, Q. Wang, and Q. Zhang, “Design parameters for adjusting the visual field of binocular stereo cameras,” Biosyst. Eng. 105(1), 59–70 (2010).

13. F. Abedi, Y. Yang, and Q. Liu, “Group geometric calibration and rectification for circular multi-camera imaging system,” Opt. Express 26(23), 30596–30613 (2018).

14. Z. Wei and X. Liu, “Vanishing feature constraints calibration method for binocular vision sensor,” Opt. Express 23(15), 18897–18914 (2015).

15. B. Yang, L. Zhang, N. Ye, X. Feng, and T. Li, “Camera calibration technique of wide-area vision measurement,” Acta Opt. Sin. 32(9), 0915001 (2012).

16. Z. Jia, J. Yang, W. Liu, F. Wang, Y. Liu, L. Wang, C. Fan, and K. Zhao, “Improved camera calibration method based on perpendicularity compensation for binocular stereo vision measurement system,” Opt. Express 23(12), 15205–15223 (2015).

17. D. Herrera, J. Kannala, and J. Heikkilä, “Joint depth and color camera calibration with distortion correction,” IEEE Trans. Pattern Anal. Mach. Intell. 34(10), 2058–2064 (2012).

18. L. Ma, Y. Q. Chen, and K. L. Moore, “Analytical piecewise radial distortion model for precision camera calibration,” IEE Proc.-Vis. Image Signal Process. 153(4), 468–474 (2006).

19. M. Zhang, L. Jin, G. Li, Y. Wu, and S. Han, “Camera distortion calibration method based on straight line characteristics,” Acta Opt. Sin. 35(6), 0615001 (2015).

20. G. Zhang, J. He, and X. Yang, “Calibrating camera radial distortion with cross-ratio invariability,” Opt. Laser Technol. 35(6), 457–461 (2003).
