Camera calibration optimization algorithm that uses a step function


Abstract

Camera calibration is essential when planning machine vision tasks such as 3D reconstruction, size measurement, and precise target positioning; its accuracy directly affects the accuracy of machine vision. In most image distortion models, a single set of parameters is applied to all image pixels. However, this can produce rather high pixel reprojection errors at image edges, compromising camera calibration accuracy. In this paper, we present a new camera calibration optimization algorithm that uses a step function to split images into center and edge regions. First, based on the observation that pixel reprojection errors increase with the distance of a pixel from the image center, we give a flexible method to divide an image into two regions, center and boundary. The algorithm then determines the step position automatically, and the calibration model is rebuilt. The new model calibrates the distortions of the center and boundary regions separately. After this optimization, the number of distortion parameters in the original model is doubled, and different parameters represent the distortions of the two regions. In this way, our method can optimize traditional calibration models, which use a single global model to describe the distortion of the whole image, and achieve higher calibration accuracy. Experimentally, the method significantly improved pixel reprojection accuracy, particularly at image edges. Simulations revealed that our method is more flexible than traditional methods.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Machine vision analyzes information from images to direct industrial work. Given the continuous developments in automation, machine vision technologies are now widely used in agricultural [1,2], medical [3], machine manufacturing [4,5], and other fields [6]. Camera lenses often suffer from optical aberrations [7,8], and the position, size, and shape of a target are all affected by the resulting distortion. Therefore, accurate camera parameters are essential when parsing camera images, and camera calibration accuracy greatly affects the efficacy of machine vision algorithms [9]. Existing camera calibration techniques primarily involve model building and parameter calibration [10,11]. Model building may employ linear or nonlinear imaging models. The former uses a direct linear transformation to describe the relationship between ideal pixels and real-world spatial points [12]. However, during real-world imaging, camera lens manufacturing errors, lens mounting errors, and optical path errors caused by optical aberrations are nonlinear. Nonlinear imaging models therefore account for the image distortions that may be in play [13,14], and the pixel reprojection results are thus closer to reality.

Camera parameter calibration is influenced by the chosen imaging model. Many existing calibration methods use a global distortion model to evaluate entire images [15,16]. In other words, all pixels are corrected by the same distortion parameters after calibration. However, as camera lens errors are random, such models often do not accurately handle distortion [17,18]; errors are particularly obvious at image edges. Figure 1 shows the calibration reprojection results derived using Zhang’s method, which uses the two parameters ${k_1},{k_2}$ to describe the distortion of points on a circular target plate. The blue circles and red crosses represent the feature points on the target plate and the reprojection positions, respectively, and the blue arrows represent the magnitudes and directions of the feature point reprojection errors.

Fig. 1. Reprojection results of Zhang's method.

Figure 1(c) and Fig. 1(d) show enlargements of a boundary region and a center region of the reprojection error map. Figure 1(c) shows the reprojection error distribution at the boundary and Fig. 1(d) the distribution at the image center. Figure 1(c) shows that the method does not accurately describe the distortions of image boundary pixels; the directions and sizes of the distortions are scattered. When an effort is made to compensate for boundary pixel distortion errors, Fig. 1(d) shows that the reprojection accuracies of central pixels are compromised and the reprojection errors are relatively high. These errors do not indicate inaccurate camera lens construction. The basic problem is distortion modeling error: the distortion of the boundary pixels is not accurately described. The algorithm sacrifices some accuracy at the center feature points to prevent excessive edge pixel reprojection errors, and the distortion parameters do not truly reflect the different pixel distortions in the two regions of the image. The random distortions of pixels far from the image center introduce errors across the entire model.

To address these problems, we use a step function to optimize the image distortion model. We first employ a specific algorithm to determine the locations at which image distortions differ greatly, such as the red dashed lines in Fig. 1(b). Then, while retaining the original image distortion model, we use a step function to split the images into centers and edges; the function weighs the distortion parameters of the two regions separately. After calibration, the two sets of distortion parameters correspond to the distortions of two image regions. This confines the modeling errors of feature points to the areas that are affected, reducing the impacts of distortion in one region on distortion in another region.

Our principal contribution is an optimization method for calibration that uses a step function. The method is applicable to common distortion models and determines the step position automatically. It improves the reprojection accuracy of image edge pixels and reduces the maximum reprojection error of camera calibration. Our method makes information from edge pixels more reliable, which is particularly useful when the target object is at the edge of the camera's field of view.

The remainder of the paper is structured as follows. Section 2 describes camera calibration. Section 3 deals with camera imaging and the relevant parameters. Section 4 presents our new algorithm and the calibration process. Section 5 describes experiments that confirm the accuracy and stability of the algorithm. The simulations in Section 6 emphasize the flexibility of our method. Section 7 discusses the results of the experiments and simulations. Section 8 contains the conclusions.

2. Related work

The image distortion module is an important component of any nonlinear camera imaging model. Module accuracy translates to accurate camera calibration. Many researchers have sought to optimize image distortion modeling and to solve the various associated problems. In 1966, the Brown group [19] used the Brown-Conrady model to divide camera distortion into radial and tangential components and employed a plumb-line method for camera calibration. In 1987, Tsai [20] presented a new, two-step camera calibration process. In the context of camera distortion modeling, Tsai considered that the inclusion of many distortion parameters during camera calibration not only failed to improve calibration accuracy but also rendered the model solutions unstable. Therefore, only radial distortion parameters were considered when comparing the ideal and actual imaging planes. In 1992, Weng et al. [21] used mathematical descriptions of camera radial, tangential, and thin prismatic distortions to establish a complex, comprehensive nonlinear imaging model.

In an effort to eliminate image distortion, Yu et al. [22] derived original, linear normal vectors from distorted curves by exploiting the geometric invariance of linear straightness, and input the linear images to a flexible and accurate camera distortion model. Bu et al. [23] used a concentric grid to determine calibration target control points. This enhanced the accuracy of target control point extraction by reducing the effect of lens distortion on such extraction during camera calibration. Gao et al. [24] employed a linear iterative method to calculate camera lens distortions; the method predicted certain distortion parameters. A later camera distortion model was based on homographic estimations. Lv et al. [25] used a mutating particle swarm algorithm to improve camera calibration in terms of both speed and accuracy. This method eliminated the poor convergence of traditional optimization algorithms.

However, it has become clear that existing image distortion models that apply the same parameters to the entire image do not handle the distortions of pixels at the image edges well [26,27]. Gayton et al. [28] explored the uncertainties of local image plane features; the Brown–Conrady distortion model did not manage camera distortion well. Sun et al. [27] divided camera calibration images into three parts based on the distances of the pixels from the center and used three different distortion models to calibrate the three regions. However, it was unclear how the lines that separated the three regions were chosen; different divisions may be required when lenses vary in terms of distortion.

3. Camera model

Figure 2 shows the principle of a linear imaging model. The camera lens is assumed to be an ideal convex lens, with no optical aberration and no mounting error.

Fig. 2. Linear imaging model.

In this case, the mapping of a target point ${\left[ {\begin{array}{ccc} {{x_W}}&{{y_W}}&{{z_W}} \end{array}} \right]^T}$ in the world co-ordinate system to a corresponding coordinate point ${\left[ {\begin{array}{cc} c&r \end{array}} \right]^\textrm{T}}$ in the pixel co-ordinate system of the camera image can be described by Eq. (1):

$$\left[ {\begin{array}{c} c\\ r\\ 1 \end{array}} \right] = \frac{1}{{{z_c}}}\left[ {\begin{array}{cccc} {{f_x}}&0&{{c_x}}&0\\ 0&{{f_y}}&{{c_y}}&0\\ 0&0&1&0 \end{array}} \right]\left[ {\begin{array}{cc} {{\boldsymbol{R}_{3 \times 3}}}&{{\boldsymbol{T}_{3 \times 1}}}\\ 0&1 \end{array}} \right]\left[ {\begin{array}{c} {{x_W}}\\ {{y_W}}\\ {{z_W}}\\ 1 \end{array}} \right]$$
where ${f_x}$ and ${f_y}$ are the focal lengths (in pixel units) along the two image axes, and ${c_x}$ and ${c_y}$ are the co-ordinates of the origin of the physical image co-ordinate system in the image pixel co-ordinate system. The rotation matrix $\boldsymbol{R}$ and translation vector $\boldsymbol{T}$ represent the transformation from the world to the camera co-ordinate system, and ${z_c}$ is the z-axis co-ordinate of the target point in the camera co-ordinate system.
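As a minimal NumPy sketch of the linear projection in Eq. (1), the mapping can be written as follows; the names fx, fy, cx, cy, R, and T follow the text, and the values are supplied by the caller.

```python
import numpy as np

def project_linear(Xw, fx, fy, cx, cy, R, T):
    """Map a world point Xw (shape (3,)) to pixel coordinates (c, r) via Eq. (1)."""
    Xc = R @ Xw + T                        # world -> camera coordinates
    x, y = Xc[0] / Xc[2], Xc[1] / Xc[2]    # perspective division by z_c
    return np.array([fx * x + cx, fy * y + cy])
```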

In a nonlinear camera imaging model, a normalization plane is defined and points ${\left[ {\begin{array}{cc} x&y \end{array}} \right]^\textrm{T}}$ are computed on that plane using Eq. (2):

$$\left[ {\begin{array}{c} x\\ y\\ 1 \end{array}} \right] = \frac{1}{{{z_c}}}\left[ {\begin{array}{cc} {{\boldsymbol{R}_{3 \times 3}}}&{{\boldsymbol{T}_{3 \times 1}}}\\ 0&1 \end{array}} \right]\left[ {\begin{array}{c} {{x_W}}\\ {{y_W}}\\ {{z_W}}\\ 1 \end{array}} \right]$$

Any distortion introduced by the camera lens is applied directly to the co-ordinates on the normalization plane. In other words, the distortions are added to the result of Eq. (2), as shown in Eq. (3):

$$\left[ {\begin{array}{c} {\widetilde x}\\ {\widetilde y}\\ 1 \end{array}} \right] = \left[ {\begin{array}{c} {x + \Delta x}\\ {y + \Delta y}\\ 1 \end{array}} \right]$$
where $\widetilde x$ and $\widetilde y$ are the real imaging coordinates of the target point in the image physical co-ordinate system of the nonlinear camera imaging model, with lens distortion taken into account; $\Delta x$ and $\Delta y$ are the distortions of the imaging point in the two directions. These are described by Eq. (4) using the image distortion model of Zhang’s method:
$$\begin{array}{l} \Delta x = x \cdot ({k_1} \cdot {r_c}^2 + {k_2} \cdot {r_c}^4 + {k_3} \cdot {r_c}^6)\\ \Delta y = y \cdot ({k_1} \cdot {r_c}^2 + {k_2} \cdot {r_c}^4 + {k_3} \cdot {r_c}^6)\\ {r_c}^2 = {x^2} + {y^2} \end{array}$$
where ${k_1}$, ${k_2}$, ${k_3}$ are radial distortion parameters of the lens.
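A minimal sketch of how the radial model of Eq. (4) acts on normalized coordinates (NumPy; k1, k2, k3 as in the text):

```python
import numpy as np

def radial_distort(x, y, k1, k2, k3):
    """Return the distorted normalized coordinates (x_tilde, y_tilde) of Eq. (3)."""
    r2 = x**2 + y**2                             # r_c^2 = x^2 + y^2
    factor = k1 * r2 + k2 * r2**2 + k3 * r2**3   # radial polynomial of Eq. (4)
    return x + x * factor, y + y * factor
```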

After the distortion is calculated, the distorted normalized coordinates replace the corresponding part of Eq. (1), and the true imaging position ${\left[ {\begin{array}{cc} {\widetilde c}&{\widetilde r} \end{array}} \right]^\textrm{T}}$ of the image target point is given by Eq. (5).

$$\left[ {\begin{array}{c} {\widetilde c}\\ {\widetilde r}\\ 1 \end{array}} \right] = \left[ {\begin{array}{ccc} {{f_x}}&0&{{c_x}}\\ 0&{{f_y}}&{{c_y}}\\ 0&0&1 \end{array}} \right]\left[ {\begin{array}{c} {\widetilde x}\\ {\widetilde y}\\ 1 \end{array}} \right]$$

4. Calibration

Our algorithm is based on the calibration method proposed by Zhang [29] in 2000, and is divided into the following steps:

  • 1. The images are pre-calibrated using Zhang's method to obtain the reprojection error information (a minimal sketch of this step is given after the list).
  • 2. The results are used to determine the distribution law that links the reprojection error magnitudes of image pixels to the distances from the pixels to the image principal point. Next, appropriate step parameters are calculated, a step function is designed, and an appropriate image distortion model is built.
  • 3. The Levenberg-Marquardt (LM) optimization algorithm [30] is used for camera calibration to obtain the parameters of the new model.
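The following sketch illustrates step 1 under the assumption that the feature points have already been extracted, using OpenCV's calibrateCamera in place of a hand-written implementation of Zhang's pre-calibration; the step-function refinement of steps 2 and 3 is developed in Sections 4.2 and 4.3.

```python
import numpy as np
import cv2

def precalibrate(obj_pts, img_pts, image_size):
    """obj_pts: list of Nx3 float32 arrays; img_pts: list of Nx2 float32 arrays."""
    # Step 1: pre-calibration with a conventional global distortion model.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, image_size, None, None)
    # Per-point reprojection errors, used later to place the step position.
    errors = []
    for objp, imgp, rvec, tvec in zip(obj_pts, img_pts, rvecs, tvecs):
        proj, _ = cv2.projectPoints(objp, rvec, tvec, K, dist)
        errors.append(np.linalg.norm(proj.reshape(-1, 2) - imgp.reshape(-1, 2), axis=1))
    return K, dist, rvecs, tvecs, errors
```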

4.1 Analysis of Zhang's method

Following Zhang, we define $\mathbf{A}$ as the camera intrinsic parameter matrix of Eq. (6).

$$\mathbf{A} = \left[ {\begin{array}{ccc} {{f_x}}&0&{{c_x}}\\ 0&{{f_y}}&{{c_y}}\\ 0&0&1 \end{array}} \right]$$

Zhang obtains final, nonlinear imaging model calibration results using Eq. (7), where $\widehat {\textrm{m}}({\mathbf{A}\textrm{,}{k_1}\textrm{,}{k_2}\textrm{,}{\mathbf{R}_i}\textrm{,}{\mathbf{t}_i}\textrm{,}{\textrm{M}_j}} )$ is the reprojection position of point ${\textrm{M}_j}$ of the $i$th image and ${\textrm{m}_{ij}}$ is the location of the feature point identified by the algorithm.

$${f_{obj}} = \min \sum\limits_{i = 1}^n {\sum\limits_{j = 1}^m {{{\left\|{{\textrm{m}_{ij}} - \widehat {\textrm{m}}({\mathbf{A}\textrm{,}{k_1}\textrm{,}{k_2}\textrm{,}{\mathbf{R}_i}\textrm{,}{\mathbf{t}_i}\textrm{,}{\textrm{M}_j}} )} \right\|}^2}} }$$

After obtaining these data, we calculate the distance between the reprojection position of each image feature point and the image principal point using Eq. (8):

$$\begin{array}{l} {d_{ij}} = \left\|{{\boldsymbol{c}_\textrm{0}} - \widehat {\textrm{m}}({\mathbf{A}\textrm{,}{k_1}\textrm{,}{k_2}\textrm{,}{\mathbf{R}_i}\textrm{,}{\mathbf{t}_i}\textrm{,}{\textrm{M}_j}} )} \right\|\\ {\boldsymbol{c}_\textrm{0}} = {\left[ {\begin{array}{cc} {{c_x}}&{{c_y}} \end{array}} \right]^\textrm{T}} \end{array}$$
where ${\boldsymbol{c}_0}$ is the pixel co-ordinate of the principal point revealed by the calibration.

To aid in the creation of our algorithm, we selected a set of real calibration images and used Eq. (8) to derive the modeling errors. Figure 3 plots the reprojection errors of the feature points (vertical axis) against their distances from the image principal point (horizontal axis), revealing the distribution of the camera calibration reprojection errors. In general, the target board occupies most of the field of view, so the number of center feature points is greater than the number at the edges. Thus, guided by Zhang’s optimization function, the center regions with higher numbers of points are weighted more heavily during optimization, and the calibrated parameters tend to reduce the reprojection error of the pixels at the center of the image.

Fig. 3. The camera calibration reprojection error.

4.2 Optimization using a step function

To define the trend in the relationship between pixel reprojection accuracy and pixel position revealed by Zhang’s method, we use Eq. (9) to group the distances of Eq. (8) into 10-pixel bins and obtain the ${n_k}$ feature points of the $k$th group, where $\lceil{} \rceil$ denotes rounding up and $\textrm{num}({\cdot} )$ the number of elements. The average reprojection error of each group of feature points is given by Eq. (10):

$$\begin{array}{l} {d_k} = \left\|{{\boldsymbol{c}_\textrm{0}} - \widehat {\textrm{m}}({\mathbf{A}\textrm{,}{k_1}\textrm{,}{k_2}\textrm{,}{\mathbf{R}_i}\textrm{,}{\mathbf{t}_i}\textrm{,}{\textrm{M}_n}} )} \right\|\\ {n_k} = \textrm{num}({d_k})|{10({k - 1} )< {d_k} \le 10k} \\ k = 1,2,\ldots ,\left\lceil {\frac{{\max ({{d_{ij}}} )}}{{10}}} \right\rceil \end{array}$$
$$erro{r_k} = \frac{{\sum\limits_{n = 1}^{{n_k}} {\left\|{{\textrm{m}_n} - \widehat {\textrm{m}}({\mathbf{A}\textrm{,}{k_1}\textrm{,}{k_2}\textrm{,}{\mathbf{R}_i}\textrm{,}{\mathbf{t}_i}\textrm{,}{\textrm{M}_n}} )} \right\|} }}{{{n_k}}}$$
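A short sketch of Eqs. (8)-(10), assuming the reprojected points, the detected points, and the principal point are available as NumPy arrays:

```python
import numpy as np

def binned_errors(reproj_pts, detected_pts, c0, bin_width=10):
    """Mean reprojection error per 10-pixel distance bin (error_k of Eq. (10))."""
    d = np.linalg.norm(reproj_pts - c0, axis=1)              # distances of Eq. (8)
    e = np.linalg.norm(detected_pts - reproj_pts, axis=1)    # per-point reprojection errors
    n_bins = int(np.ceil(d.max() / bin_width))
    mean_err = np.full(n_bins, np.nan)
    for k in range(1, n_bins + 1):
        mask = (d > bin_width * (k - 1)) & (d <= bin_width * k)
        if mask.any():
            mean_err[k - 1] = e[mask].mean()
    return mean_err
```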

Applying Eqs. (9) and (10) yields Fig. 4, which clarifies the trend relating the magnitude of the image pixel reprojection error to the distance from the pixel to the principal point. The reprojection error rises gradually as the pixel moves away from the principal point.

Fig. 4. Reprojection error trend.

We thus averaged the processed $erro{r_k}$ to yield $erro{r_{\textrm{mean}}}$ and then used Eq. (11) to find the $erro{r_k}$ closest to the average. The corresponding pixel position ${d_0}$ is the step point of our step distortion model. If Eq. (11) yields multiple answers, we choose the smallest one, because the step position should be set before the reprojection errors begin to grow rapidly.

$${d_0} = {d_k}|{\min ({|{erro{r_{\textrm{mean}}} - erro{r_k}} |} )} $$
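A sketch of Eq. (11) using the binned errors computed above; the first (smallest-distance) bin is kept on ties, as described in the text:

```python
import numpy as np

def step_point(mean_err, bin_width=10):
    """Return d0, the distance (in pixels) whose bin error is closest to the mean."""
    valid = ~np.isnan(mean_err)
    error_mean = mean_err[valid].mean()
    gap = np.where(valid, np.abs(mean_err - error_mean), np.inf)
    k = int(np.argmin(gap))          # argmin keeps the first (smallest-distance) bin on ties
    return bin_width * (k + 1)       # upper edge of the selected bin, taken as d0
```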

We express our step distortion model as Eq. (12), where ${K_1}$ and ${K_2}$ are the distortion functions and ${H_1}$ and ${H_2}$ are step functions.

$$\begin{array}{l} ({\widetilde x,\widetilde y} )= {H_1}({{r_c}^2} ){K_1}({x,y} )+ {H_2}({{r_c}^2} ){K_2}({x,y} )\\ {H_1} = \frac{1}{{a + b\exp ({\alpha ({{r_c}^2 - \beta } )} )}}\\ {H_2} = \frac{1}{{a + b\exp ({ - \alpha ({{r_c}^2 - \beta } )} )}}\\ {r_c}^2 = {x^2} + {y^2} \end{array}$$

Because the feature point co-ordinates are expressed as dimensionless components when assessing camera distortion, we use the camera focal length $f$ obtained by Zhang’s method to convert ${d_0}$ into the step position $\beta$ of the step function:

$$\beta = {\left( {\frac{{{d_0}}}{f}} \right)^2}$$

We set the constant $a = 1$, so the step function ranges from 0 to 1 (which simplifies calculations), and we set $b = 1$ so that the position of the step function is controlled by $\beta$. The magnitude of $\alpha$ determines the slope of the step function in the vicinity of the step position; larger values of $\alpha$ give a steeper step. If $\alpha$ is too large, the step function saturates to 1 almost immediately on either side of the step position, and the distortion model degenerates into two independent global models, one per region. The transition should therefore run gradually through the entire image, and we use Eq. (14) to ensure a gradual change of the distortion.

$${H_1}({{r_c} = 0} )= 0.9$$

In this way, we get

$$\alpha = \frac{{\ln 9}}{\beta }$$

As an example, Eq. (16) applies our optimization to Zhang’s distortion model with three radial distortion parameters. After the step function is introduced, the original distortion model is divided into two parts that are chiefly responsible for correcting the center and boundary image pixels, respectively. By calibrating different distortion parameters for each part, we split the distortion model of the camera image:

$$\begin{array}{l} \widetilde x = {H_1}({{r_c}^2} )({1 + {k_1}{r_c}^2 + {k_2}{r_c}^4 + {k_3}{r_c}^6} )x + {H_2}({{r_c}^2} )({1 + {k_4}{r_c}^2 + {k_5}{r_c}^4 + {k_6}{r_c}^6} )x\\ \widetilde y = {H_1}({{r_c}^2} )({1 + {k_1}{r_c}^2 + {k_2}{r_c}^4 + {k_3}{r_c}^6} )y + {H_2}({{r_c}^2} )({1 + {k_4}{r_c}^2 + {k_5}{r_c}^4 + {k_6}{r_c}^6} )y\\ {H_1} = \frac{1}{{1 + \exp ({10({{r_c}^2 - \beta } )} )}}\\ {H_2} = \frac{1}{{1 + \exp ({ - 10({{r_c}^2 - \beta } )} )}}\\ {r_c}^2 = {x^2} + {y^2} \end{array}$$
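A sketch of the resulting step-based distortion model (Eqs. (12)-(16)); beta and alpha are derived from the step position d0 and the focal length f as in Eqs. (13) and (15):

```python
import numpy as np

def step_distort(x, y, k, d0, f):
    """k = (k1..k6): the first three act mainly at the center, the last three at the edges."""
    beta = (d0 / f) ** 2                                   # Eq. (13)
    alpha = np.log(9.0) / beta                             # Eq. (15), so that H1(0) = 0.9
    r2 = x**2 + y**2
    H1 = 1.0 / (1.0 + np.exp(alpha * (r2 - beta)))         # close to 1 near the center
    H2 = 1.0 / (1.0 + np.exp(-alpha * (r2 - beta)))        # close to 1 near the edges
    center = 1 + k[0] * r2 + k[1] * r2**2 + k[2] * r2**3
    edge   = 1 + k[3] * r2 + k[4] * r2**2 + k[5] * r2**3
    w = H1 * center + H2 * edge                            # Eq. (16)
    return w * x, w * y
```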

4.3 Solution

To calibrate our model, we employ a common two-step method and use the LM optimization algorithm [30] for the nonlinear parameter calibration, which minimizes the objective function of Eq. (17).

$${f_{obj}} = \min \sum\limits_{i = 1}^n {\sum\limits_{j = 1}^m {{{||{{\textrm{m}_{ij}} - \widehat {\textrm{m}}({\mathbf{A}\textrm{,}{\boldsymbol{k}_\textrm{0}}\textrm{,}{\mathbf{R}_i}\textrm{,}{\mathbf{t}_i}\textrm{,}{\textrm{M}_j}} )} ||}^2}} }$$

Let

$$\begin{array}{l} {\boldsymbol{f}_\textrm{0}} = {\left[ {\begin{array}{cc} {{f_x}}&{{f_y}} \end{array}} \right]^\textrm{T}}\\ {\boldsymbol{c}_\textrm{0}} = {\left[ {\begin{array}{cc} {{c_x}}&{{c_y}} \end{array}} \right]^\textrm{T}}\\ {\boldsymbol{k}_0} = {\left[ {\begin{array}{cc} \mathbf{k}&\mathbf{p} \end{array}} \right]^\textrm{T}}\\ {\boldsymbol{x}_\textrm{1}} = {\left[ {\begin{array}{ccc} {{\boldsymbol{f}_\textrm{0}}}&{{\boldsymbol{c}_\textrm{0}}}&{{\boldsymbol{k}_0}} \end{array}} \right]^\textrm{T}}\\ {\boldsymbol{x}_\textrm{2}} = {\left[ {\begin{array}{cc} {{\mathbf{R}_i}}&{{\mathbf{t}_i}} \end{array}} \right]^\textrm{T}}\\ \boldsymbol{x} = \left[ {\begin{array}{cc} {{\boldsymbol{x}_\textrm{1}}}\\ {{\boldsymbol{x}_\textrm{2}}} \end{array}} \right] \end{array}$$

In this way, the increase in the number of distortion parameters caused by the proposed method, or by more complex distortion models, does not change how the LM optimization is performed:

$${f_{obj}} = \min \sum\limits_{k = 1}^{m \times n} {e_k^\textrm{T}(\boldsymbol{x} ){\Omega _k}{e_k}(\boldsymbol{x} )}$$
where $\Omega $ indicates whether a given parameter is involved in the optimization and ${e_k}$ represents the reprojection error of the $k$th point. Using the Gauss–Newton iterative algorithm, the optimization of Eq. (19) becomes a search for an increment $\Delta \boldsymbol{x}$ that minimizes Eq. (20):
$$f({\mathrm{\Delta }\boldsymbol{x}} )= \sum\limits_{k = 1}^{m \times n} {e_k^\textrm{T}({\boldsymbol{x + }\mathrm{\Delta }\boldsymbol{x}} ){\Omega _k}{e_k}({\boldsymbol{x + }\mathrm{\Delta }\boldsymbol{x}} )}$$

Combining the incremental equations for LM optimization yields Eq. (21) that gives $\Delta \boldsymbol{x}$:

$$\Delta \boldsymbol{x} = {({\mathbf{H} + \lambda \cdot diag(\mathbf{H} )} )^{ - 1}}\boldsymbol{g}$$
where
$$\begin{array}{l} \mathbf{H} = \sum\limits_{i = 1}^n {\sum\limits_{j = 1}^m {{\boldsymbol{J}_{i,j}}^\textrm{T}{\mathbf{\Omega }_{i,j}}{\boldsymbol{J}_{i,j}}} } \\ \boldsymbol{g} ={-} \sum\limits_{i = 1}^n {\sum\limits_{j = 1}^m {{\boldsymbol{J}_{i,j}}^\textrm{T}{\mathbf{\Omega }_{i,j}}{e_{i,j}}(\boldsymbol{x} )} } \\ {\boldsymbol{J}_{i,j}} = {\left[ {\begin{array}{ccccccc} {\frac{{d\widehat {\textrm{m}}}}{{d{\boldsymbol{f}_0}}}}&{\frac{{d\widehat {\textrm{m}}}}{{d{\boldsymbol{c}_0}}}}&{\frac{{d\widehat {\textrm{m}}}}{{d{\boldsymbol{k}_0}}}}&{\underbrace{{0 \ldots 0}}_{{6 \times ({i - 1} )}}}&{\frac{{d\widehat {\textrm{m}}}}{{d{\mathbf{R}_i}}}}&{\frac{{d\widehat {\textrm{m}}}}{{d{\mathbf{t}_i}}}}&{\underbrace{{0 \ldots 0}}_{{6 \times ({n - i} )}}} \end{array}} \right]^\textrm{T}} \end{array}$$

We use the intrinsic camera parameters from Zhang’s method as the initial values; the corresponding extrinsic parameters are obtained via linear calibration of each image. We then set $\lambda = 1$ and run the LM optimization algorithm to obtain the final camera intrinsic and extrinsic parameters required by our image distortion model.
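A hedged sketch of one LM update (Eqs. (19)-(22)); the residual and Jacobian routines depend on the chosen distortion model and are assumed to be supplied by the caller, and the weight Ω is taken as the identity here:

```python
import numpy as np

def lm_step(x, residual_fn, jacobian_fn, lam=1.0):
    """x stacks [f0, c0, k0] and the per-image extrinsics, as in Eq. (18)."""
    e = residual_fn(x)     # stacked reprojection errors e_k(x)
    J = jacobian_fn(x)     # stacked Jacobians J_{i,j} of Eq. (22)
    H = J.T @ J            # Gauss-Newton approximation of the Hessian
    g = -J.T @ e           # gradient term
    dx = np.linalg.solve(H + lam * np.diag(np.diag(H)), g)  # increment of Eq. (21)
    return x + dx
```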

5. Experiments

5.1 Hardware

We used a Haikang MV-CA050-12UM camera with an image resolution of 2,448 × 2,048 pixels, a pixel size of 3.45 × 3.45 µm, and a lens focal length of 8 mm. We used the centers of the dots as the feature points of the world co-ordinate system; the experimental equipment and the calibration plate are shown in Fig. 5. The calibration plate contains a 37 × 37 array of dots of diameter 2.5 mm with a center-to-center spacing of 5 mm, and its manufacturing accuracy is ± 0.01 mm. Images were obtained at a working distance of about 300 mm.

Fig. 5. Hardware.

5.2 Optimization of results afforded by Zhang’s method

A total of 27 calibration images were taken; some are shown in Fig. 6.

Fig. 6. Camera calibration images.

The camera calibration parameters of Zhang's method and those after step function optimization are listed in Table 1.


Table 1. Results of camera calibration

The optimized calibration model did not significantly change the lens focal length or the principal point parameters of Zhang’s method. Our method is thus practicable, and changes in the distortion parameters do not trigger drastic variations in other parameters.

5.3 Performance

To compare the accuracies of the feature point reprojections, we used Eqs. (9) and (10) to process the reprojection errors of all feature points obtained via calibration and compared the error trends with distance from the principal point, shown in Fig. 7. Zhang’s method was associated with significant errors at the edges accompanied by several error peaks in the center; calibration at the edges inappropriately affected the center. After step function optimization, the edge reprojection errors fell significantly, as did the central error peaks; calibration was much improved.

Fig. 7. The reprojection error trend.

A total of 27 images were used for calibration. Figure 8 shows the root mean square (RMS) errors of reprojection and the maximum reprojection errors for all images obtained using Zhang’s and our calibration models; the latter model markedly reduced errors.

Fig. 8. Calibration experiment results.

For Table 2, we randomly selected four calibration images and derived their root mean square (RMS) reprojection errors. We also derived the RMS errors of the two regions over all 27 images. Our method reduced the errors in both regions.


Table 2. RMS error of reprojection (pixel)

Table 3 lists the maximum reprojection errors of six images. Our method reduced these errors.


Table 3. MAX error of reprojection (pixel)

Figure 9 compares the reprojection errors of the entire calibration plate derived using the two methods. The circles mark the locations of identified image feature points and the crosses the locations of feature points obtained via reprojection. The arrows start at the identified feature points and point to the reprojection points. The length of each arrow is 300 times the distance between the two points, revealing the reprojection errors of the calibration. We further enlarged and compared the central and edge reprojection maps of the two methods. The overall reprojection error was reduced by our method, and the edge reprojection errors were significantly lowered.

Fig. 9. Comparison of reprojection error of an image.

Figure 10 shows the reprojection errors of the two models. The colors represent the feature points of several calibration plates. The maximum image reprojection error was significantly reduced by our algorithm.

Fig. 10. Comparison of reprojection error.

5.4 Stability testing

To prevent overfitting of the image distortion model attributable to an excess of distortion parameters or inappropriate settings, we took 80 additional images of the calibration plate and divided them into four groups of 20, of which one group was used for camera calibration and the others to measure reprojection errors employing the derived calibration parameters. Eight of the 80 images are shown in Fig. 11.

Fig. 11. Images for stability testing.

After calibration, we obtained all reprojection errors. Box plots for the four groups are shown in Fig. 12. Each box consists of 37 × 37 × 20 feature points.

Fig. 12. Reprojection error of each group.

Given the high reprojection errors at the edges, errors are greater when the calibration board covers boundary pixels; hence, some outliers appear. Compared with Zhang’s method, our method better handled both the calibration and test images. Moreover, the reprojection errors of the test images show no significant increase relative to the calibration images, so our method is stable in practical application.

5.5 Real targets test

The experiments above show that our method achieves better pixel reprojection, especially at the image boundary. On this basis, we use several precision balls to demonstrate the improvement in a real measurement.

To test the practicability of our method, we measure the distances between ball centers. As shown in Fig. 13, we used a professional ball board and measured the distances between the balls as an accuracy test. The nominal accuracy of the distances is 0.0001 mm. The diameters of the balls are known, so the positions of the balls in the camera co-ordinate system can be calculated from a single camera.

Fig. 13. Balls for measurement.

We took several images of the board and obtained the positions of the balls; the images are shown in Fig. 14.

Fig. 14. Images of the balls.

First, we extract candidate boundaries by calculating the pixel gradients. Then, the boundaries of the balls are selected according to their enclosed area. A random sample consensus (RANSAC) algorithm [31] is used to remove erroneous pixels from the boundaries. Finally, sub-pixel elliptical boundaries are fitted by the least-squares method.
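A simplified OpenCV sketch of this boundary pipeline; the thresholds are illustrative assumptions, and the RANSAC outlier-rejection step described above is omitted here for brevity (cv2.fitEllipse supplies the least-squares ellipse fit):

```python
import cv2

def fit_ball_ellipses(gray, min_area=500, max_area=50000):
    """Return ((cx, cy), (axis1, axis2), angle) for each candidate ball boundary."""
    edges = cv2.Canny(gray, 50, 150)                       # gradient-based boundary pixels
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)  # OpenCV >= 4
    ellipses = []
    for c in contours:
        if len(c) >= 5 and min_area < cv2.contourArea(c) < max_area:   # select by area
            ellipses.append(cv2.fitEllipse(c))
    return ellipses
```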

We treat the ball diameters as a priori values. Combined with the fitted boundaries and the camera parameters, the position of each ball can then be obtained.
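A hedged, first-order sketch of recovering a ball position from its fitted ellipse and known diameter: the apparent size fixes the depth and the ellipse center fixes the ray direction. This ignores the small offset between the projected sphere center and the ellipse center, so it only approximates the procedure used here.

```python
import numpy as np

def ball_position(ellipse, diameter, fx, fy, cx, cy):
    """ellipse: ((u, v), (axis1, axis2), angle) from cv2.fitEllipse; diameter in mm."""
    (u, v), axes, _ = ellipse
    apparent = max(axes)                     # apparent diameter in pixels
    z = fx * diameter / apparent             # approximate depth along the optical axis
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])               # camera-frame position, in mm
```

The center-to-center distance between two balls then follows directly as the Euclidean norm of the difference of the two recovered positions.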

We calculate the distances between balls A1 and A2 and between balls C3 and C4. A1 and A2 are imaged at the boundary, so the distance between them reflects the improvement in calibration at the image boundary; the distance between C3 and C4 tests the accuracy at the image center. We corrected the images with Zhang’s model and with our proposed model separately and compared the measured distances with the true distances. The results are shown in Table 4; our method improves the measurement accuracy.


Table 4. Distances between two balls by different method (mm)

When measuring the large target (A1 to A2), our method improves the accuracy by 14.99% while retaining the high accuracy at the image center.

5.6 Checkerboard test

To exclude any effect of dot extraction, we took 27 calibration images using a $36 \times 35$ checkerboard. Figure 15 shows eight of the images; the manufacturing accuracy of the board is ± 0.001 mm.

Fig. 15. Checkerboard calibration images.

We compared Zhang's original method with its step-function optimization, this time including the higher-order distortion parameter ${k_3}$. The calibration results are shown in Table 5. In Table 6, ${k_{h,i}}$ is the parameter corresponding to the distortion parameter ${k_i}$ after stepping. We report the RMS errors of four random images and of the two regions over all 27 images.


Table 5. Results of camera calibration


Table 6. RMS error of reprojection (pixel)

Figure 16 shows the trends in reprojection errors as the distances between pixels and the image principal point increase. Our method also works well with a checkerboard.

Fig. 16. The reprojection error trend of checkerboard.

5.7 Brown-Conrady model

To illustrate the flexibility of our algorithm, we combined our method with the Brown-Conrady model. As is true of Zhang’s method, the Brown-Conrady distortion parameters can also be separated into two parts.

We used the same images as in Section 5.2. The calibration results are shown in Table 7, where ${k_{h,i}}$ is the parameter corresponding to the distortion parameter ${k_i}$ after stepping.


Table 7. Results of camera calibration

The RMS reprojection errors are listed in Table 8: those of four randomly selected calibration images and of the two regions over all 27 images. Our method reduced the edge reprojection errors and improved the Brown-Conrady model.


Table 8. RMS error of reprojection (pixel)

Figure 17 shows the trends in reprojection errors as the distances between pixels and the image principal point increase. Our method enhances the accuracy of the Brown-Conrady model.

Fig. 17. The reprojection error trend of two methods.

6. Simulation of distortion

Figure 18 uses circles and squares of different sizes to represent the image center and edge pixels, respectively; we simulated image distortion using different parameter values. The blue lines show pixels that were not distorted and the red lines pixels distorted by the specified parameters. Panels (a), (b), and (c) show the distortions of the Brown-Conrady model. The low-order distortion parameter ${k_1}$ determines the overall pixel distortion. The high-order distortion parameter principally affects edge pixel distortion [panel (b)] but also affects center pixel distortion, compromising the calibration accuracy of the center pixels. Panels (d), (e), and (f) show simulations of image pixel distortions after a step function is introduced to optimize the Brown-Conrady model, where ${k_{h,i}}$ is the parameter corresponding to ${k_i}$ after the step. Comparing (a) and (d) shows that the optimized model fully reproduces the distortion of the Brown-Conrady model. Comparing (b) and (e) shows that the optimized model retains the shape of the distortion while reducing boundary distortion. Comparing (c) and (f) reveals that the optimized model allows tangential distortion at the image center and edge to be controlled separately.

Fig. 18. Simulation of image distortion.

Panels (g) and (h) show two extreme cases of image distortion after introducing the step function. In (g), the center pixels exhibit barrel distortion and the edge pixels pincushion distortion. In (h), the center pixels are shifted upwards and the edge pixels downwards. The optimized model is therefore very flexible; even extreme pixel distortions are well handled.
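As an illustrative, self-contained sketch of the kind of extreme case in panel (g), the step model can impose barrel distortion at the center and pincushion distortion at the edge simply by giving the two regions radial parameters of opposite sign; the parameter values below are arbitrary and chosen only for visualization.

```python
import numpy as np
import matplotlib.pyplot as plt

def step_model(x, y, kc, ke, alpha=40.0, beta=0.08):
    """Single-parameter-per-region version of Eq. (16), for visualization only."""
    r2 = x**2 + y**2
    H1 = 1.0 / (1.0 + np.exp(alpha * (r2 - beta)))   # weights the center region
    H2 = 1.0 - H1                                    # weights the edge region
    w = H1 * (1 + kc * r2) + H2 * (1 + ke * r2)
    return w * x, w * y

u, v = np.meshgrid(np.linspace(-0.5, 0.5, 21), np.linspace(-0.5, 0.5, 21))
xd, yd = step_model(u, v, kc=-0.8, ke=0.8)           # barrel at the center, pincushion at the edge
plt.plot(u, v, "b-", lw=0.5)
plt.plot(u.T, v.T, "b-", lw=0.5)                     # undistorted grid (blue)
plt.plot(xd, yd, "r-", lw=0.5)
plt.plot(xd.T, yd.T, "r-", lw=0.5)                   # distorted grid (red)
plt.gca().set_aspect("equal")
plt.show()
```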

7. Discussion

Much recent camera calibration research focuses on obtaining ever more precise parameters. In this paper, we instead propose a new optimization method for calibration. Our method incorporates a step function and optimizes an existing model by splitting the image; the optimized model handles pixel distortions better than the original. Experimentally, our method suits both Zhang’s and the Brown-Conrady models and improves reprojection accuracy. For the Brown-Conrady model, the calibration improvements afforded by our method are mainly concentrated at the image edges. Although our method does not greatly improve the Brown-Conrady model overall, it decouples the distortions: edge image pixels are rendered more accurate without corrupting pixels in other areas.

8. Conclusion

To reduce errors caused by camera distortion, we present an algorithm that optimizes calibration by introducing a step function; the step position is determined automatically. The algorithm splits images into center and edge regions and obtains distortion parameters for each region separately. It reduces the overall image reprojection errors and enhances the accuracy of edge pixel reprojections. Moreover, our method can be applied widely to models that use a single global description of the distortion of the whole image. Experimentally, the algorithm was stable and markedly improved calibration accuracy, especially at image edges, making edge pixels more reliable. Compared with Zhang’s method, we improved the measurement accuracy of a real task by 14.99%. Simulations showed that our method is more flexible than traditional models.

Funding

National Defense Basic Scientific Research Program of China (JCKY2021602B032).

Acknowledgments

We acknowledge Beijing Institute of Technology for its support of this research.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. T. Wang, B. Chen, Z. Zhang, et al., “Applications of machine vision in agricultural robot navigation: a review,” Computers and Electronics in Agriculture. 198, 107085 (2022). [CrossRef]  

2. T. U. Rehman, M. S. Mahmud, Y. K. Chang, et al., “Current and future applications of statistical machine learning algorithms for agricultural machine vision systems,” Computers and Electronics in Agriculture. 156, 585–605 (2019). [CrossRef]  

3. W. He, T. Liu, Y. Han, et al., “A review: The detection of cancer cells in histopathology based on machine vision,” Computers in Biology and Medicine. 146, 105636 (2022). [CrossRef]  

4. G. Yang and Y. Wang, “Three-dimensional measurement of precise shaft parts based on line structured light and deep learning,” Measurement 191, 110837 (2022). [CrossRef]  

5. J. Sun, Z. Liu, Y. Zhao, et al., “Motion deviation rectifying method of dynamically measuring rail wear based on multi-line structured-light vision,” Opt. Laser Technol. 50, 25–32 (2013). [CrossRef]  

6. S. Zhang, “High-speed 3D shape measurement with structured light methods: a review,” Opt. Lasers Eng. 106, 119–131 (2018). [CrossRef]  

7. X. Li, B. Zhang, P. V. Sander, et al., “Blind geometric distortion correction on images through deep learning,” Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 4855–4864 (2019).

8. W. Wang, H. Feng, W. Zhou, et al., “Model-aware pre-training for radial distortion rectification,” IEEE Trans. on Image Process. 32, 5764–5778 (2023). [CrossRef]  

9. D. K. Moru and D. Borro, “Analysis of different parameters of influence in industrial cameras calibration processes,” Measurement 171, 108750 (2021). [CrossRef]  

10. P. Do and Q. C. Nguyen, “A review of stereo-photogrammetry method for 3-d reconstruction in computer vision,” in 19th International Symposium on Communications and Information Technologies (ISCIT) (2019).

11. F. Qi, Q. Li, Y. Luo, et al., “Constraints on general motions for camera calibration with one-dimensional objects,” Pattern Recognition 40(6), 1785–1792 (2007). [CrossRef]  

12. O. D. Faugeras, “The calibration problem for stereo,” Proc. IEEE Conf. on Computer Vision and Pattern Recognition (1986), pp. 15–20.

13. Q. Sun, X. Wang, J. Xu, et al., “Camera self-calibration with lens distortion,” Optik 127(10), 4506–4513 (2016). [CrossRef]  

14. M. D. Grossberg and S. K. Nayar, “A general imaging model and a method for finding its parameters,” Proc. IEEE International Conference on Computer Vision (ICCV), 2, 108–115 (2001).

15. A. W. Fitzgibbon, “Simultaneous linear estimation of multiple view geometry and lens distortion,” Proc. IEEE Conf. on Computer Vision and Pattern Recognition. 1 (2001).

16. J. Wang, F. Shi, J. Zhang, et al., “A new calibration model of camera lens distortion,” Pattern Recognition 41(2), 607–615 (2008). [CrossRef]  

17. C. Ricolfe-Viala and A. Sánchez-Salmerón, “Correcting non-linear lens distortion in cameras without using a model,” Opt. Laser Technol. 42(4), 628–639 (2010). [CrossRef]  

18. T. Schops, V. Larsson, M. Pollefeys, et al., “Why having 10,000 parameters in your camera model is better than twelve,” Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 2535–2544 (2020).

19. D. C. Brown, “Decentering distortion of lenses,” Photogrammetric Eng. Remote Sens. 5, 444–462 (1966).

20. R. Y. Tsai, “A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses,” IEEE J. Robotics Automat. 3(4), 323–344 (1987). [CrossRef]  

21. J. Weng, P. Cohen, and M. Herniou, “Camera calibration with distortion models and accuracy evaluation,” IEEE Trans. Pattern Anal. Machine Intell. 14(10), 965–980 (1992). [CrossRef]  

22. J. Yu, H. Sun, Z. Xia, et al., “Sample balancing of curves for lens distortion modeling and decoupled camera calibration,” Opt. Commun. 537, 129221 (2023). [CrossRef]  

23. L. Bu, H. Huo, X. Liu, et al., “Concentric circle grids for camera calibration with considering lens distortion,” Opt. Lasers Eng. 140, 106527 (2021). [CrossRef]  

24. D. Gao and F. Yin, “Computing a complete camera lens distortion model by planar homography,” Opt. Laser Technol. 49, 95–107 (2013). [CrossRef]  

25. X. Lü, L. Meng, L. Long, et al., “Comprehensive improvement of camera calibration based on mutation particle swarm optimization,” Measurement 187, 110303 (2022). [CrossRef]  

26. Q. Sun, Y. Hou, and J. Chen, “Lens distortion correction for improving measurement accuracy of digital image correlation,” Optik 126(21), 3153–3157 (2015). [CrossRef]  

27. Q. Sun, Y. Hou, and Q. Tan, “A new method of camera calibration based on the segmentation model,” Optik 124(24), 6991–6995 (2013). [CrossRef]  

28. G. Gayton, M. Isa, and R. K. Leach, “Evaluating parametric uncertainty using non-linear regression in fringe projection,” Opt. Lasers Eng. 162, 107377 (2023). [CrossRef]  

29. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Machine Intell. 22(11), 1330–1334 (2000). [CrossRef]  

30. M. I. A. Lourakis, “A brief description of the Levenberg-Marquardt algorithm implemented by levmar,” Foundation of Research and Technology. 4(1), 1–6 (2005).

31. M. A. Fischler and R. C. Bolles, “Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography,” Commun. ACM 24(6), 381–395 (1981). [CrossRef]  
