## Abstract

Nonindustrial low-cost cameras have the advantages of low cost and simple structure, but have the disadvantages of low resolution and large image noise. When the existing camera calibration methods are used to calibrate nonindustrial low-cost cameras, high-accuracy calibration cannot be obtained. A high-accuracy calibration method using a high-accuracy planar target is introduced in this study to solve this problem. First, the initial values and the uncertainties of all image feature points are determined by the multiscale image analysis method. Then, an image disturbance factor is added to each target image feature point. In addition, the image projection error is established as the objective function to be minimized according to the homography matrix between the target plane and the image plane. Thus, the optimal coordinates of all image feature points are obtained by the nonlinear optimization method. Finally, the calibration of the intrinsic and extrinsic parameters of the camera is achieved by using Zhang’s method according to the image feature points obtained from the previous step. Simulative and real experiments have been conducted to evaluate the performance of the proposed method, and the results show that the calibration accuracy of the proposed method is at least three times that of Zhang’s method.

© 2016 Optical Society of America

## 1. Introduction

In image-based optical measurement systems, a camera is one of the most important functional modules [1]. The mathematical model of a camera is the projective relationship between the measurement space and the image plane of the camera. Camera calibration is used to obtain all parameters in the mathematical model of the camera. High-accuracy camera calibration is important for an optical measurement system: without high-accuracy calibration, there can be no high-accuracy measurement. The image noise and resolution of the camera are the main factors affecting the calibration and measurement accuracy of the optical measurement system. Optical measurement systems usually use costly industrial cameras to obtain high measurement and calibration accuracy [2]. The use of costly industrial cameras increases the overall cost of the optical measurement system and restricts its application. If a low-cost camera could obtain high measurement and calibration accuracy, then the cost of the optical measurement system could be significantly decreased and its application region significantly expanded.

At present, popular calibration methods usually use different forms of targets. The 3D target-based camera calibration methods [3–5] can obtain high-accuracy calibration results, but the 3D target is difficult to machine. Moreover, the volume of the 3D target is large, so its use is limited by the available field space. Compared with the 3D target, the camera calibration methods using a 2D planar target [6–13] have higher flexibility. The most typical method using a 2D planar target is the method proposed by Zhang [6] (called Zhang’s method). Zhang’s method has the advantages of a flexible calibration process and high calibration accuracy. The camera calibration methods using a 1D target [14,15] are limited by the collinearity of the feature points. A single camera can be calibrated only under certain conditions [14], which has restricted the widespread use of this kind of method in practical vision systems. In addition to the aforementioned target forms, other camera calibration methods use several sphere targets [16–18], which can achieve the intrinsic parameter calibration of cameras placed at many different angles because of the particularity of the sphere. Other camera calibration methods use the rotating body surface [19,20], but cannot achieve high-accuracy calibration of the camera.

The planar target-based calibration method proposed by Zhang [6] is widely used because of its practicability and simplicity. The calibration errors of Zhang’s method mainly come from two sources, namely, the manufacture error of the target and the location error of the image feature points. With respect to the manufacture error of the target, calibration methods with inaccurate targets are proposed in [20–22]. These methods can reduce the effects of target manufacture errors on camera calibration accuracy. However, in recent years, with the improvement of target manufacture technology, the accuracy of the target has increased and the influence of target manufacture errors on camera calibration accuracy can be ignored. Accordingly, the extraction errors of the image feature points have become the main source of camera calibration errors. With respect to the location errors of the image feature points, methods using iterative updates to extract the feature points of the target have been proposed successively in [24,25]. These methods need to optimize the iteration and transform the images many times, and they attempt to extract high-precision image feature points using image processing operations alone. Such methods are complex and inefficient. Moreover, the image transformation process has a significant influence on the accurate extraction of the image feature points, which inevitably reduces the calibration accuracy.

The location errors of the image feature points mainly come from image noise and pixel discretization. These location errors are larger for a low-cost camera with low resolution and large image noise. In addition, under complex measurement conditions, the target image is influenced by the light and space environment, which leads to an uneven image intensity distribution and local defocus. These phenomena affect the extraction accuracy of the image feature points. Even for a clear target image, the results of the same extraction algorithm with different parameters and scales differ, which affects the extraction accuracy of the image feature points. Thus, the extraction errors of the image feature points cannot be eliminated completely by using image processing operations alone.

In this paper, we assume that the location errors of the image feature points are inevitable and that the location accuracy of the image feature points cannot be improved through image processing methods alone. Under the condition of using a high-precision target, the objective function minimizing the image projection error of the feature points is established by introducing an image disturbance factor. Then, the optimal image coordinates of the target feature points can be obtained by nonlinear optimization. Finally, the camera calibration is achieved by employing Zhang’s method using the optimal image coordinates of the target feature points obtained from the previous step. The remainder of this paper is organized as follows: Section 2 gives some preliminaries about the camera model. Section 3 describes the detailed procedure of the proposed method. Sections 4 and 5 present the simulation and real data experiments conducted to validate the effectiveness of the proposed method. Section 6 presents the conclusions.

## 2. Mathematical model of the camera

The undistorted homogeneous coordinates of *Q* in the image coordinate frame in pixels and in the image coordinate frame in mm are $p={[u,v,1]}^{\text{T}}$ and ${p}_{\text{n}}={[{x}_{n},{y}_{n},1]}^{\text{T}}$, respectively. $q={[x,y,z,1]}^{\text{T}}$ is the 3D homogeneous coordinate in the world coordinate frame. From the pinhole imaging model of the camera, the relationship between $q$ and $p$ is expressed as follows:

$$sp=A[\begin{array}{cc}R& t\end{array}]q=\left[\begin{array}{ccc}{f}_{x}& \gamma & {u}_{0}\\ 0& {f}_{y}& {v}_{0}\\ 0& 0& 1\end{array}\right][\begin{array}{cc}R& t\end{array}]q, \tag{1}$$

where $s$ is a nonzero scale factor, ${u}_{0}$ and ${v}_{0}$ are the coordinates of the principal point, ${f}_{x}$ and ${f}_{y}$ are the effective focal lengths in the direction of the *u*- and *v*-axes, respectively, $\gamma$ is the non-perpendicularity between the *u*- and *v*-axes, and $R={[\begin{array}{ccc}{r}_{1}& {r}_{2}& {r}_{3}\end{array}]}_{3\times 3}$ and $t$ are the rotation and translation that relate the world coordinate frame to the camera coordinate frame.

Equation (1) is the linear model of the camera. Considering lens distortion, the corresponding relationship between the distorted image homogeneous coordinate $\tilde{p}={[\tilde{u},\tilde{v},1]}^{\text{T}}$ and the undistorted image homogeneous coordinate $p={[u,v,1]}^{\text{T}}$ is expressed as follows:

$$\left\{\begin{array}{l}\tilde{u}=u+(u-{u}_{0})({k}_{1}{r}^{2}+{k}_{2}{r}^{4})\\ \tilde{v}=v+(v-{v}_{0})({k}_{1}{r}^{2}+{k}_{2}{r}^{4})\end{array}\right., \tag{2}$$

where ${r}^{2}={x}_{n}^{2}+{y}_{n}^{2}$, and ${k}_{1}$ and ${k}_{2}$ are the radial distortion coefficients.
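As a concrete illustration of the model above, the following Python sketch projects a world point through Eqs. (1) and (2). It is not part of the authors' implementation (which uses the Matlab toolbox [26]); the function name and interface are ours.

```python
import numpy as np

def project(q, A, R, t, k1, k2):
    """Project a 3D world point q (3,) to distorted pixel coordinates.

    A is the 3x3 intrinsic matrix [[fx, gamma, u0], [0, fy, v0], [0, 0, 1]];
    R (3x3) and t (3,) relate the world frame to the camera frame;
    k1, k2 are the radial distortion coefficients of Eq. (2).
    """
    # Eq. (1): transform to the camera frame and normalize by depth.
    qc = R @ q + t
    xn, yn = qc[0] / qc[2], qc[1] / qc[2]
    # Eq. (2): apply radial distortion in normalized coordinates.
    r2 = xn**2 + yn**2
    d = 1.0 + k1 * r2 + k2 * r2**2
    xd, yd = xn * d, yn * d
    # Map to pixel coordinates with the intrinsic matrix.
    u = A[0, 0] * xd + A[0, 1] * yd + A[0, 2]
    v = A[1, 1] * yd + A[1, 2]
    return np.array([u, v])
```

For a point on the optical axis the distortion terms vanish and the projection lands on the principal point, which is a quick sanity check of the model.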

## 3. Algorithm principle

The proposed algorithm has two main parts: In Part 1, the corresponding relationship between the target feature points in the world coordinate frame and that in the image coordinate frame is established through the homography matrix. Then, the optimal coordinates of the image feature points can be solved by using the nonlinear optimization method. In Part 2, the intrinsic and extrinsic parameters of the camera can be calibrated by employing Zhang’s method using the optimal coordinates of the image feature points obtained.

The key step of this study is how to determine the optimal coordinates of the image feature points.

The detailed steps of the algorithm are as follows:

Step 1: The distorted image homogeneous coordinates ${\tilde{p}}_{ij}=[{\tilde{u}}_{ij},{\tilde{v}}_{ij},1]$ and uncertainties ${U}_{\Delta {u}_{ij}},{U}_{\Delta {v}_{ij}}$ of the image feature points for all the target images are extracted by the image processing operations.

Step 2: From the homography matrix ${H}_{i}$ between the target plane and the image plane and the image feature points obtained from Step 1, the optimal image coordinates ${\widehat{p}}_{ij}$ of the target feature points can be obtained by the nonlinear optimization method.

Step 3: According to ${H}_{i}$ and ${\widehat{p}}_{ij}$, the linear and nonlinear optimization solutions of the intrinsic and extrinsic parameters of the camera are obtained by using Zhang’s method.

#### 3.1 Determination of the initial coordinates and uncertainties of the target image feature points

Several factors, such as image noise and pixel discretization, affect the location accuracy of the image feature points. Even for a clear target image, the location results of the same extraction algorithm with different parameters and scales differ, which affects the location accuracy of the image feature points. The proposed method obtains precise image feature point coordinates by nonlinear optimization and therefore first needs the initial image coordinates of the feature points and their uncertainties.

With regard to a given image feature point extraction algorithm, *m* different image processing parameters (e.g., the Gaussian convolution template standard deviation $\sigma$) can be selected for the *j*th feature point in the *i*th position of the target. Then, *m* image feature point coordinates ${p}_{ij}^{m}$ can be obtained and form the feature point set ${Q}_{ij}$, as indicated by the red crosses in Fig. 1. The mean coordinates ${\tilde{p}}_{ij}=[{\tilde{u}}_{ij},{\tilde{v}}_{ij},1]$ and the standard deviations ${\sigma}_{uij}$, ${\sigma}_{vij}$ of the image feature points in the *u* and *v* directions are calculated over the set ${Q}_{ij}$, as shown in Fig. 1.

The initial coordinate is ${\tilde{p}}_{ij}=[{\tilde{u}}_{ij},{\tilde{v}}_{ij},1]$, and the uncertainties are ${U}_{\Delta {u}_{ij}}={\sigma}_{uij}$, ${U}_{\Delta {v}_{ij}}={\sigma}_{vij}$.
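The fusion of the *m* detections into an initial coordinate and per-axis uncertainties can be sketched as follows (a minimal Python illustration with names of our choosing; the detector that produces the *m* coordinates is assumed to exist and is not shown):

```python
import numpy as np

def initial_point_and_uncertainty(detections):
    """Fuse m detections of the same corner, one per processing parameter.

    detections: (m, 2) array of (u, v) coordinates produced by running the
    same extraction algorithm with m different parameters (e.g. Gaussian
    convolution template standard deviations).  Returns the mean coordinate
    used as the initial value and the per-axis standard deviations used as
    the uncertainties U_du, U_dv.
    """
    detections = np.asarray(detections, dtype=float)
    p0 = detections.mean(axis=0)            # initial coordinate (u~, v~)
    sigma = detections.std(axis=0, ddof=1)  # uncertainties (sigma_u, sigma_v)
    return p0, sigma
```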

The green circles in Fig. 2 denote the uncertainties of all target image feature points.

#### 3.2 Optimal image feature point coordinates solution

The homography relationship between the target plane and the image plane is shown in Fig. 3. As shown in Fig. 3, ${O}_{\text{t}}{x}_{\text{t}}{y}_{\text{t}}{z}_{\text{t}}$ is the target coordinate frame, ${O}_{\text{c}}{x}_{\text{c}}{y}_{\text{c}}{z}_{\text{c}}$ is the camera coordinate frame, and ${H}_{i}$ is the homography matrix between the target plane and the image plane in the *i*th target position. The undistorted and distorted image homogeneous coordinates for the *j*th point ${q}_{j}={[{x}_{j},{y}_{j},1]}^{\text{T}}$ of the target in the *i*th position are ${p}_{ij}={[{u}_{ij},{v}_{ij},1]}^{\text{T}}$ and ${\tilde{p}}_{ij}={[{\tilde{u}}_{ij},{\tilde{v}}_{ij},1]}^{\text{T}}$, respectively. ${\Delta}_{ij}=[\Delta {u}_{ij},\Delta {v}_{ij},1]$ is the image point disturbance factor, where $\Delta {u}_{ij}$ is the disturbance factor in the *u* direction and $\Delta {v}_{ij}$ is the disturbance factor in the *v* direction. ${q}_{j}$ and ${p}_{ij}$ can be regarded as constants, and ${\Delta}_{ij}$ is the parameter to be optimized. The corresponding relationship between ${p}_{ij}$ and ${q}_{j}$ can be obtained according to ${H}_{i}$, as shown in Eq. (3):

$$s{p}_{ij}={H}_{i}{q}_{j}, \tag{3}$$

where $s$ is a nonzero scale factor.

Under ideal conditions, the relationship between feature points on the planar target and corresponding image points accords with Eq. (3). The image disturbance factors $\Delta {u}_{ij},\Delta {v}_{ij}$ are added to the image feature points in the *u*, *v* directions, respectively, as shown in Fig. 4, because of the location errors of the image feature points.

As shown in Fig. 4, the blue points are the initial coordinates of the image feature points in Eq. (3) and the red points are their optimized coordinates ${\widehat{p}}_{ij}={[{u}_{ij}+\Delta {u}_{ij},{v}_{ij}+\Delta {v}_{ij},1]}^{\text{T}}$ with the image disturbance factor.

The corresponding relationship between ${\widehat{p}}_{ij}$ and ${q}_{j}$ is expressed as follows:

$$s{\widehat{p}}_{ij}=s{[{u}_{ij}+\Delta {u}_{ij},{v}_{ij}+\Delta {v}_{ij},1]}^{\text{T}}={H}_{i}{q}_{j}. \tag{4}$$

The initial solutions of the intrinsic parameters and the radial distortion coefficients ${k}_{1}$, ${k}_{2}$ and the corresponding uncertainties ${U}_{fx},{U}_{fy},{U}_{\gamma},{U}_{u0},{U}_{v0},{U}_{k1},{U}_{k2}$ can be obtained by using Zhang’s method (the Matlab calibration toolbox in [26] is used, and the uncertainties are provided by the calibration toolbox). Thus, the undistorted image feature point homogeneous coordinates ${p}_{ij}={[{u}_{ij},{v}_{ij},1]}^{\text{T}}$ can be calculated according to Eq. (2); that is, the parameters obtained previously are used to correct the image distortion.

Through the previously presented process, the corresponding relationship ${\tilde{p}}_{ij}\to {p}_{ij}\to {\widehat{p}}_{ij}\to {q}_{j}$ among all feature points on the target plane in each position and their corresponding image feature points is established. When the target manufacture errors can be ignored, the objective function minimizing the image projection error of the target feature points and minimizing the distance between the back-projection feature points ${\widehat{q}}_{ij}={H}_{i}^{-1}{\widehat{p}}_{ij}$ and their real values is established as follows:

$$\underset{{H}_{i},{\Delta}_{ij}}{\mathrm{min}}\sum _{i=1}^{M}\sum _{j=1}^{N}\left(A{\Vert {\widehat{p}}_{ij}-{H}_{i}{q}_{j}\Vert}^{2}+B{\Vert {H}_{i}^{-1}{\widehat{p}}_{ij}-{q}_{j}\Vert}^{2}\right), \tag{5}$$

where $A$ and $B$ are weight coefficients, $M$ is the position number of the target, and $N$ is the number of feature points detected.

Meanwhile, the objective function that minimizes the distance between the center of the target feature points and their real value in each position of the planar target is established, as shown in Eq. (6). The final optimization objective function, which combines Eqs. (5) and (6), is shown in Eq. (7). Nine constraint conditions of the nonlinear optimization are added to the final optimization objective function, as shown in Eq. (8), where *n* is a nonzero ratio coefficient and is set as *n* = 9.

For the optimization objective function Eq. (7) combined with the constraints in Eq. (8), the optimal solutions ${H}_{i}$ and ${\Delta}_{ij}$ can be obtained by using the Levenberg-Marquardt algorithm [27]. Then, the optimized image feature points ${\widehat{p}}_{ij}$ can be calculated.
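A minimal Python sketch of this joint refinement for a single view is given below. It uses `scipy.optimize.least_squares` with the Levenberg-Marquardt method; the constraint conditions of Eq. (8) are not reproduced here and are replaced, for illustration only, by a penalty that keeps the disturbance factors within the measured uncertainties. The function name and the weight parameters are ours.

```python
import numpy as np
from scipy.optimize import least_squares

def refine_points(p_init, q, H0, sigma=1.0, wA=1.0, wB=1.0):
    """Jointly refine one view's homography and the disturbance factors.

    p_init: (N, 2) initial image feature points; q: (N, 2) target-plane
    feature points; H0: (3, 3) initial homography.  The residuals combine
    the forward projection error (weight wA), the back-projection error on
    the target plane (weight wB), and a penalty delta/sigma that keeps the
    disturbance factors within the measured uncertainties -- an assumed
    stand-in for the paper's constraint conditions.
    """
    p_init = np.asarray(p_init, float)
    q = np.asarray(q, float)
    n = len(q)
    q_h = np.hstack([q, np.ones((n, 1))])

    def residuals(x):
        H = np.append(x[:8], 1.0).reshape(3, 3)   # h33 fixed to 1
        delta = x[8:].reshape(n, 2)               # disturbance factors
        p_hat = p_init + delta                    # disturbed image points
        proj = (H @ q_h.T).T                      # forward projection H q
        proj = proj[:, :2] / proj[:, 2:3]
        p_h = np.hstack([p_hat, np.ones((n, 1))])
        back = (np.linalg.inv(H) @ p_h.T).T       # back-projection H^-1 p_hat
        back = back[:, :2] / back[:, 2:3]
        return np.concatenate([wA * (p_hat - proj).ravel(),
                               wB * (back - q).ravel(),
                               (delta / sigma).ravel()])

    x0 = np.concatenate([(H0 / H0[2, 2]).ravel()[:8], np.zeros(2 * n)])
    sol = least_squares(residuals, x0, method="lm")  # Levenberg-Marquardt
    H = np.append(sol.x[:8], 1.0).reshape(3, 3)
    return H, p_init + sol.x[8:].reshape(n, 2)
```

With noise-free points and a slightly perturbed initial homography, the residuals vanish only at the true homography with zero disturbance, so the sketch recovers both.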

#### 3.3 Intrinsic and extrinsic parameters calibration of the camera

The optimized ${H}_{i}$ between the target plane and the image plane can be obtained through the process presented in Section 3.2 and can be expressed as Eq. (9):

$${H}_{i}=[\begin{array}{ccc}{h}_{1i}& {h}_{2i}& {h}_{3i}\end{array}]=\lambda A[\begin{array}{ccc}{r}_{1i}& {r}_{2i}& {t}_{i}\end{array}], \tag{9}$$

where $\lambda$ is a scale factor, and ${r}_{1i}$, ${r}_{2i}$, and ${t}_{i}$ are the first two columns of the rotation matrix and the translation vector in the *i*th target position, respectively.

Equation (10) can be set up considering the unit orthogonality between ${r}_{1i}$ and ${r}_{2i}$:

$$\left\{\begin{array}{l}{h}_{1i}^{\text{T}}{A}^{-\text{T}}{A}^{-1}{h}_{2i}=0\\ {h}_{1i}^{\text{T}}{A}^{-\text{T}}{A}^{-1}{h}_{1i}={h}_{2i}^{\text{T}}{A}^{-\text{T}}{A}^{-1}{h}_{2i}\end{array}\right., \tag{10}$$

where ${h}_{1i}$ and ${h}_{2i}$ are the first two columns of ${H}_{i}$.

Equation (10) is employed for each position of the target. Thus, the target can be placed more than three times to solve the linear solution of the intrinsic and extrinsic parameters of the camera.
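The linear step can be sketched as follows: each homography contributes the two constraints of Eq. (10) on the symmetric matrix $B={A}^{-\text{T}}{A}^{-1}$, the stacked system $Vb=0$ is solved by SVD, and the intrinsic parameters are read off with Zhang's closed-form expressions [6]. This is a generic Python illustration, not the authors' code.

```python
import numpy as np

def v_row(H, i, j):
    # v_ij from Zhang's formulation, built from columns h_i, h_j of H;
    # v_ij . b equals h_i^T B h_j with b = [B11, B12, B22, B13, B23, B33].
    hi, hj = H[:, i], H[:, j]
    return np.array([hi[0] * hj[0],
                     hi[0] * hj[1] + hi[1] * hj[0],
                     hi[1] * hj[1],
                     hi[2] * hj[0] + hi[0] * hj[2],
                     hi[2] * hj[1] + hi[1] * hj[2],
                     hi[2] * hj[2]])

def intrinsics_from_homographies(Hs):
    """Linear intrinsic solution from >= 3 plane-to-image homographies.

    Each H yields the two constraints of Eq. (10) on B = A^-T A^-1;
    stacking them gives V b = 0, solved as the null vector of V by SVD.
    """
    V = []
    for H in Hs:
        V.append(v_row(H, 0, 1))                   # h1^T B h2 = 0
        V.append(v_row(H, 0, 0) - v_row(H, 1, 1))  # h1^T B h1 = h2^T B h2
    b = np.linalg.svd(np.array(V))[2][-1]
    B11, B12, B22, B13, B23, B33 = b
    # Zhang's closed-form recovery of the intrinsic parameters from B.
    v0 = (B12 * B13 - B11 * B23) / (B11 * B22 - B12**2)
    lam = B33 - (B13**2 + v0 * (B12 * B13 - B11 * B23)) / B11
    fx = np.sqrt(lam / B11)
    fy = np.sqrt(lam * B11 / (B11 * B22 - B12**2))
    gamma = -B12 * fx**2 * fy / lam
    u0 = gamma * v0 / fy - B13 * fx**2 / lam
    return fx, fy, gamma, u0, v0
```

Every recovered quantity is a ratio of entries of $b$, so the arbitrary scale (and sign) of the SVD null vector cancels.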

Combining Eqs. (9) and (10) with Eqs. (1) and (2), the nonlinear optimization objective function is established, as shown in Eq. (11):

$$\underset{A,{k}_{1},{k}_{2},{R}_{i},{t}_{i}}{\mathrm{min}}\sum _{i=1}^{M}\sum _{j=1}^{N}{\Vert {\widehat{p}}_{ij}-\breve{p}(A,{k}_{1},{k}_{2},{R}_{i},{t}_{i},{q}_{j})\Vert}^{2}, \tag{11}$$

where $\breve{p}(A,{k}_{1},{k}_{2},{R}_{i},{t}_{i},{q}_{j})$ is the projection of ${q}_{j}$ according to Eqs. (1) and (2), $M$ is the total position number of the target, and $N$ is the number of feature points in each position. Finally, the optimal solutions of the intrinsic and extrinsic parameters of the camera are obtained by the nonlinear optimization method (e.g., the Levenberg-Marquardt algorithm [27]). The detailed solution can be found in [6,25].

## 4. Simulation experiment

A simulation experiment focusing on the effects of image noise on Zhang’s method and the proposed method is conducted to validate the performance of the proposed method. The planar target has 6 × 6 corner points, and the distance between adjacent points is 5 mm. The image resolution of the camera in the simulation experiment is 640 pixels × 480 pixels, and clear images can be captured when the object is placed in the range of 30 mm to 90 mm from the camera. The intrinsic parameters of the camera are as follows: ${f}_{x}=1024$, ${f}_{y}=960$, ${u}_{0}=320$, ${v}_{0}=240$, ${k}_{1}=0.2$, ${k}_{2}=-0.53$.

Gaussian noise with zero mean and standard deviations ranging from 0.1 pixels to 1 pixel with an interval of 0.1 pixels is added to the image feature points. The target is placed at 10 different positions in each trial, and a total of 100 independent trials based on Zhang’s method and the proposed method are performed for each noise level. The root mean square errors (RMSEs) of the intrinsic parameter calibration results of the camera compared with the real value of the intrinsic parameters, the reprojection error in the image plane, and the back-projection error in the target plane are considered to evaluate the performances of the proposed method and Zhang’s method.
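The evaluation protocol can be sketched with two small helpers (illustrative Python, with names of our choosing): one adds zero-mean Gaussian noise of a given pixel standard deviation to the image points, and the other computes the RMSE of repeated parameter estimates against the ground truth.

```python
import numpy as np

def add_noise(points, sigma, rng):
    """Add zero-mean Gaussian noise with pixel std sigma to image points."""
    return points + rng.normal(0.0, sigma, size=np.shape(points))

def rmse(estimates, truth):
    """RMSE of repeated parameter estimates against the ground truth."""
    e = np.asarray(estimates, dtype=float)
    return np.sqrt(np.mean((e - truth) ** 2))
```

In the protocol above, `add_noise` is applied to the projected feature points of each of the 10 target poses, and `rmse` is evaluated over the 100 independent trials at each noise level.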

#### 4.1 RMSE analysis of the camera parameters

As shown in Fig. 5, the RMSEs of ${f}_{x}$, ${f}_{y}$ based on the proposed calibration method are approximately 1/2 to 1/3 of those based on Zhang’s method. Meanwhile, the RMSEs of ${u}_{0}$, ${v}_{0}$ are approximately 1/5 to 1/6 of those based on Zhang’s method. With the increase in the noise level, the growth rates of the RMSEs of the effective focal length and principal point obtained by the proposed method are significantly lower than those obtained by Zhang’s method. For ${k}_{1}$, ${k}_{2}$, the results obtained by the two methods are comparable to each other.

#### 4.2 Analysis of the reprojection errors in the image plane

The reprojection errors in the image plane are computed by the deviations between the real image points and the corresponding reprojection image points, where the reprojection image points are obtained by projecting the feature points in the target plane to the image plane according to the camera calibration results. Figure 6 shows the reprojection RMSEs of the target feature points in the *u*, *v* directions in the image coordinate frame. The reprojection errors of the target feature points obtained by the proposed method are less than 0.1 pixels and remain stable with the increase in the noise level. By contrast, the reprojection errors of the target feature points based on Zhang’s method increase linearly with the increase in the noise level.
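Computed per trial, the reprojection statistics above reduce to a few lines (illustrative Python; the reprojected points are assumed to have been produced by the calibrated camera model):

```python
import numpy as np

def reprojection_errors(p_obs, p_reproj):
    """Per-point reprojection errors and their RMSE in the image plane.

    p_obs: (N, 2) observed feature points; p_reproj: (N, 2) points obtained
    by projecting the target-plane features with the calibration result.
    Returns the per-point Euclidean errors and the RMSE in u and v.
    """
    d = np.asarray(p_obs, float) - np.asarray(p_reproj, float)
    per_point = np.linalg.norm(d, axis=1)
    rmse_uv = np.sqrt(np.mean(d ** 2, axis=0))  # RMSE in u and v separately
    return per_point, rmse_uv
```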

The comparison of the reprojection errors of the target feature points based on Zhang’s method and the proposed method when the image noise is 0.1 pixels is shown in Fig. 7. As shown in Figs. 7(a) and 7(b), the standard deviation of the reprojection errors based on the proposed method is reduced from 0.094 pixels to 0.006 pixels compared with Zhang’s method. Therefore, the reprojection error of the target feature points based on the proposed method is approximately 1/15 of that based on Zhang’s method. Figures 7(c) and 7(d) are statistical diagrams of the data shown in Figs. 7(a) and 7(b), respectively. As shown in Figs. 7(c) and 7(d), the reprojection errors based on the proposed method are within 0.02 pixels, whereas those based on Zhang’s method are distributed in the range of 0 pixels to 0.3 pixels.

#### 4.3 Analysis of the back-projection errors in the target plane

The back-projection errors of the image feature points are the deviations between the real feature points and the estimated feature points in the target plane, where the estimated feature points are computed by back-projecting the 2D image feature points to the target plane using the *H* matrix from the image plane to the target plane. The back-projection errors of the image feature points with a noise level of 0.1 pixels are shown in Fig. 8. As shown in Figs. 8(a) and 8(d), the standard deviation of the back-projection errors obtained by Zhang’s method is 0.007 mm, whereas that obtained by the proposed method is 0.0004 mm, which is less than 1/10 of that obtained by Zhang’s method. Figures 8(b) and 8(e) are the statistical diagrams of the data shown in Figs. 8(a) and 8(d), respectively. The back-projection errors based on the proposed method are within 0.002 mm, whereas those based on Zhang’s method are distributed in the range of 0 mm to 0.018 mm. In addition, fewer points obtained by Zhang’s method lie within 0.002 mm than those obtained by the proposed method.
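The back-projection itself is a single inverse-homography transform (illustrative Python sketch):

```python
import numpy as np

def back_project(H, p):
    """Back-project (N, 2) image points to the target plane with H^-1.

    H maps homogeneous target-plane points to homogeneous image points;
    the result is dehomogenized to 2D target-plane coordinates.
    """
    p_h = np.hstack([np.asarray(p, float), np.ones((len(p), 1))])
    q = (np.linalg.inv(H) @ p_h.T).T
    return q[:, :2] / q[:, 2:3]
```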

As shown in Figs. 8(c) and 8(f), the 3D coordinate errors are the deviations between the real 3D coordinates of the feature points and the estimated 3D feature points in the camera coordinate frame, where the estimated 3D feature points are reconstructed from the calibration results obtained by Zhang’s method and the proposed method when the image noise is 0.1 pixels, respectively. The 3D coordinate errors of the target feature points based on the calibration result of the proposed method are lower than those based on Zhang’s calibration result. Meanwhile, the location errors of the target feature points along the *z*-axis are larger than those along the *x*- and *y*-axes, which is consistent with practical experience.

## 5. Physical experiment

An endoscopic camera is a vision sensor used widely in industrial and medical fields. The endoscopic camera has the advantages of small size, light weight, and ability to capture the internal image of a narrow space. However, the resolution of the endoscopic camera is low and the image quality captured by this kind of camera is poor. Endoscopic cameras are rarely used for high-precision 3D reconstruction because it is difficult to achieve high-accuracy calibration. In this study, the proposed method and Zhang’s method are applied to the most common electronic endoscope. The calibration results of the proposed method and Zhang’s method are analyzed comparatively to validate the effectiveness of the proposed method.

As shown in Fig. 9, the resolution of the electronic endoscope camera in the experiment is 640 pixels × 480 pixels. Clear images can be captured in the range of 40 mm to 90 mm. The planar target is used in the calibration, and the distance between the adjacent feature points is 3.5 mm in the horizontal and vertical directions. The number of feature points is 10 × 10, and the target manufacture accuracy is 2 µm.

In the physical experiment, the planar target is placed at 10 different positions, and the object-to-camera distance is 50 mm to 60 mm. At each position, 36 corner points (6 × 6) are extracted in the image because of the field of view of the camera. All the target images used for calibration are shown in Fig. 10.

The proposed method and Zhang’s method use the same images to calibrate the camera parameters. The calibration toolbox program in [26] written by using Matlab is adopted for both methods. In addition, the calibration results of the two methods are analyzed by comparing the uncertainties of the camera parameters, the reprojection errors of the target feature points, and the back-projection errors of the image feature points.

As shown in Fig. 11, the red horizontal arrows and the red vertical arrows denote the disturbance factors $\Delta {u}_{ij}$ and $\Delta {v}_{ij}$, respectively. To make the arrows clearer, the real disturbance factors are magnified 300 times.

#### 5.1 Analysis of the uncertainties of the camera parameters

The calibration results of the intrinsic parameters of the camera obtained by the two methods are shown in Table 1. As shown in Table 2, the uncertainties of ${f}_{x}$, ${f}_{y}$, $\gamma $, ${u}_{0}$, ${v}_{0}$, ${k}_{1}$, ${k}_{2}$ are ${U}_{fx}$, ${U}_{fy}$, ${U}_{\gamma}$, ${U}_{u0}$, ${U}_{v0}$, ${U}_{k1}$, ${U}_{k2}$, respectively, and are provided by the Matlab calibration toolbox [26].

Table 2 shows that the uncertainties obtained by the proposed method are significantly less than those obtained by Zhang’s method. By using the proposed method, the uncertainties of ${f}_{x}$, ${f}_{y}$ and ${k}_{1}$, ${k}_{2}$ are reduced by a factor of approximately three compared with those of Zhang’s method. Meanwhile, the uncertainties of the principal point coordinates obtained by the proposed method are reduced by a factor of four to six.

#### 5.2 Analysis of the reprojection errors of the target feature points

The reprojection error distributions of the target feature points based on the proposed method and Zhang’s method are illustrated in Figs. 12(a) and 12(b). The figures show that the standard deviation of the reprojection errors based on the proposed method is 0.017 pixels, whereas that based on Zhang’s method is 0.050 pixels. Notably, the calibration accuracy is improved by approximately three times by the proposed method. Figures 12(c) and 12(d) show the target image feature points (green crosses) and their reprojection errors (red arrows) based on the calibration results of the two methods, respectively. The reprojection errors of the target feature points (red arrows) are magnified 100 times to display this result clearly. The comparison of Figs. 12(c) and 12(d) reveals that the reprojection errors obtained by the proposed method are less than those obtained by Zhang’s method. Figures 12(e) and 12(f) are the statistical analysis diagrams of the reprojection errors based on the calibration results of the two methods. The figures show that the reprojection error distribution of the target feature points based on the proposed method is significantly better than that of Zhang’s method.

#### 5.3 Analysis of the back-projection errors of the image feature points

The back-projection error distributions of the image feature points based on the proposed method and Zhang’s method are illustrated in Figs. 13(a) and 13(b). The figures show that the standard deviation of the back-projection errors of the image feature points based on the proposed method is 0.001 mm, whereas that based on Zhang’s method is 0.003 mm. Thus, the calibration accuracy is improved by approximately three times by the proposed method. Figures 13(c) and 13(d) illustrate the feature points (blue crosses) on the target plane and their back-projection errors (red arrows) based on the calibration results of the two methods, respectively. The back-projection errors of the image feature points (red arrows) are magnified 100 times. The comparison of Figs. 13(c) and 13(d) indicates that the back-projection errors obtained by the proposed method are less than those obtained by Zhang’s method. Figures 13(e) and 13(f) are the statistical analysis diagrams of the back-projection errors based on the calibration results of the two methods. The figures show that the back-projection error distribution based on the proposed method is significantly better than that of Zhang’s method.

Of the 10 images for calibration, 1 image is picked randomly. The 3D display form of the back-projection errors of the feature points in this image based on the two methods is shown in Fig. 14. The back-projection errors of different feature points obtained by Zhang’s method are obviously different. As shown in Fig. 14(a), the overall fluctuation is larger and the maximum value of the back-projection errors is approximately 0.007 mm. However, the back-projection errors of different feature points obtained by the proposed method are obviously closer to each other. As shown in Fig. 14(b), the overall fluctuation is flat and the maximum value of the back-projection errors is less than 0.002 mm.

The image feature points are back-projected to the target plane, and the distance between any two of the back-projected feature points is denoted by *d*_{m}, whereas the distance between the corresponding two real feature points on the target plane is denoted by *d*_{t}. ∆*d* denotes the deviation between *d*_{m} and *d*_{t}. The statistical distributions of ∆*d* obtained by the two methods are shown in Fig. 15. The horizontal axis is the distance between two feature points in the range of 3.5 mm to 17.5 mm (the minimum distance between target feature points is 3.5 mm). In Fig. 15, the thin horizontal line in each green or pink rectangle represents the mean value of ∆*d* at each *d*_{t}, and the rectangle spans the 25% (lower boundary) to 75% (upper boundary) percentiles of ∆*d*; that is, the rectangle shows that 50% of the ∆*d* values are distributed in the vicinity of the mean value. The rectangles for the proposed method are obviously smaller than those for Zhang’s method, which means that the distribution of ∆*d* in the proposed method is more concentrated. The mean value and standard deviation of ∆*d* based on the proposed method are 0.001 mm and 0.001 mm, respectively, whereas those based on Zhang’s method are 0.003 mm and 0.003 mm, respectively. Thus, the calibration accuracy of the proposed method is three times that of Zhang’s method.
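The ∆*d* statistic can be computed over all point pairs as follows (illustrative Python; the function name is ours):

```python
import numpy as np

def pairwise_distance_deviations(est, truth):
    """Deviations |d_m - d_t| between all point-pair distances.

    est, truth: (N, 2) back-projected and real target-plane coordinates.
    Returns the deviation delta-d for every unordered point pair.
    """
    est, truth = np.asarray(est, float), np.asarray(truth, float)
    iu = np.triu_indices(len(est), k=1)  # indices of unordered pairs
    dm = np.linalg.norm(est[:, None] - est[None, :], axis=-1)[iu]
    dt = np.linalg.norm(truth[:, None] - truth[None, :], axis=-1)[iu]
    return np.abs(dm - dt)
```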

## 6. Conclusions

In this paper, we assumed that the location errors of the image feature points are inevitable and that the location accuracy of the image feature points cannot be improved through image processing methods alone. Through nonlinear optimization, the proposed method determines the optimal coordinates of the image feature points based on the corresponding relationship between the camera image plane and the high-accuracy planar target. Moreover, the proposed method is less affected by image noise. High-accuracy calibration of a low-cost camera with low resolution and large image noise can be achieved by using the proposed method. The proposed method is simple and efficient, and does not need the image transformation process. Through the simulation and physical experiments, the calibration accuracy of the proposed method is validated to be at least three times that of Zhang’s method when the same images are used for calibration.

## Funding

National Natural Science Foundation of China (NSFC) (Grant Nos. 51175027 and 51575033); National Key Scientific Instruments and Equipment Program (NO. 2012YQ140032).

## References and links

**1. **S. Shirmohammadi and A. Ferrero, “Camera as the instrument: the rising trend of vision based measurement,” IEEE Trans. Instrum. Meas. **17**(3), 41–47 (2014). [CrossRef]

**2. **E. N. Malamas, E. G. M. Petrakis, M. Zervakis, L. Petit, and J. D. Legat, “A survey on industrial vision systems, applications and tools,” Image Vis. Comput. **21**(2), 171–188 (2003). [CrossRef]

**3. **R. Y. Tsai, “A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV camera and lenses,” IEEE J. Robot. Autom. **3**(4), 323–344 (1987). [CrossRef]

**4. **J. Heikkila, “Geometric camera calibration using circular control points,” IEEE Trans. Pattern Anal. Mach. Intell. **22**(10), 1066–1077 (2000). [CrossRef]

**5. **J. H. Kim and B. K. Koo, “Convenient calibration method for unsynchronized camera networks using an inaccurate small reference object,” Opt. Express **20**(23), 25292–25310 (2012). [CrossRef] [PubMed]

**6. **Z. Y. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. **22**(11), 1330–1334 (2000). [CrossRef]

**7. **J. S. Kim, P. Gurdjos, and I. S. Kweon, “Geometric and algebraic constraints of projected concentric circles and their applications to camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. **27**(4), 637–642 (2005). [CrossRef] [PubMed]

**8. **Y. H. Wu, X. J. Li, F. C. Wu, and Z. Hu, “Coplanar circles, quasi-affine invariance and calibration,” Image Vis. Comput. **24**(4), 319–326 (2006). [CrossRef]

**9. **D. Douxchamps and K. Chihara, “High-accuracy and robust localization of large control markers for geometric camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. **31**(2), 376–383 (2009). [CrossRef] [PubMed]

**10. **L. Huang, Q. Zhang, and A. Asundi, “Camera calibration with active phase target: improvement on feature detection and optimization,” Opt. Lett. **38**(9), 1446–1448 (2013). [CrossRef] [PubMed]

**11. **Y. Hong, G. Ren, and E. Liu, “Non-iterative method for camera calibration,” Opt. Express **23**(18), 23992–24003 (2015). [CrossRef] [PubMed]

**12. **C. Ricolfe-Viala and A. J. Sanchez-Salmeron, “Camera calibration under optimal conditions,” Opt. Express **19**(11), 10769–10775 (2011). [CrossRef] [PubMed]

**13. **Z. Liu, F. J. Li, X. J. Li, and G. J. Zhang, “A novel and accurate calibration method for cameras with large field of view using combined small targets,” Measurement **64**(3), 1–16 (2015).

**14. **Z. Zhang, “Camera calibration with one-dimensional objects,” IEEE Trans. Pattern Anal. Mach. Intell. **26**(7), 892–899 (2004). [CrossRef] [PubMed]

**15. **F. C. Wu, Z. Y. Hu, and H. J. Zhu, “Camera calibration with moving one-dimensional objects,” Pattern Recognit. **38**(5), 755–765 (2005). [CrossRef]

**16. **F. Qi, Q. Li, Y. Luo, and D. Hu, “Camera calibration with one-dimensional objects moving under gravity,” Pattern Recognit. **40**(1), 343–345 (2007). [CrossRef]

**17. **H. Zhang, K. Y. Wong, and G. Zhang, “Camera calibration from images of spheres,” IEEE Trans. Pattern Anal. Mach. Intell. **29**(3), 499–502 (2007). [CrossRef] [PubMed]

**18. **X. Ying and H. Zha, “Geometric interpretations of the relation between the image of the absolute conic and sphere images,” IEEE Trans. Pattern Anal. Mach. Intell. **28**(12), 2031–2036 (2006). [CrossRef] [PubMed]

**19. **K. K. Wong, R. S. P. Mendonca, and R. Cipolla, “Camera calibration from surfaces of revolution,” IEEE Trans. Pattern Anal. Mach. Intell. **25**(2), 147–161 (2003). [CrossRef]

**20. **C. Colombo, D. Comanducci, and A. D. Bimbo, “Camera calibration with two arbitrary coaxial circles,” in *Proceedings of European Conference on Computer Vision* (Springer, 2006), pp. 265–276. [CrossRef]

**21. **A. Albarelli, E. Rodola, and A. Torsello, “Robust camera calibration using inaccurate target,” in *Proceedings of the British Machine Vision Conference* (2010), pp. 1–10.

**22. **H. Strobl and G. Hirzinger, “More accurate camera and hand-eye calibration with unknown grid pattern dimensions,” in *Proceedings of IEEE Conference on Robotics and Automation* (IEEE, 2008), pp. 1398–1405. [CrossRef]

**23. **L. Huang, Q. Zhang, and A. Asundi, “Flexible camera calibration using not-measured imperfect target,” Appl. Opt. **52**(25), 6278–6286 (2013). [CrossRef] [PubMed]

**24. **K. Nakano, M. Okutomi, and Y. Hasegawa, “Camera calibration with precise extraction of feature points using projective transformation,” in *Proceedings of IEEE Conference on Robotics and Automation* (IEEE, 2002), pp. 2532–2538. [CrossRef]

**25. **A. Datta, J. S. Kim, and T. Kanade, “Accurate camera calibration using iterative refinement of control points,” in *Proceedings of IEEE Conference on Computer Vision Workshops* (IEEE, 2009), pp. 1201–1208. [CrossRef]

**26. **J. Y. Bouguet, “The MATLAB open source calibration toolbox,” http://www.vision.caltech.edu/bouguetj/calib_doc/.

**27. **J. J. Moré, “The Levenberg-Marquardt algorithm: implementation and theory,” in *Numerical Analysis*, Lecture Notes in Mathematics **630** (Springer, 1977), pp. 105–116.