
High-accuracy camera calibration method based on coded concentric ring center extraction

Open Access

Abstract

In the field of three-dimensional (3-D) metrology based on fringe projection profilometry (FPP), accurate camera calibration is an essential task and a primary requirement. To improve the accuracy of camera calibration, the calibration board or calibration target needs to be manufactured with high accuracy, and the marker points in the calibration images must be positioned with high accuracy. This paper presents an improved camera calibration method that simultaneously optimizes the camera parameters and the target geometry. Specifically, a set of regularly distributed target markers with a rich coded concentric ring pattern is first displayed on a liquid crystal display (LCD) screen. Then, the sub-pixel edges of the radial straight lines of all code bands are automatically located at several positions of the LCD screen. Finally, the sub-pixel edge point set is mapped into the parameter space to form a line set, and the intersection of the lines is taken as the center pixel coordinates of each target point to complete the camera calibration. Simulation and experimental results verify that the proposed camera calibration method is feasible and easy to operate, and that it essentially eliminates the perspective transformation error, improving the accuracy of the camera parameters and target geometry.

Published by Optica Publishing Group under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

1. Introduction

In fringe projection profilometry (FPP)-based three-dimensional (3-D) measurement fields, such as industrial manufacturing, heritage conservation, microsurgery, autonomous navigation and reverse engineering [1–6], accuracy is the primary requirement and standard. The accuracy of camera calibration directly affects the accuracy of subsequent 3-D tracking and reconstruction. Camera calibration is an enduring challenge, from early photogrammetry [7–9] to recent machine vision [10–12] and optical 3-D measurement [13,14], and the accurate calibration of camera parameters is an important part of achieving the ultimate observation purpose.

Two-dimensional (2-D) plane calibration targets can easily be manufactured with high accuracy by printing the calibration pattern on a planar board or displaying it on a high-quality display screen. Therefore, the demand for coded targets that ensure automatic and accurate image point extraction has increased dramatically [15–17]. The identification of marker points is also the key to image matching and stitching, and to accurately register marker points on 2-D plane target images acquired from different viewpoints, the calibration patterns should have some unique features. The most widely used calibration patterns are checkerboard [18,19], circular lattice [16,20–22], square lattice [23], sinusoidal grating [24], orthogonal grating [25,26] and speckle [27] patterns. Among them, owing to their high-accuracy positioning, easy identification and good stability, coded marker points have been widely used in the field of computer vision, especially in camera calibration and 3-D measurement.

The identification of coded marker points is the key to matching and stitching images from different viewpoints. However, due to the influence of environmental factors, it is often difficult to accurately obtain the coded values of marker points in real measurements. Decoding methods for circular coded marker points have therefore been studied extensively.

Ahn et al. [28] briefly reviewed various types of coded target pattern designs for automating 3-D measurement procedures. The coded information is mainly embedded in the form of point codes, linear codes, radial codes, digital characters or angular bar codes; decoding is carried out by analyzing the change of image gray values along an appropriate image sampling path. They also designed their own type of coded target, based on a central circle surrounded by a series of smaller dots that encode the identity of the circular target. Chen et al. [29] discussed in detail the encoding and decoding methods of different marker points in photogrammetry to best prepare for subsequent image matching. Wijenayake and Heuvel [30,31] proposed concentric rings that are relatively easy to recognize and decode. However, affected by the large number of measured parameters and by environmental factors, the coded values of marker points are often difficult to calculate accurately in real measurements. To decode a marker point, Forbes et al. [32] mapped the ellipse of the coded marker point back to a unit circle by an inverse affine transformation, and then obtained a binary value at certain angular intervals using a circular-arc segmentation method. The disadvantage is that both image noise and uneven segmentation affect the decoding. Many least-squares ellipse fitting algorithms [33–35] have been applied to extract the central coordinates of circular markers. However, circular marker points are greatly affected by the perspective projection transformation, so there is a certain deviation between the center obtained by traditional ellipse fitting and the true circular center. To solve this problem, Ahn et al. [36] investigated various practical network set-ups to compensate for the eccentricity error. He et al. [37] solved the model proposed by Ahn [36] linearly by using a concentric ring model and eliminated unnecessary parameters: only the inner and outer radii of the ring and the centers of the two imaged ellipses need to be known to calculate the true center of the circle. Their method is easy to implement and corrects the center positioning deviation well. However, due to the approximate calculation used in the simplification procedure, increasing the viewing angle or the size of the marker point reduces the correction accuracy of the center. Chen et al. [38] proposed a method to obtain circular centers by using the principles of the harmonic conjugate and polar correspondence. The undistorted coded value can also be obtained by correcting the marker point image into an orthographic projection through the ellipse parameters and an affine transformation. In general, there are two possible sources of deviation in the captured calibration images, namely the projective transformation deviation and the lens distortion deviation. Solving these two problems is one of the main time-consuming tasks of camera calibration and 3-D measurement when the camera parameters are not well calibrated.

To address the above limitations, this paper proposes an improved high-accuracy camera calibration method that combines a coded concentric ring calibration target with a sub-pixel edge retrieval algorithm for automatic identification of target point centers. Each target (black dot) on the calibration pattern is surrounded by a unique code band that is used to identify the marker point. The coded concentric ring target carries rich coding information. No matter in what pose the calibration target is captured by the camera, the center coordinates of the marker points, retrieved from the intersection of the sub-pixel radial line edges, are not affected by the perspective projection distortion, which effectively guarantees the extraction accuracy of the marker points. Even if the manufacturing accuracy of the calibration target is not high, camera calibration of the same level of accuracy can still be achieved with the proposed method.

This paper is organized as follows. Section 2 introduces the principle of the proposed coded concentric ring calibration method to eliminate perspective projection error. Section 3 shows simulation and test results to verify the perspective projection compensation method. Conclusions and final remarks are given in Section 4.

2. Principle

2.1 Calibration principle of camera

Ideally, the camera model is based on the pinhole imaging model [39], that is, the object point, image point and optical center are collinear, as shown in Fig. 1. During camera imaging, the coordinate conversion from a space object point P(xw, yw, zw) to its image point p(u, v) on the imaging plane is defined by the following equation [40].

$$\begin{bmatrix} u\\ v\\ 1 \end{bmatrix} = s\mathbf{A}\begin{bmatrix} \mathbf{R} & \mathbf{T} \end{bmatrix}\begin{bmatrix} x_w\\ y_w\\ z_w\\ 1 \end{bmatrix} = s\begin{bmatrix} f_x & \gamma & u_0\\ 0 & f_y & v_0\\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} \mathbf{R} & \mathbf{T}\\ \mathbf{0} & 1 \end{bmatrix}\begin{bmatrix} x_w\\ y_w\\ z_w\\ 1 \end{bmatrix}\tag{1}$$
where s is an arbitrary scale factor; A is the intrinsic parameter matrix of the camera, in which (fx, fy) are the focal lengths of the camera along the two image axes, (u0, v0) are the principal point coordinates, and γ represents the skew coefficient of the two image axes; T is a 3 × 1 translation vector between the world coordinate system (xw, yw, zw) and the camera coordinate system (xc, yc, zc); and R is a 3 × 3 rotation matrix between the two coordinate systems. R and T together form the extrinsic parameter matrix of the camera.
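As a concrete numeric illustration of Eq. (1), the following minimal NumPy sketch projects a world point through assumed intrinsic and extrinsic parameters (all numeric values are illustrative, not taken from the paper):

```python
import numpy as np

# Illustrative intrinsic matrix A of Eq. (1): focal lengths, skew, principal point.
fx, fy, gamma, u0, v0 = 1200.0, 1200.0, 0.0, 1024.0, 1024.0
A = np.array([[fx, gamma, u0],
              [0.0, fy, v0],
              [0.0, 0.0, 1.0]])

R = np.eye(3)                            # extrinsic rotation (identity here)
T = np.array([0.0, 0.0, 800.0])          # extrinsic translation

Pw = np.array([30.0, 60.0, 0.0])         # world point (x_w, y_w, z_w)

# [u, v, 1]^T ~ A [R | T] [x_w, y_w, z_w, 1]^T; dividing by the third
# component plays the role of the scale factor s.
uvw = A @ (R @ Pw + T)
u, v = uvw[:2] / uvw[2]
print(u, v)                              # pixel coordinates of the image point
```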

Fig. 1. Imaging model of the pinhole camera.

In actual measurements, the lens distortions of the camera include radial and tangential distortion, which may introduce errors into the camera model, degrade the image quality and deteriorate the calibration results. Since tangential distortion is negligible for modern camera lenses and may cause unstable optimization, only radial distortion is considered to compensate for lens distortion. The radial distortion model can be written as

$$\delta_r = k_1 r^2 + k_2 r^4\tag{2}$$
where k1 and k2 are the first-order and second-order radial distortion coefficients; r represents the radial distance from the pixel point to the image center.

The position coordinates of the actual image points with distortion can be expressed as

$$\left\{ \begin{array}{l} \hat{x} = x(1 + k_1 r^2 + k_2 r^4)\\ \hat{y} = y(1 + k_1 r^2 + k_2 r^4) \end{array} \right.\tag{3}$$
where (x, y) are the ideal (undistorted) image point coordinates and r2 = x2 + y2.
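A minimal sketch of Eq. (3), applying the two-term radial model to ideal image coordinates (the coefficient values below are illustrative assumptions):

```python
def apply_radial_distortion(x, y, k1, k2):
    """Eq. (3): map ideal (undistorted) image coordinates (x, y) to the
    distorted coordinates, with r^2 = x^2 + y^2."""
    r2 = x**2 + y**2
    factor = 1.0 + k1 * r2 + k2 * r2**2
    return x * factor, y * factor

# Illustrative coefficients only:
print(apply_radial_distortion(0.3, -0.2, k1=-0.12, k2=0.03))
```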

2.2 Creation of coded concentric ring calibration targets

A calibration target whose center serves as the marker point is designed. It allows the center of the marker point image to be extracted with sub-pixel accuracy even at arbitrary poses and positions.

To ensure the stability of decoding and the accuracy of center extraction, an n-bit Schneider coding scheme is used to number each marker point. Each circular black dot on the calibration target is surrounded by a code band with bit positions at equally spaced angular intervals. The coded concentric ring surrounding the center target is divided into n equal parts, and each part is regarded as one binary bit. Figure 2 shows an example of a 12-bit Schneider coding target structure. A white region represents “1”, and a black region represents “0”. Reading the code clockwise and regarding each bit in turn as the first bit yields 12 candidate binary numbers; the minimum of these values is used as the number identifying the code pattern.
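A minimal sketch of this rotation-invariant decoding rule (the function name is illustrative):

```python
def schneider_id(bits):
    """Rotation-invariant ID of an n-bit Schneider code band.

    bits: 0/1 values of the n sectors read clockwise from an arbitrary
    starting sector. Taking the minimum over all cyclic rotations makes
    the ID independent of where the reading starts.
    """
    n = len(bits)
    candidates = []
    for shift in range(n):
        rotated = bits[shift:] + bits[:shift]
        candidates.append(int("".join(map(str, rotated)), 2))
    return min(candidates)

# Example: a 12-bit band decodes to the same ID from any starting sector.
band = [1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1]
assert schneider_id(band) == schneider_id(band[5:] + band[:5])
print(schneider_id(band))
```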

Fig. 2. Diagrams showing (a) a layout of the target and coded concentric ring pattern, and (b) an example of the code on the object plane.

2.3 Pre-processing of calibration target image

Before extracting the center points of the coded marker points, the marker points need to be segmented from the background image, that is, the calibration target image must be pre-processed. The pre-processing of the marker point image can be performed in the following six steps.

Step 1: Detection of edges. The Canny operator [41] is used to obtain the edge contours of the marker points. It applies a Gaussian filter as preprocessing, which gives it good noise immunity and a strong ability to detect weak edges.

Step 2: Removal of false edges. The false edges of texture, noise and other non-marker points are eliminated by two principles: closure judgment and area/perimeter judgment.

Step 3: Detection of the innermost ellipse of the marker point. Once the central ellipse of the marker point in the image is determined, the rest of the image is the coded concentric ring.

Step 4: Segmentation of marker points. The center of mass of each coded concentric ring is extracted, and the distances from this center of mass to the centers of all innermost ellipses are calculated. The innermost ellipse nearest to each ring band is grouped with it to form a marker point; the minimum bounding rectangle of each marker point is then calculated, and the marker point region is segmented after adding a certain margin.

Step 5: Image processing. For the segmented single marker point images, the noise is first suppressed by Gaussian filtering. Then, grayscale normalization is performed to eliminate grayscale differences caused by illumination, so that the grayscale range of the image is mapped to 0–255. In this paper, the image grayscale normalization is achieved by the following linear grayscale transformation function.

$$y_i = 255 \times \frac{x_i - x_{\min}}{x_{\max} - x_{\min}}\tag{4}$$
where yi is the gray value after transformation; xi is the gray value before transformation; xmin and xmax are the minimum and maximum gray values in the image, respectively.

Step 6: Adjustment of image size. Bilinear interpolation is used to resize each single marker point image to a uniform size. A code sketch of Steps 5 and 6 is given below.
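The following sketch illustrates Steps 5 and 6 on one segmented marker point region, assuming OpenCV; the function name and parameter values are illustrative:

```python
import cv2
import numpy as np

def preprocess_marker(roi, out_size=256):
    """Steps 5-6 for one segmented single-marker-point image (Steps 1-4,
    i.e. edge detection and segmentation, are assumed already done)."""
    # Step 5a: suppress noise with Gaussian filtering.
    smoothed = cv2.GaussianBlur(roi, (5, 5), 1.0)

    # Step 5b: linear grayscale normalization, Eq. (4):
    # y_i = 255 * (x_i - x_min) / (x_max - x_min)
    x = smoothed.astype(np.float64)
    span = max(x.max() - x.min(), 1e-12)     # guard against a flat patch
    norm = 255.0 * (x - x.min()) / span

    # Step 6: resize to a uniform size with bilinear interpolation.
    return cv2.resize(norm.astype(np.uint8), (out_size, out_size),
                      interpolation=cv2.INTER_LINEAR)
```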

2.4 Extraction of target center

2.4.1 Radial line fitting center method

An image of a 12-bit coded concentric ring marker is shown in Fig. 3(a). Since the code band of the marker point is concentric with the innermost circular target, the radial line edges of the code band pass through the center of the circular target. Under the perspective projection transformation, the coordinates of a point and a line on the plane transform as follows:

$$\left\{ \begin{array}{l} x' = \mathbf{H}x\\ I' = \mathbf{H}^{-T}I \end{array} \right.\tag{5}$$
where H is the homography matrix between the object plane and the imaging plane; x, x′, I and I′ are the homogeneous coordinates of the point and the line before and after the projection transformation. The condition for a point on the plane to lie on a line can be expressed as:
$$x^T I = 0\tag{6}$$

Fig. 3. Coded concentric ring marker point. (a) Marker point on the object plane before projection transformation; (b) marker point on the image plane after projection transformation.

According to Eq. (5), the relationship between x’ and I’ after projection transformation is shown as follows:

$$x'^T I' = x^T\mathbf{H}^T\mathbf{H}^{-T}I = x^T I = 0\tag{7}$$

According to the above derivation, the projection transformation preserves the incidence of points and lines. Even if the image undergoes a large-angle projective mapping, that is, even if the perspective projection error is large, the intersection of lines is mapped to the intersection of the transformed lines. Therefore, the target center can be obtained from the radial lines of the coded concentric ring. A marker point on the image plane after the projection transformation is shown in Fig. 3(b).
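A quick numeric check of Eqs. (5)–(7) with an arbitrary synthetic homography (a sketch; none of the values come from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.normal(size=(3, 3))             # arbitrary non-singular homography
x = np.array([2.0, -1.0, 1.0])          # homogeneous point
l = np.array([1.0, 3.0, 0.0])           # homogeneous line; now force x onto it:
l[2] = -(l[0] * x[0] + l[1] * x[1])     # so that x^T l = 0, Eq. (6)

x_p = H @ x                             # x' = Hx, Eq. (5)
l_p = np.linalg.inv(H).T @ l            # l' = H^{-T} l, Eq. (5)
print(x @ l, x_p @ l_p)                 # both ~0: x' still lies on l', Eq. (7)
```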

The image in the red frame is magnified 5× in the x and y directions, as illustrated in the left part of Figs. 3(a) and 3(b). The red point is the accurate target center, the yellow cross is the intersection of all lines, and the blue cross is the circle center extracted by the ellipse fitting method. After the perspective projection transformation, the red point still matches the yellow cross well, indicating high center extraction accuracy, while the blue cross shows a positioning deviation.

2.4.2 Identification and localization of marker points

To automate the calibration procedure, the imaged target center points must be automatically identified and located within the images. Identification and localization proceed by the following steps.

  • (1) Identification of sub-pixel edge
Firstly, boundary tracing is performed to find the point sets of the two linear edges of each connected domain, and the radial linear edges of all connected domains on the code band are segmented. It should be noted that the positioning accuracy of the linear edge points directly affects the extraction accuracy of the marker point center, so a sub-pixel edge identification algorithm is applied to the extracted line edge points. Based on simulation results, a Gaussian fitting algorithm is selected for sub-pixel edge identification because of its noise resistance and localization accuracy [41]. In the gradient direction of an image edge, the first-order derivative of the grayscale distribution is approximately Gaussian, and the center of the Gaussian distribution is the most accurate position of the edge point.
$$y = \frac{k}{\sqrt{2\pi\sigma^2}}\exp\left[-\frac{(x-\mu)^2}{2\sigma^2}\right]\tag{8}$$
where y is the first-order derivative of the gray value; x is the coordinate point in the edge gradient direction; k is the magnitude of the Gaussian function, representing the contrast on both sides of the image edge; σ is the standard deviation; µ is the offset of the center point and also represents the offset of the sub-pixel edge point. Once the position parameter µ is derived, the sub-pixel position of the edge can be obtained.
  • (2) Localization of target center coordinates
Since fitting the extracted edge points to straight lines and then finding the intersection of multiple lines accumulates fitting errors, resulting in low center location accuracy, a center coordinate finding method based on the Hough transform and the random sample consensus (RANSAC) algorithm is proposed [42].

Firstly, the obtained sub-pixel edge points are mapped into lines in the parameter space, and then all the intersections of these mapped lines are calculated; each intersection corresponds to the line through two edge points in the original space. Since most of those lines in the original space pass through the center of the marker point, the center point can be extracted by fitting these intersections in the parameter space. To reduce the effect of noisy points and falsely extracted points on the result, the intersections are fitted to a line with the RANSAC algorithm in the parameter space, and this line is mapped back to the original space to give the center coordinates (a minimal code sketch of this procedure follows this list). Since this method does not fit lines from multiple points and then intersect them, but directly uses the edge points of all lines to calculate the center, it has no cumulative error and a higher extraction accuracy. The RANSAC algorithm estimates the model parameters from the “inliers (correct data)” in a data set containing “outliers (noisy data)” through an iterative method. The number of iterations m is:

$$m = \frac{\log(1-p)}{\log(1-\omega^n)}\tag{9}$$
where n is the minimum number of points required to estimate the model; ω is the probability that a sampled point is an inlier; and p is the probability that all points sampled in some iteration are inliers. By increasing the number of iterations m, p can be made arbitrarily close to 1. After fitting the intersection set in the parameter space with the RANSAC algorithm, the fitted line is inversely mapped back to the original space, yielding the target center.
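The following is a minimal, self-contained sketch of the two steps above: Gaussian sub-pixel edge localization (Eq. (8)) and parameter-space center finding with a basic hand-rolled RANSAC, including the iteration count of Eq. (9). It uses a slope–intercept line parameterization and noise-free synthetic data; all function names and parameter values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, k, mu, sigma):
    # Eq. (8): Gaussian model of the gradient profile across an edge.
    return k / np.sqrt(2 * np.pi * sigma**2) * np.exp(-(x - mu)**2 / (2 * sigma**2))

def subpixel_edge_offset(profile):
    """Fit Eq. (8) to a 1-D gradient profile sampled along the edge-gradient
    direction; the fitted mu is the sub-pixel position of the edge point."""
    x = np.arange(len(profile), dtype=float)
    p0 = [profile.sum(), float(np.argmax(profile)), 1.0]  # k ~ area under curve
    (k, mu, sigma), _ = curve_fit(gaussian, x, profile, p0=p0)
    return mu

def ransac_iterations(p, w, n):
    """Eq. (9): iterations m so that, with confidence p, at least one sample
    of n points is all inliers when the inlier ratio is w."""
    return int(np.ceil(np.log(1 - p) / np.log(1 - w**n)))

def center_from_edge_points(edge_points, thresh=1e-3, seed=0):
    """Each edge point (x, y) maps to the parameter-space line b = -x*a + y.
    Two points on the same radial line y = a0*x + b0 give parameter-space
    lines meeting at (a0, b0), and all such (a0, b0) lie on b = -xc*a + yc;
    a RANSAC line fit in (a, b) space therefore yields the center (xc, yc)
    without ever fitting the radial lines themselves."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(edge_points, dtype=float)

    inter = []                                  # parameter-space intersections
    for _ in range(2000):
        (x1, y1), (x2, y2) = pts[rng.choice(len(pts), 2, replace=False)]
        if abs(x1 - x2) < 1e-9:                 # skip near-vertical pairs
            continue                            # (degenerate in this sketch)
        a0 = (y1 - y2) / (x1 - x2)
        inter.append((a0, y1 - a0 * x1))        # pairs from different radial
    inter = np.array(inter)                     # lines become RANSAC outliers

    best_in = None
    for _ in range(ransac_iterations(p=0.999, w=0.1, n=2)):
        (a1, b1), (a2, b2) = inter[rng.choice(len(inter), 2, replace=False)]
        if abs(a1 - a2) < 1e-9:
            continue
        s = (b2 - b1) / (a2 - a1)
        t = b1 - s * a1
        inliers = np.abs(inter[:, 1] - (s * inter[:, 0] + t)) < thresh
        if best_in is None or inliers.sum() > best_in.sum():
            best_in = inliers
    s, t = np.polyfit(inter[best_in, 0], inter[best_in, 1], 1)
    return -s, t                                # center (xc, yc) = (-slope, t)

# Sub-pixel fit recovers the offset mu = 4.3 of a synthetic gradient profile.
print(subpixel_edge_offset(gaussian(np.arange(9.0), 10.0, 4.3, 1.2)))

# Synthetic check: radial lines through (3.7, -1.2) sampled at a few angles.
xc, yc = 3.7, -1.2
edge = [(xc + r * np.cos(a), yc + r * np.sin(a))
        for a in np.deg2rad([10, 40, 75, 130, 160])
        for r in np.linspace(2.0, 4.0, 15)]
print(center_from_edge_points(edge))            # ~ (3.7, -1.2)
```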

In general, a flow chart of the target center extraction method is shown in Fig. 4.

Fig. 4. Flow chart of the target center extraction method.

3. Experiments

To verify the advantages of the proposed calibration method over the existing methods using checkerboard and circular pattern calibration board, simulations and actual tests were carried out.

3.1 Effect of different parameters on extraction accuracy

3.1.1 Effect of marker point radius ratio

The rotation angles are set to zero and the translation vector T to [0, 0, 800]T. For a single simulated marker point in object space, the inner circle radius is 30 mm, the inner radius of the ring band is 60 mm, and the outer radius of the ring band varies from 70 to 160 mm in steps of 10 mm. Figure 5 shows the error map of the centers extracted from the simulated images in the x and y directions using the existing ellipse fitting method [43] and the proposed method. To ensure the stability of the measured error, each outer radius is computed 20 times and the average error is used.

Fig. 5. Effect of marker point radius ratio on extraction accuracy.

It can be seen from Fig. 5 that the variation of the ring radius ratio has little effect on the existing method. The center extraction accuracy of the proposed method stays within 0.05 pixels except when the outer circle radius is 70 mm, and the eccentricity error is smallest when the outer circle radius is 120 mm. Therefore, considering the accuracy of segmenting the straight lines and an appropriate size ratio, the radius ratio of the three circles of the designed marker point is set to 1:2:4, that is, the radii of the coded concentric rings are 30 mm, 60 mm and 120 mm.

3.1.2 Effect of the number of connected domains of coded concentric ring

For a 12-bit ring-coded marker point, the coded concentric ring has at most 6 connected domains, that is, 12 radial lines. To investigate the effect of the number of straight lines on the fitted center, the number of connected domains of the simulated coded concentric ring is varied from 1 to 6, with zero rotation angles, T = [0, 0, 800]T and a radius ratio of 1:2:4. Figure 6 shows the effect of different numbers of connected domains on the center extraction accuracy.

Fig. 6. Effect of the number of connected domains of the coded concentric ring on extraction accuracy.

It can be seen that, except for the case of one connected domain, the center extraction errors of both methods for different numbers of connected domains are within 0.015 pixels. Therefore, the number of connected domains of the code band can be designed anywhere from 2 to 6.

3.1.3 Effect of deflection angle of object plane

The radius ratio is set to 1:2:4 and T = [0, 0, 800]T. θx, θy and θz represent the pitch, yaw and rotation angles of the image rotating around the X, Y and Z axes, respectively. Each angle varies from −1 to 1 rad in steps of 0.1 rad. Figures 7(a), 8(a) and 9 compare the eccentricity errors of the proposed method and the existing ellipse fitting method for the different angle settings. These figures show that the existing ellipse fitting method is greatly affected by the perspective projection transformation of the object plane: the eccentricity error increases with the pitch or yaw angle of the captured image, and the maximum eccentricity error reaches 2.828 pixels. Figures 7(b) and 8(b) are partially enlarged views of the proposed method, which is essentially unaffected by the shooting angle, with the error always within 0.025 pixels. As can be seen from Fig. 9, the effect of the rotation angle on the two methods is basically the same, with the error distribution within 0.035 pixels.

Fig. 7. Effect of pitch angle on extraction accuracy. (a) Ellipse fitting and the proposed method; (b) enlarged view of the proposed method.

Fig. 8. Effect of yaw angle on extraction accuracy. (a) Ellipse fitting and the proposed method; (b) enlarged view of the proposed method.

Fig. 9. Effect of rotation angle on extraction accuracy.

3.1.4 Effect of noise

With the other variables held constant, the influence of the noise level on the extraction accuracy is measured, as shown in Fig. 10. The standard deviation σ of the Gaussian noise increases from 0 to 0.05; 20 independent experiments are conducted for each noise level and the average error is taken. Figure 10 shows that the extraction accuracy of the existing method is greatly affected by the noise level, whereas even at the highest noise levels the center extraction error of the proposed method remains below 0.02 pixels. The results show that the proposed method is robust to ambient noise and can meet the calibration accuracy requirements.

Fig. 10. Effect of Gaussian noise on extraction accuracy.

The simulation studies show that the proposed method can ensure the stability of the center extraction accuracy even in the case of large imaging angle and high environmental noise. The center extraction error is always less than 0.04 pixels, which is much smaller than that of the ellipse fitting method. It proves the high accuracy and good robustness of the proposed method compared with the existing method.

3.2 Comparison of camera calibration using different methods

3.2.1 Simulated comparative results

In the simulation test, synthetic calibration images of three different patterns (circle, checkerboard and coded concentric ring) are first generated with a certain noise level and the same marker points interval, as shown in Figs. 11(a)–11(c). Referring to the effect of the radius ratio and the number of connected domains of the coded concentric ring, a coded concentric ring pattern containing 12 × 9 marker points is randomly generated with a radius ratio of 1: 2: 4 and the number of connected domains of 2 - 6. Except for image noise and interpolation error, no other errors are introduced in the simulated calibration images. Three different calibration images with the same parameters and the same poses are used for camera calibration, and the images at any position in the field of view of the simulated camera are shown in Figs. 11(d)–11(f).
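For illustration, a single coded concentric ring marker of this kind can be rendered with a few OpenCV primitives; this is a hedged sketch (the helper name, canvas size, and code bits are assumptions, not the paper's pattern generator):

```python
import cv2
import numpy as np

def draw_coded_marker(code_bits, size=512):
    """Render one coded concentric ring marker with radius ratio 1:2:4:
    black central dot, white gap, and a code band whose "0" bits are black
    sectors (cf. Section 2.2)."""
    img = np.full((size, size), 255, np.uint8)
    c = (size // 2, size // 2)
    r1, r2, r4 = size // 10, size // 5, 2 * size // 5   # radii in ratio 1:2:4

    n = len(code_bits)
    for i, bit in enumerate(code_bits):                 # code band sectors
        if bit == 0:
            a0, a1 = 360.0 * i / n, 360.0 * (i + 1) / n
            cv2.ellipse(img, c, (r4, r4), 0, a0, a1, 0, -1)  # filled pie slice
    cv2.circle(img, c, r2, 255, -1)                     # re-whiten band interior
    cv2.circle(img, c, r1, 0, -1)                       # central target dot
    return img

cv2.imwrite("marker.png", draw_coded_marker([1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1]))
```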

Fig. 11. Three different calibration images in the simulation experiment. (a) Circle; (b) checkerboard; (c) coded concentric ring; (d), (e) and (f) corresponding to (a), (b) and (c) in the field of view of the simulated camera.

The calibration accuracy is further evaluated quantitatively by the reprojection error, computed as the root mean square error (RMSE) of the reprojected points, which is a rigorous and widely accepted indicator of calibration accuracy. In this simulation test, the reprojection error maps of the calibration patterns using the circle, checkerboard and coded concentric ring are shown in Figs. 12(a)–12(c), respectively, with mean reprojection errors of 0.0574, 0.0649 and 0.0493 pixels and standard deviations of [0.0459 0.0461], [0.0536 0.0498] and [0.0340 0.0316] pixels in the x and y directions. The calibration using the coded concentric ring pattern clearly outperforms the other two in reprojection error, while the checkerboard pattern performs worst. The reprojection error of the calibration using the coded concentric ring pattern is 14.11% and 24.04% smaller than that of the calibrations using the circle and checkerboard patterns, respectively.
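For reference, one common way to compute this indicator from calibration outputs (a sketch assuming OpenCV; function and variable names are illustrative):

```python
import cv2
import numpy as np

def rmse_reprojection(obj_pts, img_pts, rvecs, tvecs, K, dist):
    """Root mean square reprojection error over all calibration views."""
    sq_sum, n = 0.0, 0
    for objp, imgp, rvec, tvec in zip(obj_pts, img_pts, rvecs, tvecs):
        proj, _ = cv2.projectPoints(objp, rvec, tvec, K, dist)
        diff = proj.reshape(-1, 2) - np.asarray(imgp).reshape(-1, 2)
        sq_sum += float(np.sum(diff**2))
        n += len(diff)
    return np.sqrt(sq_sum / n)
```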

Fig. 12. Reprojection error maps of the three simulated calibration images. (a) Circle; (b) checkerboard; (c) coded concentric ring.

3.2.2 Actual comparative results

In the actual test, calibration images were captured by a real imaging system equipped with a 2048 × 2048 pixel camera (XIMEA xiQ MQ042CG-CM, SK) and an F2/35 mm lens (NAVITAR, JAPAN). A 13.3-inch LCD screen (Lenovo Pro-13, TCL) with 2560 × 1600 pixels, certified by TÜV Rheinland, was introduced, and its refraction error was eliminated by the method in [13]. Camera calibration experiments were performed using the circle, checkerboard and coded concentric ring patterns shown on the LCD screen, as well as circle and checkerboard patterns printed on a calibration board of high production accuracy (0.01 mm). The five different patterns were first captured by the camera at the same position. The captured calibration images were processed and the center coordinates of each marker point were extracted, as shown in Figs. 13(a)–13(e). Finally, the camera calibrations were performed sequentially, and the obtained intrinsic parameters are listed in Table 1. Comparing the calibrated intrinsic parameters of the five different patterns shows that the five calibration modes give very similar results. On the one hand, the obtained intrinsic parameters fx, fy, u0, v0, k1 and k2 are close across the five calibration modes; on the other hand, the calibration procedures are consistent: all five modes first extract the center points of the marker points and then estimate the parameters by Zhang's camera calibration method [41]. The results qualitatively prove the effectiveness of the proposed method.
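To make the last step concrete, the following sketch runs a Zhang-style calibration [41] on synthetic data with OpenCV, restricting the distortion model to k1 and k2 as in Eq. (2); in the real pipeline the image points would be the extracted marker centers, and all numeric values here are illustrative:

```python
import cv2
import numpy as np

# Synthetic stand-in for the extracted centers: a 12 x 9 planar grid (20 mm
# pitch) projected through a known camera at several random poses.
grid = np.zeros((12 * 9, 3), np.float32)
grid[:, :2] = 20.0 * np.mgrid[0:12, 0:9].T.reshape(-1, 2)

K_true = np.array([[1500.0, 0.0, 1024.0],
                   [0.0, 1500.0, 1024.0],
                   [0.0, 0.0, 1.0]])
rng = np.random.default_rng(0)
obj_pts, img_pts = [], []
for _ in range(8):
    rvec = 0.2 * rng.normal(size=3)                       # varied plane poses
    tvec = np.array([-110.0, -80.0, 800.0]) + 10.0 * rng.normal(size=3)
    proj, _ = cv2.projectPoints(grid, rvec, tvec, K_true, None)
    obj_pts.append(grid)
    img_pts.append(proj.astype(np.float32))

# Zhang-style calibration with only k1, k2 (Eq. (2)): no k3, no tangential.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, (2048, 2048), None, None,
    flags=cv2.CALIB_FIX_K3 | cv2.CALIB_ZERO_TANGENT_DIST)
print(rms, np.round(K, 1))                                # rms ~0, K ~ K_true
```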

Fig. 13. Center extraction of five different modes in the actual experiment. (a) Circle; (b) checkerboard; (c) coded concentric ring; a high production accuracy calibration board with (d) circle and (e) checkerboard.

Table 1. Comparison of calibrated intrinsic parameters using five different patterns in actual experiments.

Figure 14(a) shows a few typical calibration images of the coded concentric ring pattern captured in the actual test, and the extrinsic parameters reconstructed from the captured coded concentric ring pattern are shown in Fig. 14(b).

Fig. 14. (a) A few calibration images of the proposed method captured in the actual test and (b) the positions and orientations of the coded concentric ring target for calibration evaluation.

The reprojection error maps and error values of the calibration results are shown in Figs. 15(a)–15(e). The mean reprojection errors of the display mode calibration using circle, checkerboard, and coded concentric ring pattern are 0.2060 pixels, 0.2577 pixels, and 0.1062 pixels with standard deviations of [0.1725 0.1878] pixels, [0.2124 0.2146] pixels, and [0.0818 0.0903] pixels, meaning that the reprojection error using coded concentric ring pattern is 48.45% and 58.79% lower than those using the circle and checkerboard pattern, respectively. The results show that the proposed calibration pattern has the highest calibration accuracy under the same position and condition. Meanwhile, the reprojection error is 36.94% and 32.70% lower than the circle and checkerboard pattern on the high-quality calibration board, respectively.

Fig. 15. Reprojection error maps. (a) Circle; (b) checkerboard; (c) coded concentric ring; a high production accuracy calibration board with (d) circle and (e) checkerboard.

The proposed calibration pattern has the smallest reprojection errors, outperforming the circle and checkerboard calibration methods even though those calibration targets were manufactured at high cost and with high accuracy. The calibration accuracy could be further improved if the coded concentric ring were printed on a ceramic calibration board, because the resolution of the LCD screen is about 227 dots per inch (dpi), while a printing typesetting machine reaches up to 2000 dpi. The increased printing accuracy and resolution of the coded concentric ring images would provide favorable conditions for improving the extraction accuracy of the marker points.

4. Conclusion

This paper proposes a novel camera calibration method that uses a coded concentric ring target for automatic image point identification and highly accurate camera parameters. Each black dot on the calibration pattern is surrounded by a unique code band that is used to identify the target point. The designed coding pattern avoids the perspective projection transformation deviation in the images during identification and localization. Compared with other features such as single straight lines, it offers higher fitting accuracy. High-accuracy center positioning can reduce the demanding hardware resolution requirements of the imaging system and has practical engineering value.

Both simulations and actual experiments show that, compared with the existing camera calibration patterns, the proposed method extracts coded points more accurately and is easy to operate, which leads to more accurate camera parameters and target geometry.

In the next step, further research is needed to overcome the following limitations. On the one hand, the designed calibration pattern needs to be printed on a white calibration board to avoid reflection of the projected fringes on the LCD screen and to allow calibrating the camera and projector with the same calibration board. On the other hand, patterns with more marker points need to be integrated. Increasing the number of marker points will improve the accuracy of marker point extraction to a certain extent, and in turn the accuracy of camera calibration.

Funding

National Natural Science Foundation of China (51675160, 52075147); Engineering and Physical Sciences Research Council (EP/P006930/1, EP/T024844/1); Chinese Government Scholarship.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. J. Xu and S. Zhang, “Status, challenges, and future perspectives of fringe projection profilometry,” Opt. Lasers Eng. 135, 106193 (2020).

2. Y. Zhao, A. Pilvar, A. Tank, H. Peterson, J. Jiang, J. C. Aster, J. P. Dumas, M. C. Pierce, and D. Roblyer, “Shortwave-infrared meso-patterned imaging enables label-free mapping of tissue water and lipid content,” Nat. Commun. 11(1), 1–12 (2020).

3. L. R. Ramírez-Hernández, J. C. Rodríguez-Quinoñez, M. J. Castro-Toscano, D. Hernández-Balbuena, W. Flores-Fuentes, R. Rascón-Carmona, L. Lindner, and O. Sergiyenko, “Improve three-dimensional point localization accuracy in stereo vision systems using a novel camera calibration method,” Int. J. Adv. Robot. Syst. 17(1), 172988141989671 (2020).

4. F. Zhong, K. Ravi, and C. Quan, “RGB laser speckles based 3D profilometry,” Appl. Phys. Lett. 114(20), 201104 (2019).

5. F. Zhong, K. Ravi, and C. Quan, “A cost-effective single-shot structured light system for 3D shape measurement,” IEEE Sens. J. 19(17), 7335–7346 (2019).

6. C. Chen, N. Gao, X. Wang, Z. Zhang, F. Gao, and X. Jiang, “Generic exponential fringe model for alleviating phase error in phase measuring profilometry,” Opt. Lasers Eng. 110, 179 (2018).

7. C. S. Fraser, “Automatic camera calibration in close range photogrammetry,” Photogramm. Eng. Remote Sens. 79(4), 381–388 (2013).

8. O. Kahmen, R. Rofallski, and T. Luhmann, “Impact of stereo camera calibration to object accuracy in multimedia photogrammetry,” Remote Sens. 12(12), 2057 (2020).

9. T. Luhmann, C. Fraser, and H. G. Maas, “Sensor modelling and camera calibration for close-range photogrammetry,” ISPRS J. Photogramm. Remote Sens. 115, 37–46 (2016).

10. I. N. Swamidoss, A. B. Amro, and S. Sayadi, “Systematic approach for thermal imaging camera calibration for machine vision applications,” Optik 247, 168039 (2021).

11. J. Zhang, H. Yu, H. Deng, Z. Chai, M. Ma, and X. Zhong, “A robust and rapid camera calibration method by one captured image,” IEEE Trans. Instrum. Meas. 68, 1–10 (2018).

12. J. Zhang, J. Zhu, H. Deng, Z. Chai, M. Ma, and X. Zhong, “Multi-camera calibration method based on a multi-plane stereo target,” Appl. Opt. 58(34), 9353 (2019).

13. C. Chen, J. Yu, N. Gao, and Z. Zhang, “High accuracy 3D calibration method of phase calculation-based fringe projection system by using LCD screen considering refraction error,” Opt. Lasers Eng. 126, 105870 (2020).

14. A. S. Machikhin, A. V. Gorevoy, D. D. Khokhlov, and A. O. Kuznetsov, “Modification of calibration and image processing procedures for precise 3-D measurements in arbitrary spectral bands by means of a stereoscopic prism-based imager,” Opt. Eng. 58(03), 1 (2019).

15. R. Legarda-Sáenz, T. Bothe, and W. P. O. Jüptner, “Accurate procedure for the calibration of a structured light system,” Opt. Eng. 43(2), 464–471 (2004).

16. X. Liu, Z. Cai, Y. Yin, H. Jiang, D. He, W. He, Z. Zhang, and X. Peng, “Calibration of fringe projection profilometry using an inaccurate 2D reference target,” Opt. Lasers Eng. 89, 131–137 (2017).

17. R. Chen, J. Xu, S. Zhang, H. Chen, Y. Guan, and K. Chen, “A self-recalibration method based on scale-invariant registration for structured light measurement systems,” Opt. Lasers Eng. 88, 75–81 (2017).

18. P. Lu, C. Sun, B. Liu, and P. Wang, “Accurate and robust calibration method based on pattern geometric constraints for fringe projection profilometry,” Appl. Opt. 56(4), 784–794 (2017).

19. J. Lu, R. Mo, H. Sun, and Z. Chang, “Flexible calibration of phase-to-height conversion in fringe projection profilometry,” Appl. Opt. 55(23), 6381–6388 (2016).

20. C. Zuo, T. Tao, S. Feng, L. Huang, A. Asundi, and Q. Chen, “Micro Fourier transform profilometry (µFTP): 3D shape measurement at 10,000 frames per second,” Opt. Lasers Eng. 102, 70–91 (2018).

21. Y. Yin, X. Peng, A. Li, X. Liu, and B. Z. Gao, “Calibration of fringe projection profilometry with bundle adjustment strategy,” Opt. Lett. 37(4), 542–544 (2012).

22. W. Zhang, W. Li, L. Yu, H. Luo, H. Zhao, and H. Xia, “Sub-pixel projector calibration method for fringe projection profilometry,” Opt. Express 25(16), 19158–19169 (2017).

23. J. Tian, Y. Ding, and X. Peng, “Self-calibration of a fringe projection system using epipolar constraint,” Opt. Laser Technol. 40(3), 538–544 (2008).

24. H. Liu, W. H. Su, K. Reichard, and S. Yin, “Calibration-based phase-shifting projected fringe profilometry for accurate absolute 3D surface profile measurement,” Opt. Commun. 216(1-3), 65–80 (2003).

25. J. Villa, M. Araiza, D. Alaniz, R. Ivanov, and M. Ortiz, “Transformation of phase to (x, y, z)-coordinates for the calibration of a fringe projection profilometer,” Opt. Lasers Eng. 50(2), 256–261 (2012).

26. J. Yu, N. Gao, Z. Meng, and Z. Zhang, “A three-dimensional measurement system calibration method based on red/blue orthogonal fringe projection,” Opt. Lasers Eng. 139, 106506 (2021).

27. P. Zhou, J. Zhu, and H. Jing, “Optical 3-D surface reconstruction with color binary speckle pattern encoding,” Opt. Express 26(3), 3452–3465 (2018).

28. S. J. Ahn, W. Rauh, and S. I. Kim, “Circular coded target for automation of optical 3D-measurement and camera calibration,” Int. J. Pattern Recognit. Artif. Intell. 15(06), 905–919 (2001).

29. Y. Chen and B. Su, “Encoding method of measurement targets and decoding algorithm,” Appl. Sci. 9(22), 4915 (2019).

30. U. Wijenayake, S. I. Choi, and S. Y. Park, “Automatic detection and decoding of photogrammetric coded targets,” in 13th International Conference on Electronics, Information, and Communication (ICEIC) (IEEE, 2016).

31. F. A. van den Heuvel and R. J. G. A. Kroon, “Digital close-range photogrammetry using artificial targets,” Int. Arch. Photogramm. Remote Sens. 29, 222 (1993).

32. K. Forbes, A. Voigt, and N. Bodika, “An inexpensive, automatic and accurate camera calibration method,” in Proceedings of the Thirteenth Annual South African Workshop on Pattern Recognition, 1–6 (2002).

33. A. Fitzgibbon, M. Pilu, and R. B. Fisher, “Direct least square fitting of ellipses,” IEEE Trans. Pattern Anal. Mach. Intell. 21(5), 476–480 (1999).

34. M. Vo, Z. Wang, P. Bing, and T. Pan, “Hyper-accurate flexible calibration technique for fringe-projection-based three-dimensional imaging,” Opt. Express 20(15), 16926–16941 (2012).

35. M. Liu, S. Yang, Z. Wang, S. Huang, Y. Liu, Z. Niu, X. Zhang, J. Zhu, and Z. Zhang, “Generic precise augmented reality guiding system and its calibration method based on 3D virtual model,” Opt. Express 24(11), 12026–12042 (2016).

36. S. J. Ahn, H. J. Warnecke, and R. Kotowski, “Systematic geometric image measurement errors of circular object targets: mathematical formulation and correction,” Photogramm. Rec. 16(93), 485–502 (1999).

37. D. He, X. Liu, X. Peng, Y. Ding, and B. Z. Gao, “Eccentricity error identification and compensation for high-accuracy 3D optical measurement,” Meas. Sci. Technol. 24(7), 075402 (2013).

38. X. Chen, Z. Ma, Y. Hu, Y. Chen, and F. Bi, “A new method for accurate location of concentric circles in visual measurement,” J. Optoelectron. Laser 24, 1524–1528 (2013).

39. R. Juarez-Salazar, J. Zheng, and V. H. Diaz-Ramirez, “Distorted pinhole camera modeling and calibration,” Appl. Opt. 59(36), 11310–11318 (2020).

40. S. Zhang, “High-speed 3D shape measurement with structured light methods: a review,” Opt. Lasers Eng. 106, 119–131 (2018).

41. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).

42. B. K. Kwon, Z. Teng, T. J. Roh, and D. J. Kang, “Fast ellipse detection based on three point algorithm with edge angle information,” Int. J. Control Autom. Syst. 14(3), 804–813 (2016).

43. Z. Zhang, S. Huang, S. Meng, F. Gao, and X. Jiang, “A simple, flexible and automatic 3D calibration method for a phase calculation-based fringe projection imaging system,” Opt. Express 21(10), 12218–12227 (2013).
