Optica Publishing Group

Influence of camera calibration conditions on the accuracy of 3D reconstruction

Open Access

Abstract

For stereoscopic systems designed for metrology applications, the accuracy of camera calibration dictates the precision of the 3D reconstruction. In this paper, the impact of various calibration conditions on the reconstruction quality is studied using a virtual camera calibration technique and the design file of a commercially available lens. This technique enables the study of the statistical behavior of the reconstruction task in selected calibration conditions. The data show that the mean reprojection error should not always be used to evaluate the performance of the calibration process and that a low quality of feature detection does not always lead to a high mean reconstruction error.

© 2016 Optical Society of America

1. Introduction

Camera calibration is an essential step in many computer vision applications. It results in a digital projective transformation between the object space and the image plane. For metrology devices using cameras, it allows the user to perform 3D reconstruction through stereoscopy. The quality of the reconstruction relies on the accuracy of the camera model and on the experimental calibration procedure. There has been considerable interest in techniques that could improve the calibration process. Recent examples include new mathematical models describing the behaviour of cameras with specific characteristics [1], and modifications of the computational process for estimating the intrinsic parameters, such as the suppression of non-linear optimization [2] and the removal of noise and distortion [3], to name but a few. Many studies have been dedicated to evaluating the accuracy of these new techniques [4, 5]. In addition, there has been sustained interest in the impact of the precision of the calibration target on the quality of the estimated intrinsic and extrinsic parameters [6, 7].

Nevertheless, there are still unanswered questions regarding the factors that impact the accuracy of the calibration process. Researchers are interested in comparing the performance of well-established models [8] or in determining the optimal calibration conditions in the laboratory. Extensive experimental data is sometimes hard to gather, and many parameters are difficult or impossible to control in an experimental setup. Experimental studies of the impact of calibration conditions on the calibration process are therefore often limited to very specific cases and do not consider the repeatability and reproducibility of the results. Hassan et al. [9] recently performed a laboratory study with three different systems on the impact of the number of pictures on the mean reprojection error. This error is readily available from the calibration process. Averaged over all the control points of each calibration target, it represents the error between the real image coordinates and the coordinates reprojected using the calibration parameters. This error is minimized during the optimization of the calibration parameters and is used to assess the quality of the calibration; the calibration process and the assessment of its quality are therefore closely related. On the other hand, Hanning et al. [10] used the mean reconstruction error of the target control points to optimize the calibration parameters. However, even though this error is calculated in object space, it is still linked to the control points: a small mean reconstruction error does not guarantee an accurate 3D reconstruction in areas where no calibration target was positioned. The more general topic of optimal calibration conditions has also been studied through simulations using a virtual camera based on the pinhole model [11, 12]. Such a virtual camera is an approximation of a real optical system and cannot represent the complexity of every available optical system with the same accuracy.

This paper presents a study of the impact of the calibration conditions on the mean reprojection error and on the quality of 3D reconstruction of a volume in object space. This volume is completely decoupled from the calibration target and is used to assess the quality of the calibration process. More specifically, the calibration conditions investigated are the density of control points on the calibration target, the density of targets in space and the quality of the detection of the control points. Realistic data is obtained with the virtual calibration technique, avoiding tedious manipulations in the laboratory [13]. This method makes use of the synergy between the Camera Calibration Toolbox for Matlab [14] and the optical design software Zemax. The technique also presents the advantage of computing a large volume of data quickly while using an exact replica of the lens, without any approximation. The number of calibration runs performed allows the observation of the statistical distribution of the mean reconstruction error for the various sets of calibration conditions. A similar study would be almost impossible to perform in the laboratory; the virtual simulation process makes such an experiment possible.

This paper includes a short summary of the calibration toolbox model and parameters, and of the virtual calibration technique, in sections 2 and 3 respectively. The experimental calibration conditions are described in section 4. Finally, section 5 presents mean reprojection error analysis and 3D reconstruction results for selected calibration conditions.

2. Calibration algorithm and model

The Camera Calibration Toolbox for Matlab is a robust and well-established calibration platform. To calibrate a camera, the user acquires between 10 and 20 images of a flat checkerboard calibration target at different positions and orientations in object space. The distance between two adjacent control points (corners) needs to be precisely known. To perform an accurate camera calibration, it is recommended to make sure there are vanishing points related to the target in each image and to properly cover the field of view with the group of target images. The image coordinates of the control points and the information about the calibration target are used to estimate the intrinsic and extrinsic calibration parameters. The camera model is inspired by the model proposed by Heikkila and Silvén [15] with additional distortion correction. Let P = [X, Y, Z]^T be an object point; its corresponding normalized pinhole image point p_n is obtained via a perspective projection.

$$p_n = \begin{bmatrix} x_n \\ y_n \end{bmatrix} = \begin{bmatrix} X/Z \\ Y/Z \end{bmatrix}\tag{1}$$

To account for distortion, the normalized pinhole image coordinates [x_n, y_n]^T are transformed into the normalized distorted image coordinates [x_d, y_d]^T through the distortion coefficients k_i, with $r_n = \sqrt{x_n^2 + y_n^2}$.

$$p_d = \begin{bmatrix} x_d \\ y_d \end{bmatrix} = \left(1 + k_1 r_n^2 + k_2 r_n^4 + k_5 r_n^6\right)\begin{bmatrix} x_n \\ y_n \end{bmatrix} + \begin{bmatrix} 2 k_3 x_n y_n + k_4\left(r_n^2 + 2 x_n^2\right) \\ k_3\left(r_n^2 + 2 y_n^2\right) + 2 k_4 x_n y_n \end{bmatrix}\tag{2}$$

The final pixel coordinates [x_p, y_p]^T can be obtained from the normalized distorted image coordinates.

$$\begin{bmatrix} x_p \\ y_p \\ 1 \end{bmatrix} = \underbrace{\begin{bmatrix} f_{cx} & 0 & c_{cx} \\ 0 & f_{cy} & c_{cy} \\ 0 & 0 & 1 \end{bmatrix}}_{KK} \begin{bmatrix} x_d \\ y_d \\ 1 \end{bmatrix}\tag{3}$$

The intrinsic calibration parameters are the focal lengths fcx and fcy, the coordinates of the principal point (ccx, ccy), the radial distortion coefficients k1, k2 and k5, and the tangential distortion coefficients k3 and k4. The calculation of these parameters is similar to the technique developed by Zhang [16]. The intrinsic parameters in matrix KK are first estimated through a linear process. Final values for all parameters are calculated by a non-linear optimization that minimizes the mean reprojection error. As opposed to Zhang’s technique, the optimization in the Camera Calibration Toolbox for Matlab relies on the orthogonality of vanishing points, which explains the need for vanishing points related to the target in all the images.
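The three equations above can be collected into a single forward projection. The following Python sketch is illustrative (the function name and NumPy usage are ours, not part of the toolbox); it maps an object point expressed in the camera frame to pixel coordinates:

```python
import numpy as np

def project_point(P, fc, cc, k):
    """Forward projection through the pinhole-plus-distortion model.
    fc = (fcx, fcy), cc = (ccx, ccy), k = (k1, k2, k3, k4, k5)."""
    X, Y, Z = P
    # Eq. (1): perspective projection to normalized coordinates
    xn, yn = X / Z, Y / Z
    k1, k2, k3, k4, k5 = k
    r2 = xn ** 2 + yn ** 2  # r_n squared
    # Eq. (2): radial factor (k1, k2, k5) and tangential terms (k3, k4)
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k5 * r2 ** 3
    xd = radial * xn + 2 * k3 * xn * yn + k4 * (r2 + 2 * xn ** 2)
    yd = radial * yn + k3 * (r2 + 2 * yn ** 2) + 2 * k4 * xn * yn
    # Eq. (3): intrinsic matrix KK applied to [xd, yd, 1]^T
    return np.array([fc[0] * xd + cc[0], fc[1] * yd + cc[1]])
```

With all distortion coefficients set to zero, the function reduces to the plain pinhole model.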

The extrinsic translation and rotation parameters are always estimated. For a stereo calibration, the rotation matrix R (3×3) and translation vector T (3×1) link the frames of the left and right cameras, making triangulation, and ultimately reconstruction, possible.

3. Virtual calibration technique

The virtual calibration technique presented here was developed to study various aspects of the calibration process without having to acquire complex experimental data. It was previously used to study the impact of temperature and tolerancing on the calibration parameters [17]. It is worth mentioning that optical design software is key to this technique. The availability of the design file provides a virtual replica of a nominal optical system that can therefore be studied easily. Even though fabrication and tolerancing errors are not included, using the nominal design achieves suitable results. Figure 1 shows the interaction between MATLAB and Zemax during a virtual calibration. The link between the two software modules is made using the MZDDE Toolbox for MATLAB [18]. Once the user selects the calibration conditions and 3D reconstruction parameters, the process is entirely automatic and can perform and analyze a large number of different simulations.

Fig. 1 Flow chart of the virtual calibration technique using a camera calibration toolbox and the optical design software Zemax.

3.1. Assessing the quality of the calibration

Often, the mean reprojection error is used to evaluate the quality of the calibration. It represents the mean error, on the image plane, between the real image coordinates of the control points on the calibration target and the image coordinates reprojected using the calibration parameters. This quantity is directly related to the target image coordinates since it is the value being minimized during the optimization. Another interesting parameter for assessing the performance of the process is the mean reconstruction error of a set of coordinates in object space. For a set of points of interest in object space, this quantity is the mean Euclidean distance between the reconstructed object points and the ground truth. This quantity is unrelated to the calibration target coordinates and therefore independent from the calibration process. However, it requires that a full 3D reconstruction task be performed in the laboratory.
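The two quality measures described above can be written directly as the following Python helpers (illustrative names, not part of any toolbox):

```python
import numpy as np

def mean_reprojection_error(detected_px, reprojected_px):
    """Mean image-plane distance (pixels) between detected control points
    and the control points reprojected with the calibration parameters."""
    return np.linalg.norm(detected_px - reprojected_px, axis=1).mean()

def mean_reconstruction_error(reconstructed, ground_truth):
    """Mean Euclidean distance (object-space units) between reconstructed
    points of interest and their known ground-truth positions."""
    return np.linalg.norm(reconstructed - ground_truth, axis=1).mean()
```

Note that the first operates on 2D image coordinates tied to the calibration target, while the second operates on arbitrary 3D points and is therefore independent of the calibration data.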

The virtual calibration technique presented here can include a virtual 3D reconstruction task. The parameters of the reconstruction can be chosen by the user along with the calibration conditions. In this paper, a virtual stereoscopic system made of two identical cameras in canonical configuration is analyzed. The lens used to perform the simulations is commercially available (Edmund Optics 58-000) and has a focal length of 8.5 mm and a full field of view (FFOV) of 42.6° on a 1/2” sensor. Two cameras sharing identical calibration parameters are used to perform the 3D reconstruction of a uniform volume. The baseline of the system is 10 cm. The volume covers the part of the object space that contains the calibration targets, so the calibrated and reconstructed parts of the object space are the same. The volume is made of 5 parallel planes perpendicular to the optical axis. This configuration was chosen to uniformly cover the volume of calibration while keeping the computational time acceptable. The 3D reconstruction of the volume is done by triangulation.
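The triangulation step can be sketched with a standard linear (DLT) formulation. The code below is an illustrative Python implementation assuming undistorted normalized image coordinates; it is not the toolbox's internal routine:

```python
import numpy as np

def triangulate(pn_left, pn_right, R, T):
    """Linear (DLT) triangulation of one point seen by a stereo pair.
    pn_left / pn_right: normalized (undistorted) image coordinates.
    R, T map left-camera coordinates to right-camera coordinates.
    Returns the 3D point expressed in the left-camera frame."""
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])  # left camera:  [I | 0]
    P2 = np.hstack([R, np.reshape(T, (3, 1))])     # right camera: [R | T]
    # Each observation contributes two linear constraints on the point.
    A = np.vstack([
        pn_left[0] * P1[2] - P1[0],
        pn_left[1] * P1[2] - P1[1],
        pn_right[0] * P2[2] - P2[0],
        pn_right[1] * P2[2] - P2[1],
    ])
    # Homogeneous solution: right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

For the canonical configuration used here (identical, parallel cameras), R is the identity and T encodes the 10 cm baseline.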

4. Calibration conditions

This study focuses on three main factors that could impact the calibration. In all cases, the chosen positions for the target in object space are the same (Fig. 2). These positions are equally distributed in the volume of interest and, along with the size of the calibration target, guarantee a uniform coverage of the field of view. The first factor of interest is the density of targets per position. This value also represents the number of full coverages of the field of view and is of great interest even if only one coverage is required in practice. The second factor is the density of control points on the target, measured in points per meter. Keeping the size of the target constant while varying the pitch of the checkerboard pattern makes it possible to study whether all targets are equivalent. This is of great interest if the lens to be calibrated shows high distortion and requires manual user input during the corner detection process; in this case, a lower number of control points on the target would be less time-consuming. The last factor is the quality of the detection of the control points, as well as of the points used for the 3D reconstruction. To account for this, two different pixel pitches are used. The smallest one, of 2.2 μm, is the pixel pitch of a Guppy F-503 camera that could be used with the chosen lens. The other one, of 6.6 μm, represents the RMS spot size of this lens on the optical axis. In both cases, no sub-pixel detection is performed: the coordinates are rounded to the closest integer pixel coordinate.
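The integer-pixel detection described above amounts to quantizing the metric image-plane coordinates by the pixel pitch. A short Python sketch (hypothetical helper, not from the paper's code):

```python
import numpy as np

def quantize_to_pixel(coords_m, pitch_m):
    """Simulate detection without sub-pixel refinement: convert metric
    image-plane coordinates to pixel units and round to the nearest
    integer pixel."""
    return np.rint(np.asarray(coords_m) / pitch_m).astype(int)
```

Each detected coordinate therefore carries a quantization error of up to half a pixel, i.e. up to 1.1 μm for the 2.2 μm pitch and up to 3.3 μm for the 6.6 μm pitch.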

Fig. 2 Positions of the calibration target in object space. Positions with identical symbols are located at the same distance z. The black circle and its coordinate system indicate the location and orientation of the camera. The optical axis is oriented along the z axis.

5. Results

For each variation of a calibration condition, 20 different sets of targets were created and used to perform individual calibrations. The only difference between these sets is the randomly selected orientation of each target. However, all the orientations were required to produce vanishing points related to the calibration pattern, since the optimization of the calibration parameters relies on this condition. Also, a target was not permitted to be perpendicular to the optical axis. Each calibration run therefore complies with the general constraints of an experimental calibration.

In this paper, the mean reconstruction error represents the error for the reconstructed volume of interest with one calibration run, and the statistical mean reconstruction error is the mean of this value over 20 calibration runs in the same conditions. For all the simulations, the magnitude of the errors can be explained by the inability of the camera model and optimization process to perfectly model the real optical system. It also highlights that a virtual camera using the pinhole model and the lens design file using ray tracing are not equivalent. The large reconstructed volume also contributes to the magnitude of the reconstruction error, since the error grows with depth.
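The two statistics used throughout the results can be sketched as follows; the per-run errors below are synthetic placeholders, not the paper's data:

```python
import numpy as np

# Synthetic per-run mean reconstruction errors (mm), standing in for the
# 20 calibration runs performed for one set of calibration conditions.
rng = np.random.default_rng(seed=1)
run_errors = rng.normal(loc=13.0, scale=1.8, size=20)

# Statistical mean reconstruction error: mean over the 20 runs.
stat_mean = run_errors.mean()
# Sample standard deviation: spread of single-run results around that mean.
stat_std = run_errors.std(ddof=1)
```

A small `stat_std` means a single calibration run is unlikely to land far from the statistical mean, which is the quantity reported in the tables below.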

5.1. Density of targets in object space

It is not uncommon to merge two independent calibrations to achieve a better calibration. In fact, the Camera Calibration Toolbox for Matlab documentation provides an example of a global calibration made with two data sets. For example, merging two data sets could correspond to two targets per position. The target used has 10 × 10 control points with 50 mm between two consecutive points. Table 1 shows the statistical mean reconstruction error for the volume of interest as a function of the number of targets per position for 20 different calibrations each time.

Table 1. Statistical mean reconstruction error of the volume of interest for 20 different calibration runs as a function of the number of targets per position. The results are shown for two values of the pixel pitch, which represents the quality of the detection of the control points.

From the results in Table 1, one can observe that the value of the statistical mean reconstruction error is almost constant and does not correlate with the number of targets per position. However, the standard deviation shows that a larger number of targets per position decreases the risk of performing a calibration that gives a mean reconstruction error different from the statistical mean value (better or worse). Depending on the application, and considering the gain versus the cost in time and computation of having a large number of targets per position, a higher density of targets may not present a real advantage. However, for stereoscopic systems that are limited by the weakest camera, a higher density of targets could be beneficial to avoid the worst-case scenario.

Surprisingly, the statistical mean reconstruction error does not increase significantly for a larger pixel pitch. It appears that this error is not directly proportional to the quality of the corner and point-of-interest detection. In fact, for all the target densities listed in Table 1, it is possible to achieve a lower statistical mean reconstruction error for a pixel pitch of 6.6 μm than for a pixel pitch of 2.2 μm.

5.2. Density of control points on the calibration target

The Camera Calibration Toolbox for Matlab optimization relies on the presence of vanishing points related to the surface of the calibration target. It therefore seems that the density of control points should not significantly influence the calibration and reconstruction process as long as vanishing points are available. The statistical mean reconstruction error of the volume of interest as a function of the density of control points is shown in Table 2 for two values of pixel pitch. For each run, there was one target at each position. Except for the lowest value of 10 control points per meter, the density of control points influences neither the statistical mean reconstruction error nor the standard deviation of the distribution. In this case, any available planar checkerboard target would produce a similar calibration. As for the number of targets per position discussed previously, the statistical mean reconstruction error is not proportional to the pixel pitch. Considering the standard deviation, it is also possible that both systems, despite their different pixel pitch, produce the same mean reconstruction error.

Table 2. Statistical mean reconstruction error of the volume of interest for 20 different calibration runs as a function of the density of control points on the calibration target. The results are shown for two values of the pixel pitch, which represents the quality of the detection of the control points.

5.3. Mean reprojection error and mean reconstruction error

The previous subsections presented the mean reconstruction error, which is a very useful measure to evaluate the performance of stereoscopic systems. This error is relevant for devices designed to perform 3D reconstruction of points of interest in a volume. Unfortunately, this quantity is not as easy to obtain in the laboratory as it is with the virtual calibration technique. The mean reprojection error, however, is available with almost any calibration tool. The virtual calibration technique was used to compare these two errors. As a reminder, the reprojection error relates specifically to the image coordinates of the target control points, while the reconstruction error relates to the entire calibration volume in object space; the two errors are therefore decoupled from one another. To analyze the behavior of these errors, 20 calibration runs were made with a target having a density of control points of 10 points per meter, one target per position and a pixel pitch of 6.6 μm. Table 3 shows the two errors for the 20 calibrations made in these conditions.

Table 3. Mean reconstruction error for the volume of interest and mean reprojection error of the calibration process for 20 different calibrations. The calibration conditions are the following: a pixel pitch of 6.6 μm, a target density of one target per position and a density of control points of 10 points/meter.

The mean reconstruction errors follow a normal distribution. The 9th calibration gives the lowest mean reconstruction error of the volume of interest (9.6400 mm) and the 6th calibration produces the highest value (16.3266 mm). The mean reprojection error values associated with these calibrations are 0.2705 pixel and 0.2708 pixel respectively. In this case, it appears that the mean reprojection error should not be used to assess the quality of the calibration of a system employed to perform 3D reconstruction. It also shows that the image coordinates of control points located at selected positions of the target in object space do not accurately represent the entire volume of calibration. It is reasonable to hypothesize that optimizing the calibration parameters by minimizing the mean reconstruction error of these same control points would likewise fail to represent the calibration volume precisely.

6. Conclusion

This paper presented a study of specific calibration conditions and their impact on the quality of the calibration process. To avoid tedious experimentation that would require up to 300,000 control points for one set of calibration conditions, a virtual calibration technique exploiting the Camera Calibration Toolbox for Matlab and the optical design software Zemax was used. In addition to the mean reprojection error, this technique gives access to the mean reconstruction error of a volume of interest in order to evaluate the performance of the calibration, thus decoupling the calibration process and the assessment of its quality. For each set of calibration conditions, 20 calibration runs were performed to obtain the statistical mean reconstruction error. For a commercially available lens, it was shown that the density of targets in object space and the density of control points on the calibration target do not influence the statistical mean reconstruction error. However, a higher density of targets in object space reduces the standard deviation of the distribution, lowering the odds of obtaining a result far from the mean value (better or worse). It was also shown that the quality of feature detection is not proportional to the error made in the reconstruction process: in fact, it is possible to achieve the same mean reconstruction error for the two different values of pixel pitch tested (2.2 μm and 6.6 μm).

The comparison of the mean reconstruction error and the mean reprojection error for 20 calibrations performed in the same conditions demonstrates that the mean reprojection error does not correlate with the quality of the 3D reconstruction of the volume of calibration. Therefore, in the case studied, the mean reprojection error should not be used to evaluate the performance of the calibration process. Our work also showed that even if the experimenter follows the generally accepted guidelines to perform a good calibration, such as acceptable positions and orientations of the target, appropriate coverage of the field of view for calibration and low mean reprojection error, the results of the 3D reconstruction task for a stereoscopic system are likely to vary significantly for different calibration runs made in the same calibration conditions.

Acknowledgments

This research was supported by the NSERC Industrial Research Chair in Optical Design. The authors would like to thank Xavier Dallaire for generously sharing his expertise on the Zemax programming language. We would also like to express our gratitude to Edmund Optics for providing the Zemax design file of the lens 58-000.

References and links

1. M. Nekouei Shahraki and N. Haala, “Introducing free-function camera calibration model for central-projection and omni-directional lenses,” Proc. SPIE 9630, 96300P (2015).

2. Y. Hueng, G. Ren, and E. Liu, “Non-iterative method for camera calibration,” Opt. Express 23(18), 246365 (2015).

3. Z. Wang, “Removal of noise and radial lens distortion during calibration of computer vision systems,” Opt. Express 23(9), 234340 (2015).

4. J. Salvi, X. Armangué, and J. Batlle, “A Comparative review of camera calibrating methods with accuracy evaluation,” Pattern Recognit. 35(7), 1617–1635 (2002). [CrossRef]  

5. P. D. Lin and C. K. Sung, “Comparing two new camera calibration methods with traditional pinhole calibrations,” Opt. Express 15(6), 3012–3022 (2007). [CrossRef]   [PubMed]  

6. W. Sun and J. R. Cooperstock, “Requirements for Camera Calibration: Must Accuracy Come with a High Price,” in Proceedings of the Seventh IEEE Workshop on Applications of Computer Vision (IEEE, 2005), pp. 356–361.

7. P. Swapna, N. Krouglicof, and R. Gosine, “The question of accuracy with geometric camera calibration,” in Proceedings of the Seventh IEEE Canadian Conference on Electrical and Computer Engineering (IEEE, 2009), pp. 541–546.

8. W. Sun and J. R. Cooperstock, “An empirical evaluation of factors influencing camera calibration accuracy using three publicly available techniques,” Mach. Vis. Appl. 17(1), 51–67 (2006). [CrossRef]  

9. M. F. A. Hassan, I. Ma’arof, and A. M. Samad, “Assessment of Camera Calibration Towards Accuracy Requirement,” in Proceedings of IEEE 10th International Colloquium on Signal Processing and its Applications (IEEE, 2014), pp. 123–128.

10. T. Hanning, S. Graf, and M. Kellner, “Re-projective vs. projective camera calibration: effects on 3D-reconstruction,” in Proceedings of IEEE International Conference on Image Processing (IEEE, 2005), pp. II–1170.

11. Z. Zhang, “Flexible camera calibration by viewing a plane from unknown orientations,” in Proceedings of the Seventh IEEE International Conference on Computer Vision, (IEEE, 1999), vol.1, pp. 666–673.

12. C. Ricolfe-Viala and A.-J. Sanchez-Salmeron, “Camera calibration under optimal conditions,” Opt. Express 19(11), 10769–10775 (2011). [CrossRef]   [PubMed]  

13. A.-S. Poulin-Girard, X. Dallaire, S. Thibault, and D. Laurendeau, “Virtual camera calibration using optical design software,” Appl. Opt. 53(13), 2822–2827 (2014). [CrossRef]   [PubMed]  

14. J.-Y. Bouguet, “Camera Calibration Toolbox for Matlab,” http://www.vision.caltech.edu/bouguetj/calib_doc/index.html.

15. J. Heikkila and O. Silvén, “A four-step camera calibration procedure with implicit image correction,” in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 1997), pp. 1106–1112.

16. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000). [CrossRef]  

17. A.-S. Poulin-Girard, X. Dallaire, A. Veillette, S. Thibault, and D. Laurendeau, “Study of Camera Calibration Process with Ray Tracing,” Proc. SPIE 9192, 91920B (2014). [CrossRef]  

18. D. Griffith, “How to Talk to Zemax from MATLAB” (Zemax Corporation, 2006). http://www.zemax.com/support/resource-center/knowledgebase/how-to-talk-to-zemax-from-matlab
