Abstract

High-precision calibration of binocular vision systems plays an important role in accurate dimensional measurements. In this paper, an improved camera calibration method is proposed. First, an accurate intrinsic parameters calibration method based on active vision with perpendicularity compensation is developed. Compared to the previous work, this method eliminates the effect of non-perpendicularity of the camera motion on calibration accuracy. The principal point, scale factors, and distortion factors are calculated independently in this method, thereby allowing the strong coupling of these parameters to be eliminated. Second, an accurate global optimization method with only 5 images is presented. The results of calibration experiments show that the accuracy of the calibration method can reach 99.91%.

© 2015 Optical Society of America

1. Introduction

The binocular stereo vision system is a 3D, non-contact, real-time, high-precision measurement technology. It is widely used in geometric measurement, motion measurement, and other measurement applications. One of the most important problems surrounding the binocular stereo vision measurement system concerns its accuracy, with the accuracy of the camera calibration most significantly affecting the accuracy of the measurement system. The task of camera calibration is to determine the relationship between the 2D coordinates of the image points and the 3D world coordinates of the objects. This takes into account the optical geometry of the camera (i.e., the intrinsic parameters) and the positional relationship between the left and right cameras (i.e., the extrinsic parameters).

Many scholars have proposed different camera calibration methods for the binocular stereo vision measurement system, which constitutes an active research area within vision metrology calibration. The calibration methods can be divided into three main categories: traditional calibration, self-calibration, and motion-based calibration.

Traditional calibration methods employ the relationship between the 2D image coordinates and the 3D world coordinates of the feature points [1]. Tsai proposed the classic two-step calibration method in 1986 [2]. Zhang also presented a calibration method using a high-accuracy calibration target in 2000 [3, 4].

The accuracy of Zhang’s calibration method is high, and it is widely used in camera calibration. However, a calibration target matching the field of view is required in this method, and the target needs to be placed in different positions. Therefore, the calibration target is expensive and difficult to manufacture when Zhang’s method is used to calibrate the cameras in the large field of view; furthermore, the calibration process is complex.

Self-calibration uses constraints among the system parameters to calibrate cameras [5, 6], which makes it possible to calibrate the system using unknown scenes and motions. Faugeras first proposed the concept of self-calibration in 1992 [7–11], demonstrating that the self-calibration method was theoretically and practically fit for camera calibration through an unknown scene. Svoboda thereafter proposed a multi-camera self-calibration method for virtual environments in 2005 [12]. This method calibrated an immersive virtual environment with 16 cameras using a laser pointer. Hödlmoser also presented a multi-camera self-calibration method using pedestrians [13]; more specifically, this method calibrated the intrinsic and extrinsic parameters by observing a walking person. The process of self-calibration is both simple and fast [14]. However, the accuracy of self-calibration cannot be guaranteed and its robustness is very low. Therefore, self-calibration is usually used in low-precision applications such as communication and virtual reality technology.

Motion-based calibration is another calibration method in which the cameras are calibrated by special motions – including pure translation, pure rotation, and a combination thereof. This method can acquire a high accuracy without an expensive calibration target, and it therefore easily realizes automated calibration. For these reasons, motion-based calibration is widely used.

Faugeras put forward a theory of calibration based on a moving camera in 1992 [7]. Two epipolar transformations between different camera displacements are used in this method, and the camera calibration can be parameterized by an algebraic curve. Hartley and Sturm then proposed a calibration method based on rotation in 1997 [15]. This method requires at least three images with different orientations of the same camera at the same point in space in order to analyze point matching between images; furthermore, pure rotation is required in this method, which is difficult to ensure. Du and Brady presented a calibration technique using an active system in 1993 [16]. The intrinsic parameters are calibrated by making deliberate camera motions based on the positional difference of the optical flow field (PDOFF) and the trajectories of features (TOF). Ma presented a technique for calibrating the camera’s intrinsic parameters based on active vision systems in 1996 [17]; it accomplishes this by utilizing pure translational motions. However, as was the case with the method proposed by Hartley and Sturm, it is difficult to guarantee pure translation in this method.

According to the principles of motion-based calibration methods, the camera is calibrated through its own motions, such as translations that are perpendicular to each other. These motions are usually produced by multidimensional motion actuators, and the motion accuracy of the actuators has a significant effect on the accuracy of the camera calibration. As a result, there is a significant and inevitable deviation in the calibration, which reduces the measurement accuracy of a stereo vision system. It is therefore important to compensate for the deviation of the camera motion; however, this effect is not taken into consideration in the abovementioned calibration methods.

In previous work, a camera was calibrated through perpendicular motions. However, the camera motions are not exactly perpendicular in practical applications, which reduces the calibration accuracy. Therefore, it is necessary to calibrate cameras with perpendicularity compensation for stereo systems used for high-precision measurement.

In this paper, an improved camera calibration method based on perpendicularity compensation for a binocular stereo vision measurement system is proposed. In Section 2, the effect of the camera motions’ non-perpendicularity on calibration accuracy is analyzed via simulation and experiment. Next, the calibration method for intrinsic parameters based on perpendicularity compensation is presented, which alleviates the effect of the camera motions’ non-perpendicularity on calibration accuracy. Section 3 provides an overall optimization method for increasing calibration accuracy. In Section 4, a calibration experiment is carried out to verify the accuracy of the proposed method. Concluding remarks then follow in Section 5.

2. Calibration of intrinsic parameters

2.1 Camera model

The 3D points are projected onto a 2D image plane through the imaging lens of the camera. This projection can be described by the image transformation known as the camera model. The arrangement of points in the image plane can be approximated by the pinhole model, as shown in Fig. 1. The 3D point P is projected through the projection center of the lens to the point p in the image plane; p lies on a straight line from P through the projection center OC, as indicated by the dotted line. This kind of camera model is also called the linear camera model.

Fig. 1 Camera model. (OCXCYCZC and OWXWYWZW indicate the camera coordinate system and the world coordinate system, respectively).

As shown in Fig. 1, four coordinates are defined as follows: Opixeluv is the pixel coordinate system (units of pixels) where the origin Opixel is located at the top left corner of the image plane. Ommxy is image coordinate system (units of millimeters) where the origin Omm is defined by the intersection point of the camera’s optical axis and the image plane, and the x and y axes are parallel with u and v axes, respectively. OCXCYCZC is the camera coordinate system, where the center is the point OC (projection center). Moreover, the XC and YC axes are parallel with the x and y axes, respectively. OWXWYWZW is the world coordinate system. According to the linear pinhole model of the camera, the perspective transformation from the world coordinate system to the pixel coordinate system can be expressed by [18]:

$$
Z_C\begin{bmatrix}u\\v\\1\end{bmatrix}=
\begin{bmatrix}f_x&0&u_0&0\\0&f_y&v_0&0\\0&0&1&0\end{bmatrix}
\begin{bmatrix}\mathbf{R}&\mathbf{t}\\\mathbf{0}^T&1\end{bmatrix}
\begin{bmatrix}X_W\\Y_W\\Z_W\\1\end{bmatrix}
=\mathbf{K}\left[\mathbf{R}\,|\,\mathbf{t}\right]
\begin{bmatrix}X_W\\Y_W\\Z_W\\1\end{bmatrix}
=\mathbf{M}\begin{bmatrix}X_W\\Y_W\\Z_W\\1\end{bmatrix}, \tag{1}
$$
where fx and fy are the normalized focal lengths of the camera, i.e., the focal length expressed in horizontal and vertical pixel units of the sensor, respectively. (u0, v0)^T is the principal point of the image, which is the perpendicular projection of the projection center onto the image plane; (u0, v0)^T also defines the center of the radial distortions. All of these variables are intrinsic parameters of the camera. The intrinsic parameter matrix can be written as [19]:

$$
\mathbf{K}=\begin{bmatrix}f_x&0&u_0\\0&f_y&v_0\\0&0&1\end{bmatrix}. \tag{2}
$$
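As a numerical sanity check of the pinhole model above, the sketch below projects a world point through K[R | t]. All numerical values (focal lengths, principal point, pose) are hypothetical, not the paper's calibration results.

```python
import numpy as np

def project(K, R, t, Pw):
    """Project a 3D world point Pw into pixel coordinates via K [R | t]."""
    Pc = R @ Pw + t                  # world -> camera coordinates
    u, v, w = K @ Pc                 # camera -> homogeneous pixel coordinates
    return np.array([u / w, v / w])  # perspective division by the depth

# Hypothetical intrinsics: fx = fy = 3500 px, principal point (2004, 1336)
K = np.array([[3500.0, 0.0, 2004.0],
              [0.0, 3500.0, 1336.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                        # camera axes aligned with the world frame
t = np.array([0.0, 0.0, 1000.0])     # camera 1000 mm behind the world origin

p = project(K, R, t, np.array([0.0, 0.0, 0.0]))
# A point on the optical axis projects to the principal point.
```
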

2.2 Effect of perpendicularity of camera motion on camera calibration accuracy

The active vision calibration method is based on linear camera motion. The intrinsic parameters of the camera are calibrated by four groups of perpendicular motions, driven by the electrically controlled platform. The motion of one group is equivalent to the translation of a spatial point, as shown in Fig. 2, where Pi (i = 1, 2, 3) and Qi represent the three positions of the same spatial point during the two translations; pi and qi are the corresponding image points; e1 and e2 are the poles of the motion.

Fig. 2 Principle diagram of the active vision calibration method based on perpendicular linear motions.

As shown in Fig. 2, P1P2 and Q1Q2, along with P2P3 and Q2Q3, are two groups of parallel lines. The direction of the translation is the same as that of the parallel lines. In other words, the vectors OCe1 and OCe2 from the optical center to the poles e1 and e2 are parallel with the motion vectors. Therefore, if the two camera translations are perpendicular to each other, the corresponding points should satisfy Eq. (3):

$$
\frac{(m_{j1}-u_0)(m_{j2}-u_0)}{f_x^2}+\frac{(n_{j1}-v_0)(n_{j2}-v_0)}{f_y^2}+1=0,\quad j=1,2,3,4, \tag{3}
$$
where (mj1, nj1) and (mj2, nj2) (j = 1, 2, 3, 4) are the coordinates of the two poles in the image for the jth group of motions. The intrinsic parameters of the camera can be acquired by using four groups of such perpendicular motions.
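This constraint can be illustrated numerically: the pole of a translation is the image of the translation direction, so for two exactly perpendicular directions the expression in Eq. (3) vanishes. The intrinsics below are hypothetical values, not the paper's results.

```python
import numpy as np

# Hypothetical intrinsics
fx, fy, u0, v0 = 3500.0, 3500.0, 2004.0, 1336.0
K = np.array([[fx, 0, u0], [0, fy, v0], [0, 0, 1.0]])

def pole(direction):
    """Pole of a translation: image of the direction vector, in pixels."""
    e = K @ direction
    return e[:2] / e[2]

# Two perpendicular unit translation directions in the camera frame
d1 = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)
d2 = np.array([1.0, 0.0, -1.0]) / np.sqrt(2.0)
(m1, n1), (m2, n2) = pole(d1), pole(d2)

residual = ((m1 - u0) * (m2 - u0) / fx**2
            + (n1 - v0) * (n2 - v0) / fy**2 + 1.0)
# residual vanishes (up to floating point) because d1 . d2 = 0
```
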

However, because of manufacturing and installation deviations of the linear guides, it is usually difficult to ensure the perpendicularity of the linear guides in practical applications. Therefore, the angle between the two directions of movement is not exactly 90°, which means that the corresponding points do not satisfy Eq. (3). In order to analyze this issue, simulations and experiments are conducted to evaluate the effect of the perpendicularity of the linear guides on the accuracy of the camera calibration.

Suppose that the two cameras have the same calibration result. The parameters of the cameras are set as follows: the resolution of the camera is 4008 × 2672 pixels; the principal point is the center of the image, with coordinates (2004, 1336)^T; the scale factor is 9 μm; the distance of the camera motion is D0 = 200 mm in each case; and the simulation analyses are conducted for linear guide angles θ of 85°–95°. The simulation process is shown in Fig. 3.

Fig. 3 Simulation of the effect of the perpendicularity of the linear guides on the calibration results. (Ii represents the image plane of the camera).

In order to evaluate the accuracy under different perpendicularities, the linear distance between two points, 4000 mm apart, is reconstructed. The estimation errors of the principal point and focal lengths are shown in Fig. 4, and the reconstruction errors under different perpendicularities are shown in Fig. 5. As these figures show, the perpendicularity has a significant effect on the accuracy of the intrinsic camera calibration: the accuracy decreases as the perpendicularity declines.

Fig. 4 The estimation errors of (a) focal length fx, (b) focal length fy, (c) principal point u0, and (d) principal point v0.

Fig. 5 The reconstruction errors under different perpendicularities.

In order to verify the simulation analyses, an experiment was conducted. The experimental system included a coordinate measuring machine (CMM, PROSMO, Germany) and digital cameras (SVS-11002C, Germany) with a resolution of 4008 × 2672 pixels. The experimental process and system are shown in Fig. 6, where the angle of camera motion changes from 89° to 91° with a step of 0.1°.

Fig. 6 Experimental system and the reconstruction target.

In this experiment, the calibration target was driven by the CMM under different perpendicularities to simulate the calibration process, and images of the calibration target were captured in different positions to calibrate the cameras. The image of the reconstruction target is shown in Fig. 6. The distances between every pair of feature points, from point 1 to point 36, are reconstructed. Finally, the reconstruction errors of the distances are calculated; the average reconstruction error is shown in Fig. 7.

Fig. 7 The reconstruction error under different perpendicularities in the experiment.

As shown in Fig. 7, when the perpendicularity is not perfect, the error can reach 0.3%, which markedly reduces the accuracy of the measurement system.

2.3 Calibration method of intrinsic parameters based on perpendicularity compensation with active vision technology

According to the analyses above, the non-perpendicularity of the camera motions has a significant effect on the accuracy of the camera calibration, so it is necessary to compensate for it. To alleviate this effect, a calibration method for intrinsic parameters based on perpendicularity compensation with active vision technology is proposed, in which the camera is calibrated using active vision technology and the angle of the camera motions (θ) is taken into consideration.

However, the camera calibration process becomes more complicated when the perpendicularity of the camera motion is taken into consideration. Moreover, the robustness of the calibration algorithm is low because of the strong coupling of the intrinsic parameters, and the calibration accuracy is reduced if the intrinsic parameters are calculated together. In order to improve the accuracy and robustness, the intrinsic parameters – including the principal point and the normalized focal lengths – are calibrated independently in this method.

The principal point is usually defined as the center of the image in order to simplify the calibration. However, there are various fabrication and installation errors related to the CCD; for example, the installation centers of the individual lens elements do not coincide with each other. As a result, the principal point does not coincide with the center of the image. Therefore, it is necessary to calibrate the principal point independently to improve the precision of the calibration.

When the effective focal length or object distance changes, the principal point of the camera remains constant in the linear model. Therefore, if the focal length changes from f1 to f2, the image point of a spatial point changes from (x1, y1) to (x2, y2). The relationship between the principal point and the two corresponding image points can be written as:

$$
u_0(y_1-y_2)+v_0(x_2-x_1)=x_2y_1-x_1y_2, \tag{4}
$$
Thus, when three groups of images of the spatial point under different focal lengths are captured, the principal point can be obtained by Eq. (5), where (x3, y3) represents the image point of the spatial point under focal length f3. However, the principal point actually moves when the focal length changes because the camera model is in fact non-linear. This negative effect is taken into consideration in the global optimization investigated in the next section.

$$
\mathbf{M}P_p=N,\qquad
\mathbf{M}=\begin{bmatrix}y_1-y_2&x_2-x_1\\y_2-y_3&x_3-x_2\end{bmatrix},\qquad
P_p=\begin{bmatrix}u_0\\v_0\end{bmatrix},\qquad
N=\begin{bmatrix}x_2y_1-x_1y_2\\x_3y_2-x_2y_3\end{bmatrix}. \tag{5}
$$

However, the result from Eq. (5) is usually imprecise because of the noise of the images. In order to improve the accuracy, the corresponding points of the spatial point under different focal lengths are employed for linear fitting. At this stage, the intersection of the fitting lines is calculated by using the least-squares method. Consequently, the intersection is the principal point.
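The line-fitting step can be sketched as follows: each zoom trajectory is written in the linear form of the principal-point relation above, the equations are stacked, and the least-squares solution is the common intersection. The data below are synthetic, with a hypothetical principal point (2004, 1336).

```python
import numpy as np

def intersect_lines(point_pairs):
    """Least-squares intersection of lines, each given by two points."""
    A, b = [], []
    for (x1, y1), (x2, y2) in point_pairs:
        # Line through (x1, y1) and (x2, y2) in the linear form
        # (y1 - y2) x + (x2 - x1) y = x2 y1 - x1 y2
        A.append([y1 - y2, x2 - x1])
        b.append(x2 * y1 - x1 * y2)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol

pp = np.array([2004.0, 1336.0])          # hypothetical principal point
rng = np.random.default_rng(0)
pairs = []
for _ in range(5):
    d = rng.normal(size=2)               # direction of one zoom trajectory
    # two image positions of the same spatial point at different focal lengths
    pairs.append((pp + 1.0 * d, pp + 2.0 * d))
u0, v0 = intersect_lines(pairs)
```

With noisy image points the stacked system is overdetermined and the least-squares solution averages out the noise, which is the motivation given above.
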

If the camera moves linearly twice in the same plane (as shown in Fig. 2), then the relationship between the intrinsic parameters and the angle of the movement can be written as follows:

$$
A_j^2C_j^2\sin^2\theta\,x^2+B_j^2D_j^2\sin^2\theta\,y^2+\left(2A_jB_jC_jD_j-A_j^2D_j^2\cos^2\theta-B_j^2C_j^2\cos^2\theta\right)xy+\left(2A_jC_j-A_j^2\cos^2\theta-C_j^2\cos^2\theta\right)x+\left(2B_jD_j-B_j^2\cos^2\theta-D_j^2\cos^2\theta\right)y+\sin^2\theta=0, \tag{6}
$$
where Aj = mj1 − u0, Bj = nj1 − v0, Cj = mj2 − u0, Dj = nj2 − v0, x = 1/fx², and y = 1/fy². (mj1, nj1) and (mj2, nj2) represent the poles of the jth group in the image, θ is the angle of the camera motions, and fx and fy are the normalized focal lengths of the camera.

Equation (6) expresses the perpendicularity compensation. In order to realize the compensation, an actuator such as a pair of linear guides is indispensable for controlling the camera motions, and the angle between the linear guides is measured and used in the calibration. Therefore, if the principal point is known, the normalized focal lengths can be calculated from Eq. (6) with perpendicularity compensation.
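The geometry underlying Eq. (6) can be checked numerically: the viewing directions recovered from the two poles of one motion group enclose the measured guide angle θ, not 90°, so assuming perpendicularity would bias the estimated focal lengths. The intrinsics below are hypothetical; only the angle 89.96715° is the measured value reported in Section 4.2.

```python
import numpy as np

# Hypothetical intrinsics
fx, fy, u0, v0 = 3500.0, 3600.0, 2004.0, 1336.0
K = np.array([[fx, 0, u0], [0, fy, v0], [0, 0, 1.0]])
theta = np.deg2rad(89.96715)     # measured guide angle (Section 4.2)

# Construct two unit translation directions enclosing exactly theta
d1 = np.array([0.3, 0.2, 1.0]); d1 /= np.linalg.norm(d1)
a = np.array([1.0, 0.0, 0.0]); a -= (a @ d1) * d1; a /= np.linalg.norm(a)
if a[2] < 0:
    a = -a                       # keep the second pole in front of the camera
d2 = np.cos(theta) * d1 + np.sin(theta) * a

def pole(d):
    e = K @ d
    return e[:2] / e[2]

def recovered_angle(p1, p2):
    """Angle between the viewing directions K^-1 (m, n, 1)^T of two poles."""
    v1 = np.array([(p1[0] - u0) / fx, (p1[1] - v0) / fy, 1.0])
    v2 = np.array([(p2[0] - u0) / fx, (p2[1] - v0) / fy, 1.0])
    return np.arccos(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))

angle = recovered_angle(pole(d1), pole(d2))
# angle equals theta, not 90 degrees
```
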

3. Centroid-based global optimization method of camera parameters

In order to calibrate the extrinsic parameters of the measurement system and reduce the deviation of the calibration further, a centroid-based global optimization method of system parameters is proposed, which consists of two steps. Firstly, the extrinsic parameters – i.e., the rotation matrix and translation vector from the left to the right camera – are calculated with a centroid-based method. Secondly, points on the targets are reconstructed via the intrinsic and extrinsic parameters of the cameras, and the optimal parameters can be obtained.

As shown in Fig. 8, OWXWYWZW is the coordinate system of the calibration target – i.e., the world coordinate system. OCLXCLYCLZCL and OCRXCRYCRZCR are the coordinate systems of the left and right cameras, respectively. RL, tL, RR, and tR represent the rotation matrices and translation vectors from the target to the left and right cameras, respectively.

Fig. 8 Euler transformation relationship between different coordinate systems.

{(P1, P1′), (P2, P2′), ..., (Pn, Pn′)} is the set of corresponding feature points on the calibration target under different coordinate systems, where Pi (i = 1, 2, ..., n) denotes the coordinates in the world coordinate system and Pi′ denotes the coordinates in the camera coordinate system. R0 and t0 represent the rotation matrix and translation vector from the target to one of the two cameras (i.e., RL, tL or RR, tR). According to the camera model, Pi′ can be written as:

$$
P'_i=\mathbf{R}_0P_i+\mathbf{t}_0. \tag{7}
$$
Therefore, R0 and t0 can be estimated by minimizing the following objective function:

$$
f(\mathbf{R}_0,\mathbf{t}_0)=\sum_{i=1}^{n}\left\|\mathbf{R}_0P_i+\mathbf{t}_0-P'_i\right\|. \tag{8}
$$

However, the computing process involved here is complex because the objective function has six variables. Moreover, it is easy to fall into local minima during the optimization process. Therefore, in order to simplify the calculation, a centroid-based calculation method is proposed in this paper. First, the centroid of the point set is calculated. Next, the origin of the coordinate system is translated to the centroid. The proposed method can be further explained as follows.

(1) The centroids of the set of corresponding points {(P1, P1′), (P2, P2′), ..., (Pn, Pn′)} are calculated by Eq. (9):

$$
\bar P=\frac{1}{n}\sum_{i=1}^{n}P_i,\qquad \bar P'=\frac{1}{n}\sum_{i=1}^{n}P'_i. \tag{9}
$$
(2) The origin of each coordinate system is then translated to the corresponding centroid. The points after this translation are given by Eq. (10), and the relationship between them by Eq. (11):

$$
\tilde P_i=P_i-\bar P,\qquad \tilde P'_i=P'_i-\bar P', \tag{10}
$$

$$
\tilde P'_i=\mathbf{R}_0\tilde P_i. \tag{11}
$$
The translation vector is thus eliminated and the objective function simplifies to:

$$
f(\mathbf{R}_0)=\sum_{i=1}^{n}\left\|\mathbf{R}_0\tilde P_i-\tilde P'_i\right\|. \tag{12}
$$
The translation vector can then be calculated by Eq. (13) once the rotation matrix is determined:

$$
\mathbf{t}_0=\bar P'-\mathbf{R}_0\bar P. \tag{13}
$$
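The paper does not specify how the simplified rotation-only objective is minimized; one standard closed-form solution, to which the centroid construction above reduces the problem, is the SVD-based Kabsch algorithm. A sketch with synthetic data:

```python
import numpy as np

def centroid_pose(P, Q):
    """Find R0, t0 with Q_i ~= R0 P_i + t0 for paired 3xN point sets."""
    Pc, Qc = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (Q - Qc) @ (P - Pc).T            # covariance of the centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # guard against reflections
    R0 = U @ D @ Vt                      # closed-form optimal rotation
    t0 = Qc - R0 @ Pc                    # translation from the centroids, Eq. (13)
    return R0, t0.ravel()

# Synthetic check with a known rotation about Z and a known translation
ang = np.deg2rad(30.0)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0],
                   [np.sin(ang), np.cos(ang), 0],
                   [0, 0, 1.0]])
t_true = np.array([10.0, -5.0, 2.0])
rng = np.random.default_rng(2)
P = rng.normal(size=(3, 12))             # target-frame feature points
Q = R_true @ P + t_true[:, None]         # the same points in the camera frame
R0, t0 = centroid_pose(P, Q)
```
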
The extrinsic parameters (i.e., the positional relationship R, t between the left and right cameras) can be obtained from Eqs. (14) and (15) once the rotation matrices and translation vectors from the target to the left and right cameras (RL, tL, RR, tR) have been calculated:

$$
\mathbf{R}=\mathbf{R}_R\mathbf{R}_L^{-1}, \tag{14}
$$

$$
\mathbf{t}=\mathbf{t}_R-\mathbf{R}\mathbf{t}_L. \tag{15}
$$
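The composition in Eqs. (14) and (15) can be verified on synthetic poses: if PL = RL P + tL and PR = RR P + tR, then PR = R PL + t with R = RR RL⁻¹ and t = tR − R tL. All poses below are made up for illustration.

```python
import numpy as np

def compose(RL, tL, RR, tR):
    """Left-to-right extrinsics from the two target-to-camera poses."""
    R = RR @ RL.T          # RL is a rotation, so its inverse is its transpose
    t = tR - R @ tL
    return R, t

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

RL, tL = rot_x(0.2), np.array([0.1, 0.0, 5.0])
RR, tR = rot_x(-0.1), np.array([-3.0, 0.0, 5.0])
R, t = compose(RL, tL, RR, tR)

P = np.array([0.5, -0.2, 1.0])           # a point in the target frame
PL, PR = RL @ P + tL, RR @ P + tR
# R, t map the left-camera coordinates of P onto its right-camera coordinates
```
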
The theoretical coordinates of the image points can then be determined if the coordinates of the point (XiW, YiW, ZiW) and the projection matrix of the camera are known. This process is called reprojection, and is represented by:

$$
\begin{cases}
\tilde u_i=\dfrac{(f_xr_{11}+u_0r_{31})X_i^W+(f_xr_{12}+u_0r_{32})Y_i^W+(f_xr_{13}+u_0r_{33})Z_i^W+f_xt_1+u_0t_3}{r_{31}X_i^W+r_{32}Y_i^W+r_{33}Z_i^W+t_3},\\[2ex]
\tilde v_i=\dfrac{(f_yr_{21}+v_0r_{31})X_i^W+(f_yr_{22}+v_0r_{32})Y_i^W+(f_yr_{23}+v_0r_{33})Z_i^W+f_yt_2+v_0t_3}{r_{31}X_i^W+r_{32}Y_i^W+r_{33}Z_i^W+t_3},
\end{cases} \tag{16}
$$
where rij is the element of the rotation matrix RL or RR in row i and column j; ti is the ith element of the translation vector tL or tR, as shown in Fig. 8; and (ũi, ṽi) is the theoretical image coordinate of the corresponding point (XiW, YiW, ZiW) calculated by Eq. (16).

According to Eq. (16), the theoretical image point of a feature point on the calibration target in each camera can be obtained once the intrinsic parameters of the two cameras (KL, KR) and the extrinsic parameters (RL, tL, RR, tR) are given. Moreover, distortion correction can be applied to the projections as shown in Eq. (17), where (ûi, v̂i) are the distortion-corrected points, k = (k1, k2) are the coefficients of radial distortion, and r² = ũi² + ṽi²:

$$
\begin{bmatrix}\hat u_i\\ \hat v_i\end{bmatrix}=\left(1+k_1r^2+k_2r^4\right)\begin{bmatrix}\tilde u_i\\ \tilde v_i\end{bmatrix}, \tag{17}
$$
where the initial values of k1 and k2 for the global optimization are set to 0 in this paper.
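The radial model of Eq. (17) is a simple scaling of the projected point, so with k = (0, 0) – the initial value used for the global optimization – the correction is the identity. A sketch (the coordinate values and distortion coefficients are illustrative; such models are usually evaluated on normalized, small-magnitude coordinates):

```python
import numpy as np

def distort(uv, k1, k2):
    """Apply the two-coefficient radial distortion of Eq. (17)."""
    u, v = uv
    r2 = u * u + v * v                  # squared radius from the center
    s = 1.0 + k1 * r2 + k2 * r2 * r2    # radial scaling factor
    return np.array([s * u, s * v])

p = np.array([0.01, -0.02])             # hypothetical normalized projection
same = distort(p, 0.0, 0.0)             # k = 0: no correction
bent = distort(p, -0.1, 0.02)           # mild barrel distortion
```
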

There is a deviation between the real projections with distortion correction and the calculated reprojection points. Therefore, the deviation indicator function can be written as:

$$
f(k,\mathbf{K}_L,\mathbf{K}_R,\mathbf{R},\mathbf{t})=\sum_{i=1}^{m}\left((u_i-\hat u_i)^2+(v_i-\hat v_i)^2\right), \tag{18}
$$
where (ûi, v̂i) are the ith distortion-corrected points and (ui, vi) are the corresponding reprojection points. Therefore, the objective function of the overall optimization can be written as:

$$
F(k,\mathbf{K}_L,\mathbf{K}_R,\mathbf{R},\mathbf{t})=\min_{k,\mathbf{K}_L,\mathbf{K}_R,\mathbf{R},\mathbf{t}}\sum_{j=1}^{n}f_j(k,\mathbf{K}_L,\mathbf{K}_R,\mathbf{R},\mathbf{t}), \tag{19}
$$

where fj denotes the deviation of Eq. (18) evaluated for the jth target position.

Geometrically speaking, Eq. (19) seeks the optimal camera parameters that minimize the deviation between the distortion-corrected points and the reprojection points. In theory, placing the target in the measurement field of view only once allows the camera parameters to be optimized. However, owing to image noise, the result of the optimization is sometimes incorrect; thus, images of the calibration target need to be captured two or more times.

The overall optimization method discussed in this paper can be summarized as follows: (1) N images of the calibration target are captured by the left and right cameras. (2) The extrinsic parameters of the measurement system are then calibrated, and the deviation of the calibration is further reduced by using Eqs. (9)–(19). The Levenberg-Marquardt iterative algorithm is then employed to obtain the optimal parameters, with the initial values obtained by the proposed method.
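The refinement step can be sketched with a minimal Levenberg-Marquardt iteration: the parameter vector is updated by solving the damped normal equations on the stacked residuals. Here a toy one-dimensional projection model (focal length and principal point only, with hypothetical values) stands in for the full parameter set (k, KL, KR, R, t).

```python
import numpy as np

def lm(residual_fn, x0, iters=30, lam=1e-3):
    """Minimal Levenberg-Marquardt with a numerical Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual_fn(x)
        # forward-difference Jacobian, one column per parameter
        J = np.column_stack([
            (residual_fn(x + h) - r) / 1e-6
            for h in np.eye(len(x)) * 1e-6])
        A = J.T @ J + lam * np.eye(len(x))   # damped normal equations
        x = x - np.linalg.solve(A, J.T @ r)
    return x

# Toy model: u = fx * X / Z + u0 for a few points
X = np.array([0.1, -0.3, 0.5, 0.2])
Z = np.array([2.0, 3.0, 2.5, 4.0])
fx_true, u0_true = 3500.0, 2004.0
u_obs = fx_true * X / Z + u0_true          # noiseless synthetic observations

def residual(p):
    fx, u0 = p
    return fx * X / Z + u0 - u_obs

fx, u0 = lm(residual, [3000.0, 1900.0])    # recovers the true parameters
```
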

4. Experiments and analyses

4.1 Experimental system

The experimental system, shown in Fig. 9, consists of two monochrome CCD cameras (VA-29MC-M5A0, Korea Vieworks Company) with a resolution of 6576 × 4384 pixels and a pixel size of 5.5 μm, ten red lasers, a four-dimensional electronic control platform consisting of two linear guides (uKSA200, Zolix) with a stroke of 200 mm and two rotating tables, a lens (AF-S 24-70 mm f/2.8G, Nikkor), and a workstation (Z850, HP).

Fig. 9 The experimental system used for calibration.

The experiments consist of three parts. The perpendicularity of the linear guides is measured first. Next, the intrinsic parameters of the cameras and the extrinsic parameters of the system are calibrated; moreover, the parameters are optimized by the global optimization method. Finally, the calibration accuracy is analyzed.

4.2 Experiment of perpendicularity measurement

The perpendicularity measurement system of the linear guides is shown in Fig. 10. The straightness of the two linear guides is measured using a dual-frequency laser interferometer (accuracy of ±0.5 ppm), and the angle between the linear guides is then calculated. The experimental results are as follows: the straightness of the X-axis is 2.73 μm / 200 mm; the straightness of the Y-axis is 9.09 μm / 200 mm; and the perpendicularity deviation is −0.03285°. Therefore, the angle between the two linear guides is 89.96715°.

Fig. 10 Perpendicularity measurement experiment. The straightness measurement system of (a) the X axis and (b) the Y axis.

4.3 Calibration experiments for intrinsic parameters of camera

In this paper, the off-line calibration of the intrinsic parameters is conducted in the laboratory. A laser grid is projected onto a board for calibrating the intrinsic parameters of the camera, as shown in Fig. 11. The feature points are selected as the intersections of the laser grid.

Fig. 11 Calibration experiments of intrinsic parameters.

The camera is mounted on a 4D electronic control platform. The images of the calibration target are taken under seven different focal lengths. The principal point is then calibrated by the method proposed in this paper. Afterward, the focal length is set at nearly 35 mm. The horizontal viewing angle is approximately 63.4°.

The translation of the camera into different postures is conducted by using the linear guides, which are controlled by the 4D electronic control platform. In this method, the camera was calibrated by using four groups of camera motions in different postures, with each group consisting of two translations. The motions and postures are shown in Table 1 and Fig. 12, where the translation distances and camera postures can be changed according to the field of view. Images of the laser target are taken before and after each translational motion, and the scale factors are calibrated from these images.

Table 1. The motions and postures of the intrinsic parameters calibration

Fig. 12 Camera motions in intrinsic parameters calibration process.

The calibration results with perpendicularity compensation are shown in Table 2, and the calibration results without perpendicularity compensation are shown in Table 3. According to these two sets of results, the accuracy of the calibrated scale factors can be very low without perpendicularity compensation. Therefore, the perpendicularity and the strong coupling of the different parameters have significant effects on the calibration accuracy, and the method proposed in this paper can greatly improve the accuracy of camera calibration.

Table 2. Calibration results of principal point and scale factors with perpendicularity compensation.

Table 3. Calibration results of principal point and scale factors without perpendicularity compensation.

4.4 Calibration experiments for global optimization of system parameters

The optimization process of the system parameters is shown in Fig. 13, where the intrinsic parameters of the cameras have already been calibrated. The calibration target is placed in five different locations, and at each location its images are captured by the two cameras at the same time. The images used for global optimization are shown in Fig. 14. After that, the global optimization of the system parameters is conducted; the optimization results are shown in Table 4.

Fig. 13 Calibration experiments of the global optimization.

Fig. 14 Images of the global optimization experiment. (a)–(e) show five arbitrary positions of the calibration target during the global optimization.

Table 4. Calibration results of the global optimization.

4.5. Evaluation of calibration method

In order to evaluate the accuracy of the calibration method proposed in this paper, accuracy evaluation experiments are conducted. As shown in Fig. 15, the working distance between the measurement system and the target is 7 m, the width of the field of view (FOV) of the left and right cameras is 7 m, and the distance between the two cameras is 3 m. A target with 8 feature points is placed in different positions. The distances between the feature points are measured in advance and are shown in Table 5. The images of the target are captured by the (calibrated) binocular stereo vision measurement system, and the 3D coordinates of the feature points are reconstructed. Finally, the distances are reconstructed and the reconstruction errors are calculated.

Fig. 15 Accuracy verification experiment of the calibration method.

Table 5. The reconstruction relative error of different groups of reconstruction distances.

The target is placed at 30 different positions in the FOV, and the distances between each pair of feature points are reconstructed at every position, yielding 840 reconstruction distances in total. The true values of the distances between each pair of feature points and the corresponding reconstruction errors are shown in Table 5, and the relative error of every reconstruction distance is shown in Fig. 16. As shown in Table 5, the absolute error increases as the distance increases, but the relative error is similar for different distances. The maximum relative error is 0.46%, whereas the minimum relative error is 0.0001%. The average relative error is 0.09%, and the mean squared error (MSE) is 0.07%.

Fig. 16 The accuracy of the proposed calibration method.

4.6 Comparative experiment

In order to compare the calibration method proposed in this paper with the method without perpendicularity compensation, a comparative experiment was conducted. The estimated values in Table 3 were used as the initial values for the calibration of the extrinsic parameters and the global optimization of the system parameters. The calibration results are shown in Table 6.

Table 6. Calibration result of the global optimization without perpendicular compensation.

The calibration results are then evaluated in the same manner as in Section 4.5 after global optimization, and the results of the two methods are compared in Fig. 17. The maximum relative error of the method without perpendicularity compensation is 0.61%, whereas the minimum relative error is 0.0003%. The average relative error is 0.23%, and the mean squared error (MSE) is 0.13%. As shown in Fig. 17, the method with perpendicularity compensation exhibits markedly better accuracy, and is better suited for the calibration of binocular stereo vision measurement systems.

Fig. 17 The accuracy of different calibration methods.

5. Conclusion

In this paper, an improved camera calibration method based on perpendicularity compensation for binocular stereo vision measurement systems is developed. The effect of the non-perpendicularity of the camera motion on calibration accuracy is analyzed via simulation and experiment, and the perpendicularity of the camera motion is compensated in the calibration process. By compensating the perpendicularity of the linear guides, the effect of non-perpendicularity on calibration accuracy is attenuated, resulting in a significant improvement in accuracy. To achieve this result, the principal point, scale factors, extrinsic parameters, and distortion factors of the system are calculated independently, so that the effect of the strong coupling of these parameters is markedly reduced. In addition, the centroid-based global optimization with only 5 images of the target further improves the calibration accuracy of the cameras. The experimental results show that the calibration accuracy can reach 99.91%, which indicates that the proposed method can accurately calibrate binocular stereo vision measurement systems driven by a low-accuracy actuator.
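The centroid-based step summarized above follows the standard rigid-alignment pattern: subtract the centroids of the two point sets, solve for the rotation on the centered points, then recover the translation from the centroids. A minimal sketch using the SVD-based (Kabsch) rotation solution is given below; this is an assumed, generic implementation of that pattern, not the authors' exact code, and the synthetic data are illustrative.

```python
import numpy as np

def rigid_transform(P, Q):
    """Estimate R, t such that Q ≈ R @ P + t for 3xN point sets,
    via centroid subtraction and the SVD (Kabsch) solution."""
    p_bar = P.mean(axis=1, keepdims=True)   # centroid of source points
    q_bar = Q.mean(axis=1, keepdims=True)   # centroid of target points
    H = (P - p_bar) @ (Q - q_bar).T         # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    # Guard against an improper rotation (reflection).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = q_bar - R @ p_bar                   # translation from the centroids
    return R, t

# Usage: recover a known transform from noiseless synthetic points.
rng = np.random.default_rng(0)
P = rng.standard_normal((3, 20))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([[1.0], [2.0], [3.0]])
R_est, t_est = rigid_transform(P, R_true @ P + t_true)
```

Decoupling the rotation from the translation in this way is what removes the strong coupling between the two parameter groups: the rotation is estimated on centered points, so any error in the translation estimate cannot leak into it.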

Acknowledgments

This paper is supported by the Special Funds of the National Natural Science Foundation of China (Grant No. 51227004), the National Natural Science Foundation of China (Grant No. 51375075), the National Basic Research Program of China 973 Project (Grant No. 2014CB046504), the Liaoning Provincial Natural Science Foundation of China (Grant No. 2014028010), and the Science Fund for Creative Research Groups (No. 51321004).

References and links

1. C. Ricolfe-Viala and A.-J. Sanchez-Salmeron, “Camera calibration under optimal conditions,” Opt. Express 19(11), 10769–10775 (2011).
2. R. Y. Tsai, “A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses,” IEEE Trans. Robot. Autom. 3(4), 323–344 (1987).
3. Z. Y. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).
4. Z. Zhang, “Flexible camera calibration by viewing a plane from unknown orientations,” in Proceedings of the Seventh IEEE International Conference on Computer Vision (IEEE, 1999), pp. 666–673.
5. E. Kruppa, Zur Ermittlung eines Objektes aus zwei Perspektiven mit innerer Orientierung (Hölder, 1913).
6. R. I. Hartley, “Euclidean reconstruction from uncalibrated views,” in Applications of Invariance in Computer Vision (Springer, 1994), pp. 235–256.
7. S. J. Maybank and O. D. Faugeras, “A theory of self-calibration of a moving camera,” Int. J. Comput. Vis. 8(2), 123–151 (1992).
8. Q. T. Luong and O. Faugeras, “Self-calibration of a camera using multiple images,” in Proceedings of the 11th IAPR International Conference on Pattern Recognition, Vol. I, Conference A: Computer Vision and Applications (IEEE, 1992), pp. 9–12.
9. O. D. Faugeras, Q. T. Luong, and S. J. Maybank, “Camera self-calibration: Theory and experiments,” in Computer Vision ECCV'92 (Springer, 1992), pp. 321–334.
10. O. D. Faugeras, “What can be seen in three dimensions with an uncalibrated stereo rig?” in Computer Vision ECCV'92 (Springer, 1992), pp. 563–578.
11. Q. T. Luong and O. D. Faugeras, “The fundamental matrix: Theory, algorithms, and stability analysis,” Int. J. Comput. Vis. 17(1), 43–75 (1996).
12. T. Svoboda, D. Martinec, and T. Pajdla, “A convenient multicamera self-calibration for virtual environments,” Presence: Teleoperators Virtual Environ. 14(4), 407–422 (2005).
13. M. Hödlmoser and M. Kampel, “Multiple camera self-calibration and 3D reconstruction using pedestrians,” in Advances in Visual Computing (Springer, 2010), pp. 1–10.
14. F. Yılmaztürk, “Full-automatic self-calibration of color digital cameras using color targets,” Opt. Express 19(19), 18164–18174 (2011).
15. R. I. Hartley, “Self-calibration of stationary cameras,” Int. J. Comput. Vis. 22(1), 5–23 (1997).
16. F. Du and M. Brady, “Self-calibration of the intrinsic parameters of cameras for active vision systems,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 1993), pp. 477–482.
17. S. D. Ma, “A self-calibration technique for active vision systems,” IEEE Trans. Robot. Autom. 12(1), 114–120 (1996).
18. A. Gruen, “Calibration and orientation of cameras in computer vision,” Meas. Sci. Technol. 13(2), 231 (2002).
19. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision (Cambridge University Press, 2003).



