Abstract

A new target form, the grid spherical target (GST), is proposed for camera calibration, an important task in the field of computer vision. Because of the grid on the sphere, the GST combines the advantages of spherical and checkerboard targets. The latitude and longitude circles on the GST, together with the intersection points between them, are used to calibrate the camera. First, the Image of the Absolute Conic is obtained from the elliptic curves of the latitude and longitude circles in the images. After the initial intrinsic and extrinsic parameters of the camera are computed from the Image of the Absolute Conic, optimum solutions of the intrinsic and extrinsic parameters are found through nonlinear optimization using the latitude circles and the intersection points of the latitude and longitude lines. Finally, the effectiveness of the GST-based method is demonstrated in simulation and physical experiments.

© 2017 Optical Society of America

Corrections

27 June 2017: Typographical corrections were made to Refs. 2, 3, 10, 12, 14, 18, 19, 21, 23, 29, and 31.

1. Introduction

Camera calibration is an active research topic in the field of computer vision [1–3]. Its accuracy directly affects the measuring accuracy of a vision system in applications such as industrial inspection, vision-based autonomous robot navigation, and virtual reality. Thus, a simple and high-accuracy camera calibration method is of great significance. Current camera calibration methods fall primarily into two major classes: (1) target-based calibration methods, which are applicable to the calibration of vision measuring systems for precise dimensional measurement; and (2) camera self-calibration methods that require no target, which are applied to vision-based autonomous robot navigation and scene reconstruction. The first class is the topic of this paper.

Camera calibration using a 3D target composed of two planes [4–6] may achieve high accuracy, but it is now relatively little used because of the difficulty of machining such a target. Zhang [7] introduced a simple, flexible, and high-accuracy method using a freely moving planar checkerboard target, which has been widely applied to camera calibration [8]. Additionally, some researchers adopted different calibration methods using various features on planar targets [9–11]. Camera calibration methods [12–14] using 1D targets can only calibrate a single camera under certain circumstances, thereby limiting the applications of 1D targets in machine vision-based measuring systems. Besides the abovementioned target forms, numerous camera calibration methods [15–21] have been proposed based on spherical target images; these methods determine the intrinsic camera parameters from the sphere contours (SCs). Unlike conventional 3D targets, a spherical target presents a complete contour to a camera shooting from any angle, so this approach is ideal for the synchronous calibration of the intrinsic parameters of multiple cameras shooting from different angles. However, it is strongly affected by image noise, so its accuracy does not satisfy the requirements of high-accuracy measuring system calibration. Other methods using spinning surfaces [22], parallel circles [23], and coaxial circles [24] have also been described, but they are unable to achieve high-accuracy calibration.

Calibration using a 2D target is flexible and convenient, with high calibration accuracy, and has been widely applied; however, it is unsuited to synchronously calibrating multiple cameras at different positions. A spherical target presents a complete outer contour from any angle, so it is suitable for the synchronous calibration of multiple cameras at different positions, but its calibration accuracy can hardly meet high-accuracy measurement requirements. Moreover, a certain requirement must be met on the location of the spherical target: if it is too close to the camera, the calibration will be inaccurate, whereas if it is too far, the contour extraction and ellipse fitting may be affected by camera distortion, degrading the calibration accuracy.

Based on the above analysis, this paper presents a new target form, the grid spherical target (GST), which combines the strengths of spherical and planar targets. The GST differs from the spherical targets proposed in [15–21] mainly in that the camera is calibrated using the latitude and longitude circles and the intersection points between them. The GST-based method allows multiple cameras to shoot simultaneously and calibrate their respective intrinsic parameters, while achieving the same calibration accuracy as a planar target. During calibration, linear solutions of the intrinsic and extrinsic parameters are obtained from the elliptic curves of the latitude and longitude circles in the image. Then, based on the coordinates of the intersection points of the longitude and latitude lines in the image and on the lens distortion, a nonlinear optimization is performed to obtain the optimum solutions of the intrinsic and extrinsic parameters.

The GST method has some limitations. It is clearly more difficult to manufacture to high accuracy than a checkerboard, which currently limits its wide application. The GST in this paper was manufactured on a five-axis NC machine. In the future, with improved manufacturing technology, it could be made from ceramics with the grid produced by laser etching, improving its manufacturing accuracy. In addition, calibrating cameras at different positions is an advantage of the GST; however, if two cameras are placed along the same horizontal axis through the sphere, the images acquired by both cameras would be identical. To avoid this ambiguity when calibrating multiple cameras, marker points should be pasted at certain longitudes, such as 40°, 60°, 90°, 120°, 180°, and 240°; these marker points make the views of different cameras distinguishable in the images.

The rest of this paper is organized as follows. In Section 2, a mathematical model for grid spherical imaging is introduced. The principle of the proposed method is described in detail in Section 3. The proposed calibration method is verified by simulation and physical experiments in Section 4. Conclusions are drawn in Section 5.

2. Mathematical model for grid spherical imaging

Figure 1 shows the model for grid spherical target imaging. Ocxcyczc represents the camera coordinate system, and Otxtytzt represents the target coordinate system. Cli(i) represents the equation of the ith circle of longitude under the coordinate system Oli(i)xli(i)yli(i)zli(i), and eli(i) represents the elliptic curve of Cli(i) in the image, where i = 1, 2, …, N and N is the number of circles of longitude. Cla(j) is the equation of the jth circle of latitude under the coordinate system Ola(j)xla(j)yla(j)zla(j), and ela(j) represents the elliptic curve of Cla(j) in the image, where j = 1, 2, …, M and M is the number of circles of latitude. The origin and z-axis of every Oli(i)xli(i)yli(i)zli(i) coincide with those of Otxtytzt. The coordinate axes of every Ola(j)xla(j)yla(j)zla(j) are parallel to those of Otxtytzt, but their origins are separated by a fixed offset along the z-axis.

Fig. 1 Schematic of the calibration process.

q = [x, y, z, 1]^T represents the homogeneous 3D coordinates of an arbitrary point on the target under the target coordinate frame Otxtytzt, and p = [u, v, 1]^T represents the undistorted homogeneous image coordinates of q in the image. According to the pinhole camera model, they satisfy the following equation:

ρp = K[R t]q = [ax, γ, u0; 0, ay, v0; 0, 0, 1] [R t]q    (1)
where ρ is a non-zero scale factor, K denotes the intrinsic parameter matrix of the camera, u0 and v0 are the coordinates of the principal point, ax and ay are the scale factors along the image u and v axes, respectively, and γ is the skew of the two image axes. R and t represent the rotation matrix and the translation vector from Otxtytzt to Ocxcyczc, respectively.
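As a quick illustration, Eq. (1) can be evaluated directly in NumPy. The K, R, t, and q values below are made up for this sketch and are not the paper's calibration results:

```python
import numpy as np

def project(K, R, t, q):
    """Project a homogeneous target point q = [x, y, z, 1]^T to undistorted
    pixel coordinates p = [u, v, 1]^T via rho * p = K [R | t] q (Eq. (1))."""
    Rt = np.hstack([R, t.reshape(3, 1)])   # 3x4 extrinsic matrix [R | t]
    s = K @ Rt @ q                         # homogeneous image point, rho * p
    return s / s[2]                        # divide out the scale factor rho

# Illustrative values (not the paper's calibration results):
K = np.array([[1024.0, 0.0, 400.0],
              [0.0, 960.0, 300.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                              # target frame aligned with camera
t = np.array([0.0, 0.0, 500.0])            # target 500 mm in front of the lens
q = np.array([10.0, 20.0, 0.0, 1.0])
p = project(K, R, t, q)                    # undistorted pixel coordinates
```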

3. Method principle

In the calibration process, the initial solutions of the intrinsic and extrinsic parameters are obtained from the elliptic curves of the latitude and longitude circles in the image. Then, a nonlinear optimization using the coordinates of the intersection points of the longitude and latitude lines in the image is performed to obtain the optimum solutions of the intrinsic and extrinsic parameters.

3.1 Extracting and fitting the image of the elliptic curves of the latitude and longitude circles

The Steger method [25] is used to extract the centers of the images of the longitude and latitude lines on the target, and the extracted points are then linked into line segments. Two arbitrary segments are selected, and the first and last points of one segment are compared with those of the other. If the distance between their two nearest neighboring endpoints is less than 4 pixels, and the included angle between the normal vectors (n1, n2 in Fig. 2(a)) of the segments (s1, s2 in Fig. 2(a)) is smaller than a threshold value, the two segments are categorized into the same set of curves. This process is repeated until all segments are categorized into sets of curves. The corresponding elliptic curve equation for each set of curves is then obtained by ellipse fitting [26].
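The grouping rule above can be sketched as a greedy union of segments. The 4-pixel endpoint distance comes from the text; the 10° normal-angle threshold and the segment representation are our own assumptions:

```python
import numpy as np

def angle_between(n1, n2):
    """Acute angle (radians) between two unit normal vectors."""
    return np.arccos(np.clip(abs(np.dot(n1, n2)), -1.0, 1.0))

def group_segments(segments, dist_thresh=4.0, angle_thresh=np.deg2rad(10.0)):
    """Merge extracted segments into curve sets with a union-find.

    Each segment is (first_point, last_point, unit_normal). Two segments are
    put in the same set when their nearest endpoints are closer than
    dist_thresh pixels and their normals differ by less than angle_thresh."""
    n = len(segments)
    parent = list(range(n))

    def find(i):                            # union-find root with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            ends_i = (segments[i][0], segments[i][1])
            ends_j = (segments[j][0], segments[j][1])
            d = min(np.linalg.norm(np.subtract(a, b))
                    for a in ends_i for b in ends_j)
            if d < dist_thresh and \
                    angle_between(segments[i][2], segments[j][2]) < angle_thresh:
                parent[find(i)] = find(j)   # same curve set

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```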

Fig. 2 (a) Included angle between the normal vectors n1 and n2. (b) Angle θ between the long axis and the y-axis in the image.

For each elliptic curve, the long axis length, short axis length, ellipse center, and included angle between the long axis and the y-axis in the image are calculated. Based on these parameters, all elliptic curves are divided into two classes: (1) elliptic curves for which the angle θ (shown in Fig. 2(b)) between the long axis and the y-axis, the long axis length, and the ellipse center are essentially the same, with relative deviations smaller than a threshold value; these belong to the circles of longitude. (2) All other elliptic curves, which belong to the circles of latitude. The process is shown in Fig. 3.

Fig. 3 Extracting and fitting the image of the elliptic curves of the circles of latitude and longitude.

3.2 Obtaining the linear solutions of the intrinsic and extrinsic parameters

3.2.1 Obtaining the linear solutions of the intrinsic camera parameters

According to the geometric basics of projection, all circles in space intersect the absolute conic (AC) [27] at two imaginary points, I = (1, i, 0, 0)^T and J = (1, -i, 0, 0)^T, called the circular points (CPs) [27]; the projections of the CPs in the image are called the images of the circular points (ICPs). Circles that are parallel or coplanar intersect the absolute conic at the same two CPs. According to [27], the projection of the absolute conic (AC) in the image is called the Image of the Absolute Conic (IAC) ω, and the relation between ω and the intrinsic camera parameters is:

ω = K^(-T) K^(-1)    (2)
If six ICPs on the IAC can be obtained, ω can be calculated by fitting these ICPs, and K can then be recovered from ω.

As shown in Fig. 4(a), all latitude circles on the GST are parallel in space, so the intersection points between any latitude circle image ela(j) and ω are the same two ICPs mI and mJ (mI and mJ are complex conjugates). The line lla connecting mI and mJ is the vanishing line [27] of the plane containing a latitude circle, and all latitude circles share the same vanishing line lla in the image. Meanwhile, lla intersects ela(j), and also ω, at mI and mJ. As shown in Fig. 4(b), lli(i) is the vanishing line of the plane containing each longitude circle, and lli(i) intersects eli(i), and also ω, at two ICPs. Thus, if lla and lli(i) are known, the ICPs on ω can be obtained as the intersection points between lli(i) and eli(i), or between lla and ela(j).

Fig. 4 (a) Relationship between ela(j), lla and ω; (b) Relationship between lli(i), ω and eli(i).

Assume that cla(j) = [xla(j), yla(j), zla(j), 1]^T is the center of the latitude circle Cla(j), that ola(j) = [ula(j), vla(j), 1]^T is the undistorted homogeneous coordinate of cla(j) in the image, and that the undistorted homogeneous coordinate of the center of the largest latitude circle in the image is õla(j). Likewise, cli(i) = [xli(i), yli(i), zli(i), 1]^T is the center of a longitude circle Cli(i), and oli(i) = [uli(i), vli(i), 1]^T is the undistorted homogeneous coordinate of cli(i) in the image. The undistorted homogeneous coordinate of the center of the GST in the image is denoted o; this point is also the point oli(i) corresponding to the center of any longitude circle and the point õla(j) corresponding to the center of the largest latitude circle in the image, namely, õla(j) = oli(i) = o.

Assume that the circle (x - a)^2 + (y - b)^2 = r^2 in a space plane has the matrix expression E = [1, 0, -a; 0, 1, -b; -a, -b, a^2 + b^2 - r^2]. The homogeneous coordinate of the circle center in the plane is O = [a, b, 1]^T, and Eq. (3) shows that the product of E and O is a homogeneous vector of the line at infinity [27].

L∞' = E O = [1, 0, -a; 0, 1, -b; -a, -b, a^2 + b^2 - r^2] [a; b; 1] = [0; 0; -r^2] = λ [0; 0; 1] = λ L∞    (3)

where L∞ = [0, 0, 1]^T is the line at infinity and λ is a non-zero factor.

Assume that H is the homography matrix from the plane containing the circle to the camera image plane. In Eqs. (4)-(6), e is the matrix expression of the circle in the image, oc is the image coordinate of O, and l is the image of L∞'.

e = H^(-T) E H^(-1) ⇔ E = H^T e H    (4)
oc = H O    (5)
l = H^(-T) L∞' = λ H^(-T) L∞    (6)

According to Eqs. (4)-(6), Eq. (7) can be obtained,

λl = e oc    (7)
where l is the image of L∞; that is, l is the vanishing line, in the image, of the plane containing the circle.
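Equations (3)-(7) can be checked numerically: for a circle conic E and its center O, the product E O is the line at infinity, and after a homography the product e oc is the vanishing line of the circle's plane. A small NumPy sketch, with illustrative values for a, b, r and a fixed (invertible) H:

```python
import numpy as np

# Circle (x - a)^2 + (y - b)^2 = r^2 as a conic matrix E, center O = [a, b, 1]^T.
a, b, r = 3.0, -2.0, 1.5
E = np.array([[1.0, 0.0, -a],
              [0.0, 1.0, -b],
              [-a, -b, a**2 + b**2 - r**2]])
O = np.array([a, b, 1.0])

# Eq. (3): E O is proportional to the line at infinity [0, 0, 1]^T.
L_inf_img = E @ O
assert np.allclose(L_inf_img[:2], 0.0)

# Map through an invertible (here arbitrary, illustrative) homography H.
H = np.array([[2.0, 0.3, -1.0],
              [0.1, 1.5, 0.7],
              [0.2, -0.4, 1.0]])
e = np.linalg.inv(H).T @ E @ np.linalg.inv(H)    # image of the circle, Eq. (4)
oc = H @ O                                       # image of the center, Eq. (5)

# Eq. (7): l = e oc (up to scale) equals the vanishing line H^(-T) [0, 0, 1]^T.
l = e @ oc
l_expected = np.linalg.inv(H).T @ np.array([0.0, 0.0, 1.0])
```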

Thus a pole–polar relation exists between ola(j) and lla with respect to ela(j), as in Eq. (8), and lla is obtained by solving Eq. (8):

lla = λ ela(j) ola(j)    (8)

where λ is a non-zero constant. Given õla(j) = oli(i) = o, lla may also be obtained from Eq. (9):

lla = λ ẽla(j) o    (9)

where λ is a non-zero constant and ẽla(j) represents the elliptic curve of the largest circle of latitude in the image.

Similarly, a pole–polar relation exists between oli(i) and lli(i) with respect to eli(i), as shown in Eq. (10):

lli(i) = λ eli(i) oli(i)    (10)

where λ is a non-zero constant. Given õla(j) = oli(i) = o, the vanishing line lli(i) can be obtained from Eq. (11):

lli(i) = λ eli(i) o    (11)

In addition, o in Eqs. (9) and (11) can be obtained using the method in [17], where it is shown that if there are three spheres at different positions, the following three equations hold:

{ C2* C1 l12 = (β2/β1) l12;  C3* C1 l13 = (β3/β1) l13;  C2* C3 l23 = (β2/β3) l23 }    (12)

As shown in Fig. 5, C1, C2, and C3 are the contour images of the spheres, C1*, C2*, and C3* are their dual conics, and O1, O2, and O3 are the images of the sphere centers. l12 is an eigenvector of C2*C1 corresponding to the eigenvalue β2/β1, and l12 can be uniquely identified among the eigenvectors of C2*C1 since it is the only line having two real intersection points with both conics. l13 and l23 can likewise be obtained from C3*C1 and C2*C3, respectively. Then O1 can be calculated from the cross product of l12 and l13, and O2 and O3 can be obtained similarly. In this way, o can be obtained from three GST positions.
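The construction quoted from [17] can be sketched as below. The sphere contours are synthesized from dual quadrics; note that numerically we take eigenvectors of Ci Cj* (the transpose of the paper's Cj* Ci, which has the same spectrum), and the sphere positions, radius, and K are illustrative assumptions:

```python
import numpy as np

def sphere_contour(K, S, rho):
    """Image conic (up to scale) of a sphere with center S, radius rho, seen
    by a camera K at the origin: project the dual quadric, then invert."""
    Q = np.block([[np.eye(3), -S.reshape(3, 1)],
                  [-S.reshape(1, 3), np.array([[S @ S - rho ** 2]])]])
    P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    C_dual = P @ np.linalg.inv(Q) @ P.T
    return np.linalg.inv(C_dual)

def meets_two_real_points(l, C):
    """True if the line l cuts the conic C in two real points."""
    _, _, Vt = np.linalg.svd(l.reshape(1, 3))
    p, q = Vt[1], Vt[2]                     # two points spanning the line
    a, b, c = q @ C @ q, p @ C @ q, p @ C @ p
    return b * b - a * c > 0                # positive discriminant: real roots

def center_line(Ci, Cj):
    """Line through the images of the centers of spheres i and j: the unique
    eigenvector-line that meets both contour conics in two real points."""
    vals, vecs = np.linalg.eig(Ci @ np.linalg.inv(Cj))
    for k in range(3):
        if abs(vals[k].imag) > 1e-8:
            continue                        # skip complex eigenvectors
        l = np.real(vecs[:, k])
        if meets_two_real_points(l, Ci) and meets_two_real_points(l, Cj):
            return l
    raise RuntimeError("no qualifying eigenvector")

# Synthetic check: three spheres, well separated in the image (values assumed).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
S1 = np.array([0.2, -0.1, 5.0])
S2 = np.array([2.0, 0.8, 6.0])
S3 = np.array([-1.6, 1.0, 5.5])
C1, C2, C3 = (sphere_contour(K, S, 0.5) for S in (S1, S2, S3))
O1 = np.cross(center_line(C1, C2), center_line(C1, C3))
O1 = O1 / O1[2]                             # image of the first sphere's center
```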

Fig. 5 The relationship between centers and contours of three spheres in the image.

Because lli(i) intersects eli(i) at two ICPs, and lla intersects an arbitrary ela(j) at the same pair of ICPs, shooting at least three longitude circles, or the largest latitude circle combined with two longitude circles, yields enough ICPs to determine ω; the intrinsic camera parameters can then be solved from ω by Cholesky decomposition.
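A sketch of this step under ideal (noise-free) data: ICPs are generated as K(r1 ± i·r2) for random plane orientations (every such point lies on the IAC), a conic is fitted through them by a nullspace solve, and K is recovered from ω by Cholesky decomposition. The intrinsic values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
K_true = np.array([[1024.0, 2.0, 400.0],
                   [0.0, 960.0, 300.0],
                   [0.0, 0.0, 1.0]])

def random_rotation():
    """Orthonormal basis via QR of a Gaussian matrix (only r1, r2 matter here)."""
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    return Q

# ICPs: for a plane with rotation R = [r1 r2 r3], the images of its circular
# points are K (r1 +/- i r2); each lies on the IAC w = K^(-T) K^(-1), Eq. (2).
rows = []
for _ in range(3):
    Rp = random_rotation()
    u, v, w = K_true @ (Rp[:, 0] + 1j * Rp[:, 1])
    row = np.array([u * u, 2 * u * v, v * v, 2 * u * w, 2 * v * w, w * w])
    rows.append(row.real)                   # one complex constraint gives
    rows.append(row.imag)                   # two real equations

# Nullspace (least-squares) solve for the 6 parameters of the symmetric IAC.
_, _, Vt = np.linalg.svd(np.array(rows))
w11, w12, w22, w13, w23, w33 = Vt[-1]
iac = np.array([[w11, w12, w13],
                [w12, w22, w23],
                [w13, w23, w33]])
if iac[0, 0] < 0:                           # fix the overall sign (IAC is PD)
    iac = -iac

# Cholesky iac = L L^T gives K = (L^T)^(-1) up to scale, since iac = K^(-T) K^(-1).
L = np.linalg.cholesky(iac)
K = np.linalg.inv(L.T)
K = K / K[2, 2]
```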

3.2.2 Linear solution of extrinsic camera parameters

Let v1 be the vanishing point, in the image, of the normal direction of the plane Otxtyt, i.e., of the plane containing all circles of latitude of the target. Let v2 be the vanishing point of the normal direction of the plane Otxtzt, i.e., of the plane containing a longitude circle of the target. v1 and v2 can be solved from Eq. (13):

{ lla = λ1 ω v1;  lli(i) = λ2 ω v2 }    (13)
where λ1 and λ2 are non-zero constants; lla is the vanishing line, at the current target position, of the plane Otxtyt containing the largest latitude circle, obtained from Eq. (9); and lli(i) is the vanishing line of the plane Otxtzt containing the selected longitude circle, obtained from Eq. (11). The longitude circle can be chosen freely among the elliptic curves of longitude circles in the captured image without affecting the solution of the extrinsic camera parameters.

Assume that the unit vectors along the z-axis and y-axis of Otxtytzt are d1 = [0, 0, 1]^T and d2 = [0, 1, 0]^T, respectively. The relationships between v1 and d1, and between v2 and d2, are expressed in Eq. (14):

{ v1 = μ1 K R d1;  v2 = μ2 K R d2 }  ⇔  { K^(-1) v1 = μ1 R d1;  K^(-1) v2 = μ2 R d2 }    (14)
where μ1 and μ2 are non-zero constants, and R = [r1, r2, r3] is the rotation matrix from Otxtytzt to Ocxcyczc, with columns r1, r2, r3.

Substituting d1 = [0, 0, 1]^T and d2 = [0, 1, 0]^T into Eq. (14) yields Eq. (15):

{ r3 = ±(K^(-1) v1) / ||K^(-1) v1||;  r2 = ±(K^(-1) v2) / ||K^(-1) v2|| }    (15)
where the plus or minus signs are judged from the solution of t = [tx, ty, tz]^T: if tz is positive, the chosen signs for r2 and r3 are correct; if tz is negative, the signs are reset until tz is positive.

Since r1, r2, and r3 are orthogonal unit vectors, Eq. (16) is obtained:

r1=r2×r3

Then, the rotation matrix R from Otxtytzt to Ocxcyczc is obtained using Eqs. (13)–(16).
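Equations (15) and (16) amount to a few lines of linear algebra. The sketch below fixes both signs to the positive choice (in practice the sign of tz decides, as described above); K and the test rotation are illustrative:

```python
import numpy as np

def rotation_from_vanishing_points(K, v1, v2):
    """Eqs. (15)-(16): r3 from the vanishing point of the z-axis direction,
    r2 from that of the y-axis direction, r1 = r2 x r3. Both signs are fixed
    to the positive choice here; in practice the sign of t_z decides."""
    r3 = np.linalg.solve(K, v1)
    r3 = r3 / np.linalg.norm(r3)
    r2 = np.linalg.solve(K, v2)
    r2 = r2 / np.linalg.norm(r2)
    r1 = np.cross(r2, r3)
    return np.column_stack([r1, r2, r3])

# Synthetic check with an illustrative K and a rotation about the x-axis.
K = np.array([[1000.0, 0.0, 512.0],
              [0.0, 1000.0, 384.0],
              [0.0, 0.0, 1.0]])
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[1.0, 0.0, 0.0],
                   [0.0, c, -s],
                   [0.0, s, c]])
v1 = K @ R_true @ np.array([0.0, 0.0, 1.0])   # vanishing point of d1, Eq. (14)
v2 = K @ R_true @ np.array([0.0, 1.0, 0.0])   # vanishing point of d2
R = rotation_from_vanishing_points(K, v1, v2)
```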

As shown in Fig. 6, the expression of the jth circle of latitude under its coordinate system Ola(j)xla(j)yla(j)zla(j) is assumed to be Cla(j) = diag(1, 1, -dj^2), where dj is the radius of the jth circle of latitude. For the jth circle of latitude Cla(j), the coordinate system Ola(j)xla(j)yla(j)zla(j) is parallel to the target coordinate system Otxtytzt, and only the fixed value D separates them in the z direction. D is an inherent dimension of the target and is a known quantity. Therefore, the rotation matrix R̃ from Ola(j)xla(j)yla(j)zla(j) to Ocxcyczc and the rotation matrix R from Otxtytzt to Ocxcyczc are the same, but the translation vectors t̃ and t differ: t̃ = t + R [0, 0, D]^T, as shown in Fig. 6. Note the sign of D: it is positive on the positive side of the zt-axis and negative on the negative side.

Fig. 6 Schematic of solving the translation vector from Otxtytzt to Ocxcyczc.

According to the foundations of multiple view geometry [27], the relationship between Cla(j) and ela(j) is as below:

Cla(j) = λ H^T ela(j) H    (17)
where λ is a non-zero constant, and H is the homography matrix from the plane containing the jth circle of latitude to the camera image plane.

Substituting H = K[r1, r2, t̃] into Eq. (17):

Cla(j) = λ H^T ela(j) H ⇒ Cla(j) = λ (K[r1, r2, t̃])^T ela(j) (K[r1, r2, t̃]) ⇒ Cla(j) = λ [r1, r2, t̃]^T (K^T ela(j) K) [r1, r2, t̃]    (18)

Simplifying Eq. (18) yields:

-dj^2 = (t̃^T (K^T ela(j) K) t̃) / (r1^T (K^T ela(j) K) r1)    (19)

Equation (19) can be written for an arbitrary circle of latitude. From the projected curves of any three circles of latitude on the target, a set of no fewer than three equations of the form of Eq. (19) is obtained, and the translation vector t from Otxtytzt to Ocxcyczc can be solved by the least squares method.
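Solving this step by least squares can be sketched with synthetic data: three latitude circles are projected through the homography of Eq. (17), and t is recovered from the Eq. (19) constraints with scipy. All numeric values (K, pose, radii dj, offsets Dj) are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic setup (illustrative values): camera K, pose (R, t_true), and three
# latitude circles, the j-th of radius d_j in a plane offset by D_j along z_t.
K = np.array([[1000.0, 0.0, 512.0],
              [0.0, 1000.0, 384.0],
              [0.0, 0.0, 1.0]])
c, s = np.cos(0.4), np.sin(0.4)
R = np.array([[c, 0.0, s],
              [0.0, 1.0, 0.0],
              [-s, 0.0, c]])
t_true = np.array([20.0, -10.0, 500.0])
radii = [40.0, 50.0, 45.0]                  # d_j (mm)
offsets = [30.0, 0.0, -25.0]                # D_j (mm)

ellipses = []
for d, D in zip(radii, offsets):
    t_j = t_true + R @ np.array([0.0, 0.0, D])
    H = K @ np.column_stack([R[:, 0], R[:, 1], t_j])
    C = np.diag([1.0, 1.0, -d * d])         # latitude circle in its own plane
    e = np.linalg.inv(H).T @ C @ np.linalg.inv(H)   # its image, cf. Eq. (17)
    ellipses.append(e / np.linalg.norm(e))

def residuals(t):
    """Eq. (19) for each latitude circle: with A = K^T e K and
    t~ = t + R [0, 0, D]^T, the quantity t~^T A t~ / (r1^T A r1) + d^2
    vanishes at the true translation."""
    res = []
    for d, D, e in zip(radii, offsets, ellipses):
        A = K.T @ e @ K
        tt = t + R @ np.array([0.0, 0.0, D])
        res.append(tt @ A @ tt / (R[:, 0] @ A @ R[:, 0]) + d * d)
    return res

t0 = t_true + np.array([5.0, -4.0, 10.0])   # perturbed initial guess
sol = least_squares(residuals, t0)
```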

3.3 Non-linear Optimization

The 3D coordinates of the intersection points on the GST under the target coordinate system are known exactly; using these points to solve the camera intrinsic parameters therefore improves the calibration accuracy.

The nonlinear optimization is a two-stage process. At the first stage, refined solutions of the intrinsic and extrinsic parameters are obtained from the curves of the longitude and latitude lines, and the serial numbers of the intersection points are identified by perspective projection positioning. At the second stage, the identified intersection points of the latitude and longitude lines are introduced into the objective function, and the optimum solutions of the intrinsic and extrinsic parameters are solved by nonlinear optimization.

At the first stage, the objective function is built on the minimum difference between the projected curve of each circle of latitude and the curve fitted to the extracted image points after image distortion correction, as expressed in Eq. (20):

f(a) = min Σ_{l=1}^{L} Σ_{j=1}^{N} || ela(jl) - Hla(jl)^(-T) Cla(jl) Hla(jl)^(-1) ||    (20)
where a = [K, k1, k2, Rl, tl]; k1 and k2 are the lens distortion coefficients; l = 1, 2, …, L, where L is the number of times the target is placed; N is the total number of elliptic curves of latitude circles extracted at each target position; ela(jl) is the elliptic curve corresponding to the jth latitude circle in the image acquired at the lth target position, fitted after image distortion correction; Hla(jl) is the homography matrix from the plane containing the jth latitude circle to the camera image plane; and Cla(jl) is the expression of the jth latitude circle under the coordinate system Ola(j)xla(j)yla(j)zla(j).

At the second stage, the identified intersection points of the latitude and longitude lines are introduced into the objective function for optimization.

The objective function is established on the minimum difference between the projected points of the intersection points on the target and the corresponding image intersection points after image distortion correction, as shown in Eq. (21):

f(b) = min Σ_{j=1}^{m} Σ_{i=1}^{n} d(p̂ij, pij)    (21)
where b = [K, k1, k2, Rj, tj]; p̂ij is the projected point of the ith intersection point at the jth target position; pij is the undistorted image coordinate of the corresponding extracted intersection point; n is the number of intersection points extracted at each target position; m is the number of times the target is placed; and d(p̂ij, pij) is the distance between p̂ij and pij.

According to the objective function in Eq. (21), the nonlinear optimum solutions of the intrinsic and extrinsic parameters are obtained by the Levenberg–Marquardt (LM) method.
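A sketch of this refinement with scipy's LM solver, under simplifying assumptions: a single target position with known R and t, a standard two-coefficient radial distortion model (the paper's exact distortion parameterization may differ), and synthetic intersection points:

```python
import numpy as np
from scipy.optimize import least_squares

def project_points(params, R, t, Q):
    """Project target points Q (Nx3) with intrinsics and radial distortion
    packed as params = [fx, fy, u0, v0, k1, k2] (skew omitted for brevity)."""
    fx, fy, u0, v0, k1, k2 = params
    Xc = Q @ R.T + t                        # points in the camera frame
    x = Xc[:, 0] / Xc[:, 2]
    y = Xc[:, 1] / Xc[:, 2]
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2   # two-coefficient radial model
    return np.column_stack([fx * x * factor + u0, fy * y * factor + v0])

# Synthetic grid intersection points and their observed images (values assumed).
rng = np.random.default_rng(2)
Q = np.column_stack([rng.uniform(-150.0, 150.0, 64),
                     rng.uniform(-150.0, 150.0, 64),
                     rng.uniform(-20.0, 20.0, 64)])
c, s = np.cos(0.2), np.sin(0.2)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t = np.array([5.0, -8.0, 600.0])
true_params = np.array([1200.0, 1180.0, 640.0, 480.0, -0.05, 0.01])
observed = project_points(true_params, R, t, Q)

def residuals(params):
    """Eq. (21): stacked differences between projected and observed points."""
    return (project_points(params, R, t, Q) - observed).ravel()

x0 = true_params + np.array([30.0, -25.0, 10.0, -8.0, 0.02, -0.005])
sol = least_squares(residuals, x0, method="lm")   # Levenberg-Marquardt
```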

4. Experimental results

The effectiveness of the method is demonstrated in simulation and physical experiments. The simulation experiments focus on the root-mean-square errors (RMSEs) of the results, relative to the true values, of the GST-based method at different noise levels, compared with Zhang's camera calibration method (Zhang's) and the sphere contour (SC)-based method. The physical experiments compare the GST-based and SC-based methods against the results of Zhang's.

4.1 Simulation experiments

Simulation experiments are designed to analyze the calibration accuracy of the GST-based method, Zhang's, and the SC-based method at different noise levels. The GST has a diameter of 120 mm, with longitude/latitude intervals of 20/15 degrees, and 6 × 6 feature points are selected for nonlinear optimization. Zhang's adopts a planar target of 6 × 6 feature points, with a horizontal spacing of 10 mm and a vertical spacing of 16 mm between adjacent feature points. The diameter of the SC target is 100 mm, and the clear imaging range of the camera is 180–230 mm from the lens. The targets are placed within this range, and the intrinsic parameters of the simulated camera are ax = 1024, ay = 960, u0 = 400, v0 = 300, with γ = 0. The lens distortion coefficients are k1 = 0.1 and k2 = 0.08.

For Zhang's and the GST-based method, the targets are placed 10 times each. The SC-based method adopts the SDP method in [28], by which the camera intrinsic parameters are solved from 10 spherical targets at different positions. Each method is tested 100 times at each noise level. The noise is added to the simulated ideal extracted points on the latitude and longitude circles in the image, so the calibration results in the simulation experiments include the effect of the curve fitting method under noise.

The three calibration methods are assessed according to the camera calibration results, RMSEs relative to the true value, and reprojection errors of the feature points of the target on the image plane.

In Fig. 7, the RMSEs of fx, fy, u0, and v0 obtained by the SC-based method are about two to three times greater than those of Zhang's and the GST-based method, while the RMSEs of Zhang's and the proposed method are basically the same. In terms of the RMSEs of k1 and k2, the GST-based method is slightly poorer than Zhang's: the RMSEs obtained by Zhang's are about 2/3 of those of the proposed method.

Fig. 7 RMSEs of the intrinsic parameters based on Zhang’s method, GST-based method and SC-based method.

The reprojection error of the target feature points on the image plane refers to the error between the feature points projected onto the image coordinate system, on the basis of the calibration results and the camera model, and the coordinates of the feature points extracted in the image. Figure 8 shows the reprojection errors in the u and v directions obtained by the GST-based method and Zhang's, where u is the x-coordinate and v is the y-coordinate under the image coordinate system. As Fig. 8 shows, like that of Zhang's method, the reprojection error of the GST-based method increases linearly with the noise level.

Fig. 8 Reprojection errors of the target points. (a) Reprojection errors in the u direction; (b) Reprojection errors in the v direction.

4.2 Physical experiments

In the physical experiments, the effectiveness of the GST is validated by comparing the calibration errors of the GST-based and SC-based methods relative to Zhang's. The focal length of the camera lens is 16 mm, the clear imaging range is 300–400 mm, and the image resolution is 1628 × 1236 pixels. The planar checkerboard target for calibration has 10 × 10 feature points, with horizontal/vertical spacing of 17 mm between adjacent feature points and a target accuracy of 0.1 mm. The diameter of the GST is 100 mm, and the longitude interval is 20 degrees. The latitude lines are arranged with a uniform vertical interval of 10 mm, and the accuracy is 0.1 mm. The diameter of the spherical target in the SC-based method is 50 mm.

The physical site map is shown in Figs. 9(a)-9(c), where Fig. 9(a) represents a camera using the checkerboard target for calibration, Fig. 9(b) represents a camera using the GST for calibration, and Fig. 9(c) represents a camera using the spherical target for calibration.

Fig. 9 Physical map of calibration setup. (a) Zhang’s method; (b) GST-based method; (c) SC-based method.

The Steger method has been used in [29–31]. In [31], it is used in rail wear measurement, and Section 4.1 of [29] shows that the light stripe can be extracted with similar accuracy in good, dim, and strong light environments. To verify this further, Figs. 10(a) and 10(b) show two sets of experiments on curve fitting under different lighting conditions. The points on the latitude and longitude lines in the image are first extracted by the Steger method and then fitted by the curve fitting method. Figures 10(a) and 10(b) show that both sets of experiments achieve the same curve fitting results under uniformly dark and uniformly bright lighting. Under non-uniform lighting, however, the curves cannot be fitted well because fewer points are extracted on the latitude or longitude circles. Therefore, in an actual calibration scene, the lighting should be relatively uniform to obtain high-accuracy curve fitting.

Fig. 10 Two sets of experiments about curve fitting in dark, bright and non-uniform lighting situation. (a) Set 1; (b) Set 2.

During the experiment, the planar target, GST, and spherical target are each placed 10 times, at distances of about 300–400 mm from the lens. Zhang's method directly adopts the Matlab toolbox in [32] for calibration. Given the field of view (F.O.V.) of the camera, 6 × 7 or 7 × 7 feature points of the target are extracted from the image at every position (as shown in Fig. 11(a)). For the GST method, the number of intersection points extracted at every position is 64 or 72, owing to the extraction of light stripes; a target image is shown in Fig. 11(b). For the spherical target, the contour is extracted by the Canny algorithm [33] and the SDP method in [17] is adopted, with the result solved from 10 spheres; a target image is shown in Fig. 11(c).

Fig. 11 Images used for calibration. (a) Checkerboard target images; (b) GST images; (c) Sphere images.

Because of the high machining cost of a high-accuracy GST, a GST with an accuracy of 0.1 mm is used here to verify the effectiveness of the proposed method. Accordingly, a checkerboard target of the same accuracy is used for the comparison test.

The calibration results are shown in Table 1. Zhang's method is extensively applied and highly accurate; therefore, taking the results of Zhang's as the true values, the calibration errors of the GST-based and SC-based methods relative to Zhang's are analyzed.

Table 1. Comparison of the Intrinsic Parameters

The GST-based method and Zhang's are further analyzed through their reprojection errors. As shown in Figs. 12(a) and 12(b), their reprojection errors are 0.21 pixels. Figures 12(c) and 12(d) show the statistical distributions of the reprojection errors of the two methods. The reprojection error of Zhang's is basically within 0–0.5 pixels, whereas that of the GST-based method is within 0–0.8 pixels; however, the reprojection errors of the GST-based method are concentrated more within the range of 0–0.2 pixels than those of Zhang's.

Fig. 12 Reprojection errors of the target feature points. (a) Reprojection error distribution based on Zhang’s method; (b) Reprojection error distribution based on the GST method; (c) Statistics of the reprojection errors based on Zhang’s method; (d) Statistics of the reprojection errors based on the GST method.

To further demonstrate the effectiveness of the proposed method, repeatability tests of the three methods are conducted. The spherical target, planar target, and GST are each placed 10 times, with 10 target images captured at different positions. From these 10 images, ten sets of images are chosen for camera calibration, each set containing seven images. Figures 13(a)–13(d) show the box charts and fitted curves for the statistical analysis of the calibration results of the 10 sets of images using the planar target, GST, and SC.

Fig. 13 Repeatability results analysis based on the three methods. (a) Repeatability of fx based on the three methods; (b) Repeatability of fy based on the three methods; (c) Repeatability of u0 based on the three methods; (d) Repeatability of fx based on Zhang’s method and GST; (e) Repeatability of fy based on Zhang’s method and GST; (f) Repeatability of u0 based on Zhang’s method and GST; (g) Repeatability of v0 based on the three methods; (h) Repeatability of r based on the three methods; (i) Repeatability of v0 based on Zhang’s method and GST; (j) Repeatability of r based on Zhang’s method and GST.

Figures 13(a) and 13(b) show the statistical results of the effective focal length obtained by the three methods. To clarify the comparison, Figs. 13(d) and 13(e) enlarge the results of Zhang’s method and the GST. In the figures, the black asterisks indicate the results of the 10 image sets for each method. First, along the ordinate direction, the box of the SC is the longest among the three methods, indicating the highest data dispersion, and the mean values (marked with purple dotted horizontal lines) of the SC-based method differ greatly from those of the other two methods. As shown in Figs. 13(d) and 13(e), along the ordinate direction, the box length of the GST is essentially the same as that of Zhang’s method. However, according to the fitted normal distribution curves, their mean values differ by 2.5 and 10 pixels, respectively, while their degrees of dispersion are similar. Figures 13(c) and 13(g) compare the principal point coordinates obtained by the three methods. From the figures, the dispersion of the SC-based method is high, and its mean values are greater than those of the GST and Zhang’s method. According to the normal distributions and boxes shown in Fig. 13(f), the mean value of u0 differs by no more than five pixels between Zhang’s method and the GST. The normal distributions and boxes in Fig. 13(i) show similar dispersion of v0, but a large difference of about 20 pixels between the mean values. Figure 13(h) shows the statistical results of r obtained by the three methods; the dispersion and deviation of the SC-based method are larger than those of the other two methods. In Fig. 13(j), the mean values of r obtained by Zhang’s method and the GST differ by about 2.5, but their dispersions are similar.
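The repeatability quantities summarized by a box chart (mean, dispersion, and quartiles over the 10 calibration runs) can be sketched with the standard library; the ten fx values below are synthetic stand-ins for the calibration results.

```python
# Sketch: box-chart statistics for one intrinsic parameter over repeated
# calibration runs. The fx values are hypothetical, not measured results.
import statistics

fx_runs = [2401.1, 2399.4, 2402.8, 2400.2, 2398.7,
           2401.9, 2400.6, 2399.9, 2402.1, 2400.4]

mean = statistics.fmean(fx_runs)                  # center of the distribution
std = statistics.stdev(fx_runs)                   # sample standard deviation
q1, q2, q3 = statistics.quantiles(fx_runs, n=4)   # box edges and median

print(f"mean={mean:.2f}, std={std:.2f}, median={q2:.2f}, "
      f"IQR=[{q1:.2f}, {q3:.2f}]")
```

A shorter box (smaller interquartile range) and a smaller standard deviation correspond to the lower dispersion discussed above.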

Figures 14(a) and 14(b) compare the reprojection errors in the u and v directions. In the u direction, the mean reprojection errors of the GST and Zhang’s method differ by 0.05 pixels, and the dispersion of the GST is lower than that of Zhang’s. In the v direction, the mean reprojection errors differ by 0.015 pixels, and the dispersion of the GST is slightly lower than that of Zhang’s. Thus, the repeatability of the calibration results shows that the GST method performs similarly to Zhang’s method.
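Splitting the reprojection residuals by image axis, as in Fig. 14, amounts to averaging the absolute u and v components separately. The residual lists below are synthetic examples, not the experimental data.

```python
# Sketch: per-axis mean absolute reprojection residuals for two methods.
# Residual tuples are (du, dv) in pixels; all values here are hypothetical.
def axis_stats(residuals):
    """Mean absolute residual along each image axis (u, v)."""
    n = len(residuals)
    mean_u = sum(abs(du) for du, _ in residuals) / n
    mean_v = sum(abs(dv) for _, dv in residuals) / n
    return mean_u, mean_v

gst_res   = [(0.10, -0.05), (-0.08, 0.04), (0.12, -0.06)]
zhang_res = [(0.15, -0.07), (-0.13, 0.05), (0.17, -0.06)]

gu, gv = axis_stats(gst_res)
zu, zv = axis_stats(zhang_res)
print(f"u: GST {gu:.3f} vs Zhang {zu:.3f} px; "
      f"v: GST {gv:.3f} vs Zhang {zv:.3f} px")
```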

Fig. 14 Reprojection errors of the target feature points.

5. Conclusions

This paper presents a camera calibration method based on the GST. By combining the latitude and longitude circles with their intersection points, the method calibrates both the intrinsic and extrinsic parameters. Considering target machining errors, the 3D coordinates of the intersection points are introduced into the nonlinear optimization as parameters to be optimized, thereby further improving the calibration accuracy. The proposed method combines the strengths of spherical target-based and planar target-based calibration while overcoming their respective shortcomings, and it achieves the same calibration accuracy as Zhang’s method. In simulation and physical experiments, the method is compared with the current mainstream calibration methods, namely, Zhang’s method and the SC-based method, and the results prove its effectiveness. Meanwhile, the GST introduced in this method has important application value in the calibration of vision measuring systems, including structured-light vision sensors and multi-camera systems without a common field of view.

Funding

National Natural Science Foundation of China (NSFC) (51575033); National Key Scientific Instrument and Equipment Development Projects of China (2012YQ140032); Central University in Beijing Major Achievements Transformation Project (Online Dynamic Detection System for Train Pantograph and Catenary).

References and links

1. A. S. Poulin-Girard, S. Thibault, and D. Laurendeau, “Influence of camera calibration conditions on the accuracy of 3D reconstruction,” Opt. Express 24(3), 2678–2686 (2016). [CrossRef]   [PubMed]  

2. B. Sun, J. G. Zhu, L. H. Yang, S. R. Yang, and Z. Y. Niu, “Calibration of line-scan cameras for precision measurement,” Appl. Opt. 55(25), 6836–6843 (2016). [CrossRef]   [PubMed]  

3. W. M. Li, S. Y. Shan, and H. Liu, “High-precision method of binocular camera calibration with a distortion model,” Appl. Opt. 56(8), 2368–2377 (2017). [CrossRef]   [PubMed]  

4. R. Y. Tsai, “A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV camera and lenses,” IEEE J. Robot. Autom. 3(4), 323–344 (1987). [CrossRef]  

5. J. Heikkila, “Geometric camera calibration using circular control points,” IEEE Trans. Pattern Anal. Mach. Intell. 22(10), 1066–1077 (2000). [CrossRef]  

6. J. Y. Weng, P. Cohen, and M. Herniou, “Camera calibration with distortion model and accuracy evaluation,” IEEE Trans. Pattern Anal. Mach. Intell. 14(10), 965–980 (1992). [CrossRef]  

7. Z. Y. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000). [CrossRef]  

8. Z. Liu, Q. Wu, X. Chen, and Y. Yin, “High-accuracy calibration of low-cost camera using image disturbance factor,” Opt. Express 24(21), 24321–24336 (2016). [CrossRef]   [PubMed]  

9. J. S. Kim, P. Gurdjos, and I. S. Kweon, “Geometric and algebraic constraints of projected concentric circles and their applications to camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 27(4), 637–642 (2005). [CrossRef]   [PubMed]  

10. Y. H. Wu, X. J. Li, F. C. Wu, and Z. Y. Hu, “Coplanar circles, quasi-affine invariance and calibration,” Image Vis. Comput. 24(4), 319–326 (2006). [CrossRef]  

11. D. Douxchamps and K. Chihara, “High-accuracy and robust localization of large control markers for geometric camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 31(2), 376–383 (2009). [CrossRef]   [PubMed]  

12. Z. Y. Zhang, “Camera calibration with one-dimensional objects,” IEEE Trans. Pattern Anal. Mach. Intell. 26(7), 892–899 (2004). [CrossRef]   [PubMed]  

13. F. C. Wu, Z. Y. Hu, and H. J. Zhu, “Camera calibration with moving one-dimensional objects,” Pattern Recognit. 38(5), 755–765 (2005). [CrossRef]  

14. F. Qi, Q. H. Li, Y. P. Luo, and D. C. Hu, “Camera calibration with one-dimensional objects moving under gravity,” Pattern Recognit. 40(1), 343–345 (2007). [CrossRef]  

15. M. Agrawal and L. Davis, “Complete camera calibration using spheres: a dual-space approach,” Analysis Int. Math. J. Analysis Appl. 34(3), 257–282 (2003).

16. H. Teramoto and G. Xu, “Camera calibration by a single image of balls: from conic to the absolute conic,” in Proceedings of the Asian Conference on Computer Vision (ACCV, 2002), pp. 499–506.

17. H. Zhang, K.-Y. K. Wong, and G. Zhang, “Camera calibration from images of spheres,” IEEE Trans. Pattern Anal. Mach. Intell. 29(3), 499–503 (2007). [CrossRef]   [PubMed]  

18. X. H. Ying and H. B. Zha, “Geometric interpretations of the relation between the image of the absolute conic and sphere images,” IEEE Trans. Pattern Anal. Mach. Intell. 28(12), 2031–2036 (2006). [CrossRef]   [PubMed]  

19. K.-Y. K. Wong, G. Q. Zhang, and Z. H. Chen, “A stratified approach for camera calibration using spheres,” IEEE Trans. Image Process. 20(2), 305–316 (2011). [CrossRef]   [PubMed]  

20. M. H. Ruan and D. Huber, “Calibration of 3D Sensors Using a Spherical Target,” in Proceedings of the International Conference on 3D Vision (IEEE, 2015), pp. 187–193.

21. J. H. Sun, H. B. He, and D. B. Zeng, “Global Calibration of Multiple Cameras Based on Sphere Targets,” Sensors (Basel) 16(1), 77 (2016). [CrossRef]   [PubMed]  

22. K. K. Wong, P. R. S. Mendonca, and R. Cipolla, “Camera calibration from surfaces of revolution,” IEEE Trans. Pattern Anal. Mach. Intell. 25(2), 147–161 (2003). [CrossRef]  

23. Y. H. Wu, H. J. Zhu, Z. Y. Hu, and F. C. Wu, “Camera Calibration from the Quasi-Affine invariance of two parallel circles,” in Proceedings of European Conference on Computer Vision (ECCV, 2004), pp. 190–202.

24. C. Colombo, D. Comanducci, and A. D. Bimbo, “Camera calibration with two arbitrary coaxial circles,” in Proceedings of European Conference on Computer Vision (ECCV, 2006), pp. 265–276. [CrossRef]  

25. C. Steger, “An Unbiased Detector of Curvilinear Structures,” IEEE Trans. Pattern Anal. Mach. Intell. 20(2), 113–125 (1998). [CrossRef]  

26. R. Halir and J. Flusser, “Numerically Stable Direct Least Squares Fitting of Ellipses,” in Proceedings of the International Conference in Central Europe on Computer Graphics and Visualization (WSCG, 1998).

27. R. I. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision (Cambridge University, 2000).

28. M. Agrawal and L. S. Davis, “Camera calibration using spheres: a semi-definite programming approach,” in Proceedings of the IEEE International Conference on Computer Vision (IEEE, 2003), pp. 782–789. [CrossRef]  

29. Z. Liu, X. J. Li, and Y. Yin, “On-site calibration of line-structured light vision sensor in complex light environments,” Opt. Express 23(23), 29896–29911 (2015). [CrossRef]   [PubMed]  

30. Z. Liu, X. J. Li, F. J. Li, and G. J. Zhang, “Calibration method for line-structured light vision sensor based on a single ball target,” Opt. Lasers Eng. 69, 20–28 (2015). [CrossRef]  

31. Z. Liu, F. J. Li, B. K. Huang, and G. J. Zhang, “Real-time and accurate rail wear measurement method and experimental analysis,” J. Opt. Soc. Am. A 31(8), 1721–1729 (2014). [CrossRef]   [PubMed]  

32. J. Y. Bouguet, “The MATLAB open source calibration toolbox,” http://www.vision.caltech.edu/bouguetj/calib_doc/.

33. J. Canny, “A computational approach to edge detection,” IEEE Trans. Pattern Anal. Mach. Intell. 8(6), 679–698 (1986). [CrossRef]   [PubMed]  

[Crossref]

J. Canny, “A computational approach to edge detection,” IEEE Trans. Pattern Anal. Mach. Intell. 8(6), 679–698 (1986).
[Crossref] [PubMed]

Image Vis. Comput. (1)

Y. H. Wu, X. J. Li, F. C. Wu, and Z. Y. Hu, “Coplanar circles, quasi-affine invariance and calibration,” Image Vis. Comput. 24(4), 319–326 (2006).
[Crossref]

J. Opt. Soc. Am. A (1)

Opt. Express (3)

Opt. Lasers Eng. (1)

Z. Liu, X. J. Li, F. J. Li, and G. J. Zhang, “Calibration method for line-structured light vision sensor based on a single ball target,” Opt. Lasers Eng. 69, 20–28 (2015).
[Crossref]

Pattern Recognit. (2)

F. C. Wu, Z. Y. Hu, and H. J. Zhu, “Camera calibration with moving one-dimensional objects,” Pattern Recognit. 38(5), 755–765 (2005).
[Crossref]

F. Qi, Q. H. Li, Y. P. Luo, and D. C. Hu, “Camera calibration with one-dimensional objects moving under gravity,” Pattern Recognit. 40(1), 343–345 (2007).
[Crossref]

Sensors (Basel) (1)

J. H. Sun, H. B. He, and D. B. Zeng, “Global Calibration of Multiple Cameras Based on Sphere Targets,” Sensors (Basel) 16(1), 77 (2016).
[Crossref] [PubMed]

Other (8)

Y. H. Wu, H. J. Zhu, Z. Y. Hu, and F. C. Wu, “Camera Calibration from the Quasi-Affine invariance of two parallel circles,” in Proceedings of European Conference on Computer Vision (ECCV, 2004), pp.190–202.

C. Colombo, D. Comanducci, and A. D. Bimbo, “Camera calibration with two arbitrary coaxial circles,” in Proceedings of European Conference on Computer Vision (ECCV, 2006), pp. 265–276.
[Crossref]

R. Halir and J. Flusser, “Numerically Stable Direct Least Squares Fitting of Ellipses,” (1998).

R. I. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision (Cambridge University, 2000).

M. Agrawal and L. S. Davis, “Camera calibration using spheres: a semi-definite programming approach,” in Proceedings of the IEEE International Conference on Computer Vision (IEEE, 2003), pp.782–789.
[Crossref]

J. Y. Bouguet, “The MATLAB open source calibration toolbox,” http://www.vision.caltech.edu/bouguetj/calib _doc/ .

H. Teramoto and G. Xu, “Camera calibration by a single image of balls: from conic to the absolute conic,” in Proceedings of the Asian Conference on Computer Vision (ACCV, 2002), pp. 499-506.

M. H. Ruan and D. Huber, “Calibration of 3D Sensors Using a Spherical Target,” International Conference on 3D Vision (IEEE, 2015), pp. 187–193.



Figures (14)

Fig. 1 Schematic of the calibration process.
Fig. 2 (a) Included angle between the normal vectors n1 and n2. (b) Angle θ between the long axis and the y-axis in the image.
Fig. 3 Extracting and fitting the image of the elliptic curves of the circles of latitude and longitude.
Fig. 4 (a) Relationship between e_la, l_la, and ω; (b) relationship between l_li(i), ω, and e_li(i).
Fig. 5 Relationship between the centers and contours of three spheres in the image.
Fig. 6 Schematic of solving the translation vector from O_t x_t y_t z_t to O_c x_c y_c z_c.
Fig. 7 RMSEs of the intrinsic parameters based on Zhang's method, the GST-based method, and the SC-based method.
Fig. 8 Reprojection errors of the target points. (a) Reprojection errors in the u direction; (b) reprojection errors in the v direction.
Fig. 9 Physical setup for calibration. (a) Zhang's method; (b) GST-based method; (c) SC-based method.
Fig. 10 Two sets of curve-fitting experiments under dark, bright, and non-uniform lighting conditions. (a) Set 1; (b) Set 2.
Fig. 11 Images used for calibration. (a) Checkerboard target images; (b) GST images; (c) sphere images.
Fig. 12 Reprojection errors of the target feature points. (a) Reprojection error distribution based on Zhang's method; (b) reprojection error distribution based on the GST method; (c) statistics of the reprojection errors based on Zhang's method; (d) statistics of the reprojection errors based on the GST method.
Fig. 13 Repeatability analysis based on the three methods. (a) Repeatability of fx based on the three methods; (b) repeatability of fy based on the three methods; (c) repeatability of u0 based on the three methods; (d) repeatability of fx based on Zhang's method and the GST; (e) repeatability of fy based on Zhang's method and the GST; (f) repeatability of u0 based on Zhang's method and the GST; (g) repeatability of v0 based on the three methods; (h) repeatability of r based on the three methods; (i) repeatability of v0 based on Zhang's method and the GST; (j) repeatability of r based on Zhang's method and the GST.
Fig. 14 Reprojection errors of the target feature points.

Tables (1)

Table 1 Comparison of the Intrinsic Parameters

Equations (21)


$$\rho p = K[R\;\;t]\,q = \begin{bmatrix} a_x & \gamma & u_0 \\ 0 & a_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}[R\;\;t]\,q \tag{1}$$
$$\omega = K^{-T}K^{-1} \tag{2}$$
$$L = EO = \begin{bmatrix} 1 & 0 & -a \\ 0 & 1 & -b \\ -a & -b & a^2+b^2-r^2 \end{bmatrix}\begin{bmatrix} a \\ b \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ -r^2 \end{bmatrix} = \lambda\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} = \lambda L_\infty \tag{3}$$
$$e = H^{-T}EH^{-1} \;\Leftrightarrow\; E = H^{T}eH \tag{4}$$
$$o_c = HO \tag{5}$$
$$l = H^{-T}L = \lambda H^{-T}L_\infty \tag{6}$$
$$\lambda l = e\,o_c \tag{7}$$
$$l_{la} = \lambda\, e_{la(j)}\, o_{la(j)} \tag{8}$$
$$l_{la} = \lambda\, \tilde{e}_{la(j)}\, o \tag{9}$$
$$l_{li(i)} = \lambda\, e_{li(i)}\, o_{li(i)} \tag{10}$$
$$l_{li(i)} = \lambda\, e_{li(i)}\, o \tag{11}$$
$$\begin{cases} C_2 C_1^{-1}\, l_{12} = \dfrac{\beta_2}{\beta_1}\, l_{12} \\[4pt] C_3 C_1^{-1}\, l_{13} = \dfrac{\beta_3}{\beta_1}\, l_{13} \\[4pt] C_2 C_3^{-1}\, l_{23} = \dfrac{\beta_2}{\beta_3}\, l_{23} \end{cases} \tag{12}$$
$$\begin{cases} l_{la} = \lambda_1\, \omega\, v_1 \\ l_{li(i)} = \lambda_2\, \omega\, v_2 \end{cases} \tag{13}$$
$$\begin{cases} v_1 = \mu_1 K R d_1 \\ v_2 = \mu_2 K R d_2 \end{cases} \;\Rightarrow\; \begin{cases} K^{-1}v_1 = \mu_1 R d_1 \\ K^{-1}v_2 = \mu_2 R d_2 \end{cases} \tag{14}$$
$$\begin{cases} r_3 = \pm\,(K^{-1}v_1)/\|K^{-1}v_1\| \\ r_2 = \pm\,(K^{-1}v_2)/\|K^{-1}v_2\| \end{cases} \tag{15}$$
$$r_1 = r_2 \times r_3 \tag{16}$$
$$C_{la(j)} = \lambda H^{T} e_{la(j)} H \tag{17}$$
$$C_{la(j)} = \lambda H^{T} e_{la(j)} H = \lambda\big(K[r_1, r_2, \tilde{t}\,]\big)^{T} e_{la(j)}\, K[r_1, r_2, \tilde{t}\,] = \lambda [r_1, r_2, \tilde{t}\,]^{T}\big(K^{T} e_{la(j)} K\big)[r_1, r_2, \tilde{t}\,] \tag{18}$$
$$-d_j^{\,2} = \tilde{t}^{\,T}\big(K^{T} e_{la(j)} K\big)\tilde{t} \,\Big/\, \Big(r_1^{T}\big(K^{T} e_{la(j)} K\big) r_1\Big) \tag{19}$$
$$f(a) = \min\Big( \sum_{l=1}^{L} \sum_{j=1}^{N} \big\| e_{la(j_l)} - H_{la(j_l)}\, C_{la(j_l)} \big\| \Big) \tag{20}$$
$$f(b) = \min\Big( \sum_{j=1}^{m} \sum_{i=1}^{n} d\big(\hat{p}_{ij}, p_{ij}\big) \Big) \tag{21}$$
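The rotation-recovery step among the equations above (normalize K⁻¹v₁ and K⁻¹v₂ to obtain r₃ and r₂, then take r₁ = r₂ × r₃) can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the intrinsic matrix K, the rotation R, and the direction vectors d₁, d₂ below are made-up test values, and the ± sign ambiguity noted in the equations is resolved by construction here.

```python
import numpy as np

# Hypothetical intrinsic matrix K (a_x, a_y: focal lengths; u0, v0: principal point).
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 512.0],
              [0.0, 0.0, 1.0]])

def rotation_from_vanishing_points(K, v1, v2):
    """Recover a rotation matrix from two vanishing points of orthogonal directions.

    From K^{-1} v = mu * R d: normalizing K^{-1} v1 and K^{-1} v2 gives two
    columns of R (r3 and r2); the third column is their cross product,
    r1 = r2 x r3. The +/- sign ambiguity is ignored in this sketch.
    """
    Kinv = np.linalg.inv(K)
    r3 = Kinv @ v1
    r3 /= np.linalg.norm(r3)
    r2 = Kinv @ v2
    r2 /= np.linalg.norm(r2)
    r1 = np.cross(r2, r3)
    return np.column_stack([r1, r2, r3])

# Build a test case from a known rotation so the round trip can be checked.
# Directions d1 = z-axis and d2 = y-axis of the target frame (illustrative choice).
angle = np.deg2rad(25.0)
R_true = np.array([[np.cos(angle), 0.0, np.sin(angle)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(angle), 0.0, np.cos(angle)]])
v1 = K @ R_true @ np.array([0.0, 0.0, 1.0])  # vanishing point of d1
v2 = K @ R_true @ np.array([0.0, 1.0, 0.0])  # vanishing point of d2
R = rotation_from_vanishing_points(K, v1, v2)
```

With noise-free vanishing points this recovers R exactly; with real ellipse-fitting noise, the resulting matrix is only approximately orthogonal and would typically be re-projected onto SO(3) before the nonlinear refinement stage.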
