Abstract

With a paracatadioptric camera, according to the projective antipodal properties of the space sphere in the unit sphere model, two infinity points and a symmetric axis can be obtained. On the image plane, the image of the symmetric axis is the vanishing point’s polar with respect to the image of the absolute conic (IAC), and an infinity point is orthogonal to the polar direction of a circle. Thus, we obtain three vanishing points orthogonal to each other. This study extends research into the internal sphere parameters and the vanishing point in a paracatadioptric camera. The results of tests confirm the feasibility and effectiveness of the proposed calibration algorithms.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Computer vision involves several disciplines, and based on studies conducted on the underlying mechanisms of human and animal vision, various models of the process can be built. The data obtained can be analyzed and processed using statistical methods [1]. An important step in computer vision technology is the calibration of the camera used. Cameras are essential for visual monitoring, mobile robot navigation, three-dimensional measurements, target recognition, video tracking, visual workpiece detection, virtual reality, and digital visualization [2–5]. Owing to the limited field of view of perspective cameras, a catadioptric camera was proposed by Hecht and Zajac [6]. Based on whether the single viewpoint of the catadioptric camera is fixed or not, Baker and Nayar [7] categorized catadioptric cameras into two types: central and noncentral. Furthermore, the mirrors of central catadioptric cameras can be divided into four types: parabolic, hyperbolic, elliptical, and planar. Central catadioptric cameras have a single effective viewpoint and a large imaging field of view and thus are especially useful in the field of computer vision.

In 2001, Geyer and Daniilidis [8] proposed that the imaging process of central catadioptric cameras is equivalent to a two-step projection on the unit viewing sphere. This provided a mathematical basis for the study of central catadioptric cameras. Based on this fact, central catadioptric cameras can be calibrated using the following five methods:

  • (1) Self-calibration: This method does not require a specific calibration object; calibration is completed by detecting the information available in the imaged scene. For instance, Kang [9] used the relationship between the corresponding points of multiple images to calibrate a paracatadioptric camera, but this approach requires multiple images and point tracking.
  • (2) Calibration based on points: The calibration object consists of collinear points, such that the distance between any two points is known. Zhang [10] was able to perform calibration based on the relationship between the image points and the geometric information between the collinear points. However, when the free end of the calibration object moves along a nonplanar curve, the calibration result is not very accurate.
  • (3) Calibration based on a plane: This method involves capturing photographs of a pattern on a plane from different directions and then detecting the information in the images to complete the calibration process. This calibration algorithm exhibits high precision. Zhang [11] adopted a method in which a single camera was moved to capture at least two photographs of the pattern to perform calibration; this study extended the use of 3D computer vision from the laboratory to the real world. Using the imaging relationship between a camera and a central catadioptric mirror, Deng [12] proposed a generalized unit view sphere model. Without the need to fit the conic, by establishing the homography matrix between a 2D plane and its image plane, the IAC can be obtained using the direct linear transform (DLT) algorithm to perform the calibration.
  • (4) Calibration based on a line [13–16]: In this method, one does not need to know the position and length of a line in space. The line image is used directly to determine the intrinsic parameters of the camera. Barreto and Araujo [13] analyzed the imaging features of a space line in central catadioptric cameras and proved that three or more lines can be used to complete the calibration process. Zhao and Li [16] used the imaging feature of the line and the properties of antipodal points to complete the calibration of the unit view sphere.
  • (5) Calibration based on a sphere [17–22]: The advantage of using a sphere as the calibration object is that it has no occlusions. Thus, it allows for high precision without requiring any measurement information related to the scene. Ying and Hu [17] studied the relationship between the mirror contour and the intrinsic parameters of the camera and proved that a sphere image can provide two constraints without degeneration. Thus, three spheres are enough for calibration. However, this method is nonlinear and is affected by the precision of the initial values of the internal parameters. Zhao and Wang [19] initially used two intersecting spheres to calibrate the intrinsic camera parameters. However, the intersecting parts of the two spheres were obscured, because of which the sphere image could not be extracted completely; this affected the accuracy of the calibration algorithm. Li and Zhao [20] analyzed the imaging features of a space sphere on the unit viewing sphere and were able to perform calibration using geometric properties. Zhang et al. [21] showed that the common pole and polar with regard to two sphere images are equivalent to the pole and polar with regard to the image of the absolute conic (IAC) and can be used for calibrating the camera parameters; however, the geometric size and space position of the sphere affect the calibration results.

Based on an analysis of various calibration algorithms [9–22], in this study, novel algorithms are proposed to calibrate paracatadioptric cameras, using the imaging features of a space sphere in the camera and the pole-polar relationship. A sphere in space is projected onto a pair of antipodal circles on the unit viewing sphere model. According to the properties of antipodal small circles, an infinity point and a symmetric axis can be obtained; in addition, we can obtain another infinity point according to the properties of antipodal points. On the image plane, the image of the symmetric axis is the polar of the vanishing point with respect to the IAC, and a vanishing line can be obtained by connecting two vanishing points. The vanishing point and line satisfy the pole-polar relationship. The camera internal parameters can be obtained using the pole-polar relationship and the orthogonal vanishing points.

The remainder of the paper is structured as follows: Section 2 introduces the projection model of a paracatadioptric camera and the relevant concepts. Section 3 describes in detail the proposed calibration algorithms based on sphere images and the pole-polar relationship. The results of evaluation tests are presented and discussed in Section 4. Finally, Section 5 presents the conclusions of the study.

2. Imaging model

In this section, we introduce the projection model of a paracatadioptric camera and the relevant concepts.

Geyer and Daniilidis [8] proposed that the imaging process of central catadioptric cameras is equivalent to a two-step projection on the unit viewing sphere (see Fig. 1). The world coordinate system ${\boldsymbol O}\textrm{ - }{x_w}{y_w}{z_w}$ is established, such that the origin is at the center, ${\boldsymbol O}$, of the unit viewing sphere, and the camera coordinate system, ${{\boldsymbol O}_c}\textrm{ - }{x_c}{y_c}{z_c}$, which takes any point in space as the origin, ${{\boldsymbol O}_c}$, is set up, such that the ${x_c}$ and ${y_c}$ axes are parallel to the ${x_w}$ and ${y_w}$ axes, respectively, and the ${z_c}$ and ${z_w}$ axes coincide with the optic axis, ${\boldsymbol O}{{\boldsymbol O}_c}$. That is to say, the image plane, $\pi$, is perpendicular to axis ${\boldsymbol O}{{\boldsymbol O}_c}$ and intersects at the principal point, ${\boldsymbol p}$.

 

Fig. 1. Imaging model of space point ${\boldsymbol A}$ and sphere ${\boldsymbol Q}$ in paracatadioptric camera.


Step 1: A 3D point, ${\boldsymbol A} = {\left[ {\begin{array}{cc} {\begin{array}{ccc} {{X_w}}&{{Y_w}}&{{Z_w}} \end{array}}&1 \end{array}} \right]^\textrm{T}}$, is projected onto a pair of antipodal points ${{\boldsymbol A}_ \pm } = {\left[ {\begin{array}{cccc} {{{ \pm {X_w}} \mathord{\left/ {\vphantom {{ \pm {X_w}} {||{\boldsymbol A} ||}}} \right.} {||{\boldsymbol A} ||}}}&{{{ \pm {Y_w}} \mathord{\left/ {\vphantom {{ \pm {Y_w}} {||{\boldsymbol A} ||}}} \right.} {||{\boldsymbol A} ||}}}&{{{ \pm {Z_w}} \mathord{\left/ {\vphantom {{ \pm {Z_w}} {||{\boldsymbol A} ||}}} \right.} {||{\boldsymbol A} ||}}}&1 \end{array}} \right]^\textrm{T}}$ with $||{\boldsymbol A} ||= \sqrt {{X_w}^2 + {Y_w}^2 + {Z_w}^2}$ on the surface of the unit viewing sphere, using ${\boldsymbol O}$ as the projection center.

Step 2: Points ${{\boldsymbol A}_ \pm }$ are projected onto $\pi$ to form a pair of antipodal image points, ${{\boldsymbol a}_ \pm }$, with the camera optical center, ${{\boldsymbol O}_c}$, as the virtual projection center; here, +, - denote visible and invisible, respectively. The homogeneous coordinates of points ${{\boldsymbol a}_ \pm }$ can be expressed as

$${\lambda _1}{{\boldsymbol a}_ \pm } = {\boldsymbol K}\left[ {\begin{array}{cccc} 1&0&0&0\\ 0&1&0&0\\ 0&0&1&1 \end{array}} \right]{{\boldsymbol A}_ \pm },$$
where ${\lambda _1}$ is a nonzero scale factor; ${\boldsymbol K} = \left[ {\begin{array}{ccc} {{f_u}}&s&{{u_0}}\\ 0&{{f_v}}&{{v_0}}\\ 0&0&1 \end{array}} \right]$ is the matrix of the internal parameters of the camera; ${f_u},{f_v}$ are the scale factors in the directions of the u- and v-axes of $\pi$; $r = {{{f_u}} \mathord{\left/ {\vphantom {{{f_u}} {{f_v}}}} \right.} {{f_v}}}$ is the aspect ratio; s is the skew factor; ${\left[ {\begin{array}{cc} {\begin{array}{cc} {{u_0}}&{{v_0}} \end{array}}&1 \end{array}} \right]^\textrm{T}}$ are the homogeneous coordinates of principal point ${\boldsymbol p}$; and $\xi = ||{{\boldsymbol O}{{\boldsymbol O}_c}} ||$ denotes the mirror parameter. The value of $\xi$ determines the mirror type, as shown in Table 1. In this study, we considered a paracatadioptric camera. Hence, $\xi = 1$.
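As a concrete illustration, the two-step projection of Eq. (1) can be reproduced numerically for the paracatadioptric case ($\xi = 1$). The following is a minimal sketch; the function name and the test point are ours, not from the paper:

```python
import numpy as np

def project_point(A, K):
    """Two-step unit-sphere projection of Eq. (1) for xi = 1.
    Returns the antipodal image points a+ and a- with third coordinate 1."""
    A = np.asarray(A, dtype=float)
    S_plus = A / np.linalg.norm(A)                     # Step 1: onto the unit viewing sphere
    P = np.hstack([np.eye(3), [[0.0], [0.0], [1.0]]])  # projection matrix of Eq. (1), xi = 1
    out = []
    for S in (S_plus, -S_plus):                        # antipodal pair on the sphere
        x = K @ P @ np.append(S, 1.0)                  # Step 2: project from the centre Oc
        out.append(x / x[2])                           # normalise the homogeneous scale
    return out[0], out[1]

# Hypothetical intrinsics, in the same form as the matrix K defined above.
K = np.array([[600.0, 0.4, 450.0],
              [0.0, 550.0, 350.0],
              [0.0,   0.0,   1.0]])
a_plus, a_minus = project_point([1.0, 2.0, 2.0], K)
```

Note that the two image points differ markedly because the visible and invisible sphere points project through different hemispheres.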

Proposition 1 [9]. In Fig. 1, if points ${{\boldsymbol a}_ + }$ and ${{\boldsymbol a}_ - }$ are a group of antipodal image points under a central catadioptric camera, then these points satisfy the following relationship:

$$\frac{{1 + \sqrt {1 + \tau {\boldsymbol a}_ + ^\textrm{T}{\boldsymbol \omega }{{\boldsymbol a}_ + }} }}{{{\boldsymbol a}_ + ^T{\boldsymbol \omega }{{\boldsymbol a}_ + }}}{{\boldsymbol a}_ + } + \frac{{1 + \sqrt {1 + \tau {\boldsymbol a}_ - ^\textrm{T}{\boldsymbol \omega }{{\boldsymbol a}_ - }} }}{{{\boldsymbol a}_ - ^T{\boldsymbol \omega }{{\boldsymbol a}_ - }}}{{\boldsymbol a}_ - } = 2{\boldsymbol p,}$$
where ${\boldsymbol p} = {\left[ {\begin{array}{cc} {\begin{array}{cc} {{u_0}}&{{v_0}} \end{array}}&1 \end{array}} \right]^\textrm{T}}$, $\tau = (1 - {\xi ^2})/\xi$, and ${\boldsymbol \omega } = {{\boldsymbol K}^{ - \textrm{T}}}{{\boldsymbol K}^{ - 1}}$ denotes the coefficient matrix of the IAC. For paracatadioptric cameras, $\xi = 1$, and Eq. (2) can be simplified to
$$\frac{1}{{{\boldsymbol a}_ + ^\textrm{T}{\boldsymbol \omega }{{\boldsymbol a}_ + }}}{{\boldsymbol a}_ + } + \frac{1}{{{\boldsymbol a}_ - ^\textrm{T}{\boldsymbol \omega }{{\boldsymbol a}_ - }}}{{\boldsymbol a}_ - } = {\boldsymbol p}{\boldsymbol .}$$
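Equation (3) also gives a direct way to recover the antipodal point of a given image point (this is how Step 2 of the algorithm in Section 3.4 proceeds): since ${{\boldsymbol a}_ - }/({\boldsymbol a}_ - ^\textrm{T}{\boldsymbol \omega }{{\boldsymbol a}_ - })$ is a positive multiple of ${{\boldsymbol a}_ - }$, the difference ${\boldsymbol p} - {{\boldsymbol a}_ + }/({\boldsymbol a}_ + ^\textrm{T}{\boldsymbol \omega }{{\boldsymbol a}_ + })$ renormalized to third coordinate 1 is ${{\boldsymbol a}_ - }$. A sketch, assuming points normalized to third coordinate 1 (function name ours):

```python
import numpy as np

def antipodal_point(a_plus, K):
    """Recover a- from a+ via Eq. (3): a+/(a+^T w a+) + a-/(a-^T w a-) = p.
    The difference p - a+/(a+^T w a+) is a positive multiple of a-, so
    renormalising its third coordinate to 1 yields a- itself."""
    K_inv = np.linalg.inv(K)
    w = K_inv.T @ K_inv                    # IAC coefficient matrix w = K^-T K^-1
    p = np.array([K[0, 2], K[1, 2], 1.0])  # principal point
    c = p - a_plus / (a_plus @ w @ a_plus)
    return c / c[2]

# Hypothetical intrinsics and an image point consistent with Eq. (1) for xi = 1.
K = np.array([[600.0, 0.4, 450.0],
              [0.0, 550.0, 350.0],
              [0.0,   0.0,   1.0]])
a_minus = antipodal_point(np.array([570.16, 570.0, 1.0]), K)
```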


Table 1. Relationship between the value of $\xi$ (the mirror parameter) and the mirror type.

3. Calibration of paracatadioptric camera

Duan and Wu [23] proposed that the imaging process for a sphere in the case of a paracatadioptric camera is as follows (see Fig. 1): a space sphere, ${\boldsymbol Q}$, is projected onto two parallel circles, ${{\boldsymbol S}_ + }$ and ${{\boldsymbol S}_ - }$, on the unit viewing sphere; these are a pair of antipodal circles, and their centers are ${{\boldsymbol O}_ + }$ and ${{\boldsymbol O}_ - }$, respectively. Next, ${{\boldsymbol S}_ + }$ and ${{\boldsymbol S}_ - }$ are projected onto conics ${{\boldsymbol C}_ + }$ and ${{\boldsymbol C}_ - }$, respectively, on $\pi$. ${{\boldsymbol C}_ + }$ is the sphere image, and ${{\boldsymbol C}_ - }$ is the antipodal sphere image of ${{\boldsymbol C}_ + }$.

Definition 1 [24]. In Fig. 1, the line through ${{\boldsymbol O}_ + }$ and ${{\boldsymbol O}_ - }$ is called the symmetry axis of ${{\boldsymbol S}_ + }$ and ${{\boldsymbol S}_ - }$.

Let ${{\boldsymbol L}_s}$ denote the symmetry axis of ${{\boldsymbol S}_ + }$ and ${{\boldsymbol S}_ - }$. According to the projection property of antipodal circles, axis ${{\boldsymbol L}_s}$ is perpendicular to the line ${{\boldsymbol L}_\infty }$ at infinity on the plane containing ${{\boldsymbol S}_ + }$ and ${{\boldsymbol S}_ - }$, that is, ${{\boldsymbol L}_s} \bot {{\boldsymbol L}_\infty }$. On the image plane, $\pi$, let ${{\boldsymbol l}_s}$ and ${{\boldsymbol l}_\infty }$ denote the images of ${{\boldsymbol L}_s}$ and ${{\boldsymbol L}_\infty }$, respectively. Hence, we have the following proposition:

Proposition 2. In Fig. 2, on plane $\pi$ of the paracatadioptric camera, ${{\boldsymbol l}_s}$ and the vanishing line, ${{\boldsymbol l}_\infty }$, on the plane containing ${{\boldsymbol C}_ + }$ and ${{\boldsymbol C}_ - }$, are orthogonal and meet the following condition:

$${{\boldsymbol l}_\infty }^\textrm{T}{{\boldsymbol \omega }^{\ast }}{{\boldsymbol l}_s} = 0,$$
where ${{\boldsymbol \omega }^{\ast }}\textrm{ = }{\boldsymbol K}{{\boldsymbol K}^\textrm{T}}$ denotes the dual of the IAC.

 

Fig. 2. Relationship between pole ${{\boldsymbol v}_1}$ and polar ${{\boldsymbol l}_s}$ with regard to IAC.


As per the property of affine invariance under projective transformation and the definition of the transformation of lines in Ref. [24], Proposition 2 is clearly true. Hence, its proof is omitted.

3.1 Calibration based on property of antipodal circles

Let ${{\boldsymbol S}_ + }$ and ${{\boldsymbol S}_ - }$ intersect at two pairs of conjugate complex points, ${{\boldsymbol A}_i} (i = 1,2,3,4)$, one of which consists of circular points [25]. Let points ${{\boldsymbol A}_1}$ and ${{\boldsymbol A}_2}$ be the circular points, so that the line connecting these points is the line, ${{\boldsymbol L}_\infty }$, at infinity on the plane containing circles ${{\boldsymbol S}_ + }$ and ${{\boldsymbol S}_ - }$. It intersects the line, ${{\boldsymbol L}_{34}}$, passing through points ${{\boldsymbol A}_3}$ and ${{\boldsymbol A}_4}$ at point ${{\boldsymbol V}_{1\infty }}$ at infinity. The line, ${{\boldsymbol L}_{13}}$, connecting points ${{\boldsymbol A}_1}$ and ${{\boldsymbol A}_3}$ intersects the line, ${{\boldsymbol L}_{24}}$, connecting points ${{\boldsymbol A}_2}$ and ${{\boldsymbol A}_4}$ at point ${{\boldsymbol V}_b}$; the line, ${{\boldsymbol L}_{14}}$, connecting points ${{\boldsymbol A}_1}$ and ${{\boldsymbol A}_4}$ intersects the line, ${{\boldsymbol L}_{23}}$, connecting points ${{\boldsymbol A}_2}$ and ${{\boldsymbol A}_3}$ at point ${{\boldsymbol V}_c}$; ${{\boldsymbol L}_s}$ can be obtained by connecting points ${{\boldsymbol V}_b}$ and ${{\boldsymbol V}_c}$ [26]. On plane $\pi$, the images of ${{\boldsymbol V}_{1\infty }}$ and ${{\boldsymbol L}_s}$ are denoted as ${{\boldsymbol v}_1}$ and ${{\boldsymbol l}_s}$, respectively.
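In homogeneous plane coordinates, all the joins and meets in this construction are cross products: the line through two points is their cross product, and so is the intersection point of two lines. A conjugate complex pair joins to a complex multiple of a real line, which is why the complex intersection points above still yield the real entities ${{\boldsymbol L}_\infty }$, ${{\boldsymbol V}_b}$, ${{\boldsymbol V}_c}$, and ${{\boldsymbol L}_s}$. A small sketch of this machinery (our own illustration, written in the 2D coordinates of the circles' plane):

```python
import numpy as np

def join(P, Q):
    """Line through two homogeneous points; equally the meet of two lines.
    Written out explicitly so complex coordinates are handled as well."""
    return np.array([P[1] * Q[2] - P[2] * Q[1],
                     P[2] * Q[0] - P[0] * Q[2],
                     P[0] * Q[1] - P[1] * Q[0]])

# The circular points of a plane are a conjugate complex pair; their join
# is a complex multiple of the real line at infinity (0, 0, 1).
I = np.array([1.0, 1j, 0.0])
J = np.array([1.0, -1j, 0.0])
L_inf = join(I, J)           # = (0, 0, -2i)

# Meet of two real lines, e.g. x = 1 and y = 2, gives the point (1, 2).
V = join(np.array([1.0, 0.0, -1.0]), np.array([0.0, 1.0, -2.0]))
```

The symmetry-axis construction then reads ${{\boldsymbol V}_b} = \textrm{join}({{\boldsymbol L}_{13}},{{\boldsymbol L}_{24}})$, ${{\boldsymbol V}_c} = \textrm{join}({{\boldsymbol L}_{14}},{{\boldsymbol L}_{23}})$, and ${{\boldsymbol L}_s} = \textrm{join}({{\boldsymbol V}_b},{{\boldsymbol V}_c})$, with conjugate pairs again producing real results.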

Proposition 3. On plane $\pi$ of the paracatadioptric camera, given sphere images ${{\boldsymbol C}_ + }$ and ${{\boldsymbol C}_ - }$ of circles ${{\boldsymbol S}_ + }$ and ${{\boldsymbol S}_ - }$, the image, ${{\boldsymbol l}_s}$, of the symmetry axis, ${{\boldsymbol L}_s}$, is the polar line of the vanishing point, ${{\boldsymbol v}_1}$, with regard to the IAC. Further,

$${{\boldsymbol l}_s} = {\boldsymbol \omega }{{\boldsymbol v}_1}.$$
The proof of Proposition 3 is given in Appendix A.

3.2 Calibration based on property of antipodal points

In Fig. 1, let us take a point, ${{\boldsymbol A}_ + }$, on circle ${{\boldsymbol S}_ + }$. Then, there exists a point, ${{\boldsymbol A}_ - }$, on the antipodal circle, ${{\boldsymbol S}_ - }$. The tangent lines at points ${{\boldsymbol A}_ + }$ and ${{\boldsymbol A}_ - }$ to ${{\boldsymbol S}_ + }$ and ${{\boldsymbol S}_ - }$ are ${{\boldsymbol L}_ + }$ and ${{\boldsymbol L}_ - }$, respectively. According to the antipodal property, we have ${{\boldsymbol S}_ + }\parallel {{\boldsymbol S}_ - }$, ${\boldsymbol O}{{\boldsymbol O}_ + } = {\boldsymbol O}{{\boldsymbol O}_ - }$ and ${\boldsymbol O}{{\boldsymbol A}_ + } = {\boldsymbol O}{{\boldsymbol A}_ - }$. Thus, ${{\boldsymbol A}_ + }{{\boldsymbol O}_ + }{{\boldsymbol A}_ - }{{\boldsymbol O}_ - }$ is a parallelogram, that is, ${{\boldsymbol A}_ + }{{\boldsymbol O}_ + }\parallel {{\boldsymbol A}_ - }{{\boldsymbol O}_ - }$. Because ${{\boldsymbol L}_ + }$ and ${{\boldsymbol L}_ - }$ are tangent to the circles, ${{\boldsymbol L}_ + } \bot {{\boldsymbol A}_ + }{{\boldsymbol O}_ + }$ and ${{\boldsymbol L}_ - } \bot {{\boldsymbol A}_ - }{{\boldsymbol O}_ - }$; in other words, ${{\boldsymbol L}_ + }\parallel {{\boldsymbol L}_ - }$. Lines ${{\boldsymbol L}_ + }$ and ${{\boldsymbol L}_ - }$ intersect at point ${{\boldsymbol V}_{2\infty }}$ at infinity. Let the polar lines of points ${{\boldsymbol V}_{1\infty }}$ and ${{\boldsymbol V}_{2\infty }}$ at infinity with regard to ${{\boldsymbol S}_ + }$ be ${{\boldsymbol H}_1}$ and ${{\boldsymbol H}_2}$, respectively. According to the definition of the diameter of a conic in Ref. [24], polar lines ${{\boldsymbol H}_1}$ and ${{\boldsymbol H}_2}$ are along two directions of the diameters of ${{\boldsymbol S}_ + }$, respectively. Hence, ${{\boldsymbol H}_1}$ and ${{\boldsymbol H}_2}$ intersect at the center, ${{\boldsymbol O}_ + }$, of ${{\boldsymbol S}_ + }$. Similarly, the center, ${{\boldsymbol O}_ - }$, of ${{\boldsymbol S}_ - }$ can be determined.
On the basis of the harmonic conjugate relationship with respect to the projective transformation, point ${{\boldsymbol V}_{3\infty }}$ at infinity in the direction of ${{\boldsymbol L}_s}$ can be determined based on points ${\boldsymbol O}\textrm{,}\;{{\boldsymbol O}_ + }\textrm{,}{{\boldsymbol O}_ - }$. On the image plane, $\pi$, ${{\boldsymbol v}_3}$ is the image of ${{\boldsymbol V}_{3\infty }}$.
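The polar computations in this subsection are direct matrix products: the polar of a homogeneous point ${\boldsymbol V}$ with respect to a conic with matrix ${\boldsymbol S}$ is the line ${\boldsymbol S}{\boldsymbol V}$, and the polars of two distinct points at infinity are diameters whose meet is the center. A numerical check on a hypothetical circle (coordinates ours):

```python
import numpy as np

def polar(S, V):
    """Polar line of homogeneous point V with respect to the conic x^T S x = 0."""
    return S @ V

# Circle of radius r centred at (cx, cy):
# x^2 + y^2 - 2*cx*x - 2*cy*y + cx^2 + cy^2 - r^2 = 0.
cx, cy, r = 3.0, -2.0, 5.0
S = np.array([[1.0, 0.0, -cx],
              [0.0, 1.0, -cy],
              [-cx, -cy, cx * cx + cy * cy - r * r]])

H1 = polar(S, np.array([1.0, 0.0, 0.0]))   # diameter conjugate to the x direction
H2 = polar(S, np.array([0.0, 1.0, 0.0]))   # diameter conjugate to the y direction
centre = np.cross(H1, H2)
centre = centre / centre[2]                # meet of the two diameters: (cx, cy, 1)
```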

Proposition 4. In Fig. 3, on plane $\pi$ of the paracatadioptric camera, let ${{\boldsymbol l}_\infty }$ be a vanishing line and ${{\boldsymbol v}_3}$ be a vanishing point along the direction of ${{\boldsymbol l}_s}$. Then, ${{\boldsymbol v}_3}$ and ${{\boldsymbol l}_\infty }$ satisfy the pole-polar relationship with regard to the IAC, which is:

$${{\boldsymbol l}_\infty } = {\boldsymbol \omega }{\kern 1pt} {\kern 1pt} {{\boldsymbol v}_3}. $$
The proof of Proposition 4 is given in Appendix B.

3.3 Calibration based on three vanishing points orthogonal to each other

Theorem 1 [24]. (Polarity Principle) If the polar line of point ${\boldsymbol a}$ with respect to conic ${\boldsymbol C}$ passes through point ${\boldsymbol b}$, then the polar line of point ${\boldsymbol b}$ with respect to conic ${\boldsymbol C}$ also passes through point ${\boldsymbol a}$.

In Fig. 4, the tangent lines to ${{\boldsymbol S}_ + }$ at points ${{\boldsymbol A}_ + }$ and ${{\boldsymbol A^{\prime}}_ + }$ are ${{\boldsymbol L}_\textrm{ + }}$ and ${{\boldsymbol L^{\prime}}_ + }$, respectively, where ${{\boldsymbol A}_ + }$ and ${{\boldsymbol A^{\prime}}_ + }$ are the two endpoints of a diameter of ${{\boldsymbol S}_ + }$. Let the point at infinity along the direction of the diameter, ${{\boldsymbol A}_ + }{{\boldsymbol A^{\prime}}_ + }$, be ${{\boldsymbol V}_{4\infty }}$. Then, ${{\boldsymbol L}_\textrm{ + }} \bot {{\boldsymbol A}_ + }{{\boldsymbol A^{\prime}}_ + }$, ${{\boldsymbol L^{\prime}}_\textrm{ + }} \bot {{\boldsymbol A}_ + }{{\boldsymbol A^{\prime}}_ + }$, and ${{\boldsymbol L}_\textrm{ + }}\parallel {{\boldsymbol L^{\prime}}_ + }$, and lines ${{\boldsymbol L}_\textrm{ + }}$ and ${{\boldsymbol L^{\prime}}_ + }$ intersect at point ${{\boldsymbol V}_{2\infty }}$ at infinity. Thus, points ${{\boldsymbol V}_{2\infty }}$ and ${{\boldsymbol V}_{4\infty }}$ are a pair of points at infinity in orthogonal directions. Further, from Theorem 1, diameter ${{\boldsymbol A}_ + }{{\boldsymbol A^{\prime}}_ + }$ is the polar, ${{\boldsymbol H}_2}$, of point ${{\boldsymbol V}_{2\infty }}$ with respect to ${{\boldsymbol S}_ + }$. Point ${{\boldsymbol V}_{3\infty }}$ is the point at infinity in the direction of ${{\boldsymbol L}_s}$, points ${{\boldsymbol V}_{2\infty }}$ and ${{\boldsymbol V}_{4\infty }}$ lie on ${{\boldsymbol L}_\infty }$ at infinity, and ${{\boldsymbol L}_s} \bot {{\boldsymbol L}_\infty }$. Hence, ${{\boldsymbol V}_{2\infty }}$ and ${{\boldsymbol V}_{3\infty }}$, and ${{\boldsymbol V}_{3\infty }}$ and ${{\boldsymbol V}_{4\infty }}$, are two pairs of points at infinity in orthogonal directions, and ${{\boldsymbol V}_{2\infty }},{{\boldsymbol V}_{3\infty }},{{\boldsymbol V}_{4\infty }}$ are three points at infinity in three directions orthogonal to each other.
Three vanishing points, ${{\boldsymbol v}_2},{{\boldsymbol v}_3},{{\boldsymbol v}_4}$, orthogonal to each other can be obtained on plane $\pi$ such that ${{\boldsymbol v}_2},{{\boldsymbol v}_3},{{\boldsymbol v}_4}$ are the images of ${{\boldsymbol V}_{2\infty }},{{\boldsymbol V}_{3\infty }},{{\boldsymbol V}_{4\infty }}$, respectively.

Proposition 5. In Fig. 3, on plane $\pi$ of the paracatadioptric camera, let ${{\boldsymbol v}_2}$ be the vanishing point along the direction of the line, ${{\boldsymbol l}_ + }$, tangent to sphere image ${{\boldsymbol C}_ + }$; ${{\boldsymbol v}_3}$ the vanishing point along the direction of the image, ${{\boldsymbol l}_s}$, of the symmetry axis; and ${{\boldsymbol v}_4}$ the vanishing point along the direction of the polar line, ${{\boldsymbol h}_2}$. Then, the three mutually orthogonal vanishing points ${{\boldsymbol v}_2},{{\boldsymbol v}_3},{{\boldsymbol v}_4}$ meet the following conditions:

$$\left\{ {\begin{array}{c} {{{\boldsymbol v}_2}^\textrm{T}{\boldsymbol \omega }{{\boldsymbol v}_3} = 0}\\ {{{\boldsymbol v}_2}^\textrm{T}{\boldsymbol \omega }{{\boldsymbol v}_4} = 0}\\ {{{\boldsymbol v}_3}^\textrm{T}{\boldsymbol \omega }{{\boldsymbol v}_4} = 0} \end{array}} \right.. $$
The proof of Proposition 5 is given in Appendix C.
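The three constraints of Eq. (6), together with those from further sphere images, are linear in the six entries of the symmetric matrix ${\boldsymbol \omega }$, so ${\boldsymbol \omega }$ can be recovered (up to scale) as the null space of a stacked coefficient matrix, after which ${\boldsymbol K}$ follows by Cholesky factorization as in Step 7 of Section 3.4. The following is a sketch under the assumption that at least five independent constraints are available (function names ours):

```python
import numpy as np

def solve_omega(pairs):
    """Least-squares solution of vi^T w vj = 0 for the symmetric IAC matrix w.
    `pairs` holds (vi, vj) vanishing points in orthogonal directions; five or
    more independent pairs determine w up to scale."""
    rows = []
    for u, v in pairs:
        # v^T w u is linear in the entries (w11, w12, w13, w22, w23, w33).
        rows.append([u[0] * v[0],
                     u[0] * v[1] + u[1] * v[0],
                     u[0] * v[2] + u[2] * v[0],
                     u[1] * v[1],
                     u[1] * v[2] + u[2] * v[1],
                     u[2] * v[2]])
    w = np.linalg.svd(np.asarray(rows))[2][-1]   # null-space vector
    W = np.array([[w[0], w[1], w[2]],
                  [w[1], w[3], w[4]],
                  [w[2], w[4], w[5]]])
    return -W if W[0, 0] < 0 else W              # fix sign: w must be pos. definite

def intrinsics_from_omega(W):
    """Recover K from w = K^-T K^-1: Cholesky gives W = L L^T with L = K^-T."""
    L = np.linalg.cholesky(W)
    K = np.linalg.inv(L.T)
    return K / K[2, 2]                           # normalise so K33 = 1
```

Orthogonal direction triads map to vanishing points via ${\boldsymbol v} = {\boldsymbol K}{\boldsymbol d}$, which is how the sketch can be checked on synthetic data.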

 

Fig. 3. Pole, ${{\boldsymbol v}_3}$, and polar, ${{\boldsymbol l}_\infty }$, with regard to the IAC and three vanishing points, ${{\boldsymbol v}_2},{{\boldsymbol v}_3},{{\boldsymbol v}_4}$, orthogonal to each other.


 

Fig. 4. Pair of points ${{\boldsymbol V}_{2\infty }}$ and ${{\boldsymbol V}_{4\infty }}$ at infinity in orthogonal directions.


3.4 Algorithm

Based on the above analysis, vanishing point ${{\boldsymbol v}_1}$ and ${{\boldsymbol l}_s}$ are the pole and polar, respectively, with regard to the IAC as per Proposition 3 (Method 1); vanishing point ${{\boldsymbol v}_3}$ and vanishing line ${{\boldsymbol l}_\infty }$ satisfy the pole-polar relationship with respect to the IAC as per Proposition 4 (Method 2); and three orthogonal vanishing points, ${{\boldsymbol v}_2},{{\boldsymbol v}_3},{{\boldsymbol v}_4}$, can be obtained using Proposition 5 (Method 3). The procedure for these algorithms is as follows:

Input: $n\;(n \ge 3)$ sphere images.

Output: Matrix of intrinsic parameters of camera, ${\boldsymbol K}$.

Step 1: Extract the pixel coordinates of the sphere images and the projective contour of the mirror, and solve the equations for the sphere images and the antipodal sphere images using Duan and Wu’s method [23].

Step 2: Take any point ${{\boldsymbol a}_ + }$ on sphere image ${{\boldsymbol C}_ + }$, and determine its antipodal point, ${{\boldsymbol a}_ - }$, using Eq. (3).

Step 3: Determine ${{\boldsymbol v}_1}$ using Eq. (10) and ${{\boldsymbol l}_s}$ using Eq. (5).

Step 4: Determine ${{\boldsymbol v}_2}$ using Eq. (19) and ${{\boldsymbol l}_\infty }$ using Eq. (20).

Step 5: Determine ${{\boldsymbol v}_3}$ using Eq. (25) and ${{\boldsymbol v}_4}$ using Eq. (27).

Step 6: If Method 1 is used, solve for ${\boldsymbol \omega }$ from ${{\boldsymbol l}_s}$ and ${{\boldsymbol v}_1}$ based on Proposition 3; if Method 2 is used, determine ${\boldsymbol \omega }$ from ${{\boldsymbol l}_\infty }$ and ${{\boldsymbol v}_3}$ using Proposition 4; if Method 3 is used, determine ${\boldsymbol \omega }$ from ${{\boldsymbol v}_2},{{\boldsymbol v}_3},{{\boldsymbol v}_4}$ using Proposition 5.

Step 7: Using the expression ${\boldsymbol \omega } = {{\boldsymbol K}^{ - \textrm{T}}}{{\boldsymbol K}^{ - 1}}$, determine ${{\boldsymbol K}^{ - 1}}$ via the Cholesky factorization of ${\boldsymbol \omega }$, and obtain the matrix of the intrinsic parameters of the camera, ${\boldsymbol K}$, from the inverse of ${{\boldsymbol K}^{ - 1}}$.

4. Experiments

To confirm the feasibility and effectiveness of the proposed algorithms, we performed simulations and real experiments and compared the obtained results with those of Refs. [12,17,20,23].

4.1 Simulations

During the simulations, we set up a virtual camera with $\xi = 1$ and the matrix of intrinsic parameters

$${\boldsymbol K = }\left[ {\begin{array}{ccc} {600}&{0.4}&{450}\\ 0&{550}&{350}\\ 0&0&1 \end{array}} \right].$$
Three sphere images are sufficient to estimate the intrinsic parameters of a camera using the proposed calibration algorithms. Therefore, three sphere images were generated using the simulated camera, one of which is shown in Fig. 5. The blue conic represents the projected contour of the paraboloidal mirror, and the small purple conics represent the sphere image and its antipodal sphere image.

 

Fig. 5. Sphere images generated using simulated camera.


Canny edge detection [27] was performed to extract 200 data points for each sphere image and the mirror projective contour. In order to verify the effectiveness of the proposed algorithms, Gaussian noise with zero mean and standard deviation $\sigma ({0 \le \sigma \le 3} )$ was added to each data point.
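The extraction-plus-fitting step can be sketched as an algebraic least-squares conic fit on noisy contour points: stack the residuals of the conic polynomial into a design matrix and take its SVD null vector. This is our own minimal illustration with hypothetical ellipse parameters, not the exact fitting routine of Refs. [23,29]:

```python
import numpy as np

def fit_conic(pts):
    """Algebraic least-squares conic fit a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0:
    the unit vector (a..f) minimising ||D (a..f)|| is the last right-singular
    vector of the design matrix D."""
    x, y = pts[:, 0], pts[:, 1]
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    return np.linalg.svd(D)[2][-1]

# 200 contour points of a hypothetical sphere image, with sigma = 1 pixel noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
pts = np.column_stack([450.0 + 80.0 * np.cos(t), 350.0 + 50.0 * np.sin(t)])
pts += rng.normal(0.0, 1.0, pts.shape)
conic = fit_conic(pts)
```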

Owing to limited manufacturing accuracy and deviations introduced during assembly, the camera mirror produces distortion in the captured image. Lens distortion mainly comprises radial and tangential components. The initial values of the intrinsic parameters of the camera can be obtained using the above algorithms, and the rotation matrix ${\boldsymbol R}$ and the translation vector ${\boldsymbol t}$ can be obtained using the method described in Ref. [20]. Solving for the distortion coefficients with a sphere as the calibration object is more difficult than with planar calibration, because it is difficult to establish the point correspondences between the sphere contour and its projection. Here, we use the method in Ref. [28] to match the measured points with the ideal points. The intrinsic parameters are optimized by minimizing the following objective function:

$${\boldsymbol F}({\boldsymbol K},{\boldsymbol k},{\boldsymbol R},{\boldsymbol t}) = \sum\limits_{i = 1}^n {\sum\limits_{j = 1}^{{N_i}} {{{||{{m_{ij}} - {{m^{\prime}}_{ij}}({\boldsymbol K},{\boldsymbol k},{{\boldsymbol R}_i},{{\boldsymbol t}_i},{M_{ij}})} ||}^2}} }, $$
where ${k_1}$, ${k_2}$ are the radial distortion coefficients; ${k_3}$, ${k_4}$ are the tangential distortion coefficients; ${m_{ij}}$ is the measured coordinate of point j on image i; ${m^{\prime}_{ij}}$ is the ideal coordinate of ${m_{ij}}$; ${M_{ij}}$ is the world coordinate of point j on image i; and ${N_i}$ is the number of points selected on the ith image. The Levenberg–Marquardt algorithm was used to solve the optimization problem.

We assume the distortion coefficient vector ${\boldsymbol k = }[{0.1203,0.1354,0.0106, - 0.0312} ]$. Starting from 10 pairs of point correspondences, we compared the variations in the absolute errors of the distortion coefficients obtained using our three proposed algorithms with the results of Refs. [12,17,20,23]. As shown in Figs. 6(a)–6(d), when the number of point correspondences reaches 50 pairs, the distortion coefficients tend to be stable.

 

Fig. 6. Comparisons of results for the proposed algorithms and Refs. [12,17,20,23]: (a)–(d) show the absolute errors of the four distortion coefficients, namely, ${k_1},{k_2},{k_3},{k_4}$, respectively.


For each noise level, 100 independent experiments were performed to determine the absolute average errors of the five intrinsic parameters, namely, ${f_u},{f_v},s,{u_0},{v_0}$, and to analyze the variations in these parameters. After adding noise, we compared the variations obtained using our three proposed algorithms with the results of Refs. [12,17,20,23]. The variations in the absolute errors of the intrinsic parameters under different noise levels are shown in Figs. 7(a)–7(e), where it can be seen that when $\sigma$ was zero, the absolute errors of the intrinsic parameters of the camera were zero. With an increase in the noise level, the absolute errors of the intrinsic parameters increased linearly. At the same noise level, the absolute errors of our proposed algorithms were lower than those of Deng [12], Ying [17], Li [20], and Duan [23]; the absolute error of Ying [17] was slightly larger than that of the other algorithms because Ying's algorithm [17] is nonlinear. The absolute error variation trends of the three calibration algorithms presented in this study were mostly the same, showing that our algorithms are effective within a certain noise range.

 

Fig. 7. Comparisons of results for the proposed algorithms and Refs. [12,17,20,23]: (a)–(e) show the absolute errors of the five intrinsic parameters, namely, ${f_u},{f_v},s,{u_0},{v_0}$, respectively.


4.2 Real data

Next, we used a real central catadioptric camera consisting of a conventional camera and a paraboloid mirror; the effective focal length of the camera was 16 mm; its imaging range was 300 − 400 mm; and the image resolution was 1300 × 1170 pixels. The calibration object was a yellow table-tennis ball with a diameter of 40 mm, which was placed on a checkerboard with a $10 \times 10$ grid. The distance between two adjacent characteristic points on the checkerboard was 11 mm, and the accuracy was 0.1 mm. The ball was placed in different positions on the checkerboard and photographed with the camera, as shown in Figs. 8(a)–8(f). Figures 8(a)–8(c) were selected as the calibration images.

 

Fig. 8. (a-f) Six images of a table tennis ball taken at different positions on checkerboard.


First, Canny edge detection [27] was performed for each image in Figs. 8(a)–8(c); the results are shown in Figs. 9(a)–9(c). Next, the projections of the sphere images and the mirror contour in each image were obtained using the least-squares method [29]. Finally, the initial parameters of the paracatadioptric camera were calculated [17], and the equations for the sphere images and antipodal sphere images were optimized [23].

 

Fig. 9. (a-c) Canny edge detection for images in Figs. 8(a-c), respectively.


After the above-mentioned operations, considering the lens distortion, the initial intrinsic parameters of the camera were obtained using the proposed methods and the methods of Refs. [12,17,20,23]; the rotation matrix ${\boldsymbol R}$ and the translation vector ${\boldsymbol t}$ were obtained using the method described in Ref. [20]. Assuming that the space position of the mirror contour is known, the projection of the mirror contour can be generated from the camera parameters, and the measured points can be matched with the ideal points [28]. In the simulation experiments, we found that the distortion coefficients tend to be stable when the number of matching points reaches 50 pairs; therefore, 50 pairs of points were used to calculate the distortion coefficients. The optimized distortion coefficients and internal parameters were then obtained using Eq. (8). The average values were taken over 100 independent experiments. The distortion coefficients and calibration results are shown in Tables 2 and 3, respectively. The original resolution of Fig. 8 was 3197 × 3078 pixels, which was downsampled to 1300 × 1170 pixels in this study, whereas the original resolution in Ref. [20] was 1304 × 1172 pixels; hence, the results exhibit clear differences. Nonetheless, the error is small and within the acceptable range. It can be seen that the calibration results obtained using the three methods proposed in this study were almost identical and very similar to those of Refs. [12,17,20,23]; this confirms that the three methods are suitable within a certain margin of error.


Table 2. Lens distortion coefficient.


Table 3. Calibration results for Methods 1, 2, and 3 and references [12,17,20,23] (unit: pixels).

The camera extrinsic parameters were obtained from the results listed in Table 3 and the calibration algorithm in Ref. [30]. A reprojection error analysis [24] and a 3D reconstruction of the feature points on the checkerboard were then performed, and the parallelity and orthogonality of the reconstructed points were verified. First, Harris corner detection [31] was performed on each image in Figs. 9(a)–9(c) to obtain the feature points on the checkerboard. Only 95 feature points could be detected, owing to the occlusion of the checkerboard by the ball; the detected feature points are marked in red in Figs. 10(a)–10(c). We then calculated the reprojection errors of the feature points in the three images in Figs. 10(a)–10(c) using the intrinsic parameters in Table 3. Starting from the $3 \times 3$ block of nine feature points in the upper-left corner of the 95 detected points, and adding one row and one column at a time, the reprojection error as a function of the number of feature-point rows is shown in Fig. 11(a). The reprojection errors of our methods were close to one another and eventually settled within 0–0.24 pixels, whereas the results of Refs. [12,17,20,23] were approximately 0–0.32 pixels; moreover, the reprojection errors of the three proposed calibration algorithms fluctuated less with an increasing number of feature-point rows than the results of Refs. [12,17,20,23]. Similarly, starting from the three images in Figs. 8(a)–8(c) and adding one image at a time from Figs. 8(d)–8(f), we performed the same analysis, as shown in Fig. 11(b).
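The reprojection-error metric used above is simply the mean pixel distance between reprojected and detected corner positions; a minimal sketch, with hypothetical point coordinates:

```python
import numpy as np

def mean_reprojection_error(projected, detected):
    """Mean Euclidean distance (in pixels) between reprojected feature
    points and the detected corner positions."""
    return np.linalg.norm(projected - detected, axis=1).mean()

# hypothetical coordinates for two corners (pixels)
proj = np.array([[100.0, 200.0], [150.0, 250.0]])
det  = np.array([[100.3, 200.4], [150.0, 249.7]])
print(mean_reprojection_error(proj, det))   # ≈ 0.4 pixels
```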

 

Fig. 10. (a-c) Harris corner detection for images in Figs. 9(a-c), respectively.


 

Fig. 11. Comparison of the results of the proposed algorithms and Refs. [12,17,20,23]: (a) and (b) show the reprojection error as a function of the number of feature-point rows and of the number of images, respectively.


We performed a 3D reconstruction of the feature points on the checkerboard in Figs. 10(a)–10(c) and evaluated the parallelity and orthogonality of the reconstructed points; the angles between any two lines in the parallel and orthogonal directions were computed and averaged. In the tests, the true angles between any two lines in the parallel and orthogonal directions of the checkerboard were 0° and 90°, respectively. Parallelity and orthogonality were calculated starting from the $3 \times 3$ block of nine feature points in the upper-left corner of the 95 detected points, adding one row and one column at a time. The absolute errors of parallelity and orthogonality as functions of the number of feature-point rows are shown in Figs. 12(a) and 12(b). The absolute errors of the included angles between any two lines in the parallel and orthogonal directions of the reconstructed checkerboard decrease as the number of feature-point rows increases. The three algorithms exhibit small absolute errors and stable fluctuations, indicating a more stable 3D reconstruction. Similarly, starting from the three images in Figs. 8(a)–8(c) and adding one image at a time from Figs. 8(d)–8(f), we performed the same analysis, as shown in Figs. 12(c) and 12(d), respectively.
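The parallelity and orthogonality checks reduce to measuring angles between reconstructed line directions; a minimal sketch with hypothetical direction vectors:

```python
import numpy as np

def angle_deg(d1, d2):
    """Unsigned angle in degrees between two 3D line directions."""
    c = abs(d1 @ d2) / (np.linalg.norm(d1) * np.linalg.norm(d2))
    return np.degrees(np.arccos(np.clip(c, 0.0, 1.0)))

# hypothetical directions of reconstructed checkerboard lines
row1 = np.array([1.0,  0.02, 0.00])   # two nominally parallel rows
row2 = np.array([1.0, -0.01, 0.00])
col  = np.array([0.0,  1.00, 0.03])   # a nominally orthogonal column
parallelity_err   = angle_deg(row1, row2)             # ideally 0 degrees
orthogonality_err = abs(angle_deg(row1, col) - 90.0)  # ideally 0 degrees
print(parallelity_err < 2.0, orthogonality_err < 2.0)   # True True
```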

 

Fig. 12. Comparison of the results of the proposed algorithms and Refs. [12,17,20,23]: the absolute errors of (a), (c) parallelity and (b), (d) orthogonality are shown as functions of the number of feature-point rows [(a), (b)] and of the number of images [(c), (d)].


5. Conclusions

In this study, we proposed novel algorithms to calibrate paracatadioptric cameras based on sphere images and the pole-polar relationship. Under the unit viewing sphere model, a sphere in space is projected onto a pair of antipodal circles, whose images have two pairs of conjugate complex intersection points, with one point lying at infinity. This allows the symmetry axis to be determined. Another point at infinity can be determined using the property of the antipodal points, so the line at infinity is obtained by joining the two points at infinity. One point at infinity, together with the point at infinity in the direction of its polar line with respect to a circle, forms a pair of points at infinity in orthogonal directions. Based on affine invariance under projective transformation, the vanishing points, the image of the symmetry axis, and the vanishing line can be determined. Finally, the calibration of a paracatadioptric camera was performed based on the pole-polar relationship, the three mutually orthogonal vanishing points, and the IAC. We evaluated the feasibility and effectiveness of the proposed algorithms through several simulations and real-world tests.

Reference [16] used a line for calibration, and no information about the line's length, position, or other properties is needed. Methods 1, 2, and 3 and Refs. [19,20] all used a sphere as the calibration object without any measurement information, using only the sphere image contour, and achieved high accuracy. Methods 1 and 2 performed calibration using the pole-polar relationship with respect to the IAC. Method 3, building on Method 2, requires only two images to perform calibration using the three mutually orthogonal vanishing points; thus, noise has a smaller impact on the calibration. References [16] and [20] completed calibration using the images of the circular points, while in Ref. [19] the calibration was completed using orthogonal vanishing points. However, two sphere images cannot be completely extracted when the spheres occlude each other, which affects the calibration accuracy. In these three methods, three images were used to complete the calibration.

Appendix A

In Fig. 2, ${{\boldsymbol C}_ + }$ and ${{\boldsymbol C}_ - }$ intersect at two pairs of conjugate complex points, ${{\boldsymbol a}_i}(i = 1,2,3,4)$, which are the images of points ${{\boldsymbol A}_i}(i = 1,2,3,4)$. By solving the simultaneous equations for conics ${{\boldsymbol C}_ + }$ and ${{\boldsymbol C}_ - }$, we have

$$\left\{ {\begin{array}{c} {\left[ {\begin{array}{ccc} u&v&1 \end{array}} \right]{{\boldsymbol C}_ + }{{\left[ {\begin{array}{ccc} u&v&1 \end{array}} \right]}^\textrm{T}} = 0}\\ {\left[ {\begin{array}{ccc} u&v&1 \end{array}} \right]{{\boldsymbol C}_ - }{{\left[ {\begin{array}{ccc} u&v&1 \end{array}} \right]}^\textrm{T}} = 0} \end{array}} \right., $$
where ${\left[ {\begin{array}{ccc} u&v&1 \end{array}} \right]^\textrm{T}}$ denotes the homogeneous coordinates of a pixel on the image plane. ${{\boldsymbol l}_\infty }$, which passes through ${{\boldsymbol a}_1}$ and ${{\boldsymbol a}_2}$, intersects the line ${{\boldsymbol l}_{34}}$ passing through points ${{\boldsymbol a}_3}$ and ${{\boldsymbol a}_4}$ at the vanishing point ${{\boldsymbol v}_1}$:
$${{\boldsymbol v}_1} = ({{\boldsymbol a}_1} \times {{\boldsymbol a}_2}) \times ({{\boldsymbol a}_3} \times {{\boldsymbol a}_4}), $$
where $\times$ denotes the vector product, and ${{\boldsymbol l}_{34}}$ represents the image of line ${{\boldsymbol L}_{34}}$. The line ${{\boldsymbol l}_{13}}$ connecting points ${{\boldsymbol a}_1}$ and ${{\boldsymbol a}_3}$ intersects the line ${{\boldsymbol l}_{24}}$ connecting points ${{\boldsymbol a}_2}$ and ${{\boldsymbol a}_4}$ at point ${{\boldsymbol v}_b}$:
$${{\boldsymbol v}_b} = ({{\boldsymbol a}_1} \times {{\boldsymbol a}_3}) \times ({{\boldsymbol a}_2} \times {{\boldsymbol a}_4}), $$
where the images of ${{\boldsymbol L}_{13}}$, ${{\boldsymbol L}_{24}}$, and ${{\boldsymbol V}_b}$ are denoted as ${{\boldsymbol l}_{13}}$, ${{\boldsymbol l}_{24}}$, and ${{\boldsymbol v}_b}$, respectively. The line ${{\boldsymbol l}_{14}}$ connecting points ${{\boldsymbol a}_1}$ and ${{\boldsymbol a}_4}$ intersects the line ${{\boldsymbol l}_{23}}$ connecting points ${{\boldsymbol a}_2}$ and ${{\boldsymbol a}_3}$ at point ${{\boldsymbol v}_c}$:
$${{\boldsymbol v}_c} = ({{\boldsymbol a}_1} \times {{\boldsymbol a}_4}) \times ({{\boldsymbol a}_2} \times {{\boldsymbol a}_3}), $$
where the images of ${{\boldsymbol L}_{14}}$, ${{\boldsymbol L}_{23}}$, and ${{\boldsymbol V}_c}$ are denoted as ${{\boldsymbol l}_{14}}$, ${{\boldsymbol l}_{23}}$, and ${{\boldsymbol v}_c}$, respectively. The image, ${{\boldsymbol l}_s}$, of the symmetry axis, ${{\boldsymbol L}_s}$, can be determined by simultaneously using Eqs. (11) and (12):
$${{\boldsymbol l}_s} = {{\boldsymbol v}_b} \times {{\boldsymbol v}_c}. $$
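Eqs. (10)–(13) amount to repeated cross products of homogeneous coordinates. A minimal numerical sketch, using hypothetical real points as stand-ins for the generally conjugate complex intersection points:

```python
import numpy as np

# hypothetical real homogeneous points standing in for the (generally
# conjugate complex) intersection points a1..a4 of C+ and C-
a1 = np.array([0.0,  2.0, 1.0])
a2 = np.array([3.0,  1.0, 1.0])
a3 = np.array([1.0, -1.0, 1.0])
a4 = np.array([4.0,  0.0, 1.0])

v1 = np.cross(np.cross(a1, a2), np.cross(a3, a4))   # cf. Eq. (10)
vb = np.cross(np.cross(a1, a3), np.cross(a2, a4))   # cf. Eq. (11)
vc = np.cross(np.cross(a1, a4), np.cross(a2, a3))   # cf. Eq. (12)
ls = np.cross(vb, vc)                               # cf. Eq. (13)
# the line l_s is incident with both v_b and v_c by construction
print(abs(ls @ vb) < 1e-9, abs(ls @ vc) < 1e-9)     # True True
```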
According to Result 8.16 in Ref. [24], the vector normal to the plane determined by the back-projection of ${{\boldsymbol l}_s}$ on the unit viewing sphere and the camera optical center, ${{\boldsymbol O}_c}$, can be obtained as follows:
$${\boldsymbol n}\textrm{ = }{{\boldsymbol K}^\textrm{T}}{{\boldsymbol l}_s}. $$
According to Result 8.15 in Ref. [24], the ray from the back-projection of ${{\boldsymbol v}_1}$, on the unit viewing sphere to ${{\boldsymbol O}_c}$, can be determined:
$${{\boldsymbol O}_c}{{\boldsymbol V}_{1\infty }}{\boldsymbol = }{{\boldsymbol K}^{ - 1}}{{\boldsymbol v}_1}. $$
As per Proposition 2, ${{\boldsymbol v}_1}$ lies on ${{\boldsymbol l}_\infty }$, and the normal direction, ${\boldsymbol n}$, is consistent with the direction of the ray ${{\boldsymbol O}_c}{{\boldsymbol V}_{1\infty }}$. Therefore, the following equation can be formulated:
$${\lambda _2}{{\boldsymbol K}^\textrm{T}}{{\boldsymbol l}_s} = {{\boldsymbol K}^{ - 1}}{{\boldsymbol v}_1}, $$
where ${\lambda _2}$ is a nonzero scale factor. Let us multiply both sides of Eq. (16) by ${{\boldsymbol K}^{ - \textrm{T}}}$. Thus, we obtain
$${\lambda _2}{{\boldsymbol l}_s} = {{\boldsymbol K}^{ - \textrm{T}}}{{\boldsymbol K}^{ - 1}}{{\boldsymbol v}_1} = {\boldsymbol \omega }{{\boldsymbol v}_1}. $$
Equations (17) and (5) are equivalent up to a nonzero scale factor. Further, on the basis of the pole-polar relationship, ${{\boldsymbol l}_s}$ is the polar line of the vanishing point ${{\boldsymbol v}_1}$ with respect to the IAC. □

Appendix B

In Fig. 3, let us take a point, ${{\boldsymbol a}_ + }$, on sphere image ${{\boldsymbol C}_ + }$; the antipodal point, ${{\boldsymbol a}_ - }$, can be obtained from Proposition 1, and the images of points ${{\boldsymbol A}_ + }$ and ${{\boldsymbol A}_ - }$ are ${{\boldsymbol a}_ + }$ and ${{\boldsymbol a}_ - }$, respectively. The tangent lines at points ${{\boldsymbol a}_ + }$ and ${{\boldsymbol a}_ - }$ to ${{\boldsymbol C}_ + }$ and ${{\boldsymbol C}_ - }$ are ${{\boldsymbol l}_ + }$ and ${{\boldsymbol l}_ - }$, respectively, which meet the following conditions:

$$\left\{ {\begin{array}{c} {{{\boldsymbol l}_ + } = {{\boldsymbol C}_ + }{{\boldsymbol a}_\textrm{ + }}}\\ {{{\boldsymbol l}_ - } = {{\boldsymbol C}_ - }{{\boldsymbol a}_ - }} \end{array}} \right., $$
where the images of tangent lines ${{\boldsymbol L}_ + }$ and ${{\boldsymbol L}_ - }$ are ${{\boldsymbol l}_ + }$ and ${{\boldsymbol l}_ - }$, respectively. Further, lines ${{\boldsymbol l}_ + }$ and ${{\boldsymbol l}_ - }$ intersect at vanishing point ${{\boldsymbol v}_2}$:
$${{\boldsymbol v}_2} = {{\boldsymbol l}_ + } \times {{\boldsymbol l}_ - }, $$
where the image of point ${{\boldsymbol V}_{2\infty }}$ at infinity is ${{\boldsymbol v}_2}$. ${{\boldsymbol l}_\infty }$ can be determined by connecting points ${{\boldsymbol v}_1}$ and ${{\boldsymbol v}_2}$:
$${{\boldsymbol l}_\infty } = {{\boldsymbol v}_1} \times {{\boldsymbol v}_2}. $$
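Eqs. (18)–(19) can be sketched numerically; purely for illustration, a unit circle below stands in for both sphere images, so the two tangents are taken with respect to the same conic:

```python
import numpy as np

# a unit circle stands in for a sphere image: C = diag(1, 1, -1)
C_plus  = np.diag([1.0, 1.0, -1.0])
a_plus  = np.array([ 1.0, 0.0, 1.0])   # point on the conic
a_minus = np.array([-1.0, 0.0, 1.0])   # its antipodal counterpart
l_plus  = C_plus @ a_plus              # tangent line at a+, cf. Eq. (18)
l_minus = C_plus @ a_minus             # tangent line at a-
v2 = np.cross(l_plus, l_minus)         # cf. Eq. (19)
print(v2)   # [0. 2. 0.] — an ideal point: the two tangents are parallel
```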
Let the polars of vanishing points ${{\boldsymbol v}_1}$ and ${{\boldsymbol v}_2}$ with respect to the sphere image ${{\boldsymbol C}_ + }$ be ${{\boldsymbol h}_1}$ and ${{\boldsymbol h}_2}$, respectively. According to the pole-polar relationship, we have
$$\left\{ {\begin{array}{c} {{{\boldsymbol h}_1} = {{\boldsymbol C}_ + }{{\boldsymbol v}_1}}\\ {{{\boldsymbol h}_2} = {{\boldsymbol C}_ + }{{\boldsymbol v}_2}} \end{array}} \right., $$
where the images of polar lines ${{\boldsymbol H}_1}$ and ${{\boldsymbol H}_2}$ are ${{\boldsymbol h}_1}$ and ${{\boldsymbol h}_2}$, respectively. Moreover, polar lines ${{\boldsymbol h}_1}$ and ${{\boldsymbol h}_2}$ intersect at point ${{\boldsymbol o}_ + }$, which is the image of ${{\boldsymbol O}_ + }$:
$${{\boldsymbol o}_\textrm{ + }}\textrm{ = }{{\boldsymbol h}_1} \times {{\boldsymbol h}_2}. $$
The image, ${{\boldsymbol o}_ - }$, of ${{\boldsymbol O}_ - }$ can be obtained in a similar manner. Points ${{\boldsymbol o}_ + },{\boldsymbol p},{{\boldsymbol o}_ - },{{\boldsymbol v}_3}$ satisfy the harmonic conjugate relationship. Thus, their cross ratio is as follows:
$$({{\boldsymbol o}_ - }{\boldsymbol p},{{\boldsymbol o}_ + }{{\boldsymbol v}_3}) = - 1, $$
where the image of point ${\boldsymbol O}$ is the principal point ${\boldsymbol p} = {\left[ {\begin{array}{cc} {\begin{array}{cc} {{u_0}}&{{v_0}} \end{array}}&1 \end{array}} \right]^\textrm{T}}$ on plane $\pi$.
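The harmonic conjugate condition of Eq. (23) can be checked numerically; the following sketch uses hypothetical collinear points on the x-axis, with the determinant-based cross ratio specialized to that line:

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross ratio (a, b; c, d) of four collinear homogeneous 2D points,
    specialized to points on the x-axis via (x, w) determinants."""
    det = lambda x, y: x[0]*y[2] - y[0]*x[2]
    return (det(a, c) * det(b, d)) / (det(a, d) * det(b, c))

o_minus = np.array([0.0, 0.0, 1.0])   # image of O- (hypothetical coordinates)
p       = np.array([2.0, 0.0, 1.0])   # principal point
o_plus  = np.array([1.0, 0.0, 1.0])   # image of O+, here the midpoint of o- and p
v3      = np.array([1.0, 0.0, 0.0])   # vanishing point at infinity on the line
print(cross_ratio(o_minus, p, o_plus, v3))   # -1.0, the harmonic condition
```

With ${\boldsymbol v}_3$ at infinity, harmonicity means ${\boldsymbol o}_+$ is the midpoint of ${\boldsymbol o}_-$ and ${\boldsymbol p}$, which is how the example coordinates were chosen.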

On the basis of Proposition 2, and given that ${{\boldsymbol v}_1}$ and ${{\boldsymbol v}_2}$ lie on ${{\boldsymbol l}_\infty }$, the two pairs of points, ${{\boldsymbol v}_1}$ and ${{\boldsymbol v}_3}$, ${{\boldsymbol v}_2}$ and ${{\boldsymbol v}_3}$, are two pairs of orthogonal vanishing points, as per Result 8.22 in Ref. [24], and meet the following conditions:

$$\left\{ {\begin{array}{c} {{{\boldsymbol v}_3}^\textrm{T}{\boldsymbol \omega }{{\boldsymbol v}_1} = 0}\\ {{{\boldsymbol v}_3}^\textrm{T}{\boldsymbol \omega }{{\boldsymbol v}_2} = 0} \end{array}} \right.. $$
By simultaneously solving Eqs. (20) and (24), we obtain
$${{\boldsymbol v}_3} = {\boldsymbol \omega }{{\boldsymbol v}_1} \times {\boldsymbol \omega }{{\boldsymbol v}_2} = {{\boldsymbol \omega }^ \ast }({{\boldsymbol v}_1} \times {{\boldsymbol v}_2}) = {{\boldsymbol \omega }^ \ast }{{\boldsymbol l}_\infty }, $$
where ${{\boldsymbol \omega }^ \ast }\textrm{ = }{\boldsymbol K}{{\boldsymbol K}^\textrm{T}}$. Let us left multiply both sides of Eq. (25) by ${\boldsymbol \omega }$. We then get
$${{\boldsymbol l}_\infty } = {\boldsymbol \omega }{{\boldsymbol v}_3}. $$
On the basis of Eq. (26) and the pole-polar relationship, ${{\boldsymbol v}_3}$ and ${{\boldsymbol l}_\infty }$ satisfy the pole-polar relationship with respect to the IAC. □

Appendix C

As shown in Fig. 3, ${{\boldsymbol a}_ + },{{\boldsymbol a^{\prime}}_ + },{{\boldsymbol l}_ + },{{\boldsymbol l^{\prime}}_ + }$ are the images of ${{\boldsymbol A}_ + },{{\boldsymbol A^{\prime}}_ + },{{\boldsymbol L}_\textrm{ + }},{{\boldsymbol L^{\prime}}_ + }$, respectively. By affine invariance under projective transformation, the tangent lines ${{\boldsymbol l}_ + }$ and ${{\boldsymbol l^{\prime}}_\textrm{ + }}$ at points ${{\boldsymbol a}_ + }$ and ${{\boldsymbol a^{\prime}}_ + }$ are parallel to each other and orthogonal to polar line ${{\boldsymbol h}_2}$. Thus, a pair of orthogonal vanishing points, ${{\boldsymbol v}_2}$ and ${{\boldsymbol v}_4}$, can be obtained, in which vanishing point ${{\boldsymbol v}_4}$ is the intersection of polar line ${{\boldsymbol h}_2}$ and vanishing line ${{\boldsymbol l}_\infty }$. As per Result 2.2 in Ref. [24], we have

$${{\boldsymbol v}_4} = {{\boldsymbol l}_\infty } \times {{\boldsymbol h}_2}. $$
According to Proposition 2, vanishing point ${{\boldsymbol v}_2}$ lies on vanishing line ${{\boldsymbol l}_\infty }$, while vanishing point ${{\boldsymbol v}_3}$ lies on ${{\boldsymbol l}_s}$. Thus, one pair of orthogonal vanishing points, ${{\boldsymbol v}_2}$ and ${{\boldsymbol v}_3}$, can be determined. Similarly, another pair of orthogonal vanishing points, ${{\boldsymbol v}_3}$ and ${{\boldsymbol v}_4}$, can also be obtained. To summarize, vanishing points ${{\boldsymbol v}_2},{{\boldsymbol v}_3},{{\boldsymbol v}_4}$ are three vanishing points orthogonal to each other, as per Result 8.22 in Ref. [24], which corresponds to Eq. (7).
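Appendix C ends with the constraints of Eq. (24) linking three mutually orthogonal vanishing points to the IAC. The following sketch, with a hypothetical intrinsic matrix, verifies these constraints and shows how ${\boldsymbol K}$ can be recovered from ${\boldsymbol \omega }$ by a Cholesky factorization; it illustrates the underlying relation only, not the paper's full calibration pipeline:

```python
import numpy as np

# hypothetical intrinsic matrix K (zero skew)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 780.0, 240.0],
              [  0.0,   0.0,   1.0]])
omega = np.linalg.inv(K).T @ np.linalg.inv(K)   # IAC: omega = K^{-T} K^{-1}

# three mutually orthogonal space directions (columns of a rotation matrix)
cz, sz, cx, sx = np.cos(0.5), np.sin(0.5), np.cos(0.4), np.sin(0.4)
R = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]]) @ \
    np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
v = (K @ R).T   # rows are the three vanishing points v_i = K d_i

# cf. Eq. (24): v_i^T omega v_j = 0 for orthogonal directions (Result 8.22 in [24])
print(abs(v[0] @ omega @ v[1]) < 1e-9)   # True

# recover K: omega = K^{-T} K^{-1}, and K^{-T} is lower triangular,
# so the Cholesky factor of omega is exactly K^{-T}
L = np.linalg.cholesky(omega)
K_rec = np.linalg.inv(L.T)
K_rec /= K_rec[2, 2]
print(np.allclose(K_rec, K))             # True
```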

Funding

National Natural Science Foundation of China (11861075, 61663048).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. Z. H. Zhang, S. J. Huang, S. S. Meng, F. Gao, and X. Q. Jiang, “A simple, flexible and automatic 3D calibration method for a phase calculation-based fringe projection imaging system,” Opt. Express 21(10), 12218–12227 (2013). [CrossRef]  

2. S. D. Ma, R. H. Zhu, C. G. Quan, L. Chen, C. J. Tay, and B. Li, “Flexible structured-light-based three-dimensional profile reconstruction method considering lens projection-imaging distortion,” Appl. Opt. 51(13), 2419–2428 (2012). [CrossRef]  

3. G. Hu and W. E. Dixon, “Lyapunov-based adaptive visual servo tracking control using central catadioptric camera,” in Proceedings of IEEE International Conference on Control Applications (IEEE, 2007), pp. 1486–1491.

4. P. Sturm, “A method for 3D reconstruction of piecewise planar objects from single panoramic images,” in Proceedings of IEEE Workshop on Omnidirectional Vision (IEEE, 2000), pp. 119–126.

5. S. Peleg, M. Ben-Ezra, and Y. Pritch, “Omnistereo: panoramic stereo imaging,” IEEE Trans. Pattern Anal. Machine Intell. 23(3), 279–290 (2001). [CrossRef]  

6. E. Hecht and A. Zajac, Optics, 3rd edn. (Addison-Wesley Press, Reading, MA, 1997).

7. S. Baker and S. Nayar, “A theory of single-viewpoint catadioptric image formation,” Int. J. Comput. Vision. 35(2), 175–196 (1999). [CrossRef]  

8. C. Geyer and K. Daniilidis, “Catadioptric projective geometry,” Int. J. Comput. Vision. 45(3), 223–243 (2001). [CrossRef]  

9. S. B. Kang, “Catadioptric self-calibration,” in Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition (IEEE, 2000), pp. 201–207.

10. Z. Zhang, “Camera calibration with one-dimensional objects,” IEEE Trans. Pattern Anal. Machine Intell. 26(7), 892–899 (2004). [CrossRef]  

11. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Machine Intell. 22(11), 1330–1334 (2000). [CrossRef]  

12. X. Deng, F. Wu, and Y. Wu, “An easy calibration method for central catadioptric cameras,” Acta Automatica Sinica 33(8), 801–808 (2007). [CrossRef]  

13. J. Barreto and H. Araujo, “Geometric properties of central catadioptric line images and their application in calibration,” IEEE Trans. Pattern Anal. Machine Intell. 27(8), 1327–1333 (2005). [CrossRef]  

14. F. Wu, F. Duan, Z. Hu, and Y. Wu, “A new linear algorithm for calibrating central catadioptric cameras,” Pattern Recogn. 41(10), 3166–3172 (2008). [CrossRef]  

15. F. Duan, F. Wu, M. Zhou, X. Deng, and Y. Tian, “Calibrating effective focal length for central catadioptric cameras using one space line,” Pattern Recogn. Lett. 33(5), 646–653 (2012). [CrossRef]  

16. Y. Zhao, Y. Li, and B. Zheng, “Calibrating a paracatadioptric camera by the property of the polar of a point at infinity with respect to a circle,” Appl. Opt. 57(15), 4345–4352 (2018). [CrossRef]  

17. X. Ying and Z. Hu, “Catadioptric camera calibration using geometric invariants,” IEEE Trans. Pattern Anal. Machine Intell. 26(10), 1260–1271 (2004). [CrossRef]  

18. X. Ying and H. Zha, “Geometric interpretations of the relation between the image of the absolute conic and sphere images,” IEEE Trans. Pattern Anal. Machine Intell. 28(12), 2031–2036 (2006). [CrossRef]  

19. Y. Zhao and Y. Wang, “Intrinsic parameter determination of a paracatadioptric camera by the intersection of two sphere projections,” J. Opt. Soc. Am. A 32(11), 2201–2209 (2015). [CrossRef]  

20. Y. Li and Y. Zhao, “Calibration of a paracatadioptric camera by projection imaging of a single sphere,” Appl. Opt. 56(8), 2230 (2017). [CrossRef]  

21. H. Zhang, K. Wong, and G. Zhang, “Camera calibration from images of spheres,” IEEE Trans. Pattern Anal. Machine Intell. 29(3), 499–502 (2007). [CrossRef]  

22. X. Ying and H. Zha, “Linear approaches to camera calibration from sphere images or active intrinsic calibration using vanishing points,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2005), pp. 596–603.

23. H. Duan and Y. Wu, “A calibration method for paracatadioptric camera from sphere images,” Pattern Recogn. Lett. 33(6), 677–684 (2012). [CrossRef]  

24. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd edn. (Cambridge University Press, Cambridge, 2003).

25. Y. Wu, H. Zhu, Z. Hu, and F. Wu, “Camera calibration from the quasi-affine invariance of two parallel circles,” in Proceedings of the 8th European Conference on Computer Vision (Springer, 2004), pp. 190–202.

26. C. Colombo, B. Del, and F. Pernici, “Metric 3D reconstruction and texture acquisition of surfaces of revolution from a single uncalibrated view,” IEEE Trans. Pattern Anal. Machine Intell. 27(1), 99–114 (2005). [CrossRef]  

27. J. Canny, “A computational approach to edge detection,” IEEE Trans. Pattern Anal. Machine Intell. 8(6), 679–698 (1986). [CrossRef]  

28. J. Yu and F. Da, “Bi-tangent line based approach for multi-camera calibration using spheres,” J. Opt. Soc. Am. A 35(2), 221–229 (2018). [CrossRef]  

29. A. Fitzgibbon, M. Pilu, and R. Fisher, “Direct least square fitting of ellipses,” IEEE Trans. Pattern Anal. Machine Intell. 21(5), 476–480 (1999). [CrossRef]  

30. B. Zhang and Y. Li, “A method for calibrating the central catadioptric camera via homographic matrix,” in Proceedings of IEEE International Conference on Information and Automation (IEEE, 2008), pp. 972–977.

31. C. G. Harris and M. J. Stephens, “A combined corner and edge detector,” in Proceedings of the 4th Alvey Vision Conference (1988), pp. 147–151.


Hu, Z.

F. Wu, F. Duan, Z. Hu, and Y. Wu, “A new linear algorithm for calibrating central catadioptric cameras,” Pattern Recogn. 41(10), 3166–3172 (2008).
[Crossref]

X. Ying and Z. Hu, “Catadioptric camera calibration using geometric invariants,” IEEE Trans. Pattern Anal. Machine Intell. 26(10), 1260–1271 (2004).
[Crossref]

Y. Wu, H. Zhu, Z. Hu, and F. Wu, “Camera calibration from the quasi-affine invariance of two parallel circles,” in Proceedings of the 8th European Conference on Computer Vision (Springer, 2004), pp. 190–202.

Huang, S. J.

Jiang, X. Q.

Kang, S. B.

S. B. Kang, “Catadioptric self-calibration,” in Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition (IEEE, 2000), pp. 201–207.

Li, B.

Li, Y.

Ma, S. D.

Meng, S. S.

Nayar, S.

S. Baker and S. Nayar, “A theory of single-viewpoint catadioptric image formation,” Int. J. Comput. Vision. 35(2), 175–196 (1999).
[Crossref]

Peleg, S.

S. Peleg, M. Ben-Ezra, and Y. Pritch, “Omnistereo: panoramic stereo imaging,” IEEE Trans. Pattern Anal. Machine Intell. 23(3), 279–290 (2001).
[Crossref]

Pernici, F.

C. Colombo, B. Del, and F. Pernici, “Metric 3D reconstruction and texture acquisition of surfaces of revolution from a single uncalibrated view,” IEEE Trans. Pattern Anal. Machine Intell. 27(1), 99–114 (2005).
[Crossref]

Pilu, M.

A. Fitzgibbon, M. Pilu, and R. Fisher, “Direct least square fitting of ellipses,” IEEE Trans. Pattern Anal. Machine Intell. 21(5), 476–480 (1999).
[Crossref]

Pritch, Y.

S. Peleg, M. Ben-Ezra, and Y. Pritch, “Omnistereo: panoramic stereo imaging,” IEEE Trans. Pattern Anal. Machine Intell. 23(3), 279–290 (2001).
[Crossref]

Quan, C. G.

Stephens, M. J.

C. G. Harris and M. J. Stephens, “A combined corner and edge detector,” in Proceedings of the 4th Alvey Vision Conference (1988), pp. 147–151.

Sturm, P.

P. Sturm, “A method for 3D reconstruction of piecewise planar objects from single panoramic images,” in Proceedings of IEEE Workshop on Omnidirectional Vision (IEEE, 2000), pp. 119–126.

Tay, C. J.

Tian, Y.

F. Duan, F. Wu, M. Zhou, X. Deng, and Y. Tian, “Calibrating effective focal length for central catadioptric cameras using one space line,” Pattern Recogn. Lett. 33(5), 646–653 (2012).
[Crossref]

Wang, Y.

Wong, K.

H. Zhang, K. Wong, and G. Zhang, “Camera calibration from images of spheres,” IEEE Trans. Pattern Anal. Machine Intell. 29(3), 499–502 (2007).
[Crossref]

Wu, F.

F. Duan, F. Wu, M. Zhou, X. Deng, and Y. Tian, “Calibrating effective focal length for central catadioptric cameras using one space line,” Pattern Recogn. Lett. 33(5), 646–653 (2012).
[Crossref]

F. Wu, F. Duan, Z. Hu, and Y. Wu, “A new linear algorithm for calibrating central catadioptric cameras,” Pattern Recogn. 41(10), 3166–3172 (2008).
[Crossref]

X. Deng, F. Wu, and Y. Wu, “An easy calibration method for central catadioptric cameras,” Acta Automatica Sinica 33(8), 801–808 (2007).
[Crossref]

Y. Wu, H. Zhu, Z. Hu, and F. Wu, “Camera calibration from the quasi-affine invariance of two parallel circles,” in Proceedings of the 8th European Conference on Computer Vision (Springer, 2004), pp. 190–202.

Wu, Y.

H. Duan and Y. Wu, “A calibration method for paracatadioptric camera from sphere images,” Pattern Recogn. Lett. 33(6), 677–684 (2012).
[Crossref]

F. Wu, F. Duan, Z. Hu, and Y. Wu, “A new linear algorithm for calibrating central catadioptric cameras,” Pattern Recogn. 41(10), 3166–3172 (2008).
[Crossref]

X. Deng, F. Wu, and Y. Wu, “An easy calibration method for central catadioptric cameras,” Acta Automatica Sinica 33(8), 801–808 (2007).
[Crossref]

Y. Wu, H. Zhu, Z. Hu, and F. Wu, “Camera calibration from the quasi-affine invariance of two parallel circles,” in Proceedings of the 8th European Conference on Computer Vision (Springer, 2004), pp. 190–202.

Ying, X.

X. Ying and H. Zha, “Geometric interpretations of the relation between the image of the absolute conic and sphere images,” IEEE Trans. Pattern Anal. Machine Intell. 28(12), 2031–2036 (2006).
[Crossref]

X. Ying and Z. Hu, “Catadioptric camera calibration using geometric invariants,” IEEE Trans. Pattern Anal. Machine Intell. 26(10), 1260–1271 (2004).
[Crossref]

X. Ying and H. Zha, “Linear approaches to camera calibration from sphere images or active intrinsic calibration using vanishing points,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2005), pp. 596–603.

Yu, J.

Zajac, A.

E. Hecht and A. Zajac, Optics, 3rd edn. (Addison-Wesley Press, Reading, MA, 1997).

Zha, H.

X. Ying and H. Zha, “Geometric interpretations of the relation between the image of the absolute conic and sphere images,” IEEE Trans. Pattern Anal. Machine Intell. 28(12), 2031–2036 (2006).
[Crossref]

X. Ying and H. Zha, “Linear approaches to camera calibration from sphere images or active intrinsic calibration using vanishing points,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2005), pp. 596–603.

Zhang, B.

B. Zhang and Y. Li, “A method for calibrating the central catadioptric camera via homographic matrix,” in Proceedings of IEEE International Conference on Information and Automation (IEEE, 2008), pp. 972–977.

Zhang, G.

H. Zhang, K. Wong, and G. Zhang, “Camera calibration from images of spheres,” IEEE Trans. Pattern Anal. Machine Intell. 29(3), 499–502 (2007).
[Crossref]

Zhang, H.

H. Zhang, K. Wong, and G. Zhang, “Camera calibration from images of spheres,” IEEE Trans. Pattern Anal. Machine Intell. 29(3), 499–502 (2007).
[Crossref]

Zhang, Z.

Z. Zhang, “Camera calibration with one-dimensional objects,” IEEE Trans. Pattern Anal. Machine Intell. 26(7), 892–899 (2004).
[Crossref]

Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Machine Intell. 22(11), 1330–1334 (2000).
[Crossref]

Zhang, Z. H.

Zhao, Y.

Zheng, B.

Zhou, M.

F. Duan, F. Wu, M. Zhou, X. Deng, and Y. Tian, “Calibrating effective focal length for central catadioptric cameras using one space line,” Pattern Recogn. Lett. 33(5), 646–653 (2012).
[Crossref]

Zhu, H.

Y. Wu, H. Zhu, Z. Hu, and F. Wu, “Camera calibration from the quasi-affine invariance of two parallel circles,” in Proceedings of the 8th European Conference on Computer Vision (Springer, 2004), pp. 190–202.

Zhu, R. H.

Zisserman, A.

R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd edn. (Cambridge University Press, Cambridge, 2003).

Acta Automatica Sinica (1)

X. Deng, F. Wu, and Y. Wu, “An easy calibration method for central catadioptric cameras,” Acta Automatica Sinica 33(8), 801–808 (2007).
[Crossref]

Appl. Opt. (3)

IEEE Trans. Pattern Anal. Machine Intell. (10)

H. Zhang, K. Wong, and G. Zhang, “Camera calibration from images of spheres,” IEEE Trans. Pattern Anal. Machine Intell. 29(3), 499–502 (2007).
[Crossref]

C. Colombo, B. Del, and F. Pernici, “Metric 3D reconstruction and texture acquisition of surfaces of revolution from a single uncalibrated view,” IEEE Trans. Pattern Anal. Machine Intell. 27(1), 99–114 (2005).
[Crossref]

J. Canny, “A computational approach to edge detection,” IEEE Trans. Pattern Anal. Machine Intell. 8(6), 679–698 (1986).
[Crossref]

A. Fitzgibbon, M. Pilu, and R. Fisher, “Direct least square fitting of ellipses,” IEEE Trans. Pattern Anal. Machine Intell. 21(5), 476–480 (1999).
[Crossref]

S. Peleg, M. Ben-Ezra, and Y. Pritch, “Omnistereo: panoramic stereo imaging,” IEEE Trans. Pattern Anal. Machine Intell. 23(3), 279–290 (2001).
[Crossref]

Z. Zhang, “Camera calibration with one-dimensional objects,” IEEE Trans. Pattern Anal. Machine Intell. 26(7), 892–899 (2004).
[Crossref]

Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Machine Intell. 22(11), 1330–1334 (2000).
[Crossref]

X. Ying and Z. Hu, “Catadioptric camera calibration using geometric invariants,” IEEE Trans. Pattern Anal. Machine Intell. 26(10), 1260–1271 (2004).
[Crossref]

X. Ying and H. Zha, “Geometric interpretations of the relation between the image of the absolute conic and sphere images,” IEEE Trans. Pattern Anal. Machine Intell. 28(12), 2031–2036 (2006).
[Crossref]

J. Barreto and H. Araujo, “Geometric properties of central catadioptric line images and their application in calibration,” IEEE Trans. Pattern Anal. Machine Intell. 27(8), 1327–1333 (2005).
[Crossref]

Int. J. Comput. Vision. (2)

S. Baker and S. Nayar, “A theory of single-viewpoint catadioptric image formation,” Int. J. Comput. Vision. 35(2), 175–196 (1999).
[Crossref]

C. Geyer and K. Daniilidis, “Catadioptric projective geometry,” Int. J. Comput. Vision. 45(3), 223–243 (2001).
[Crossref]

J. Opt. Soc. Am. A (2)

Opt. Express (1)

Pattern Recogn. (1)

F. Wu, F. Duan, Z. Hu, and Y. Wu, “A new linear algorithm for calibrating central catadioptric cameras,” Pattern Recogn. 41(10), 3166–3172 (2008).
[Crossref]

Pattern Recogn. Lett. (2)

F. Duan, F. Wu, M. Zhou, X. Deng, and Y. Tian, “Calibrating effective focal length for central catadioptric cameras using one space line,” Pattern Recogn. Lett. 33(5), 646–653 (2012).
[Crossref]

H. Duan and Y. Wu, “A calibration method for paracatadioptric camera from sphere images,” Pattern Recogn. Lett. 33(6), 677–684 (2012).
[Crossref]

Other (9)

R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd edn. (Cambridge University Press, Cambridge, 2003).

Y. Wu, H. Zhu, Z. Hu, and F. Wu, “Camera calibration from the quasi-affine invariance of two parallel circles,” in Proceedings of the 8th European Conference on Computer Vision (Springer, 2004), pp. 190–202.

X. Ying and H. Zha, “Linear approaches to camera calibration from sphere images or active intrinsic calibration using vanishing points,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2005), pp. 596–603.

B. Zhang and Y. Li, “A method for calibrating the central catadioptric camera via homographic matrix,” in Proceedings of IEEE International Conference on Information and Automation (IEEE, 2008), pp. 972–977.

C. G. Harris and M. J. Stephens, “A combined corner and edge detector,” in Proceedings of the 4th Alvey Vision Conference (1988), pp. 147–151.

S. B. Kang, “Catadioptric self-calibration,” in Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition (IEEE, 2000), pp. 201–207.

E. Hecht and A. Zajac, Optics, 3rd edn. (Addison-Wesley Press, Reading, MA, 1997).

G. Hu and W. E. Dixon, “Lyapunov-based adaptive visual servo tracking control using central catadioptric camera,” in Proceedings of IEEE International Conference on Control Applications (IEEE, 2007), pp. 1486–1491.

P. Sturm, “A method for 3D reconstruction of piecewise planar objects from single panoramic images,” in Proceedings of IEEE Workshop on Omnidirectional Vision (IEEE, 2000), pp. 119–126.



Figures (12)

Fig. 1. Imaging model of space point ${\boldsymbol A}$ and sphere ${\boldsymbol Q}$ in the paracatadioptric camera.
Fig. 2. Relationship between the pole ${{\boldsymbol v}_1}$ and the polar ${{\boldsymbol l}_s}$ with respect to the IAC.
Fig. 3. Pole ${{\boldsymbol v}_3}$ and polar ${{\boldsymbol l}_\infty }$ with respect to the IAC, and three mutually orthogonal vanishing points ${{\boldsymbol v}_2},{{\boldsymbol v}_3},{{\boldsymbol v}_4}$.
Fig. 4. Pair of points ${{\boldsymbol V}_{2\infty }}$ and ${{\boldsymbol V}_{4\infty }}$ at infinity in orthogonal directions.
Fig. 5. Sphere images generated using the simulated camera.
Fig. 6. Comparison of results for the proposed algorithms and references [12,17,20,23]; (a)–(d) show the absolute errors of the four distortion coefficients ${k_1},{k_2},{k_3},{k_4}$, respectively.
Fig. 7. Comparison of results for the proposed algorithms and references [12,17,20,23]; (a)–(e) show the absolute errors of the five intrinsic parameters ${f_u},{f_v},s,{u_0},{v_0}$, respectively.
Fig. 8. (a)–(f) Six images of a table tennis ball taken at different positions on a checkerboard.
Fig. 9. (a)–(c) Canny edge detection for the images in Figs. 8(a)–8(c), respectively.
Fig. 10. (a)–(c) Harris corner detection for the images in Figs. 9(a)–9(c), respectively.
Fig. 11. Comparison of results for the proposed algorithms and references [12,17,20,23]; (a) and (b) show the reprojection error as the number of rows of feature points and the number of images increase, respectively.
Fig. 12. Comparison of results for the proposed algorithms and references [12,17,20,23]; (a) and (c) show the absolute error of parallelism, and (b) and (d) the absolute error of orthogonality, as the number of rows of feature points and the number of images increase, respectively.

Tables (3)

Table 1. Relationship between the value of the mirror parameter ξ and the mirror type.

Table 2. Lens distortion coefficients.

Table 3. Calibration results for Methods 1, 2, and 3 and references [12,17,20,23] (unit: pixels).
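Table 1's mapping is not reproduced here, but it can be sketched from the standard classification of central catadioptric mirrors by the mirror parameter ξ (as categorized by Baker and Nayar [7] and used in the unified sphere model [8]); the function name below is a hypothetical illustration, not from the paper.

```python
def mirror_type(xi: float) -> str:
    """Standard classification of a central catadioptric mirror by ξ."""
    if xi == 0:
        return "planar"                      # ordinary perspective camera
    if 0 < xi < 1:
        return "hyperbolic or elliptical"
    if xi == 1:
        return "parabolic"                   # the paracatadioptric case studied here
    raise ValueError("mirror parameter must lie in [0, 1]")
```

For example, `mirror_type(1.0)` returns `"parabolic"`, the case this paper calibrates.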

Equations (28)


$$\lambda_1\,\mathbf{a}_\pm=\mathbf{K}\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&1&1\end{bmatrix}\mathbf{A}_\pm,$$
$$\frac{1+\sqrt{1+\tau\,\mathbf{a}_+^T\boldsymbol{\omega}\,\mathbf{a}_+}}{\mathbf{a}_+^T\boldsymbol{\omega}\,\mathbf{a}_+}\,\mathbf{a}_+ + \frac{1+\sqrt{1+\tau\,\mathbf{a}_-^T\boldsymbol{\omega}\,\mathbf{a}_-}}{\mathbf{a}_-^T\boldsymbol{\omega}\,\mathbf{a}_-}\,\mathbf{a}_- = 2\,\mathbf{p},$$
$$\frac{1}{\mathbf{a}_+^T\boldsymbol{\omega}\,\mathbf{a}_+}\,\mathbf{a}_+ + \frac{1}{\mathbf{a}_-^T\boldsymbol{\omega}\,\mathbf{a}_-}\,\mathbf{a}_- = \mathbf{p}.$$
$$\mathbf{l}^T\boldsymbol{\omega}^{-1}\mathbf{l}_s=0,$$
$$\mathbf{l}_s=\boldsymbol{\omega}\,\mathbf{v}_1$$
$$\mathbf{l}=\boldsymbol{\omega}\,\mathbf{v}_3.$$
$$\begin{cases}\mathbf{v}_2^T\boldsymbol{\omega}\,\mathbf{v}_3=0\\\mathbf{v}_2^T\boldsymbol{\omega}\,\mathbf{v}_4=0\\\mathbf{v}_3^T\boldsymbol{\omega}\,\mathbf{v}_4=0\end{cases}.$$
$$\mathbf{K}=\begin{bmatrix}600&0.4&450\\0&550&350\\0&0&1\end{bmatrix}$$
$$F(\mathbf{K},\mathbf{k},\mathbf{R},\mathbf{t})=\sum_{i=1}^{n}\sum_{j=1}^{N_i}\left\|\mathbf{m}_{ij}-\hat{\mathbf{m}}_{ij}(\mathbf{K},\mathbf{k},\mathbf{R}_i,\mathbf{t}_i,\mathbf{M}_{ij})\right\|^2,$$
$$\begin{cases}[\,u\ \ v\ \ 1\,]\,\mathbf{C}_+\,[\,u\ \ v\ \ 1\,]^T=0\\[\,u\ \ v\ \ 1\,]\,\mathbf{C}_-\,[\,u\ \ v\ \ 1\,]^T=0\end{cases},$$
$$\mathbf{v}_1=(\mathbf{a}_1\times\mathbf{a}_2)\times(\mathbf{a}_3\times\mathbf{a}_4),$$
$$\mathbf{v}_b=(\mathbf{a}_1\times\mathbf{a}_3)\times(\mathbf{a}_2\times\mathbf{a}_4),$$
$$\mathbf{v}_c=(\mathbf{a}_1\times\mathbf{a}_4)\times(\mathbf{a}_2\times\mathbf{a}_3),$$
$$\mathbf{l}_s=\mathbf{v}_b\times\mathbf{v}_c.$$
$$\mathbf{n}=\mathbf{K}^T\mathbf{l}_s.$$
$$\overrightarrow{\mathbf{O}_c\mathbf{V}_1}=\mathbf{K}^{-1}\mathbf{v}_1.$$
$$\lambda_2\,\mathbf{K}^T\mathbf{l}_s=\mathbf{K}^{-1}\mathbf{v}_1,$$
$$\lambda_2\,\mathbf{l}_s=\mathbf{K}^{-T}\mathbf{K}^{-1}\mathbf{v}_1=\boldsymbol{\omega}\,\mathbf{v}_1.$$
$$\begin{cases}\mathbf{l}_+=\mathbf{C}_+\,\mathbf{a}_+\\\mathbf{l}_-=\mathbf{C}_-\,\mathbf{a}_-\end{cases},$$
$$\mathbf{v}_2=\mathbf{l}_+\times\mathbf{l}_-,$$
$$\mathbf{l}=\mathbf{v}_1\times\mathbf{v}_2.$$
$$\begin{cases}\mathbf{h}_1=\mathbf{C}_+\,\mathbf{v}_1\\\mathbf{h}_2=\mathbf{C}_+\,\mathbf{v}_2\end{cases},$$
$$\mathbf{o}_+=\mathbf{h}_1\times\mathbf{h}_2.$$
$$(\mathbf{o}\,\mathbf{p},\;\mathbf{o}_+\,\mathbf{v}_3)=-1,$$
$$\begin{cases}\mathbf{v}_3^T\boldsymbol{\omega}\,\mathbf{v}_1=0\\\mathbf{v}_3^T\boldsymbol{\omega}\,\mathbf{v}_2=0\end{cases}.$$
$$\mathbf{v}_3=\boldsymbol{\omega}\,\mathbf{v}_1\times\boldsymbol{\omega}\,\mathbf{v}_2=\boldsymbol{\omega}^{-1}(\mathbf{v}_1\times\mathbf{v}_2)=\boldsymbol{\omega}^{-1}\mathbf{l},$$
$$\mathbf{l}=\boldsymbol{\omega}\,\mathbf{v}_3.$$
$$\mathbf{v}_4=\mathbf{l}\times\mathbf{h}_2.$$
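The pole–polar and conjugacy relations above can be checked numerically. The sketch below is an illustration (not the paper's calibration code): it assumes the simulated intrinsic matrix $\mathbf K$ given in the equations, builds the IAC as $\boldsymbol\omega=\mathbf K^{-T}\mathbf K^{-1}$, and verifies that vanishing points of mutually orthogonal space directions are conjugate with respect to $\boldsymbol\omega$, that the polar $\mathbf l_s=\boldsymbol\omega\mathbf v_1$ recovers its pole, and that $(\boldsymbol\omega\mathbf v_1)\times(\boldsymbol\omega\mathbf v_2)\propto\boldsymbol\omega^{-1}(\mathbf v_1\times\mathbf v_2)$.

```python
import numpy as np

# Simulated intrinsic matrix from the text.
K = np.array([[600.0,   0.4, 450.0],
              [  0.0, 550.0, 350.0],
              [  0.0,   0.0,   1.0]])
K_inv = np.linalg.inv(K)
omega = K_inv.T @ K_inv                      # IAC: omega = K^{-T} K^{-1}

# Three mutually orthogonal space directions (orthonormal columns of Q);
# their vanishing points are v_i = K q_i.
Q, _ = np.linalg.qr(np.array([[1.0, 2.0, 3.0],
                              [0.0, 1.0, 4.0],
                              [5.0, 6.0, 0.0]]))
v1, v2, v3 = (K @ Q[:, i] for i in range(3))

# Conjugacy w.r.t. the IAC: v_i^T omega v_j = 0 for i != j.
for a, b in [(v1, v2), (v1, v3), (v2, v3)]:
    assert abs(a @ omega @ b) < 1e-6

# Polar of v1 w.r.t. the IAC; taking the pole of l_s recovers v1.
l_s = omega @ v1
assert np.allclose(np.linalg.inv(omega) @ l_s, v1)

# (omega v1) x (omega v2) is proportional to omega^{-1} (v1 x v2):
lhs = np.cross(omega @ v1, omega @ v2)
rhs = np.linalg.inv(omega) @ np.cross(v1, v2)
lhs, rhs = lhs / np.linalg.norm(lhs), rhs / np.linalg.norm(rhs)
assert np.linalg.norm(np.cross(lhs, rhs)) < 1e-6
```

The assertions pass because $\mathbf v_i^T\boldsymbol\omega\,\mathbf v_j=\mathbf q_i^T\mathbf K^T\mathbf K^{-T}\mathbf K^{-1}\mathbf K\,\mathbf q_j=\mathbf q_i^T\mathbf q_j$, which vanishes for orthogonal directions.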
