
Camera calibration based on two-cylinder target

Open Access

Abstract

A large-size 2D target cannot be used for calibration in application scenarios with confined spaces, nor can a traditional 1D target because of its need for constrained motion. To solve this problem, a camera calibration method based on a two-cylinder (TC) target is proposed. The TC target is placed arbitrarily in the Field of View (FOV), and images of the target are acquired from multiple views; the camera is then calibrated by establishing the perspective projection relationship between the TC target and its projected contours in each view. Experiments with both synthetic and real data show that the proposed method has better anti-noise ability and higher accuracy than a small-size 2D target.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

With the rapid development of vision sensors and computer technology, computer vision has been widely used in many fields, such as dimension measurement [1,2], motion estimation [3,4], scene reconstruction [5,6] and so on. As a key and fundamental problem in computer vision, camera calibration can establish the correspondence between the three-dimensional coordinates of spatial points and the two-dimensional coordinates of image points [7].

In order to meet the needs of different application scenarios, scholars have proposed a variety of camera calibration methods. According to the dimension of the target, existing calibration methods can be divided into four categories: three-dimensional (3D) target-based [8,9], two-dimensional (2D) target-based [10–12], one-dimensional (1D) target-based [13–16] and zero-dimensional (0D) target-based (or self-calibration) [17,18] methods. Among them, the 0D-target-based method does not require any target; only the correspondences of feature points between different views are used to complete the calibration. Although this method is flexible in application, it is also less robust and prone to failure. Targets of specified size are needed in the other three methods. The advantages of a 1D target over the other two kinds are that it is portable, easily machined with high accuracy, and suitable for use in confined spaces or a large FOV.

Camera calibration based on a 1D target was first proposed by Zhang [13]. This method uses a set of collinear feature points as constraints to establish a system of equations. However, in the calibration process, one endpoint of the 1D target needs to be fixed as the origin, which means the target cannot move arbitrarily. Wu et al. [14] proposed another calibration method based on a 1D target, which requires a platform to ensure the planar motion of the target. Lv et al. [15] assume that the 1D target lies along the X axis of the world coordinate system. Based on this model, a 3×2 1D homography is defined to relate points on the 1D target to their perspective image points, and the basic constraint for camera calibration using a 1D target from a single image is derived. Zhang [13] proved that camera calibration is impossible with a single camera observing a 1D object under general rigid motion. Thus, one shortcoming of the methods mentioned above is that the 1D target must be controlled to undergo special motions, such as rotation around a fixed point or planar motion. To overcome this issue, Wang et al. [16] proposed a calibration method with a 1D target under general rigid motion, which is only suitable for multi-camera systems. In addition, all the 1D targets used in the above methods consist of at least three collinear points with accurately known relative distances.

In this paper, we propose a new camera calibration method based on a TC target, which consists of two right circular cylinders with known radii and lengths. There is no need to set any feature points on the cylinders, and the cylinders can be placed randomly. Camera calibration can be done by taking images of the TC target from multiple views.

The rest of the paper is organized as follows. The perspective projection model of a cylindrical target is introduced in Section 2. The detailed procedure of the calibration method based on the TC target is presented in Section 3. In Section 4, both synthetic and real data are used to validate the proposed method in comparison with a commonly used 2D target. This paper ends with some concluding remarks in Section 5.

2. Perspective projection model of cylindrical target

The projection process of a single cylindrical target ${\mathbf \Theta }$ with arbitrary pose is shown in Fig. 1. Let ${{\boldsymbol P}_C}$ denote the three-dimensional coordinate of the optic center in the world coordinate system (${O_W} - {X_W}{Y_W}{Z_W}$); the camera coordinate system (${O_C} - {X_C}{Y_C}{Z_C}$) is established with ${{\boldsymbol P}_C}$ as the origin ${O_C}$. The rotation and translation between ${O_W} - {X_W}{Y_W}{Z_W}$ and ${O_C} - {X_C}{Y_C}{Z_C}$ are denoted as $({\textbf R}_W^C,{\textbf t}_W^C)$. Meanwhile, we define the target coordinate system (${O_T} - {X_T}{Y_T}{Z_T}$) with one end circle center of ${\mathbf \Theta }$ as the origin ${O_T}$ and the axis of ${\mathbf \Theta }$ as ${Z_T}$.

Fig. 1. Perspective projection model of cylindrical target.

Generally, the projected image of ${\mathbf \Theta }$ can be divided into the projection ellipses $({{\boldsymbol c}_1},{{\boldsymbol c}_2})$ of two end circles $({{\boldsymbol C}_1},{{\boldsymbol C}_2})$ and projection lines $({{\boldsymbol l}_1},{{\boldsymbol l}_2})$ of cylindrical surface ${\boldsymbol Q}$. The projection lines $({{\boldsymbol l}_1},{{\boldsymbol l}_2})$ are also called the apparent contour of ${\boldsymbol Q}$ [19]. For right circular cylinders, Doignon et al. [20] proposed one kind of perspective projection model which only involves the Plücker coordinates of the cylinder axis and the radius as parameters. However, since the model lacks the position information of the end circle centers, it can only be used to calculate $({{\boldsymbol l}_1},{{\boldsymbol l}_2})$. Inspired by this idea, we propose another more complete model for the perspective projection of cylindrical target. This model can be used to calculate $({{\boldsymbol c}_1},{{\boldsymbol c}_2})$ and $({{\boldsymbol l}_1},{{\boldsymbol l}_2})$ simultaneously. Our model will be introduced in detail below.

In ${O_W} - {X_W}{Y_W}{Z_W}$, ${\mathbf \Theta }$ can be modeled as Eq. (1),

$${\mathbf \Theta } = \{{{{\boldsymbol P}_\textrm{1}}:{{({{x_\textrm{1}},{y_\textrm{1}},{z_\textrm{1}}} )}^T};{{\boldsymbol P}_2}:{{({{x_\textrm{2}},{y_\textrm{2}},{z_\textrm{2}}} )}^T};r} \}$$
where ${{\boldsymbol P}_\textrm{1}}$ and ${{\boldsymbol P}_\textrm{2}}$ represent the three-dimensional coordinates of end circle centers, r represents the radius of ${\mathbf \Theta }$. The solving methods of $({{\boldsymbol c}_1},{{\boldsymbol c}_2})$ and $({{\boldsymbol l}_1},{{\boldsymbol l}_2})$ will be detailed below.

2.1 Projection ellipses of end circles

For an arbitrary ${\mathbf \Theta }$, we take the center ${{\boldsymbol P}_1}$ of ${{\boldsymbol C}_\textrm{1}}$ as ${O_T}$ and establish ${O_T} - {X_T}{Y_T}{Z_T}$. Obviously, ${{\boldsymbol C}_\textrm{1}}$ lies in the plane ${O_T} - {X_T}{Y_T}$. The unit vectors of the ${Z_W}$-axis and ${Z_T}$-axis can be expressed as Eq. (2),

$$\left\{ {\begin{array}{{c}} {{{\textbf v}_{ZW}} = {{({0,0,1} )}^T}}\\ {{{\textbf v}_{ZT}} = \frac{{{{\boldsymbol P}_1} - {{\boldsymbol P}_2}}}{{||{{{\boldsymbol P}_1} - {{\boldsymbol P}_2}} ||}}} \end{array}} \right.$$
Let ${{\textbf v}_{ZW}} \times {{\textbf v}_{ZT}}$ be the rotation axis, $\arccos ({{{\textbf v}_{ZW}} \cdot {{\textbf v}_{ZT}}} )$ the rotation angle, and ${{\boldsymbol P}_1}$ the translation vector; then the rotation and translation relating ${O_T} - {X_T}{Y_T}{Z_T}$ and ${O_W} - {X_W}{Y_W}{Z_W}$ can be expressed as Eq. (3),
$$\left\{ {\begin{array}{{l}} {{\textbf R}_T^W = {\mathbf \Re }\left( {\frac{{{{\textbf v}_{ZW}} \times {{\textbf v}_{ZT}}}}{{||{{{\textbf v}_{ZW}} \times {{\textbf v}_{ZT}}} ||}}\arccos ({{{\textbf v}_{ZW}} \cdot {{\textbf v}_{ZT}}} )} \right)}\\ {{\textbf t}_T^W = {{\boldsymbol P}_1}} \end{array}} \right.$$
where ${\mathbf \Re }(\cdot )$ denotes the Rodrigues transform of a rotation vector to a rotation matrix. Furthermore, combining $({\textbf R}_T^W,{\textbf t}_T^W)$ with $({\textbf R}_W^C,{\textbf t}_W^C)$, the rotation and translation relating ${O_T} - {X_T}{Y_T}{Z_T}$ and ${O_C} - {X_C}{Y_C}{Z_C}$ can be expressed as Eq. (4),
$$\left\{ {\begin{array}{{l}} {{\textbf R}_T^C={\textbf R}_W^C{\textbf R}_T^W}\\ {{\textbf t}_T^C={\textbf R}_W^C{\textbf t}_T^W+{\textbf t}_W^C} \end{array}} \right.$$
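As an illustrative sketch (not the authors' implementation; the function names and tolerances are ours), Eqs. (2)–(4) can be coded in a few lines of NumPy:

```python
import numpy as np

def rodrigues(rvec):
    """Rotation vector -> rotation matrix via Rodrigues' formula."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def target_to_camera(P1, P2, R_wc, t_wc):
    """(R_T^C, t_T^C) from end-circle centers P1, P2 in the world frame,
    following Eqs. (2)-(4). Assumes the cylinder axis is not anti-parallel
    to the world Z axis (the cross product would vanish in that case)."""
    v_zw = np.array([0.0, 0.0, 1.0])
    v_zt = (P1 - P2) / np.linalg.norm(P1 - P2)
    axis = np.cross(v_zw, v_zt)
    angle = np.arccos(np.clip(v_zw @ v_zt, -1.0, 1.0))
    # Eq. (3): Rodrigues of (unit axis * angle); identity if axes coincide
    R_tw = rodrigues(axis / np.linalg.norm(axis) * angle) if np.linalg.norm(axis) > 1e-12 else np.eye(3)
    t_tw = P1
    # Eq. (4): compose with the world-to-camera transform
    return R_wc @ R_tw, R_wc @ t_tw + t_wc
```

By construction, the returned rotation maps the target's $Z_T$ direction onto the cylinder axis direction ${\boldsymbol P}_1 - {\boldsymbol P}_2$.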
Therefore, for any point ${{\boldsymbol X}_T}$ in ${O_T} - {X_T}{Y_T}{Z_T}$, the coordinate of its projection point ${{\boldsymbol x}_T}$ can be obtained by Eq. (5),
$$s{\tilde{{\boldsymbol x}}_T} = {\textbf K}[{{\textbf R}_T^C{| \textbf{t}}_T^C} ]{\tilde{{\boldsymbol X}}_T},\quad {\textbf K} = \left[ {\begin{array}{{ccc}} {{f_x}}&0&{{u_0}}\\ 0&{{f_y}}&{{v_0}}\\ 0&0&1 \end{array}} \right]$$
where ${s}$ is an arbitrary scale factor, ${\tilde{{\boldsymbol x}}_T}$ and ${\tilde{{\boldsymbol X}}_T}$ are the homogeneous coordinates of ${{\boldsymbol x}_T}$ and ${{\boldsymbol X}_T}$ respectively, and ${\textbf K}$ is the camera intrinsic parameter matrix, in which $({f_x},{f_y})$ are the effective focal lengths and $({{u}_0},{{v}_0})$ are the coordinates of the principal point. Based on Eq. (5), for any point ${{\boldsymbol X}_T} = {({x_T},{y_T},0)^T}$ in the planar coordinate system ${O_T} - {X_T}{Y_T}$, there exists,
$$\begin{aligned} s{{\tilde{{\boldsymbol x}}}_T} & = \textbf{K}\left[ {\begin{array}{{cccc}} {{{({\textbf{r}_T^C} )}^1}}&{{{({\textbf{r}_T^C} )}^2}}&{{{({\textbf{r}_T^C} )}^3}}&{\textbf{t}_T^C} \end{array}} \right]{{\tilde{{\boldsymbol X}}}_T}\\ & = \textbf{K}\left[ {\begin{array}{{ccc}} {{{({\textbf{r}_T^C} )}^1}}&{{{({\textbf{r}_T^C} )}^2}}&{\textbf{t}_T^C} \end{array}} \right]\left[ {\begin{array}{{c}} {{x_T}}\\ {{y_T}}\\ 1 \end{array}} \right] = \textbf{H}_T^C\left[ {\begin{array}{{c}} {{x_T}}\\ {{y_T}}\\ 1 \end{array}} \right] \end{aligned}$$
where ${({\textbf{r}_T^C} )^1}$, ${({\textbf{r}_T^C} )^2}$ and ${({\textbf{r}_T^C} )^3}$ denote the first, second and third columns of $\textbf{R}_T^C$. ${\textbf H}_T^C = {\textbf K}\left[ {\begin{array}{{ccc}} {{{({{\textbf r}_T^C} )}^1}}&{{{({{\textbf r}_T^C} )}^2}}&{{\textbf t}_T^C} \end{array}} \right]$ is a $3 \times 3$ homography matrix which describes the point correspondence between ${O_T} - {X_T}{Y_T}$ and the image plane. Furthermore, in ${O_T} - {X_T}{Y_T}$, ${{\boldsymbol C}_\textrm{1}}$ can be expressed as a symmetric coefficient matrix,
$${{\boldsymbol C}_\textrm{1}} = \left[ {\begin{array}{{ccc}} 1&0&0\\ 0&1&0\\ 0&0&{ - {r^2}} \end{array}} \right]$$
At last, combined with ${\textbf H}_T^C$ from Eq. (6), the coefficient matrix of the projection ellipse ${{\boldsymbol c}_\textrm{1}}$ is given by Eq. (8),
$${{\boldsymbol c}_\textrm{1}} = {({{\textbf H}_T^C} )^{ - T}}{{\boldsymbol C}_1}{({{\textbf H}_T^C} )^{ - 1}}$$
Similarly, taking the center ${{\boldsymbol P}_2}$ of ${{\boldsymbol C}_2}$ as ${O_T}$ and establishing ${O_T} - {X_T}{Y_T}{Z_T}$, the coefficient matrix of the projection ellipse ${{\boldsymbol c}_2}$ can be solved.
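A quick numerical check of Eqs. (6)–(8), with a hypothetical camera matrix and pose (all numeric values below are placeholders, not values from the paper): any point on the end circle must satisfy the projected conic equation.

```python
import numpy as np

# Placeholder intrinsics and pose (target frame aligned with camera frame,
# circle 500 length-units in front of the camera); r is the circle radius.
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 512.0],
              [0.0, 0.0, 1.0]])
R_tc = np.eye(3)
t_tc = np.array([0.0, 0.0, 500.0])
r = 6.0

# Eq. (6): homography from the circle's support plane to the image
H = K @ np.column_stack([R_tc[:, 0], R_tc[:, 1], t_tc])
# Eq. (7): the end circle as a conic coefficient matrix
C1 = np.diag([1.0, 1.0, -r**2])
# Eq. (8): coefficient matrix of the projection ellipse
Hinv = np.linalg.inv(H)
c1 = Hinv.T @ C1 @ Hinv

# A point on C1 in O_T-X_T Y_T maps to a point on the projected conic:
X = np.array([r, 0.0, 1.0])
x = H @ X
assert abs(x @ c1 @ x) < 1e-6   # x lies on the conic x^T c1 x = 0
```

The assertion holds by construction, since $\tilde{\boldsymbol x}^T {\boldsymbol c}_1 \tilde{\boldsymbol x} = \tilde{\boldsymbol X}^T {\boldsymbol C}_1 \tilde{\boldsymbol X} = 0$ for any point on the circle.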

2.2 Apparent contour of cylindrical surface

As Fig. 1 illustrates, the apparent contour of the cylindrical surface ${\boldsymbol Q}$ consists of two straight lines $({{\boldsymbol l}_1},{{\boldsymbol l}_2})$. The intuitive solution is to take $({{\boldsymbol l}_1},{{\boldsymbol l}_2})$ as the external common tangent lines of ${{\boldsymbol c}_1}$ and ${{\boldsymbol c}_2}$, but this requires solving a system of binary quadratic equations, which is time-consuming. Here we provide a more efficient linear solution. The corresponding contour generator consists of two generator lines $({{\boldsymbol L}_1},{{\boldsymbol L}_2})$ on the cylindrical surface, which can be seen as the intersections of $({{\boldsymbol \pi }_{\varUpsilon 1}},{{\boldsymbol \pi }_{\varUpsilon 2}})$ with ${\boldsymbol Q}$, or equivalently as the intersections of $({{\boldsymbol \pi }_{\varUpsilon 1}},{{\boldsymbol \pi }_{\varUpsilon 2}})$ with ${{\boldsymbol \pi }_\varGamma }$. Here, $({{\boldsymbol \pi }_{\varUpsilon 1}},{{\boldsymbol \pi }_{\varUpsilon 2}})$ are the tangent planes of ${\boldsymbol Q}$ passing through the camera's optic center ${O_C}$, and ${{\boldsymbol \pi }_\varGamma }$ is the supporting plane of the contour generator. Obviously, $({{\boldsymbol L}_1},{{\boldsymbol L}_2})$ in 3-space map to $({{\boldsymbol l}_1},{{\boldsymbol l}_2})$ in the image.

First, the solution of ${{\boldsymbol \pi }_\varGamma }$ is derived. In ${O_T} - {X_T}{Y_T}{Z_T}$, ${\boldsymbol Q}$ can be expressed as a symmetric coefficient matrix,

$${\boldsymbol Q} = \left[ {\begin{array}{{cccc}} 1&0&0&0\\ 0&1&0&0\\ 0&0&0&0\\ 0&0&0&{ - {r^2}} \end{array}} \right]$$
In ${O_W} - {X_W}{Y_W}{Z_W}$, the three-dimensional coordinate of ${O_C}$ is ${{\boldsymbol P}_C} = - {({{\textbf R}_W^C} )^{ - 1}}{\textbf t}_W^C$. The plane ${{\boldsymbol \pi }_\varGamma }$ is given by Eq. (10),
$${\boldsymbol{\pi} _\varGamma }={\boldsymbol Q}{\tilde{{\boldsymbol P}}_C}$$
where ${\tilde{{\boldsymbol P}}_C}$ is the homogeneous coordinate of ${{\boldsymbol P}_C}$.

Then, the solution of $({{\boldsymbol \pi }_{\varUpsilon 1}},{{\boldsymbol \pi }_{\varUpsilon 2}})$ is presented. The plane ${\boldsymbol \pi _\varOmega }$ formed by the axis $\overline {{{\boldsymbol P}_1}{{\boldsymbol P}_2}}$ and ${O_C}$ is determined by the three points $({{\boldsymbol P}_1},{{\boldsymbol P}_2},{{\boldsymbol P}_C})$, and $({{\boldsymbol \pi }_{\varUpsilon 1}},{{\boldsymbol \pi }_{\varUpsilon 2}})$ can be obtained by rotating ${\boldsymbol \pi _\varOmega }$ by $\pm \psi$ around the axis $\overline {{{\boldsymbol P}_1}{{\boldsymbol P}_2}}$, where $\psi$ is the angle between $({{\boldsymbol \pi }_{\varUpsilon 1}},{{\boldsymbol \pi }_{\varUpsilon 2}})$ and ${\boldsymbol \pi _\varOmega }$. The rotation matrix between $({{\boldsymbol \pi }_{\varUpsilon 1}},{{\boldsymbol \pi }_{\varUpsilon 2}})$ and ${\boldsymbol \pi _\varOmega }$ is given by Eq. (11),

$$\begin{array}{{l}} {{\textbf R}_\varOmega ^{\varUpsilon k}={\mathbf \Re }\left( { \pm \frac{{{{\boldsymbol P}_1} - {{\boldsymbol P}_2}}}{{||{{{\boldsymbol P}_1} - {{\boldsymbol P}_2}} ||}}\psi } \right)}\\ \quad\quad{={\mathbf \Re }\left( { \pm \frac{{{{\boldsymbol P}_1} - {{\boldsymbol P}_2}}}{{||{{{\boldsymbol P}_1} - {{\boldsymbol P}_2}} ||}}\arcsin \left( {\frac{r}{{d({\overline {{{\boldsymbol P}_1}{{\boldsymbol P}_2}} ,{{\boldsymbol P}_C}} )}}} \right)} \right)}\\ \quad\quad{={\mathbf \Re }\left( { \pm \frac{{{{\boldsymbol P}_1} - {{\boldsymbol P}_2}}}{{||{{{\boldsymbol P}_1} - {{\boldsymbol P}_2}} ||}}\arcsin \left( {\frac{{r||{{{\boldsymbol P}_1} - {{\boldsymbol P}_2}} ||}}{{||{({{{\boldsymbol P}_C} - {{\boldsymbol P}_1}} )\times ({{{\boldsymbol P}_1} - {{\boldsymbol P}_2}} )} ||}}} \right)} \right)} \end{array}$$
where $k = 1,2$ and $d({\overline {{{\boldsymbol P}_1}{{\boldsymbol P}_2}} ,{{\boldsymbol P}_C}} )$ denotes the distance from ${{\boldsymbol P}_C}$ to the line $\overline {{{\boldsymbol P}_1}{{\boldsymbol P}_2}}$.
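The half-angle $\psi$ of Eq. (11) depends only on the axis endpoints, the optic center and the radius; a minimal sketch (the function name is ours):

```python
import numpy as np

def tangent_half_angle(P1, P2, Pc, r):
    """Angle psi between each tangent plane and the axis plane, per Eq. (11):
    psi = arcsin(r * |P1 - P2| / |(Pc - P1) x (P1 - P2)|), which equals
    arcsin(r / d), with d the distance from the optic center Pc to the axis."""
    axis = P1 - P2
    return np.arcsin(r * np.linalg.norm(axis)
                     / np.linalg.norm(np.cross(Pc - P1, axis)))
```

For example, a unit-radius cylinder along the $z$-axis viewed from distance 2 gives $\psi = \arcsin(1/2) = \pi/6$.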

At last, the apparent contour of the cylindrical surface can be obtained by computing the intersections of $({{\boldsymbol \pi }_{\varUpsilon 1}},{{\boldsymbol \pi }_{\varUpsilon 2}})$ and ${{\boldsymbol \pi }_\varGamma }$. The dual Plücker matrices of the intersection lines ${{\boldsymbol L}_k}$ are given by Eq. (12),

$${\boldsymbol L}_k^\ast = {{\boldsymbol \pi }_\varGamma }{\boldsymbol \pi }_{\varUpsilon k}^T - {{\boldsymbol \pi }_{\varUpsilon k}}{\boldsymbol \pi }_\varGamma ^T$$
The Plücker matrix of ${{\boldsymbol L}_k}$ can be obtained by rewriting ${\boldsymbol L}_k^\ast $,
$${{\boldsymbol L}_k} = \left[ {\begin{array}{{cccc}} 0&{l_{k34}^\ast }&{l_{k42}^ \ast }&{l_{k23}^ \ast }\\ { - l_{k34}^ \ast }&0&{l_{k14}^ \ast }&{ - l_{k13}^ \ast }\\ { - l_{k42}^ \ast }&{ - l_{k14}^ \ast }&0&{l_{k12}^ \ast }\\ { - l_{k23}^ \ast }&{l_{k13}^ \ast }&{ - l_{k12}^ \ast }&0 \end{array}} \right]$$
where $l_{kij}^\ast $ denotes the element in the ${i^{\textrm{th}}}$ row and ${j^{\textrm{th}}}$ column of ${\boldsymbol L}_k^\ast $. Besides, the projection matrix of the camera is already known as ${\textbf P} = {\textbf K}[{{\textbf R}_T^C{| \textbf{t}}_T^C} ]$, so the projection line ${{\boldsymbol l}_k}$ of ${{\boldsymbol L}_k}$ can be solved by Eq. (14),
$${[{{{\boldsymbol l}_k}} ]_ \times } = {\textbf P}{{\boldsymbol L}_k}{{\textbf P}^T}$$
where ${[{{{\boldsymbol l}_k}} ]_ \times }$ denotes the skew-symmetric matrix of ${{\boldsymbol l}_k}$.
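Equation (14) can be verified numerically. For simplicity, the sketch below builds the Plücker matrix ${\boldsymbol L}$ directly from two points on the line ($\tilde{\boldsymbol A}\tilde{\boldsymbol B}^T - \tilde{\boldsymbol B}\tilde{\boldsymbol A}^T$) rather than by rewriting the dual matrix of Eq. (12); the projection formula is the same (function names are ours):

```python
import numpy as np

def pluecker_from_points(A, B):
    """Pluecker matrix L = A B^T - B A^T of the line through
    homogeneous 4-vectors A and B."""
    return np.outer(A, B) - np.outer(B, A)

def project_pluecker_line(P, L):
    """Image line of a 3-space line, Eq. (14): [l]x = P L P^T.
    The line l is read off the 3x3 skew-symmetric result."""
    S = P @ L @ P.T
    return np.array([S[2, 1], S[0, 2], S[1, 0]])

# The line through (0,0,5) and (1,0,5), seen by P = [I | 0], projects to
# the image line y = 0, i.e. l proportional to (0, 1, 0):
A = np.array([0.0, 0.0, 5.0, 1.0])
B = np.array([1.0, 0.0, 5.0, 1.0])
P = np.hstack([np.eye(3), np.zeros((3, 1))])
l = project_pluecker_line(P, pluecker_from_points(A, B))
```

Both projected endpoints satisfy ${\boldsymbol l} \cdot \tilde{\boldsymbol x} = 0$, as expected.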

3. Calibration algorithm

The TC target $({{\mathbf \Theta }_n},n = 1,2)$ is placed in the FOV arbitrarily and the calibration images are captured from M different views. The implementation of the calibration algorithm can be divided into four main steps: estimation of the distortion coefficient, estimation of the intrinsic parameters, estimation of the external parameters, and global nonlinear optimization of all parameters.

3.1 Estimation of distortion coefficient

For any cylindrical target ${\mathbf \Theta }$ with arbitrary pose in 3-space, the apparent contour of the cylindrical surface consists of two generator lines $({{\boldsymbol L}_1},{{\boldsymbol L}_2})$ on the surface. Since projective transformation preserves straight lines, the projection lines $({{\boldsymbol l}_1},{{\boldsymbol l}_2})$ should also be straight. However, due to lens distortion, the projection lines in the actual image are not straight. Considering only the first term of radial distortion, the lens distortion can be expressed as Eq. (15),

$$\left\{ \begin{array}{l} \breve{u} = u + ({u - {u_0}} ){k_1}{r^2}\\ \breve{v} = v + ({v - {v_0}} ){k_1}{r^2} \end{array} \right.$$
where $(u,v)$ and $(\breve{u},\breve{v})$ are the ideal (distortion-free) and real (distorted) image coordinates respectively, $r = \sqrt {{{({u - {u_0}} )}^2} + {{({v - {v_0}} )}^2}}$ is the distance between the image point and the principal point, and ${k_1}$ is the radial distortion coefficient. ${k_1}$ can be estimated by minimizing the distances from the edge points on the projection line segments to their fitted lines. Then, based on the distortion model in Eq. (15), the undistorted image coordinates $(u,v)$ can be solved by an iterative method.
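The iterative undistortion is not spelled out in the paper; a standard fixed-point scheme for inverting a one-term radial model of the form $\breve{u} = u + (u - u_0)k_1 r^2$ looks like this (function names are ours):

```python
import numpy as np

def distort(u, v, u0, v0, k1):
    """Forward one-term radial model: ideal (u, v) -> distorted coords."""
    r2 = (u - u0)**2 + (v - v0)**2
    return u + (u - u0) * k1 * r2, v + (v - v0) * k1 * r2

def undistort(ud, vd, u0, v0, k1, iters=20):
    """Invert the model by fixed-point iteration: each pass re-estimates
    the undistorted radius from the current (u, v) estimate. Converges
    for mild distortion (|k1| * r^2 well below 1)."""
    u, v = ud, vd                      # initial guess: distorted coords
    for _ in range(iters):
        r2 = (u - u0)**2 + (v - v0)**2
        u = ud - (u - u0) * k1 * r2
        v = vd - (v - v0) * k1 * r2
    return u, v
```

A round trip (distort, then undistort) recovers the ideal coordinates to well below a thousandth of a pixel for the distortion levels considered in Section 4.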

3.2 Estimation of intrinsic parameters

For a single cylindrical target, the intersection point of the projection lines $({{\boldsymbol l}_1},{{\boldsymbol l}_2})$ is also the vanishing point of the axis $\overline {{{\boldsymbol P}_1}{{\boldsymbol P}_2}}$. For the TC target, which consists of two cylinders, we can find two vanishing points, whose image coordinates are denoted as ${{\boldsymbol v}_1} = {({{u_1},{v_1}} )^T}$ and ${{\boldsymbol v}_2} = {({{u_2},{v_2}} )^T}$. The angle $\theta$ between the axes of the TC target in 3-space then satisfies Eq. (16),

$$\textrm{cos}\theta =\frac{{\tilde{{\boldsymbol v}}_1^T{\boldsymbol \omega }{{\tilde{{\boldsymbol v}}}_2}}}{{\sqrt {({\tilde{{\boldsymbol v}}_1^T{\boldsymbol \omega }{{\tilde{{\boldsymbol v}}}_1}} )({\tilde{{\boldsymbol v}}_2^T{\boldsymbol \omega }{{\tilde{{\boldsymbol v}}}_2}} )} }}$$
where ${\tilde{{\boldsymbol v}}_1}$ and ${\tilde{{\boldsymbol v}}_2}$ are the homogeneous coordinates of ${{\boldsymbol v}_1}$ and ${{\boldsymbol v}_2}$ respectively, and ${\boldsymbol \omega } = {{\textbf K}^{ - T}}{{\textbf K}^{ - 1}}$ is the image of the absolute conic. The numbers of pixels in the $u$ and $v$ directions are denoted as $({N_u},{N_v})$. Considering that ${f_x} \approx {f_y}$, ${u_0} \approx 0.5{N_u}$ and ${v_0} \approx 0.5{N_v}$, after translating the origin of the image coordinate system to the principal point, the intrinsic parameter matrix can be simplified as Eq. (17),
$${\textbf K^{\prime}} = \left[ {\begin{array}{{ccc}} f&0&0\\ 0&f&0\\ 0&0&1 \end{array}} \right]$$
Then Eq. (16) can be transformed into Eq. (18),
$$\textrm{cos}\theta =\frac{{{{{\tilde{\boldsymbol v}^{\prime T}}}_1}{\boldsymbol \omega^{\prime}}{{{\tilde{\boldsymbol v}^{\prime}}}_2}}}{{\sqrt {({{{{\tilde{\boldsymbol v}^{\prime T}}}_1}{\boldsymbol \omega^{\prime}}{{{\tilde{\boldsymbol v}^{\prime}}}_1}} )({{{{\tilde{\boldsymbol v}^{\prime T}}}_2}{\boldsymbol \omega^{\prime}}{{{\tilde{\boldsymbol v}^{\prime}}}_2}} )} }}$$
where ${{\tilde{\boldsymbol v}^{\prime}}_1} = {\tilde{{\boldsymbol v}}_1} - {({{u_0},{v_0},0} )^T}$, ${{\tilde{\boldsymbol v}^{\prime}}_2} = {\tilde{{\boldsymbol v}}_2} - {({{u_0},{v_0},0} )^T}$ and ${\boldsymbol \omega ^{\prime}} = {{\textbf K^{\prime}}^{ - T}}{{\textbf K^{\prime}}^{ - 1}}$. In the ${m^{\textrm{th}}}$ and ${n^{\textrm{th}}}$ views, the vanishing points of the cylindrical targets are denoted as $({\tilde{{\boldsymbol v}}_{m1}},{\tilde{{\boldsymbol v}}_{m2}})$ and $({\tilde{{\boldsymbol v}}_{n1}},{\tilde{{\boldsymbol v}}_{n2}})$ respectively. Since the angle $\theta$ remains constant during the calibration process, there exists,
$$\frac{{{\tilde{\boldsymbol v}^{\prime T}}_{m1}{\boldsymbol \omega ^{\prime}}{{{\tilde{\boldsymbol v}^{\prime}}}_{m2}}}}{{\sqrt {({{\tilde{\boldsymbol v}^{\prime T}}_{m1}{\boldsymbol \omega^{\prime}}{{{\tilde{\boldsymbol v}^{\prime}}}_{m1}}} )({{\tilde{\boldsymbol v}^{\prime T}}_{m2}{\boldsymbol \omega^{\prime}}{{{\tilde{\boldsymbol v}^{\prime}}}_{m2}}} )} }} = \frac{{{\tilde{\boldsymbol v}^{\prime T}}_{n1}{\boldsymbol \omega ^{\prime}}{{{\tilde{\boldsymbol v}^{\prime}}}_{n2}}}}{{\sqrt {({{\tilde{\boldsymbol v}^{\prime T}}_{n1}{\boldsymbol \omega^{\prime}}{{{\tilde{\boldsymbol v}^{\prime}}}_{n1}}} )({{\tilde{\boldsymbol v}^{\prime T}}_{n2}{\boldsymbol \omega^{\prime}}{{{\tilde{\boldsymbol v}^{\prime}}}_{n2}}} )} }}$$
$f$ can be estimated by solving the equation above and taking the positive root. Then, the intrinsic parameters can be initialized as Eq. (20),
$${\textbf K} = \left[ {\begin{array}{{ccc}} f&0&{0.5{N_u}}\\ 0&f&{0.5{N_v}}\\ 0&0&1 \end{array}} \right]$$
More accurate intrinsic parameters can be obtained by minimizing the following objective function,
$$\arg \mathop {\min }\limits_{\textbf K} \sum\limits_{m = 1}^M {{{\left\|{\cos {\theta_m} - \frac{1}{M}\sum\limits_{i=1}^M {\cos {\theta_i}} } \right\|}^2}}$$
where M is the number of calibration views. Equation (21) represents the sum of squared deviations of $\textrm{cos}\theta$ from its mean over all views; minimizing it is a nonlinear optimization problem, which can be solved with the Levenberg-Marquardt (LM) algorithm.
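The constraint of Eq. (16) can be sanity-checked with synthetic vanishing points: for ${\boldsymbol v} = {\textbf K}{\boldsymbol d}$, the $\boldsymbol\omega$-based expression reduces exactly to the cosine of the angle between the 3-space directions (the matrix ${\textbf K}$ and the directions below are arbitrary placeholders):

```python
import numpy as np

# Placeholder intrinsics and two arbitrary axis directions in 3-space
K = np.array([[1500.0, 0.0, 1224.0],
              [0.0, 1500.0, 1024.0],
              [0.0, 0.0, 1.0]])
d1 = np.array([1.0, 0.2, 0.5])
d2 = np.array([-0.3, 1.0, 0.4])

# Image of the absolute conic, omega = K^-T K^-1
Kinv = np.linalg.inv(K)
omega = Kinv.T @ Kinv

# Vanishing points of the two axes
v1, v2 = K @ d1, K @ d2

# Eq. (16): angle between the axes from the vanishing points
cos_theta = (v1 @ omega @ v2) / np.sqrt((v1 @ omega @ v1) * (v2 @ omega @ v2))
cos_true = d1 @ d2 / (np.linalg.norm(d1) * np.linalg.norm(d2))
```

Since ${\boldsymbol v}^T \boldsymbol\omega {\boldsymbol v}' = {\boldsymbol d}^T {\boldsymbol d}'$, the two values agree to machine precision; the calibration exploits this by requiring the recovered $\cos\theta$ to be identical across views.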

3.3 Estimation of external parameters

For a total of M calibration views, each image contains two cylindrical targets, and each cylindrical target contributes two end circle centers, two projection ellipses and two projection lines. In the ${m^{\textrm{th}}}$ view, the three-dimensional coordinate of the ${n^{\textrm{th}}}$ cylindrical target's ${k^{\textrm{th}}}$ end circle center is denoted as ${{\boldsymbol P}_{mnk}}$, where $m=1,2, \ldots ,M$, $n=1,2$, $k=1,2$. The accurate image coordinate of the corresponding projection point ${{\boldsymbol p}_{mnk}}$ can be obtained by elliptical fitting and eccentricity error correction of the end circle's projection contour [21]. Besides, let the vanishing point of $\overline {{{\boldsymbol P}_{mn1}}{{\boldsymbol P}_{mn2}}}$ be ${{\boldsymbol v}_{mn}}$. By the property of vanishing points, the back-projection ray ${{\textbf K}^{ - 1}}{\tilde{{\boldsymbol v}}_{mn}}$ is parallel to $\overline {{{\boldsymbol P}_{mn1}}{{\boldsymbol P}_{mn2}}}$. Meanwhile, the length of the target is l. Based on the constraints mentioned above, the following simultaneous equations can be established,

$$\left\{ {\begin{array}{{l}} {({{{\boldsymbol P}_{mn1}} - {{\boldsymbol P}_{mn2}}} )\times {{\textbf K}^{ - 1}}{{\tilde{{\boldsymbol v}}}_{mn}}=0}\\ {||{{{\boldsymbol P}_{mn1}} - {{\boldsymbol P}_{mn2}}} ||=l}\\ {{s_1}{{\tilde{{\boldsymbol p}}}_{mn1}} = {\textbf K}{{\boldsymbol P}_{mn1}}}\\ {{s_2}{{\tilde{{\boldsymbol p}}}_{mn2}} = {\textbf K}{{\boldsymbol P}_{mn2}}} \end{array}} \right.$$
${{\boldsymbol P}_{mnk}}$ can be obtained by solving Eq. (22). Setting the camera coordinate system of the first view ${O_{{C_1}}} - {X_{{C_1}}}{Y_{{C_1}}}{Z_{{C_1}}}$ as the world coordinate system, the transformation $({\textbf R}_W^{{C_m}},{\textbf t}_W^{{C_m}})$ from ${O_W} - {X_W}{Y_W}{Z_W}$ to ${O_{{C_m}}} - {X_{{C_m}}}{Y_{{C_m}}}{Z_{{C_m}}}$, which satisfies ${{\boldsymbol P}_{mnk}} = {\textbf R}_W^{{C_m}}{{\boldsymbol P}_{1nk}} + {\textbf t}_W^{{C_m}}$, can be solved. $({\textbf R}_W^{{C_m}},{\textbf t}_W^{{C_m}})$ are the external parameters.
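The paper does not specify how $({\textbf R}_W^{{C_m}},{\textbf t}_W^{{C_m}})$ are extracted from the corresponding center sets; one standard choice is the SVD-based least-squares rigid alignment (Kabsch algorithm), sketched here (function name ours):

```python
import numpy as np

def rigid_transform(P_ref, P_cur):
    """Least-squares (R, t) with P_cur ~= R @ P_ref + t, via SVD (Kabsch).
    P_ref, P_cur: (N, 3) arrays of corresponding 3D points, e.g. the
    end-circle centers P_1nk (first view) and P_mnk (view m)."""
    c_ref, c_cur = P_ref.mean(axis=0), P_cur.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (P_ref - c_ref).T @ (P_cur - c_cur)
    U, _, Vt = np.linalg.svd(H)
    # Sign correction guards against an improper (reflection) solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, c_cur - R @ c_ref
```

With the four end-circle centers per view (two per cylinder, non-collinear by construction), this yields the external parameters of each view directly.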

3.4 Global nonlinear optimization

In the calibration process described above, only a few constraints are used, such as the constancy of the angle between the TC target axes and the projected coordinates of the end circle centers. The edge features of the TC target's projected image are not fully used, so the estimated parameters are not optimal. Therefore, the calibration result can be further refined by nonlinear optimization. In ${O_{{C_1}}} - {X_{{C_1}}}{Y_{{C_1}}}{Z_{{C_1}}}$ (${O_W} - {X_W}{Y_W}{Z_W}$), the TC target with known end circle center coordinates ${{\boldsymbol P}_{1nk}}$ can be modeled as Eq. (23),

$$\left\{ {\begin{array}{{l}} {{{\mathbf \Theta }_1} = \{{{{\boldsymbol P}_{\textrm{111}}}:{{({{x_{\textrm{111}}},{y_{\textrm{111}}},{z_{\textrm{111}}}} )}^T};{{\boldsymbol P}_{112}}:{{({{x_{\textrm{112}}},{y_{\textrm{112}}},{z_{\textrm{112}}}} )}^T};{r_1}} \}}\\ {{{\mathbf \Theta }_2} = \{{{{\boldsymbol P}_{\textrm{121}}}:{{({{x_{\textrm{121}}},{y_{\textrm{121}}},{z_{\textrm{121}}}} )}^T};{{\boldsymbol P}_{122}}:{{({{x_{\textrm{122}}},{y_{\textrm{122}}},{z_{\textrm{122}}}} )}^T};{r_2}} \}} \end{array}} \right.$$
Based on the estimated distortion coefficient ${k_1}$, intrinsic parameters ${\textbf K}$ and external parameters $({\textbf R}_W^{{C_m}},{\textbf t}_W^{{C_m}})$, the projection ellipses ${{\boldsymbol c}_{mnk}}$ and projection lines ${{\boldsymbol l}_{mnk}}$ of the TC target can be calculated by the perspective projection model described in Section 2. Then, the LM algorithm is adopted to solve the nonlinear optimization problem for the camera parameters, which aims at minimizing the distances from the extracted edge points to the projection ellipses and projection lines. The objective function takes the following form,
$$\arg \min \sum\limits_{m = 1}^M {\sum\limits_{n = 1}^2 {\sum\limits_{k = 1}^2 {\left\{ {\sum\limits_g {{{[{d({{{\boldsymbol p}_{{{\boldsymbol c}_{mnkg}}}},{{\boldsymbol c}_{mnk}}} )} ]}^2} + \sum\limits_w {{{[{d({{{\boldsymbol p}_{{{\boldsymbol l}_{mnkw}}}},{{\boldsymbol l}_{mnk}}} )} ]}^2}} } } \right\}} } }$$
where ${{\boldsymbol p}_{{{\boldsymbol c}_{mnkg}}}}$ and ${{\boldsymbol p}_{{{\boldsymbol l}_{mnkw}}}}$ denote the image coordinates of the ${g^{\textrm{th}}}$ discrete point on ${{\boldsymbol c}_{mnk}}$ and the ${w^{\textrm{th}}}$ discrete point on ${{\boldsymbol l}_{mnk}}$ respectively, $d({{{\boldsymbol p}_{{{\boldsymbol c}_{mnkg}}}},{{\boldsymbol c}_{mnk}}} )$ denotes the distance from ${{\boldsymbol p}_{{{\boldsymbol c}_{mnkg}}}}$ to ${{\boldsymbol c}_{mnk}}$, and $d({{{\boldsymbol p}_{{{\boldsymbol l}_{mnkw}}}},{{\boldsymbol l}_{mnk}}} )$ denotes the distance from ${{\boldsymbol p}_{{{\boldsymbol l}_{mnkw}}}}$ to ${{\boldsymbol l}_{mnk}}$. These distances are also defined as the re-projection errors. The parameters to be optimized include ${k_1}$, ${\textbf K}$, $({\textbf R}_W^{{C_m}},{\textbf t}_W^{{C_m}})$ and ${{\boldsymbol P}_{1nk}}$.
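The point-to-line distance in Eq. (24) is closed-form; the exact point-to-ellipse distance requires root-finding, and a common first-order surrogate is the Sampson distance. The paper does not state which is used, so the sketch below is an assumption (function names ours):

```python
import numpy as np

def point_line_distance(p, l):
    """Euclidean distance from pixel p = (u, v) to the image line
    l = (a, b, c), where the line satisfies a*u + b*v + c = 0."""
    return abs(l[0] * p[0] + l[1] * p[1] + l[2]) / np.hypot(l[0], l[1])

def sampson_conic_distance(p, C):
    """First-order (Sampson) approximation of the distance from p to the
    conic with symmetric coefficient matrix C (points x with x^T C x = 0):
    |x^T C x| divided by the norm of the gradient of x^T C x in (u, v)."""
    x = np.array([p[0], p[1], 1.0])
    g = 2.0 * (C @ x)[:2]          # gradient of x^T C x w.r.t. (u, v)
    return abs(x @ C @ x) / np.linalg.norm(g)
```

The Sampson value is exact for points on the conic and a good approximation near it, which is the regime of the extracted edge points in Eq. (24).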

4. Experimental result

The proposed camera calibration method based on the TC target is tested on both synthetic and real data. A comparison with the conventional calibration method based on a 2D target [10] is also carried out. Results are compared in terms of the relative errors of the computed distortion coefficient ${k_1}$, effective focal lengths $({f_x},{f_y})$ and principal point coordinates $({{u}_0},{{v}_0})$.

4.1 Synthetic data

In this part, the calibration method based on the TC target is tested on synthetic data, and the calibration results with 2D targets are compared in the same scene. The experiments are divided into three groups; the targets used for calibration are the TC target, a 2D target with 10 mm grids and a 2D target with 30 mm grids, respectively. The first term of the radial distortion coefficient is set as ${k_1}= - 4 \times {10^{ - 8}}$, the cell size as $({\Delta _u},{\Delta _v})=(0.00345,0.00346)$ mm, the focal length as $f = 12$ mm, and the image size as $({N_u},{N_v}) = (2448,2048)$ pixels. The coordinates of the principal point are set as $({{u}_0},{{v}_0})=({{0.99{N_u}}/2},\,{{1.01{N_v}}/2})$. Then, the intrinsic parameter matrix is,

$${\textbf K} = \left[ {\begin{array}{{ccc}} {{f / {{\Delta _u}}}}&0&{{{0.99{N_u}}/2}}\\ 0&{{f / {{\Delta _v}}}}&{{{1.01{N_v}}/2}}\\ 0&0&1 \end{array}} \right]$$
Besides, the radii and lengths of TC target are set as ${r_1} = {r_2} = 6$ mm and ${l_1} = {l_2} = 200$ mm respectively. The experimental scene is shown in Fig. 2.

Fig. 2. The experimental scene designed for generating synthetic images.

Establish ${O_W} - {X_W}{Y_W}{Z_W}$ in the scene, and place the TC target in the plane ${O_W} - {X_W}{Y_W}$. The distance between the midpoints of ${{\mathbf \Theta }_1}$ and ${{\mathbf \Theta }_2}$ is set as: $d = 100$ mm. The angle between the axes of ${{\mathbf \Theta }_1}$ and ${{\mathbf \Theta }_2}$ is set as: $\varphi = \textrm{2}{\textrm{0}^ \circ }$. Then, the TC target can be modeled as Eq. (26),

$$\left\{ {\begin{array}{{l}} {{{\mathbf \Theta }_1} = \left\{ {{{\boldsymbol P}_{\textrm{11}}}:{{\left[ { - \frac{{{l_1}}}{2}\cos \left( {\frac{\varphi }{2}} \right),\frac{d}{2} + \frac{{{l_1}}}{2}\sin \left( {\frac{\varphi }{2}} \right),0} \right]}^T};{{\boldsymbol P}_{12}}:{{\left[ {\frac{{{l_1}}}{2}\cos \left( {\frac{\varphi }{2}} \right),\frac{d}{2} - \frac{{{l_1}}}{2}\sin \left( {\frac{\varphi }{2}} \right),0} \right]}^T};{r_1}} \right\}}\\ {{{\mathbf \Theta }_2} = \left\{ {{{\boldsymbol P}_{\textrm{21}}}:{{\left[ { - \frac{{{l_2}}}{2}\cos \left( {\frac{\varphi }{2}} \right), - \frac{d}{2} - \frac{{{l_2}}}{2}\sin \left( {\frac{\varphi }{2}} \right),0} \right]}^T};{{\boldsymbol P}_{22}}:{{\left[ {\frac{{{l_2}}}{2}\cos \left( {\frac{\varphi }{2}} \right), - \frac{d}{2} + \frac{{{l_2}}}{2}\sin \left( {\frac{\varphi }{2}} \right),0} \right]}^T};{r_2}} \right\}} \end{array}} \right.$$
The camera takes pictures of the TC target from 9 different views (${C_1} \sim {C_9}$). The optic centers ${O_{{C_m}}}$ all lie in the plane ${z_W} = 300$ mm, with the cameras pointing at the origin ${O_W}$. The synthetic calibration images generated from the scene are shown in Fig. 3, where the label of each image is the corresponding optic center's coordinate in ${O_W} - {X_W}{Y_W}{Z_W}$.

Fig. 3. The synthetic calibration images of TC target.

Comparative experiments are carried out in the same scene, but with a 2D target substituted for the TC target and calibration performed with Zhang's method. The comparative experiments are divided into two groups: both 2D targets have $5 \times 7$ corner points, but with 10 mm grids and 30 mm grids respectively. The 2D target with 30 mm grids can fill the FOV and therefore yields higher calibration accuracy, while the target with 10 mm grids is used to evaluate the adverse effects of a small-size target.

Before calibration, Gaussian noise with zero mean and standard deviations ranging from 0 to 1 pixel (in steps of 0.1 pixel) was added to the feature points' coordinates in the synthetic images. For each noise level, 20 independent trials were performed. Calibration accuracy was evaluated using the Root Mean Square (RMS) of the relative errors between the calibration results and the ground truth. The resulting curves for $({f_x},{f_y})$, $({{u}_0},{{v}_0})$ and ${k_1}$ are shown in Fig. 4.
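The noise injection and the accuracy metric described above can be sketched as follows. This is a minimal illustration under stated assumptions: the helper names are ours, and the calibration step between the two helpers is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def add_pixel_noise(points, sigma):
    """Perturb an N x 2 array of image coordinates with zero-mean
    Gaussian noise of standard deviation sigma (in pixels)."""
    return points + rng.normal(0.0, sigma, size=points.shape)

def rms_relative_error(estimates, truth):
    """RMS of relative errors over repeated trials, the accuracy
    metric plotted in Fig. 4 for f_x, f_y, u_0, v_0 and k_1."""
    rel = (np.asarray(estimates) - truth) / truth
    return np.sqrt(np.mean(rel ** 2))

# One noise level of the sweep (sigma from 0 to 1 px in 0.1 px steps),
# with 20 independent trials per level as in the experiment:
noise_levels = np.arange(0.0, 1.01, 0.1)
```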

Fig. 4. Effects of pixel coordinate noise on calibration accuracy using different calibration methods. (a) ${f_x}$; (b) ${f_y}$; (c) ${{u}_0}$; (d) ${{v}_0}$; (e) ${k_1}$.

Figure 4 shows that, for the 2D targets, the relative error of each parameter grows linearly with the noise level. The 2D target with 30 mm grids achieves higher accuracy and more stable calibration results because it fills the FOV; conversely, the relative errors are larger for the 2D target with 10 mm grids because it cannot. ${k_1}$ is most affected by noise, followed by $({{u}_0},{{v}_0})$, while $({f_x},{f_y})$ are least affected.

For the TC target-based calibration method proposed in this paper, however, the results show strong anti-noise ability: even when the noise level exceeds 0.4 pixel, the relative errors of the parameters do not increase significantly. ${k_1}$ calibrated by the TC target is more accurate than with either 2D target, and $({f_x},{f_y})$ and $({{u}_0},{{v}_0})$ are also more accurate than with the 10 mm-grid 2D target. This is largely because the estimation processes are implemented by fitting geometric features (e.g., fitting lines to estimate the distortion coefficient and internal parameters, and fitting ellipses to estimate the internal parameters), which averages out pixel-level noise. In addition, the TC target contributes more feature points to the global nonlinear optimization.

4.2 Real data

In this part, experiments based on real data are performed and analyzed. The TC target used for calibration consists of two turned stainless-steel right circular cylinders, each 200 mm long with a 6 mm radius. The camera to be calibrated is a Daheng MER-504-10GM-P with a Computar M1214-MP2 lens; the image resolution is $2448 \times 2048$ pixels, the focal length is 12 mm, and the working distance is about 300 mm. Figure 5 shows the actual calibration images taken from 9 different views; the accurate subpixel edges of the TC target's projection were detected based on the partial area effect [22]. Meanwhile, a 2D target with $5 \times 6$ corner points and 37 mm grids was used to calibrate the same image acquisition system as a comparative experiment, its images likewise taken from 9 different views. Table 1 shows the calibration results of the method proposed in this paper and of Zhang's method with real data, together with the relative errors of each parameter.

Fig. 5. The actual calibration images with TC target from 9 different views.

Table 1. Comparison of Calibration Results Between TC Target and 2D Target Methods

As can be seen from Table 1, the calibration results of the TC target and the 2D target are quite close: the relative errors of $({f_x},{f_y})$, $({{u}_0},{{v}_0})$ and ${k_1}$ are smaller than 0.30%, 0.60% and 2.20% respectively. The re-projection errors of the TC target in one of the calibration images are shown in Fig. 6, where Figs. 6(a) and 6(b) show the errors before and after global nonlinear optimization respectively; the arrow lengths equal the re-projection error values multiplied by 150. Figure 7 shows the statistical histograms of the re-projection errors for all views before and after global nonlinear optimization. Figures 6 and 7 show an obvious decrease of the re-projection errors after optimization: the RMS value over all views drops from 1.56 pixel before optimization to 0.23 pixel after.
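The RMS re-projection error quoted here can be computed from matched point sets as in this small sketch (the function name is ours, and the sample arrays are invented for illustration):

```python
import numpy as np

def reprojection_rms(observed, reprojected):
    """RMS of point-to-point re-projection errors, in pixels.

    observed and reprojected are N x 2 arrays of image coordinates
    pooled over all views, as in the Fig. 7 histograms.
    """
    err = np.linalg.norm(observed - reprojected, axis=1)
    return np.sqrt(np.mean(err ** 2))

# Illustrative data: four observed points and their re-projections
observed = np.zeros((4, 2))
reprojected = np.tile([3.0, 4.0], (4, 1))
```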

Fig. 6. TC target re-projection errors in one calibration image. (a) Before global nonlinear optimization; (b) After global nonlinear optimization.

Fig. 7. Statistical histogram of re-projection errors for all views before (blue) and after (red) global nonlinear optimization.

5. Conclusion

A new camera calibration method based on a TC target is proposed. The TC target consists of two right circular cylinders with known radii and lengths. The method is essentially still a 1D target-based calibration technique, so it retains the advantages of easy machining, portability and small space requirements; but compared with a traditional 1D target, no feature points need to be set on the TC target. The camera calibration task is accomplished with the perspective projection model of the cylindrical target, which describes the relationship between the TC target and its projective ellipses and projection lines. The experiments with both synthetic and real data show that the proposed method has strong anti-noise ability and gives results similar to those of the traditional large-size 2D target-based calibration method, while achieving higher calibration accuracy than a small-size 2D target. The TC target-based calibration method proposed in this paper is therefore suitable for application scenarios where the space available for calibration is confined.

References

1. K. Genovese, Y. X. Chi, and B. Pan, “Stereo-camera calibration for large-scale DIC measurements with active phase targets and planar mirrors,” Opt. Express 27(6), 9040–9053 (2019).

2. S. Zhang, “High-speed 3D shape measurement with structured light methods: A review,” Opt. Lasers Eng. 106, 119–131 (2018).

3. S. M. Jiao, M. J. Sun, Y. Gao, T. Lei, Z. W. Xie, and X. C. Yuan, “Motion estimation and quality enhancement for a single image in dynamic single-pixel imaging,” Opt. Express 27(9), 12841–12854 (2019).

4. T. Schneider, M. Y. Li, and C. Cadena, “Observability-aware self-calibration of visual and inertial sensors for ego-motion estimation,” IEEE Sens. J. 19(10), 3846–3860 (2019).

5. H. Yanagihara, T. Kakue, Y. Yamamoto, T. Shimobaba, and T. Ito, “Real-time three-dimensional video reconstruction of real scenes with deep depth using electro-holographic display system,” Opt. Express 27(11), 15662–15678 (2019).

6. H. N. Xu, L. Yu, J. Y. Hou, and S. M. Fei, “Automatic reconstruction method for large scene based on multi-site point cloud stitching,” Measurement 131, 590–596 (2019).

7. P. G. Anne-Sophie, T. Simon, and L. Denis, “Influence of camera calibration conditions on the accuracy of 3D reconstruction,” Opt. Express 24(3), 2678–2686 (2016).

8. R. Y. Tsai, “A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses,” IEEE J. Robot. Autom. 3(4), 323–344 (1987).

9. L. Huang, F. P. Da, and S. Y. Gai, “Research on multi-camera calibration and point cloud correction method based on three-dimensional calibration object,” Opt. Lasers Eng. 115, 32–41 (2019).

10. Z. Y. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).

11. B. L. Cai, Y. W. Wang, J. J. Wu, and M. Y. Wang, “An effective method for camera calibration in defocus scene with circular gratings,” Opt. Lasers Eng. 114, 44–49 (2019).

12. B. H. Shan, W. T. Yuan, and Z. L. Xue, “A calibration method for stereovision system based on solid circle target,” Measurement 132, 213–223 (2019).

13. Z. Y. Zhang, “Camera calibration with one-dimensional objects,” IEEE Trans. Pattern Anal. Mach. Intell. 26(7), 892–899 (2004).

14. F. C. Wu, Z. Y. Hu, and H. J. Zhu, “Camera calibration with moving one-dimensional objects,” Pattern Recognit. 38(5), 755–765 (2005).

15. Y. W. Lv, W. Liu, and X. P. Xu, “Methods based on 1D homography for camera calibration with 1D objects,” Appl. Opt. 57(9), 2155–2164 (2018).

16. L. Wang, F. C. Wu, and Z. Y. Hu, “Multi-camera calibration with one-dimensional object under general motions,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2007), pp. 1–7.

17. S. J. Maybank and O. D. Faugeras, “A theory of self-calibration of a moving camera,” Int. J. Comput. Vis. 8(2), 123–151 (1992).

18. Q. Sun, X. Y. Wang, J. P. Xu, and L. Wang, “Camera self-calibration with lens distortion,” Optik 127(10), 4506–4513 (2016).

19. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision (Cambridge University Press, 2003).

20. C. Doignon and M. D. Mathelin, “A degenerate conic-based method for a direct fitting and 3-D pose of cylinders with a single perspective view,” in Proceedings of IEEE International Conference on Robotics and Automation (IEEE, 2007), pp. 4220–4225.

21. Y. J. Shen, X. Zhang, and W. Cheng, “Quasi-eccentricity error modeling and compensation in vision metrology,” Meas. Sci. Technol. 29(4), 1–9 (2018).

22. A. Trujillo-Pino, K. Krissian, and M. Alemán-Flores, “Accurate subpixel edge location based on partial area effect,” Image Vis. Comput. 31(1), 72–90 (2013).
