Abstract

Projector calibration is one of the most essential steps for structured light systems. Some methods offer high precision but require a complicated calibration procedure, such as those based on phase shifting. Others are simple to implement but cannot meet the accuracy requirement, for example, those based on homography. In this paper, we propose a compensation method for flexible and accurate projector calibration. To keep the calibration procedure easy to operate, the homographic matrix between the projector and the camera is established through projected feature points. Then, a 2D image point compensation method based on a reprojection error iteration algorithm is carried out, and a modified bundle adjustment (BA) algorithm is put forward to refine the calibration parameters of the system. Finally, a feature point reconstruction experiment is implemented to verify the high flexibility and accuracy of the proposed method.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

The structured light method has shown enormous potential for the rapidly increasing number of applications in many areas, including robotic vision, industrial inspection, and virtual reality, because it is non-contact, low cost, highly accurate, capable of full-field acquisition, and easy to implement [1–5]. However, system calibration is still a challenging issue in this research area, as the accuracy of the system measurement is mainly determined by the system calibration, especially the projector calibration. The main ideas behind the existing projector calibration methods can be classified into two categories [6,7]: those based on the mapping from phase to height [8–12] and those based on the concept of stereo vision [13–17]. In the second category, the projector can be considered a “reverse camera”, so the system can be flexibly calibrated by mature stereo vision techniques.

One of the most vital processes in projector calibration is to establish the mapping between the 3D feature points and their corresponding 2D image points efficiently and accurately. Many methods have been proposed to solve this issue, such as the phase-shifting method [18–20], the homography method [21–23], the digital image correlation method [24], and so on. In the phase-shifting method, many vertical and horizontal structured light patterns need to be projected onto the surface of the calibration board at each position. Besides, the method often needs heavy calculations to find the absolute phase map, which makes it inflexible and time-consuming. Recently, many efforts have been made to establish the homographic matrix between the coordinates of the projector and the camera, for its advantage in simplifying the projector calibration procedure. For example, Anwar [21] proposed a method that moves the projector while fixing both the camera and the screen, then uses the constant transformation to obtain the 3D coordinates of the projected points, which facilitates the projector calibration but with low calibration precision. Huang et al. [22] developed a correspondence algorithm based on De Bruijn patterns to set up the homography matrix robustly and adopted a bundle adjustment algorithm to optimize the estimated camera and projector models. It is useful in applications that need frequent re-calibration, although the calibration accuracy is affected by the use of imperfect planar structured light nodes. Juarez-Salazar et al. [23] utilized superposed color checkerboards to build the homography matrix flexibly.

However, no matter which mapping method is adopted, the accuracy of the 2D image points in the projector is still an urgent problem to be solved, and recent literature has proposed different solutions. Liu et al. [25] presented a curve fitting method based on the analysis of the photoelectric module to obtain accurate pixel coordinates, whose residuals can be reduced by a polynomial distortion representation. Wang et al. [26] utilized interpolation techniques to keep the pixel coordinates at sub-pixel accuracy and a full-field phase map to evaluate the effectiveness of the non-linear distortion procedure. Zhou et al. [27] focused on the calibration errors of the principal point and focal length and proposed a systematic recalibration method. Zhang et al. [19] adopted the principle of projective invariance of the cross ratio to improve the accuracy of the phase mapping to the sub-pixel level. Ren et al. [24] further presented a two-dimensional digital image correlation (DIC) method to refine the accuracy of the mapping, based on affine transformation theory, to obtain sub-pixel matching precision. The DIC method can refine the 2D image points for both the phase-shifting method and the homographic method, but it is time-consuming because of the complex algorithm and additional image processing. Focusing on compensation of the projector distortion residual, the literature [20,28,29] proposed methods based on adaptive, distorted, or pre-distorted fringe patterns to compensate for the residual distortion map error effectively. For the accuracy issue of projector calibration, the bundle adjustment (BA) algorithm, which is widely applied in the close-range photogrammetry field for its efficient optimization ability, has proved useful and suitable for projector calibration [7,30,31].

In this paper, utilizing the homography-based method, the projector can obtain the feature points only once at each calibration position without any sophisticated ancillary equipment or complicated procedure. Furthermore, a reprojection error iteration algorithm is presented to improve the accuracy of the image points and to provide the initial value for the BA algorithm. Then, the camera and the projector can be optimized jointly by the BA algorithm with the same high-precision calibration pattern board.

The rest of the paper is organized as follows: In Section 2, the overall scheme of projector calibration is introduced. The projector calibration method is detailed in Section 3. In Section 4, experimental verification of the proposed method is given, and the conclusion of this work is presented in Section 5.

2. Overall scheme of projector calibration

The projector calibration method first regards the projector as a “reverse camera”. The 2D image points on the projector are obtained by establishing the homographic relationship between the image planes of the projector and the camera. The initial calibration of the projector is realized by Zhang's calibration method [32], and the reprojection error of each point is obtained. Then the accuracy of the 2D image points is improved by the reprojection error iteration algorithm. Finally, based on the high-precision 3D feature points on the calibration board, the 2D image points and the internal and external parameters of the projector are optimized via the BA algorithm, and accurate calibration results of the projector can be obtained. The presented method has obvious advantages, as follows:

  • (1) By projecting feature points, the homographic relationship between the image planes of the projector and the camera is established, and only one image needs to be taken at each position, which increases the flexibility and rapidity of the calibration procedure.
  • (2) The bundle adjustment algorithm is adopted for further calibration optimization, which improves the calibration accuracy of the projector. In addition, the initial value of the BA is obtained via the simple reprojection error iteration algorithm.
  • (3) The same accurate 3D feature points on the calibration board are used for the projector and camera simultaneously, so a whole-bundle optimization of the projector and camera can be carried out. The flowchart of the overall calibration method is shown in Fig. 1.

Fig. 1. The overall scheme of projector calibration.

3. Calibration of projector

3.1 Mathematical model of projector calibration

From the perspective of working principle and optical imaging, the projector can be regarded as a “reverse camera”, so the model representation of the camera is suitable for the projector as well. Ideally, the projector can also be considered a pinhole model, so the relationship between the 3D feature points ${\boldsymbol P}_w(X_w, Y_w, Z_w, 1)$ and the 2D image points ${\boldsymbol P}_p(u_p, v_p, 1)$ of the projector can be expressed as:

$$\left[ \begin{array}{c} u_p \\ v_p \\ 1 \end{array} \right] = s{\boldsymbol K}_p[\,{\boldsymbol R}_p \;\; {\boldsymbol T}_p\,]\left[ \begin{array}{c} X_w \\ Y_w \\ Z_w \\ 1 \end{array} \right]$$
where $s = 1/Z_p$ is the scaling factor and ${\boldsymbol K}_p$ is the projector's internal parameter matrix, which can be expressed as follows:
$${\boldsymbol K}_p = \left[ \begin{array}{ccc} \dfrac{f_p}{dx_p} & 0 & u_{p0} \\ 0 & \dfrac{f_p}{dy_p} & v_{p0} \\ 0 & 0 & 1 \end{array} \right] = \left[ \begin{array}{ccc} f_{pu} & 0 & u_{p0} \\ 0 & f_{pv} & v_{p0} \\ 0 & 0 & 1 \end{array} \right].$$

where $dx_p$ and $dy_p$ represent the physical size of a single pixel along the X and Y axes, respectively; $f_{pu}$ and $f_{pv}$ are the effective focal lengths of the projector; $(u_{p0}, v_{p0})$ is the coordinate of the principal point of the image; and $[\,{\boldsymbol R}_p \;\; {\boldsymbol T}_p\,]$ is the external parameter matrix of the projector.

In order to obtain higher calibration accuracy, lens distortion must be corrected. After considering the distortion error, the image point is denoted as ${\boldsymbol P}'_p(u'_p, v'_p, 1)$, so

$$\left\{ \begin{array}{l} u'_p = u_p(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2p_1 xy + p_2(r^2 + 2x^2) \\ v'_p = v_p(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2p_2 xy + p_1(r^2 + 2y^2) \end{array} \right.,$$
where $r^2 = x^2 + y^2$; $k_1$, $k_2$ and $k_3$ are the radial distortion coefficients; and $p_1$ and $p_2$ are the tangential distortion coefficients.
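As a concrete illustration, the forward model of Eqs. (1)–(3) can be sketched as below. This is a minimal Python/NumPy sketch under our own naming (the system software itself is built on Visual Studio/OpenCV, Section 4.1); note that, as is common practice, the distortion polynomial is applied here to the normalized coordinates $(x, y)$ that also define $r^2$.

```python
import numpy as np

def project_point(K_p, R_p, T_p, P_w, dist=(0, 0, 0, 0, 0)):
    """Forward projector model of Eqs. (1)-(3): world point -> distorted pixel.

    K_p: 3x3 internal matrix; R_p: 3x3 rotation; T_p: (3,) translation;
    P_w: (3,) world point; dist = (k1, k2, k3, p1, p2).
    """
    k1, k2, k3, p1, p2 = dist
    X = R_p @ P_w + T_p                       # point in the projector frame
    x, y = X[0] / X[2], X[1] / X[2]           # scaling s = 1/Z_p, Eq. (1)
    r2 = x * x + y * y                        # r^2 = x^2 + y^2
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)  # Eq. (3)
    yd = y * radial + 2 * p2 * x * y + p1 * (r2 + 2 * y * y)
    u = K_p[0, 0] * xd + K_p[0, 2]            # u'_p = f_pu * xd + u_p0
    v = K_p[1, 1] * yd + K_p[1, 2]            # v'_p = f_pv * yd + v_p0
    return np.array([u, v])
```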

3.2 Establishment of homographic relationship

According to projective geometry, without considering distortion, the imaging of 3D feature points on the image planes of the camera and the projector can both be regarded as simple projective transformations. Therefore, an approximate projective transformation between the image planes of the camera and the projector can be established and described by a homographic matrix [33].

Suppose any point in the set of 3D feature points on the calibration board is denoted as ${\boldsymbol P}_w^k$. The corresponding 2D image point on the image plane of the projector is denoted as ${\boldsymbol P}_p^k$; after being captured by the camera, the 2D image point on the camera image plane is denoted as ${\boldsymbol P}_c^k$. The plane of the calibration board is set to satisfy the equation:

$${{\boldsymbol n}^\textrm{T}}{\boldsymbol P}_w^k + d = 0. $$
where ${\boldsymbol n}$ represents the normal vector of the calibration plane and $d$ is its offset term, so that the plane equation gives $-{\boldsymbol n}^{\rm T}{\boldsymbol P}_w^k/d = 1$. Then, combining with Eq. (1), we can obtain:
$${\boldsymbol P}_c^k = {\boldsymbol K}_c({\boldsymbol R}{\boldsymbol P}_w^k + {\boldsymbol t}) = {\boldsymbol K}_c\left[ {\boldsymbol R}{\boldsymbol P}_w^k + {\boldsymbol t}\left( -\frac{{\boldsymbol n}^{\rm T}{\boldsymbol P}_w^k}{d} \right) \right] = {\boldsymbol K}_c\left( {\boldsymbol R} - \frac{{\boldsymbol t}{\boldsymbol n}^{\rm T}}{d} \right){\boldsymbol P}_w^k = {\boldsymbol K}_c\left( {\boldsymbol R} - \frac{{\boldsymbol t}{\boldsymbol n}^{\rm T}}{d} \right){\boldsymbol K}_p^{-1}{\boldsymbol P}_p^k.$$
Denoting ${\boldsymbol K}_c\left( {\boldsymbol R} - \frac{{\boldsymbol t}{\boldsymbol n}^{\rm T}}{d} \right){\boldsymbol K}_p^{-1}$ as ${\boldsymbol H}$, Eq. (4) can be rewritten as
$${\boldsymbol P}_c^k=\gamma{{\boldsymbol HP}_p^k}$$
where $\gamma$ is a scaling factor, and ${\boldsymbol H}$ can be represented as
$${\boldsymbol H} = \left[ \begin{array}{ccc} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{array} \right]$$
As shown in Fig. 2, the mapping relation between the image planes of the projector and the camera is thus established, namely the homographic relationship. The image points are represented by homogeneous coordinates, and the image points of the projector and the camera can be normalized as
$$\begin{array}{l} {u_c} = \frac{{{h_{11}}{u_p} + {h_{12}}{v_p} + {h_{13}}}}{{{h_{31}}{u_p} + {h_{32}}{v_p} + {h_{33}}}}\\ {v_c} = \frac{{{h_{21}}{u_p} + {h_{22}}{v_p} + {h_{23}}}}{{{h_{31}}{u_p} + {h_{32}}{v_p} + {h_{33}}}} \end{array}$$
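In code, this normalization is just a homogeneous multiplication followed by division by the third component; a minimal sketch (our naming) is:

```python
import numpy as np

def map_point(H, p):
    """Map an image point through the homography with homogeneous division."""
    u, v, w = H @ np.array([p[0], p[1], 1.0])
    return np.array([u / w, v / w])   # (u_c, v_c) from (u_p, v_p)

# The inverse mapping (camera -> projector) uses np.linalg.inv(H) the same way.
```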

Fig. 2. Sketch of the homographic relationship between the projector and camera.

From the above equation, it can be seen that each pair of corresponding points yields two constraint equations, so only four pairs of corresponding points are needed to obtain the homographic matrix. However, due to noise and other interference factors, there is usually a large error when the homographic matrix is obtained from only four points. In this paper, weights are introduced to improve the accuracy of the homographic matrix. In other words, a large number of marked points are projected onto the calibration board; four marked points at a time are taken to calculate a homographic matrix, and the distance between the sampling center and the image center is calculated so as to assign a weight to the corresponding homographic matrix. Therefore, the optimized homographic matrix can be expressed as:

$${\boldsymbol H}_{CP} = \left[ \begin{array}{ccc} \sum\limits_{i=1}^n w_{11}^i \cdot h_{11}^i & \sum\limits_{i=1}^n w_{12}^i \cdot h_{12}^i & \sum\limits_{i=1}^n w_{13}^i \cdot h_{13}^i \\ \sum\limits_{i=1}^n w_{21}^i \cdot h_{21}^i & \sum\limits_{i=1}^n w_{22}^i \cdot h_{22}^i & \sum\limits_{i=1}^n w_{23}^i \cdot h_{23}^i \\ \sum\limits_{i=1}^n w_{31}^i \cdot h_{31}^i & \sum\limits_{i=1}^n w_{32}^i \cdot h_{32}^i & \sum\limits_{i=1}^n w_{33}^i \cdot h_{33}^i \end{array} \right],$$

where $w^i$ represents the weight of the $i$-th homographic matrix; it can be calculated as $w^i = d_i/\sum\nolimits_{i=1}^n d_i$, where $d_i$ is the distance between the sampling center and the image center. Then the image points of the feature points on the projector image plane can be calculated through the homographic matrix, so the projector can “capture” the feature points on the calibration board for Zhang's calibration method.
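A sketch of this weighted estimation is given below, assuming the matched projector/camera point lists are already available; the random sampling of four-point groups, the use of OpenCV's cv2.getPerspectiveTransform for the minimal solution, and the normalization of each H before averaging are our own choices, not prescribed by the paper.

```python
import numpy as np
import cv2

def weighted_homography(pts_p, pts_c, image_center, n_samples=50, seed=0):
    """Estimate H_CP as a weighted sum of four-point homographies (Section 3.2).

    pts_p, pts_c: (N, 2) matched points on the projector and camera planes;
    weights follow w_i = d_i / sum(d_i), with d_i the distance between the
    sampling center and the image center.
    """
    rng = np.random.default_rng(seed)
    Hs, ds = [], []
    for _ in range(n_samples):
        idx = rng.choice(len(pts_p), size=4, replace=False)
        H = cv2.getPerspectiveTransform(pts_p[idx].astype(np.float32),
                                        pts_c[idx].astype(np.float32))
        Hs.append(H / H[2, 2])                    # fix the scale before averaging
        ds.append(np.linalg.norm(pts_p[idx].mean(axis=0) - image_center))
    w = np.asarray(ds) / np.sum(ds)               # w_i = d_i / sum(d_i)
    return np.tensordot(w, np.stack(Hs), axes=1)  # weighted element-wise sum
```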

3.3 Principle of reprojection error iteration algorithm

According to the principle of Zhang's calibration method, when the input 2D image points change, the internal and external parameters of the projector change accordingly. Calibration based on the above method has a large reprojection error because of the low-precision image points obtained from the homographic relationship alone. Some methods, such as the DIC method [24] or compensation methods [28,29,34], have been adopted to improve the accuracy of the 2D image points, but their calculations are time-consuming. Therefore, a method based on error iteration is proposed. Its essence is to adjust the uncertain 2D image coordinates according to the magnitude and direction of the reprojection errors.

Suppose that ${\boldsymbol P}_w$ is a corner point on the calibration board whose accuracy is high enough, and ${\boldsymbol P}_p(u_p, v_p)$ is the image point obtained through the homographic relationship described in Section 3.2, so that the projector can be calibrated by Zhang's method. The reprojected image point, calculated from the internal and external parameters, is denoted as ${\boldsymbol P}'_p(u'_p, v'_p)$; the error between ${\boldsymbol P}_p$ and ${\boldsymbol P}'_p$ is represented as $(\Delta u, \Delta v)$ and is called the reprojection error, as shown in Fig. 3. The reprojection error of every corner point at each checkerboard position can be obtained. Usually the reprojection error is used as the evaluation standard of calibration accuracy, but here it is used as the basis for iteration. The specific iterative steps are as follows, with a code sketch after the list:

  • (1) Carry out the projector calibration using the image points ${\boldsymbol P}_p$ obtained from the homographic matrix. The initial internal and external parameters of the projector and the reprojection errors are obtained.
  • (2) Express the reprojection error as $\Delta u = u'_p - u_p$ and $\Delta v = v'_p - v_p$, which encodes both the direction and the magnitude of the error.
  • (3) Replace the image points ${\boldsymbol P}_p(u_p, v_p)$ by ${\boldsymbol P}_{pi}(u_{pi}, v_{pi})$, where $u_{pi} = u_p + \Delta u/2$ and $v_{pi} = v_p + \Delta v/2$.
  • (4) Calibrate the projector again with the adjusted points ${\boldsymbol P}_{pi}(u_{pi}, v_{pi})$; new internal and external parameters of the projector and new reprojection errors are obtained.
  • (5) Repeat steps (2)–(4) until the set calibration accuracy is reached.
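The loop below is a minimal sketch of steps (1)–(5) built on OpenCV's cv2.calibrateCamera and cv2.projectPoints; the convergence threshold, iteration cap, and array shapes are our assumptions.

```python
import numpy as np
import cv2

def iterate_image_points(obj_pts, img_pts, image_size, tol=0.01, max_iter=20):
    """Refine projector image points by the reprojection error iteration.

    obj_pts: list of (N, 3) float32 boards of 3D feature points.
    img_pts: list of (N, 1, 2) float32 image points from the homography.
    """
    for _ in range(max_iter):
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_pts, img_pts, image_size, None, None)      # steps (1)/(4)
        new_pts, err = [], 0.0
        for P, p, r, t in zip(obj_pts, img_pts, rvecs, tvecs):
            proj, _ = cv2.projectPoints(P, r, t, K, dist)  # reprojected points
            delta = proj - p                               # step (2): (du, dv)
            err = max(err, float(np.abs(delta).max()))
            new_pts.append((p + delta / 2).astype(np.float32))  # step (3)
        img_pts = new_pts
        if err < tol:                                      # step (5)
            break
    return img_pts, K, dist, rvecs, tvecs
```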

Fig. 3. The reprojection error of the point.

After iteration, high-precision internal and external parameters and image point coordinates under the calibration model can be obtained. The method is easy to implement, and theoretically the reprojection error can approach zero. In actual measurement, the projector must form a binocular system with the camera, so the iteration result is used as the initial value of the BA algorithm.

3.4 Calibration parameters optimization based on the BA algorithm

The BA algorithm can effectively optimize the 2D image points, the projector parameters, and the 3D feature points, either separately or simultaneously. The 3D feature points on the calibration board are accurate by manufacture, so only the image point coordinates and the internal and external parameters are optimized simultaneously in the proposed BA optimization process. However, when there are different types of observed values in the adjustment problem, it is necessary to estimate the prior unit weight variance. In order to improve the accuracy of the variance estimation, a posterior variance estimation method is proposed to calculate the variances of the various observed values and to determine the weights according to their values.

3.4.1. Establishment of the bundle adjustment model

According to Eq. (1), the collinear equation is established as follows:

$$\left\{ \begin{array}{l} u_p = f_{pu}\dfrac{m_{11}X_w + m_{12}Y_w + m_{13}Z_w + m_{14}}{m_{31}X_w + m_{32}Y_w + m_{33}Z_w + m_{34}} + u_{p0} \\[2.2ex] v_p = f_{pv}\dfrac{m_{21}X_w + m_{22}Y_w + m_{23}Z_w + m_{24}}{m_{31}X_w + m_{32}Y_w + m_{33}Z_w + m_{34}} + v_{p0} \end{array} \right.,$$
where $m_{ij}$ are the coefficients of the external parameter matrix and the other parameters are consistent with Eq. (1). Due to the existence of error, there must be some deviation between the ideal 2D image point and the calculated 2D image point. Similar to the model proposed in Ref. [33], the 2D image coordinate error $(\Delta u, \Delta v)$ and the coordinate corrections $(du, dv)$ are introduced, but the image coordinate corrections are caused only by the errors of the linear parameters, the principal point and effective focal lengths, and the distortion coefficients, while the 3D point coordinates are kept unchanged. So the equation can be expressed as follows,
$$\left[ \begin{array}{c} u_p \\ v_p \end{array} \right] + \left[ \begin{array}{c} \Delta u \\ \Delta v \end{array} \right] = \left[ \begin{array}{c} u'_p \\ v'_p \end{array} \right] + \left[ \begin{array}{c} du \\ dv \end{array} \right]$$
where $(u'_p, v'_p)$ is the corrected, calculated 2D image point coordinate; that is, it is calculated by substituting the parameter values of each iteration into Eq. (7). The coordinate corrections $(du, dv)$ can be developed using a first-order Taylor series and separated into terms for the linear parameters, the principal point and effective focal lengths, and the distortion coefficients. Thus, in matrix form,
$$\begin{aligned}\left[ \begin{array}{c} \Delta u \\ \Delta v \end{array} \right] &= \left[ \begin{array}{ccc} \dfrac{\partial u_p}{\partial m_{11}} & \dfrac{\partial u_p}{\partial m_{12}} \cdots & \dfrac{\partial u_p}{\partial m_{34}} \\ \dfrac{\partial v_p}{\partial m_{11}} & \dfrac{\partial v_p}{\partial m_{12}} \cdots & \dfrac{\partial v_p}{\partial m_{34}} \end{array} \right]\left[ \begin{array}{c} \Delta m_{11} \\ \Delta m_{12} \\ \vdots \\ \Delta m_{34} \end{array} \right] + \left[ \begin{array}{cccc} \dfrac{\partial u_p}{\partial f_{pu}} & 0 & \dfrac{\partial u_p}{\partial u_{p0}} & 0 \\ 0 & \dfrac{\partial v_p}{\partial f_{pv}} & 0 & \dfrac{\partial v_p}{\partial v_{p0}} \end{array} \right]\left[ \begin{array}{c} \Delta f_{pu} \\ \Delta f_{pv} \\ \Delta u_{p0} \\ \Delta v_{p0} \end{array} \right]\\ &\quad + \left[ \begin{array}{ccc} \dfrac{\partial u_p}{\partial k_1} & \dfrac{\partial u_p}{\partial k_2} \cdots & \dfrac{\partial u_p}{\partial p_2} \\ \dfrac{\partial v_p}{\partial k_1} & \dfrac{\partial v_p}{\partial k_2} \cdots & \dfrac{\partial v_p}{\partial p_2} \end{array} \right]\left[ \begin{array}{c} \Delta k_1 \\ \Delta k_2 \\ \vdots \\ \Delta p_2 \end{array} \right] - \left[ \begin{array}{c} u_p - u'_p \\ v_p - v'_p \end{array} \right].\end{aligned}$$
Therefore, according to the collinear equation, an error equation can be established as follows,
$${\boldsymbol V} = \left[ \begin{array}{c} \Delta u \\ \Delta v \end{array} \right] = \left[ \frac{\partial {\boldsymbol F}}{\partial {\boldsymbol M}} \right][\Delta {\boldsymbol M}] + \left[ \frac{\partial {\boldsymbol F}}{\partial {\boldsymbol K}} \right][\Delta {\boldsymbol K}] + \left[ \frac{\partial {\boldsymbol F}}{\partial {\boldsymbol D}} \right][\Delta {\boldsymbol D}] - {\boldsymbol L}$$
where ${\boldsymbol V}$ is the image point coordinate error vector and ${\boldsymbol L}$ represents the difference between the ideal image coordinates and the calculated approximate image coordinates. To simplify the expression, each part of Eq. (9) is replaced with a symbol; that is, the error equation can be expressed as
$${\boldsymbol V} = {\boldsymbol {Am}} + {\boldsymbol {Bk}} + {\boldsymbol {Cd}} - {\boldsymbol L}$$
where ${\boldsymbol A}$ is the first-order Taylor expansion matrix with respect to the linear transformation parameters and ${\boldsymbol m}$ is the correction of the linear transformation parameters; ${\boldsymbol B}$ is the first-order Taylor expansion matrix with respect to the principal point and effective focal lengths and ${\boldsymbol k}$ is the correction of the principal point and effective focal lengths; ${\boldsymbol C}$ is the first-order Taylor expansion matrix with respect to the distortion coefficients and ${\boldsymbol d}$ is the correction of the distortion coefficients. Then the adjustment equation can be established as follows,
$${\boldsymbol V} = \left[ \begin{array}{c} {\boldsymbol V} \\ {\boldsymbol V}^{\boldsymbol m} \\ {\boldsymbol V}^{\boldsymbol k} \\ {\boldsymbol V}^{\boldsymbol d} \end{array} \right] = \left[ \begin{array}{c} {\boldsymbol V}_1 \\ {\boldsymbol V}_2 \\ {\boldsymbol V}_3 \\ {\boldsymbol V}_4 \end{array} \right] = \left[ \begin{array}{ccc} {\boldsymbol A} & {\boldsymbol B} & {\boldsymbol C} \\ {\boldsymbol E} & 0 & 0 \\ 0 & {\boldsymbol E} & 0 \\ 0 & 0 & {\boldsymbol E} \end{array} \right]\left[ \begin{array}{c} {\boldsymbol m} \\ {\boldsymbol k} \\ {\boldsymbol d} \end{array} \right] - \left[ \begin{array}{c} {\boldsymbol L} \\ 0 \\ 0 \\ 0 \end{array} \right] = {\boldsymbol {GX}} - {\boldsymbol L}$$
where ${\boldsymbol V}^m$ is the linear parameter error vector, ${\boldsymbol V}^k$ is the principal point and effective focal length error vector, and ${\boldsymbol V}^d$ is the distortion coefficient error vector. In order to flexibly control the correction of each optimization variable, each parameter is regarded as an observation value and a weight is used to control its degree of correction. The weights are determined by the posterior variance estimation method described below.
$${\boldsymbol P} = \left[ \begin{array}{cccc} {\boldsymbol P}_V & 0 & 0 & 0 \\ 0 & {\boldsymbol P}_{V^m} & 0 & 0 \\ 0 & 0 & {\boldsymbol P}_{V^k} & 0 \\ 0 & 0 & 0 & {\boldsymbol P}_{V^d} \end{array} \right] = \left[ \begin{array}{cccc} {\boldsymbol P}_1 & 0 & 0 & 0 \\ 0 & {\boldsymbol P}_2 & 0 & 0 \\ 0 & 0 & {\boldsymbol P}_3 & 0 \\ 0 & 0 & 0 & {\boldsymbol P}_4 \end{array} \right]$$
where ${\boldsymbol P}_{V^m}$, ${\boldsymbol P}_{V^k}$ and ${\boldsymbol P}_{V^d}$ are the corresponding weights of the linear parameters, the principal point and effective focal lengths, and the distortion coefficients, respectively. The error can then be minimized according to Eq. (14), namely
$$(m_{11}, m_{12}, \cdots m_{34}; f_{pu}, \cdots v_{p0}; k_1 \cdots p_2) = \arg\min\left({\boldsymbol V}^T{\boldsymbol P}{\boldsymbol V}\right)$$
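Numerically, each iteration of the minimization in Eq. (14) reduces to a weighted normal-equation solve; a minimal sketch (our naming, dense matrices for clarity) is:

```python
import numpy as np

def adjustment_step(G, L, P):
    """One weighted least-squares step of Eq. (14) for the model V = GX - L.

    G: stacked design matrix of Eq. (12); L: observation vector;
    P: (block-)diagonal weight matrix. Returns the correction vector X
    (corrections m, k, d stacked) and the residual vector V.
    """
    N = G.T @ P @ G               # normal matrix G^T P G
    W = G.T @ P @ L               # right-hand side G^T P L
    X = np.linalg.solve(N, W)     # solves (G^T P G) X = G^T P L
    V = G @ X - L                 # residuals of Eq. (12)
    return X, V
```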

3.4.2. Posterior variance estimation

According to the free extremum principle, the following equation can be obtained from Eq. (14):

$$\frac{{\partial {{\boldsymbol V}^T}{\boldsymbol {PV}}}}{{\partial {\boldsymbol X}}} = 2{{\boldsymbol V}^T}{\boldsymbol P}\frac{{\partial {\boldsymbol V}}}{{\partial {\boldsymbol X}}} = 2{{\boldsymbol V}^T}{\boldsymbol P}\frac{{\partial ({{\boldsymbol {GX}} - {\boldsymbol L}} )}}{{\partial {\boldsymbol X}}} = 2{{\boldsymbol V}^T}{\boldsymbol {PG}} = 0$$
that is:
$${{\boldsymbol G}^T}{\boldsymbol {PV}} = 0$$
substitute Eq. (12) into the above equation to obtain:
$$({{{\boldsymbol G}^T}{\boldsymbol {PG}}} ){\boldsymbol X} - {{\boldsymbol G}^T}{\boldsymbol {PL}} = 0$$
To simplify the calculation, matrix ${\boldsymbol G}$ is denoted as:
$${\boldsymbol G} = \left[ \begin{array}{ccc} {\boldsymbol A} & {\boldsymbol B} & {\boldsymbol C} \\ {\boldsymbol E} & 0 & 0 \\ 0 & {\boldsymbol E} & 0 \\ 0 & 0 & {\boldsymbol E} \end{array} \right] = \left[ \begin{array}{c} {\boldsymbol G}_1 \\ {\boldsymbol G}_2 \\ {\boldsymbol G}_3 \\ {\boldsymbol G}_4 \end{array} \right].$$
Its normal equation can be obtained as:
$${\boldsymbol N} = {{\boldsymbol G}^T}{\boldsymbol {PG}} = \sum\limits_{i = 1,2,3,4} {{\boldsymbol N}_i} = \sum\limits_{i = 1,2,3,4} {{\boldsymbol G}_i^T{\boldsymbol P}_i{\boldsymbol G}_i}$$
$${\boldsymbol W} = {{\boldsymbol G}^T}{\boldsymbol {PL}} = \sum\limits_{i = 1,2,3,4} {{\boldsymbol W}_i} = \sum\limits_{i = 1,2,3,4} {{\boldsymbol G}_i^T{\boldsymbol P}_i{\boldsymbol L}_i}$$
where ${\boldsymbol N}_i$ and ${\boldsymbol W}_i$ are the components of the matrices ${\boldsymbol N}$ and ${\boldsymbol W}$, respectively.

The weights of the observation values can be determined by empirical formulas, but practice shows that this is not accurate enough in many cases. A reasonable ratio among the weights of the different observations is the key, and this ratio depends on a reasonable determination of their variances. Obviously, the correctness of the prior variance estimate has a direct impact on the bundle adjustment results. Therefore, the prior variance must be estimated during the adjustment to determine the weights. The prior variance is usually checked against the posterior variance: when the two are inconsistent, the prior variance is considered inappropriate and the observations need to be re-weighted according to the posterior variance.

There are many ways to use the posterior variance to determine the weights; the least squares estimation method is adopted in this paper. First, initial weights are assigned to the various observations and a pre-adjustment is performed. The correction vector ${\boldsymbol V}$ of each observation value obtained by the pre-adjustment is used to estimate the prior variance. The weights ${\boldsymbol P}_i$ of the various observations given in the first adjustment are generally unreasonable, that is, their corresponding unit weight variances are not equal. If the unit weight variances are unequal or differ greatly, the weighting is unreasonable and must be reset.

Denote the unit weight variances of the observed values as $\sigma_1^2$, $\sigma_2^2$, $\sigma_3^2$ and $\sigma_4^2$, respectively, so that ${\boldsymbol D}_{L_i} = \sigma_i^2{\boldsymbol P}_i^{-1}$ $(i = 1,2,3,4)$. The values of $\sigma_1^2$, $\sigma_2^2$, $\sigma_3^2$ and $\sigma_4^2$ can be estimated through the weighted sums of squares of the correction vectors formed by each adjustment, that is ${\boldsymbol V}_i^T{\boldsymbol P}_i{\boldsymbol V}_i$, until all $\sigma_i^2$ are equal. According to the mathematical expectation of a quadratic form, it can be obtained that

$$E({{{\boldsymbol V}^T}{\boldsymbol {PV}}} )= tr({{\boldsymbol {PD}}({\boldsymbol V} )} )+ {E^T}({\boldsymbol V} ){\boldsymbol {PE}}({\boldsymbol V}). $$
Since the expected value of the theoretical error vector should be zero, then:
$$E({{{\boldsymbol V}^T}{\boldsymbol {PV}}} )= tr({{\boldsymbol {PD}}({\boldsymbol V} )} ). $$
According to Eqs. (12) and (17), it can be obtained:
$$\begin{aligned}{\boldsymbol V}_1 &= {\boldsymbol G}_1{\boldsymbol X} - {\boldsymbol L}_1 = {\boldsymbol G}_1{\boldsymbol N}^{-1}({\boldsymbol W}_1 + {\boldsymbol W}_2 + {\boldsymbol W}_3 + {\boldsymbol W}_4) - {\boldsymbol L}_1\\ &= ({\boldsymbol G}_1{\boldsymbol N}^{-1}{\boldsymbol G}_1^T{\boldsymbol P}_1 - {\boldsymbol E}){\boldsymbol L}_1 + \sum\limits_{i = 2,3,4} {\boldsymbol G}_1{\boldsymbol N}^{-1}{\boldsymbol G}_i^T{\boldsymbol P}_i{\boldsymbol L}_i\end{aligned}$$
where ${\boldsymbol G}_1 = [\,{\boldsymbol A} \;\; {\boldsymbol B} \;\; {\boldsymbol C}\,]$ is the matrix formed by the first row of the coefficient matrix ${\boldsymbol G}$, and the law of covariance propagation can be applied to calculate:
$${\boldsymbol D}_{V_1} = ({\boldsymbol G}_1{\boldsymbol N}^{-1}{\boldsymbol G}_1^T{\boldsymbol P}_1 - {\boldsymbol E}){\boldsymbol D}_{L_1}({\boldsymbol G}_1{\boldsymbol N}^{-1}{\boldsymbol G}_1^T{\boldsymbol P}_1 - {\boldsymbol E})^T + \sum\limits_{i = 2,3,4} ({\boldsymbol G}_1{\boldsymbol N}^{-1}{\boldsymbol G}_i^T{\boldsymbol P}_i){\boldsymbol D}_{L_i}({\boldsymbol G}_1{\boldsymbol N}^{-1}{\boldsymbol G}_i^T{\boldsymbol P}_i)^T$$
where ${\boldsymbol D}_{L_i} = \sigma_i^2{\boldsymbol P}_i^{-1}$; the above equation can be simplified to
$${{\boldsymbol D}_{{V_1}}} = {\sigma _1}^2({{{\boldsymbol G}_1}{{\boldsymbol N}^{ - 1}}{{\boldsymbol N}_1}{{\boldsymbol N}^{ - 1}}{\boldsymbol G}_1^T - 2{{\boldsymbol G}_1}{{\boldsymbol N}^{ - 1}}{\boldsymbol G}_1^T + {\boldsymbol P}_1^{ - 1}} )+ \sum\limits_{i = 2,3,4} {{\sigma _i}^2({{{\boldsymbol G}_1}{{\boldsymbol N}^{ - 1}}{{\boldsymbol N}_i}{{\boldsymbol N}^{ - 1}}{\boldsymbol G}_1^T} )}$$
According to Eq. (21), it can be obtained:
$$\begin{aligned} E\left({\boldsymbol V}_1^T{\boldsymbol P}_1{\boldsymbol V}_1\right) &= \sigma_1^2\,tr\left({\boldsymbol P}_1({\boldsymbol G}_1{\boldsymbol N}^{-1}{\boldsymbol N}_1{\boldsymbol N}^{-1}{\boldsymbol G}_1^T - 2{\boldsymbol G}_1{\boldsymbol N}^{-1}{\boldsymbol G}_1^T + {\boldsymbol P}_1^{-1})\right) + \sum\limits_{i = 2,3,4} \sigma_i^2\,tr\left({\boldsymbol P}_1({\boldsymbol G}_1{\boldsymbol N}^{-1}{\boldsymbol N}_i{\boldsymbol N}^{-1}{\boldsymbol G}_1^T)\right)\\ &= \sigma_1^2\left[n_1 + tr({\boldsymbol N}^{-1}{\boldsymbol N}_1{\boldsymbol N}^{-1}{\boldsymbol N}_1) - 2tr({\boldsymbol N}^{-1}{\boldsymbol N}_1)\right] + \sum\limits_{i = 2,3,4} \sigma_i^2\,tr\left({\boldsymbol N}^{-1}{\boldsymbol N}_1{\boldsymbol N}^{-1}{\boldsymbol N}_i\right) \end{aligned}$$
Repeat the process for ${{\boldsymbol V}_2}$, ${{\boldsymbol V}_3}$ and ${{\boldsymbol V}_4}$. Therefore, it can be obtained:
$$E\left({\boldsymbol V}_2^T{\boldsymbol P}_2{\boldsymbol V}_2\right) = tr\left({\boldsymbol P}_2{\boldsymbol D}_{V_2}\right) = \sigma_2^2\left[n_2 + tr({\boldsymbol N}^{-1}{\boldsymbol N}_2{\boldsymbol N}^{-1}{\boldsymbol N}_2) - 2tr({\boldsymbol N}^{-1}{\boldsymbol N}_2)\right] + \sum\limits_{i = 1,3,4} \sigma_i^2\,tr\left({\boldsymbol N}^{-1}{\boldsymbol N}_2{\boldsymbol N}^{-1}{\boldsymbol N}_i\right)$$
and so on. According to the property of the matrix trace, for a diagonal matrix ${\boldsymbol \lambda}$:
$$tr({\boldsymbol \lambda}) = \sum\limits_i \lambda_{ii}.$$
So ${n_i}$ is the sum of the diagonal elements of matrix ${{\boldsymbol P}_i}$. Similarly, the matrix ${\boldsymbol S}$ can be obtained:
$${\boldsymbol S} = \left[ \begin{array}{cccc} s_{11} & s_{12} & s_{13} & s_{14} \\ s_{21} & s_{22} & s_{23} & s_{24} \\ s_{31} & s_{32} & s_{33} & s_{34} \\ s_{41} & s_{42} & s_{43} & s_{44} \end{array} \right].$$
where $s_{ij}$ are the coefficients in the expectation equations above. The variance component estimation equation is established according to matrix ${\boldsymbol S}$:
$${\boldsymbol S}\left[ \begin{array}{c} \sigma_1^2 \\ \sigma_2^2 \\ \sigma_3^2 \\ \sigma_4^2 \end{array} \right] = \left[ \begin{array}{c} {\boldsymbol V}_1^T{\boldsymbol P}_1{\boldsymbol V}_1 \\ {\boldsymbol V}_2^T{\boldsymbol P}_2{\boldsymbol V}_2 \\ {\boldsymbol V}_3^T{\boldsymbol P}_3{\boldsymbol V}_3 \\ {\boldsymbol V}_4^T{\boldsymbol P}_4{\boldsymbol V}_4 \end{array} \right].$$
Denote this as:
$${\boldsymbol S} \cdot {\boldsymbol \theta } = {{\boldsymbol W}_\theta }$$
where ${\boldsymbol W}_\theta = [{\boldsymbol V}_1^T{\boldsymbol P}_1{\boldsymbol V}_1, {\boldsymbol V}_2^T{\boldsymbol P}_2{\boldsymbol V}_2, {\boldsymbol V}_3^T{\boldsymbol P}_3{\boldsymbol V}_3, {\boldsymbol V}_4^T{\boldsymbol P}_4{\boldsymbol V}_4]^{\rm T}$ and ${\boldsymbol \theta} = [\sigma_1^2, \sigma_2^2, \sigma_3^2, \sigma_4^2]^T$. After the variance estimation, it is still necessary to test the result to determine whether the weight ratio before the adjustment was correct. If $\sigma_1^2 = \sigma_2^2 = \sigma_3^2 = \sigma_4^2$ holds, the weight ratio determined before the adjustment is correct. Otherwise, the current result is taken as the prior weight for the next adjustment and the modified weight is determined according to Eq. (30).
$${{\boldsymbol P}_i}^{k + 1} = \frac{\textrm{C}}{{{\sigma _i}^2}}{{\boldsymbol P}_i}^k$$
where $C$ is a constant, usually selected as the first estimated $\sigma_1^2$. The calculation steps of the variance estimation are as follows, with a code sketch after the list:
  • (1) According to the errors of the different parameters, carry out the pre-test weight estimation and determine the initial weight ${\boldsymbol P}_i$ of each kind of error.
  • (2) Perform the first adjustment and obtain ${\boldsymbol V}_i^T{\boldsymbol P}_i{\boldsymbol V}_i$.
  • (3) Perform the first variance component estimation according to Eq. (29) and obtain the first estimates of the unit weight variances $\sigma_i^2$ of all kinds of observed values. Then determine the weights according to Eq. (30).
  • (4) Repeat steps (2) and (3), namely adjustment – variance component estimation – adjustment after re-weighting.
  • (5) If $\sigma_1^2 = \sigma_2^2 = \sigma_3^2 = \sigma_4^2$ holds, the adjustment is complete; otherwise repeat steps (2) and (3).
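A compact sketch of this loop is given below. It builds the ${\boldsymbol S}$ matrix of Eq. (29) from the traces derived above (with $n_i$ taken as the trace of ${\boldsymbol P}_i$, as stated in the text) and re-weights by Eq. (30); the convergence tolerance and the dense-matrix formulation are our simplifications.

```python
import numpy as np

def variance_component_estimation(G_blocks, L_blocks, P_blocks, C=1.0, max_iter=20):
    """Iterative posterior variance estimation and re-weighting, Eqs. (29)-(30).

    G_blocks, L_blocks, P_blocks: design matrix, observation vector and diagonal
    weight matrix for each of the four observation groups of the adjustment.
    """
    k = len(G_blocks)
    for _ in range(max_iter):
        Ns = [G.T @ P @ G for G, P in zip(G_blocks, P_blocks)]
        N = sum(Ns)
        W = sum(G.T @ P @ L for G, P, L in zip(G_blocks, P_blocks, L_blocks))
        X = np.linalg.solve(N, W)                       # (pre-)adjustment, step (2)
        Ninv = np.linalg.inv(N)
        q = np.array([(G @ X - L) @ P @ (G @ X - L)     # V_i^T P_i V_i
                      for G, L, P in zip(G_blocks, L_blocks, P_blocks)])
        S = np.empty((k, k))                            # S matrix of Eq. (29)
        for i in range(k):
            for j in range(k):
                S[i, j] = np.trace(Ninv @ Ns[i] @ Ninv @ Ns[j])
            S[i, i] += np.trace(P_blocks[i]) - 2 * np.trace(Ninv @ Ns[i])
        sigma2 = np.linalg.solve(S, q)                  # theta = [sigma_i^2], step (3)
        if np.allclose(sigma2, sigma2[0], rtol=1e-3):   # sigma_1^2 = ... = sigma_4^2
            break                                       # step (5)
        P_blocks = [(C / s2) * P for s2, P in zip(sigma2, P_blocks)]  # Eq. (30)
    return P_blocks, sigma2, X
```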

3.5 System calibration

Since the projector calibration can be accomplished with the same mathematical model and calibration board as the camera, the system composed of a camera and a projector can be regarded as a classic “stereo vision” system. Therefore, all the mature stereo calibration technologies can be adopted in the proposed system.
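For instance, once the projector has “captured” the board points, OpenCV's standard stereo routine can recover the camera–projector pose directly; a minimal sketch (fixing the intrinsics already refined in Sections 3.3–3.4, which is our choice) is:

```python
import cv2

def calibrate_stereo_pair(obj_pts, cam_pts, proj_pts,
                          K_c, dist_c, K_p, dist_p, image_size):
    """Stereo calibration of the camera-projector ("reverse camera") pair.

    obj_pts: list of (N, 3) board points per position; cam_pts / proj_pts:
    the matched 2D points seen by the camera and "seen" by the projector.
    """
    rms, _, _, _, _, R, T, E, F = cv2.stereoCalibrate(
        obj_pts, cam_pts, proj_pts, K_c, dist_c, K_p, dist_p, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)   # keep the refined intrinsics fixed
    return R, T, rms
```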

4. Experiment results

4.1 System composition

In order to verify the effectiveness of the proposed method, a structured light vision system was developed, as shown in Fig. 4. The hardware consists of the following parts: (1) an industrial CCD camera (The Imaging Source DMK 23U274) with a resolution of 1600×1200 pixels and a frame rate of 25 fps; (2) a projector (Optoma OSX808) with a resolution of 1920×1080 pixels; (3) a circular-dot calibration board with a white background, whose size is 400 mm × 300 mm, with a 30 mm spacing between circular feature points, a manufacturing accuracy of about 0.0025 mm, and an 11×8 grid; and (4) a desktop computer for the software.

Fig. 4. The stereo structured light vision system.

The software of the system uses Microsoft Visual Studio 2013 as the development platform, in combination with OpenCV, Eigen, SBA, and so on. It mainly includes a system calibration module and a structured light measurement module. The proposed projector calibration method is implemented in the system calibration module.

4.2 Experiment of projector calibration

According to Zhang's calibration principle, it is necessary to obtain 2D images of the calibration board at different positions. In this method, the feature points are projected onto the white background of the calibration pattern board so that the projector and camera can “capture” the image simultaneously, and only one image is taken at each calibration position. Then the ellipse fitting method is applied to extract the circle centers, which are sorted according to slope, as shown in Fig. 5. Next, the projected feature point image is separated from the original image of the calibration board; one of the resulting images is shown in Fig. 6. The mapping relationship between the camera and the projector can be established through the projected feature points using the method proposed in Section 3.2, and the homographic matrix can be calculated. Based on the homographic mapping relation, the image points on the projector formed by the original feature points can be obtained simply; the result is shown in Fig. 7. Then both the camera and the projector can be calibrated by Zhang's calibration method based on the same calibration pattern board.
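The extraction step can be sketched with standard OpenCV primitives (Otsu thresholding, contour detection, ellipse fitting); the threshold choice and the minimum contour area below are illustrative assumptions, and the slope-based sorting is omitted for brevity.

```python
import numpy as np
import cv2

def extract_circle_centers(gray, min_area=50):
    """Extract circular feature point centers by ellipse fitting (cf. Fig. 5)."""
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_NONE)
    centers = []
    for c in contours:
        if len(c) >= 5 and cv2.contourArea(c) > min_area:  # fitEllipse needs >= 5 pts
            (cx, cy), _, _ = cv2.fitEllipse(c)             # ellipse center = dot center
            centers.append((cx, cy))
    return np.array(centers, dtype=np.float32)
```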

Fig. 5. The captured image and result of the circle center extraction.

Fig. 6. The projection feature point image.

Fig. 7. The feature points on the projector.

The reprojection errors of the camera and projector were obtained after their respective calibrations; the reprojection errors at some positions are shown in Fig. 8. Figure 9 shows the error comparison between the camera and the projector, and the detailed error results of the calibration are listed in Table 1. They show that, first, the calibration accuracy of the camera is much higher than that of the projector. Second, the results of the projector calibration fluctuate greatly across the different positions, and malformed error distributions are often present, as shown at positions 2 and 3. The main reasons are that the lens distortion of the projector is larger and, more importantly, that the projector cannot capture images directly, so the low-precision 2D image points obtained by the homographic matrix cause the poor calibration result.

Fig. 8. The reprojection errors: (a) camera, (b) projector.

Fig. 9. Comparison results of the camera and projector.

Table 1. Calibration results of the camera and projector.

In order to compare the results before and after the optimization, stereo calibration was carried out for the measurement system, and the feature points at an arbitrary position of the calibration board were reconstructed. Plane fitting was performed on the reconstructed results, and the Z-direction error was estimated by the flatness error. One of the 3D results is shown in Fig. 10; it shows that the calibration accuracy is low, with a plane fitting error in the range of ±0.2 mm. The distances between the points on the calibration board were also measured, as shown in Fig. 11; the maximum deviation is about 0.04 mm in the X direction and 0.06 mm in the Y direction.
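For reference, the flatness evaluation used here amounts to a least-squares plane fit of the reconstructed points followed by inspection of the residuals along the normal; a sketch of our implementation of that evaluation is:

```python
import numpy as np

def plane_fit_residuals(points):
    """Fit a plane to reconstructed 3D points and return the signed residuals.

    points: (N, 3) array. The normal is the direction of smallest variance
    (last right-singular vector); the residuals give the flatness (Z) error.
    """
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    normal = Vt[-1]
    return (points - centroid) @ normal   # e.g. max() - min() = flatness error
```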

Fig. 10. Result graph of reconstruction of feature points.

Fig. 11. The distance error between the feature points: (a) X direction and (b) Y direction.

To improve the accuracy, the reprojection error iteration method detailed in Section 3.3 was applied to refine the 2D image points. In this procedure, the accurate 3D feature points are kept unchanged, and the internal and external parameters are optimized by reducing the deviation of the 2D image points. The reprojection errors over 7 iterations for the camera and projector are shown in Fig. 12. The results show that the reprojection error is greatly refined by the presented method, down to about 0.002 pixel for the camera and 0.008 pixel for the projector after only 7 iterations, as shown in Fig. 13. The 3D reconstruction of the same feature points was calculated as well; the fitting error of the reconstructed feature points is shown in Fig. 14. Figure 15 shows the comparison between before and after the iteration, and the distance errors in the X and Y directions are shown in Fig. 16. It is obvious that the accuracy of the 3D points is improved dramatically.

Fig. 12. The reprojection error over 7 iterations: (a) camera and (b) projector.

Fig. 13. The reprojection error after iteration: (a) camera and (b) projector.

Fig. 14. Result graph of reconstruction of feature points.

Fig. 15. The comparison between before and after iteration: (a) camera and (b) projector.

Fig. 16. The distance error between the feature points: (a) X direction and (b) Y direction.

The bundle adjustment algorithm introduced in Section 3.4 was then adopted to further optimize the calibration parameters. The reprojection error distributions of the camera and the projector are both further improved, as shown in Fig. 17. Figure 18 shows that the malformed error distribution of the projector is greatly reduced and that the accuracy of the projector is almost up to that of the camera. Figures 19 and 20 illustrate that highly accurate 3D feature points can be obtained by the proposed method.

Fig. 17. The reprojection error distribution after bundle adjustment optimization: (a) camera and (b) projector.

Fig. 18. The reprojection error after BA optimization: (a) camera and (b) projector.

Fig. 19. Result graph of reconstruction of feature points.

Fig. 20. The distance error between the feature points: (a) X direction and (b) Y direction.

In Ref. [34], a quadrature function fitting algorithm is adopted to compensate the error of the 2D image points gained by the phase-shifting method. For comparison, that compensation method was applied to our calibration procedure. First, the homography relationship is used to obtain the 2D image points, and the error at each position is fitted by the quadrature function fitting algorithm; one of the results is shown in Fig. 21. The 2D image points are then improved by this compensation method, and the BA algorithm is implemented to optimize the calibration parameters. Finally, the 3D coordinates of the feature points are reconstructed in the same way as in our proposed method. Figures 22 and 23 show the comparison between the two compensation methods. It is obvious that our iteration-based method obtains more accurate calibration results, even though the error of the homography relationship method is larger than that of the phase-shifting method.

Fig. 21. Result graph of error quadrature function fitting: (a) X direction and (b) Y direction.

Fig. 22. Result graph of reconstruction of feature points.

Fig. 23. The distance error between the feature points for the two methods: (a) X direction and (b) Y direction.

The calibration method has been used in our structured light system, where it measures 3D data successfully. For example, a car fender was measured by the system using the four-step phase-shifting method; one stage of the measurement process is shown in Fig. 24, and the 3D data of the measurement are shown in Fig. 25.

Fig. 24. One stage of the measurement process.

Fig. 25. The 3D data of the measurement.

5. Conclusion

In this paper, an accurate projector calibration method, which follows the same procedure as camera calibration and requires no special restriction or ancillary equipment, was proposed. To this end, the 2D image points of the projector are obtained flexibly based on homography and compensated easily by the error iteration algorithm. The parameters of both the camera and the projector are then reliably obtained by the bundle adjustment algorithm.

The application of the homographic matrix makes the calibration procedure easy to implement, as only one image needs to be “captured” at each calibration position. High-accuracy projector parameters can be obtained by the error iteration and bundle adjustment methods, which were shown to be effective and robust. The proposed method is useful for online or in-situ structured light system calibration. In order to isolate the calibration accuracy itself, this paper did not consider the influence of other factors in the measurement process, such as phase accuracy and ambient light. Future work will focus on the measurement accuracy of dynamic objects with consideration of these other factors, such as phase error and stereo matching error.

Funding

National Science and Technology Planning Project (2015BAF24B00); Fujian Province Industry-University-Research Program (2017H6012, 2019H6016); Key (Guiding) Projects in Fujian Province (2017H0019, 2018H0020); China Scholarship Council (201807540008).

Disclosures

The authors declare no conflicts of interest.

References

1. C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Lasers Eng. 109, 23–59 (2018).

2. Y. He and S. Chen, “Advances in sensing and processing methods for three-dimensional robot vision,” Int. J. Adv. Robot. Syst. 15(2), 172988141876062 (2018).

3. G. Sansoni, M. Trebeschi, and F. Docchio, “State-of-the-art and applications of 3D imaging sensors in industry, cultural heritage, medicine, and criminal investigation,” Sensors 9(1), 568–601 (2009).

4. S. S. Gorthi and P. Rastogi, “Fringe projection techniques: Whither we are?” Opt. Lasers Eng. 48(2), 133–140 (2010).

5. C. Portalés, P. Casanova-Salas, S. Casas, J. Gimeno, and M. Fernández, “An interactive cameraless projector calibration method,” Virtual Real. 24(1), 109–121 (2020).

6. M. Vo, Z. Wang, T. Hoang, and D. Nguyen, “Flexible calibration technique for fringe-projection-based three-dimensional imaging,” Opt. Lett. 35(19), 3192 (2010).

7. X. Liu, Z. Cai, Y. Yin, H. Jiang, D. He, W. He, Z. Zhang, and X. Peng, “Calibration of fringe projection profilometry using an inaccurate 2D reference target,” Opt. Lasers Eng. 89, 131–137 (2017).

8. F. J. Cuevas, M. Servin, O. N. Stavroudis, and R. Rodriguez-Vera, “Multi-layer neural network applied to phase and depth recovery from fringe patterns,” Opt. Commun. 181(4–6), 239–259 (2000).

9. J. Villa, M. Araiza, D. Alaniz, R. Ivanov, and M. Ortiz, “Transformation of phase to (x,y,z)-coordinates for the calibration of a fringe projection profilometer,” Opt. Lasers Eng. 50(2), 256–261 (2012).

10. J. Lu, R. Mo, H. Sun, and Z. Chang, “Flexible calibration of phase-to-height conversion in fringe projection profilometry,” Appl. Opt. 55(23), 6381 (2016).

11. W. Zhao, X. Su, and W. Chen, “Discussion on accurate phase–height mapping in fringe projection profilometry,” Opt. Eng. 56(10), 1 (2017).

12. W. Guo, Z. Wu, R. Xu, Q. Zhang, and M. Fujigaki, “A fast reconstruction method for three-dimensional shape measurement using dual-frequency grating projection and phase-to-height lookup table,” Opt. Laser Technol. 112, 269–277 (2019).

13. B. Li, N. Karpinsky, and S. Zhang, “Novel calibration method for structured-light system with an out-of-focus projector,” Appl. Opt. 53(16), 3415 (2014).

14. Z. Li, “Accurate calibration method for a structured light system,” Opt. Eng. 47(5), 053604 (2008).

15. P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. 45(8), 083601 (2006).

16. S. Huang, L. Xie, Z. Wang, Z. Zhang, F. Gao, and X. Jiang, “Accurate projector calibration method by using an optical coaxial camera,” Appl. Opt. 54(4), 789 (2015).

17. S. Yang, M. Liu, J. Song, S. Yin, Y. Guo, Y. Ren, and J. Zhu, “Projector calibration method based on stereo vision system,” Opt. Rev. 24(6), 727–733 (2017).

18. R. Chen, J. Xu, H. Chen, J. Su, Z. Zhang, and K. Chen, “Accurate calibration method for camera and projector in fringe patterns measurement system,” Appl. Opt. 55(16), 4293 (2016).

19. W. Zhang, W. Li, L. Yu, H. Luo, H. Zhao, and H. Xia, “Sub-pixel projector calibration method for fringe projection profilometry,” Opt. Express 25(16), 19158 (2017).

20. S. Yang, M. Liu, J. Song, S. Yin, Y. Ren, J. Zhu, and S. Chen, “Projector distortion residual compensation in fringe projection system,” Opt. Lasers Eng. 114, 104–110 (2019).

21. H. Anwar, “Calibrating projector flexibly for a real-time active 3D scanning system,” Optik 158, 1088–1094 (2018).

22. B. Huang, S. Ozdemir, Y. Tang, C. Liao, and H. Ling, “A single-shot-per-pose camera-projector calibration system for imperfect planar targets,” in Adjunct Proceedings of the 2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct), 15–20 (2018).

23. R. Juarez-Salazar and V. H. Diaz-Ramirez, “Flexible camera-projector calibration using superposed color checkerboards,” Opt. Lasers Eng. 120, 59–65 (2019).

24. M. Ren, J. Liang, B. Wei, and W. Pai, “Novel projector calibration method for monocular structured light system based on digital image correlation,” Optik 132, 337–347 (2017).

25. M. Liu, C. Sun, S. Huang, and Z. Zhang, “An accurate projector calibration method based on polynomial distortion representation,” Sensors 15(10), 26567–26582 (2015).

26. Z. Wang, M. Liu, S. Yang, S. Huang, X. Bai, X. Liu, J. Zhu, X. Liu, and Z. Zhang, “Precise full-field distortion rectification and evaluation method for a digital projector,” Opt. Rev. 23(5), 746–752 (2016).

27. P. Zhou, Y. Yu, G. Cai, and S. Huang, “Projector recalibration of three-dimensional profilometry system,” Appl. Opt. 55(9), 2294 (2016).

28. K. Li, J. Bu, and D. Zhang, “Lens distortion elimination for improving measurement accuracy of fringe projection profilometry,” Opt. Lasers Eng. 85, 53–64 (2016).

29. A. Gonzalez and J. Meneses, “Accurate calibration method for a fringe projection system by projecting an adaptive fringe pattern,” Appl. Opt. 58(17), 4610 (2019).

30. H. Liu, H. Lin, and L. Yao, “Calibration method for projector-camera-based telecentric fringe projection profilometry system,” Opt. Express 25(25), 31492 (2017).

31. M. E. Deetjen and D. Lentink, “Automated calibration of multi-camera-projector structured light systems for volumetric high-speed 3D surface reconstructions,” Opt. Express 26(25), 33278 (2018).

32. Z. Zhang, “Flexible camera calibration by viewing a plane from unknown orientations,” in Proceedings of the IEEE International Conference on Computer Vision, 1, 666–673 (1999).

33. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed. (Cambridge University Press, 2004).

34. Z. Wang, J. Huang, J. Gao, and Q. Xue, “Calibration of the structured light measurement system with bundle adjustment,” J. Mech. Eng. (Jixie Gongcheng Xuebao) 49, 4–13 (2013).


Figures (25)

Fig. 1. The overall scheme of projector calibration.
Fig. 2. Sketch of the homographic relationship between the projector and camera.
Fig. 3. The reprojection error of the point.
Fig. 4. The stereo structured light vision system.
Fig. 5. The captured image and result of the circle center extraction.
Fig. 6. The projection feature point image.
Fig. 7. The feature point on the projector.
Fig. 8. The reprojection errors: (a) camera and (b) projector.
Fig. 9. Comparison results of the camera and projector.
Fig. 10. Result graph of reconstruction of feature points.
Fig. 11. The distance error between the feature points: (a) X direction and (b) Y direction.
Fig. 12. The reprojection error over 7 iterations: (a) camera and (b) projector.
Fig. 13. The reprojection error after iteration: (a) camera and (b) projector.
Fig. 14. Result graph of reconstruction of feature points.
Fig. 15. The comparison between before and after iteration: (a) camera and (b) projector.
Fig. 16. The distance error between the feature points: (a) X direction and (b) Y direction.
Fig. 17. The reprojection error distribution after bundle adjustment optimization: (a) camera and (b) projector.
Fig. 18. The reprojection error after BA optimization: (a) camera and (b) projector.
Fig. 19. Result graph of reconstruction of feature points.
Fig. 20. The distance error between the feature points: (a) X direction and (b) Y direction.
Fig. 21. Result graph of error quadrature function fitting: (a) X direction and (b) Y direction.
Fig. 22. Result graph of reconstruction of feature points.
Fig. 23. The distance error between the feature points for the two methods: (a) X direction and (b) Y direction.
Fig. 24. One of the measurement processes.
Fig. 25. The 3D data of the measurement.

Tables (1)

Table 1. Calibration results of the camera and projector.

Equations (35)

Equations on this page are rendered with MathJax.

$$\begin{bmatrix} u_p \\ v_p \\ 1 \end{bmatrix} = s K_p \begin{bmatrix} R_p & T_p \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{1}$$

$$K_p = \begin{bmatrix} f_p/dx_p & 0 & u_{p0} \\ 0 & f_p/dy_p & v_{p0} \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} f_{pu} & 0 & u_{p0} \\ 0 & f_{pv} & v_{p0} \\ 0 & 0 & 1 \end{bmatrix}. \tag{2}$$

$$\begin{cases} u_p' = u_p\left(1 + k_1 r^2 + k_2 r^4 + k_3 r^6\right) + 2 p_1 x y + p_2\left(r^2 + 2 x^2\right) \\ v_p' = v_p\left(1 + k_1 r^2 + k_2 r^4 + k_3 r^6\right) + 2 p_2 x y + p_1\left(r^2 + 2 y^2\right) \end{cases}, \tag{3}$$

$$n^T P_w^k + d = 0. \tag{4}$$

$$P_c^k = K_c\left(R P_w^k + t\right) = K_c\left[R P_w^k + t\left(-\frac{n^T}{d}\right) P_w^k\right] = K_c\left(R - \frac{t\, n^T}{d}\right) P_w^k = K_c\left(R - \frac{t\, n^T}{d}\right) K_p^{-1} P_p^k. \tag{5}$$

$$P_c^k = \gamma H P_p^k \tag{6}$$

$$H = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} \tag{7}$$

$$u_c = \frac{h_{11} u_p + h_{12} v_p + h_{13}}{h_{31} u_p + h_{32} v_p + h_{33}}, \qquad v_c = \frac{h_{21} u_p + h_{22} v_p + h_{23}}{h_{31} u_p + h_{32} v_p + h_{33}} \tag{8}$$

$$H_{CP} = \begin{bmatrix} \sum_{i=1}^{n} w_{11}^i h_{11}^i & \sum_{i=1}^{n} w_{12}^i h_{12}^i & \sum_{i=1}^{n} w_{13}^i h_{13}^i \\ \sum_{i=1}^{n} w_{21}^i h_{21}^i & \sum_{i=1}^{n} w_{22}^i h_{22}^i & \sum_{i=1}^{n} w_{23}^i h_{23}^i \\ \sum_{i=1}^{n} w_{31}^i h_{31}^i & \sum_{i=1}^{n} w_{32}^i h_{32}^i & \sum_{i=1}^{n} w_{33}^i h_{33}^i \end{bmatrix}, \tag{9}$$

$$\begin{cases} u_p = f_{pu} \dfrac{m_{11} X_w + m_{12} Y_w + m_{13} Z_w + m_{14}}{m_{31} X_w + m_{32} Y_w + m_{33} Z_w + m_{34}} + u_{p0} \\[2ex] v_p = f_{pv} \dfrac{m_{21} X_w + m_{22} Y_w + m_{23} Z_w + m_{24}}{m_{31} X_w + m_{32} Y_w + m_{33} Z_w + m_{34}} + v_{p0} \end{cases}, \tag{10}$$

$$\begin{bmatrix} u_p \\ v_p \end{bmatrix} + \begin{bmatrix} \Delta u \\ \Delta v \end{bmatrix} = \begin{bmatrix} u_p' \\ v_p' \end{bmatrix} + \begin{bmatrix} du \\ dv \end{bmatrix} \tag{11}$$

$$\begin{bmatrix} \Delta u \\ \Delta v \end{bmatrix} = \begin{bmatrix} \dfrac{\partial u_p}{\partial m_{11}} & \dfrac{\partial u_p}{\partial m_{12}} & \cdots & \dfrac{\partial u_p}{\partial m_{34}} \\ \dfrac{\partial v_p}{\partial m_{11}} & \dfrac{\partial v_p}{\partial m_{12}} & \cdots & \dfrac{\partial v_p}{\partial m_{34}} \end{bmatrix} \begin{bmatrix} \Delta m_{11} \\ \Delta m_{12} \\ \vdots \\ \Delta m_{34} \end{bmatrix} + \begin{bmatrix} \dfrac{\partial u_p}{\partial f_{pu}} & 0 & \dfrac{\partial u_p}{\partial u_{p0}} & 0 \\ 0 & \dfrac{\partial v_p}{\partial f_{pv}} & 0 & \dfrac{\partial v_p}{\partial v_{p0}} \end{bmatrix} \begin{bmatrix} \Delta f_{pu} \\ \Delta f_{pv} \\ \Delta u_{p0} \\ \Delta v_{p0} \end{bmatrix} + \begin{bmatrix} \dfrac{\partial u_p}{\partial k_1} & \dfrac{\partial u_p}{\partial k_2} & \cdots & \dfrac{\partial u_p}{\partial p_2} \\ \dfrac{\partial v_p}{\partial k_1} & \dfrac{\partial v_p}{\partial k_2} & \cdots & \dfrac{\partial v_p}{\partial p_2} \end{bmatrix} \begin{bmatrix} \Delta k_1 \\ \Delta k_2 \\ \vdots \\ \Delta p_2 \end{bmatrix} - \begin{bmatrix} u_p - u_p' \\ v_p - v_p' \end{bmatrix}. \tag{12}$$

$$V = \begin{bmatrix} \Delta u \\ \Delta v \end{bmatrix} = \left[\frac{\partial F}{\partial M}\right] [\Delta M] + \left[\frac{\partial F}{\partial K}\right] [\Delta K] + \left[\frac{\partial F}{\partial D}\right] [\Delta D] - L \tag{13}$$

$$V = A\, m + B\, k + C\, d - L \tag{14}$$

$$V = \begin{bmatrix} V \\ V_m \\ V_k \\ V_d \end{bmatrix} = \begin{bmatrix} V_1 \\ V_2 \\ V_3 \\ V_4 \end{bmatrix} = \begin{bmatrix} A & B & C \\ E & 0 & 0 \\ 0 & E & 0 \\ 0 & 0 & E \end{bmatrix} \begin{bmatrix} m \\ k \\ d \end{bmatrix} - \begin{bmatrix} L \\ 0 \\ 0 \\ 0 \end{bmatrix} = G X - L \tag{15}$$

$$P = \begin{bmatrix} P_V & 0 & 0 & 0 \\ 0 & P_{V_m} & 0 & 0 \\ 0 & 0 & P_{V_k} & 0 \\ 0 & 0 & 0 & P_{V_d} \end{bmatrix} = \begin{bmatrix} P_1 & 0 & 0 & 0 \\ 0 & P_2 & 0 & 0 \\ 0 & 0 & P_3 & 0 \\ 0 & 0 & 0 & P_4 \end{bmatrix} \tag{16}$$

$$\left(m_{11}, m_{12}, \ldots, m_{33};\; f_{pu}, \ldots, v_{p0};\; k_1, \ldots, p_2\right) = \arg\min\left(V^T P V\right) \tag{17}$$

$$\frac{\partial\left(V^T P V\right)}{\partial X} = 2 V^T P \frac{\partial V}{\partial X} = 2 V^T P \frac{\partial (G X - L)}{\partial X} = 2 V^T P G = 0 \tag{18}$$

$$G^T P V = 0 \tag{19}$$

$$\left(G^T P G\right) X - G^T P L = 0 \tag{20}$$

$$G = \begin{bmatrix} A & B & C \\ E & 0 & 0 \\ 0 & E & 0 \\ 0 & 0 & E \end{bmatrix} = \begin{bmatrix} G_1 \\ G_2 \\ G_3 \\ G_4 \end{bmatrix}. \tag{21}$$

$$N = G^T P G = \sum_{i=1,2,3,4} N_i = \sum_{i=1,2,3,4} G_i^T P_i G_i \tag{22}$$

$$W = G^T P L = \sum_{i=1,2,3,4} W_i = \sum_{i=1,2,3,4} G_i^T P_i L_i \tag{23}$$

$$E\left(V^T P V\right) = \mathrm{tr}\left(P\, D(V)\right) + E^T(V)\, P\, E(V). \tag{24}$$

$$E\left(V^T P V\right) = \mathrm{tr}\left(P\, D(V)\right). \tag{25}$$

$$V_1 = G_1 X - L_1 = G_1 N^{-1}\left(W_1 + W_2 + W_3 + W_4\right) - L_1 = \left(G_1 N^{-1} G_1^T P_1 - E\right) L_1 + \sum_{i=2,3,4} G_1 N^{-1} G_i^T P_i L_i \tag{26}$$

$$D_{V_1} = \left(G_1 N^{-1} G_1^T P_1 - E\right) D_{L_1} \left(G_1 N^{-1} G_1^T P_1 - E\right)^T + \sum_{i=2,3,4} \left(G_1 N^{-1} G_i^T P_i\right) D_{L_i} \left(G_1 N^{-1} G_i^T P_i\right)^T \tag{27}$$

$$D_{V_1} = \sigma_1^2\left(G_1 N^{-1} N_1 N^{-1} G_1^T - 2 G_1 N^{-1} G_1^T + P_1^{-1}\right) + \sum_{i=2,3,4} \sigma_i^2\left(G_1 N^{-1} N_i N^{-1} G_1^T\right) \tag{28}$$

$$E\left(V_1^T P_1 V_1\right) = \sigma_1^2\, \mathrm{tr}\left(P_1\left(G_1 N^{-1} N_1 N^{-1} G_1^T - 2 G_1 N^{-1} G_1^T + P_1^{-1}\right)\right) + \sum_{i=2,3,4} \sigma_i^2\, \mathrm{tr}\left(P_1 G_1 N^{-1} N_i N^{-1} G_1^T\right) = \sigma_1^2\left[n_1 + \mathrm{tr}\left(N^{-1} N_1 N^{-1} N_1\right) - 2\, \mathrm{tr}\left(N^{-1} N_1\right)\right] + \sum_{i=2,3,4} \sigma_i^2\, \mathrm{tr}\left(N^{-1} N_1 N^{-1} N_i\right) \tag{29}$$

$$E\left(V_2^T P_2 V_2\right) = \mathrm{tr}\left(P_2 D_{V_2}\right) = \sigma_2^2\left[n_2 + \mathrm{tr}\left(N^{-1} N_2 N^{-1} N_2\right) - 2\, \mathrm{tr}\left(N^{-1} N_2\right)\right] + \sum_{i=1,3,4} \sigma_i^2\, \mathrm{tr}\left(N^{-1} N_2 N^{-1} N_i\right) \tag{30}$$

$$\mathrm{tr}(\lambda) = \sum_i \lambda_{ii}. \tag{31}$$

$$S = \begin{bmatrix} s_{11} & s_{12} & s_{13} & s_{14} \\ s_{21} & s_{22} & s_{23} & s_{24} \\ s_{31} & s_{32} & s_{33} & s_{34} \\ s_{41} & s_{42} & s_{43} & s_{44} \end{bmatrix}. \tag{32}$$

$$S \begin{bmatrix} \sigma_1^2 \\ \sigma_2^2 \\ \sigma_3^2 \\ \sigma_4^2 \end{bmatrix} = \begin{bmatrix} V_1^T P_1 V_1 \\ V_2^T P_2 V_2 \\ V_3^T P_3 V_3 \\ V_4^T P_4 V_4 \end{bmatrix}. \tag{33}$$

$$S\, \theta = W_{\theta} \tag{34}$$

$$P_i^{k+1} = \frac{C}{\sigma_i^2}\, P_i^k \tag{35}$$
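
For illustration, the homography relations of Eqs. (6)-(9) can be sketched numerically as follows. This is a minimal Python sketch, not the authors' implementation: it estimates H by direct linear transformation (DLT) from four synthetic projector-camera correspondences and then applies the pixel mapping of Eq. (8). In the proposed method, many projected feature points are used per board position, and the per-position estimates H^i are further fused into H_CP with the weights of Eq. (9).

import numpy as np

def estimate_homography(proj_pts, cam_pts):
    # Direct linear transformation: each correspondence contributes two rows
    # of the homogeneous system A h = 0 derived from Eq. (8).
    A = []
    for (up, vp), (uc, vc) in zip(proj_pts, cam_pts):
        A.append([up, vp, 1, 0, 0, 0, -uc * up, -uc * vp, -uc])
        A.append([0, 0, 0, up, vp, 1, -vc * up, -vc * vp, -vc])
    # The singular vector of the smallest singular value stacks H row-wise.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]   # remove the arbitrary scale gamma of Eq. (6)

def project_to_camera(H, proj_pts):
    # Eq. (8): ratios of the homogeneous coordinates after applying H.
    pts = np.column_stack([proj_pts, np.ones(len(proj_pts))])
    mapped = (H @ pts.T).T
    return mapped[:, :2] / mapped[:, 2:3]

if __name__ == "__main__":
    proj = np.array([[100, 100], [700, 120], [680, 560], [120, 540]], float)
    cam = np.array([[212, 190], [1010, 230], [980, 820], [240, 790]], float)
    H = estimate_homography(proj, cam)
    print(np.round(project_to_camera(H, proj) - cam, 6))  # residuals ~ 0

The variance-component reweighting of Eqs. (22)-(35) admits an equally short sketch. The code below is schematic and runs on synthetic observation groups, not on the calibration data of the paper: each iteration solves the joint normal equations, estimates the group unit variances through Eq. (33), and rescales the group weights through Eq. (35). In the modified bundle adjustment, the groups correspond to the image residuals and the pseudo-observations of the projection-matrix, intrinsic, and distortion parameters (Eqs. (13)-(16)).

import numpy as np

def helmert_reweight(G_blocks, P_blocks, L_blocks, c=1.0, iters=5):
    # Iterative variance-component estimation: on convergence the estimated
    # unit variances approach c and the weights absorb the true variance ratio.
    for _ in range(iters):
        N_blocks = [G.T @ P @ G for G, P in zip(G_blocks, P_blocks)]
        N = sum(N_blocks)                            # Eq. (22)
        W = sum(G.T @ P @ L for G, P, L in zip(G_blocks, P_blocks, L_blocks))
        X = np.linalg.solve(N, W)                    # Eq. (20)
        N_inv = np.linalg.inv(N)
        k = len(G_blocks)
        S = np.zeros((k, k))
        q = np.zeros(k)
        for i in range(k):
            V_i = G_blocks[i] @ X - L_blocks[i]
            q[i] = V_i @ P_blocks[i] @ V_i           # V_i^T P_i V_i
            for j in range(k):
                S[i, j] = np.trace(N_inv @ N_blocks[i] @ N_inv @ N_blocks[j])
            # diagonal terms of Eqs. (29)-(30): n_i - 2 tr(N^-1 N_i) + tr(...)
            S[i, i] += len(L_blocks[i]) - 2.0 * np.trace(N_inv @ N_blocks[i])
        sigma2 = np.linalg.solve(S, q)               # Eq. (33)
        P_blocks = [(c / s2) * P for s2, P in zip(sigma2, P_blocks)]  # Eq. (35)
    return X, sigma2, P_blocks

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x_true = rng.normal(size=4)
    G = [rng.normal(size=(40, 4)), rng.normal(size=(25, 4))]
    L = [g @ x_true + rng.normal(scale=s, size=len(g))
         for g, s in zip(G, (0.1, 0.5))]
    P = [np.eye(40), np.eye(25)]
    x_hat, sigma2, P_fin = helmert_reweight(G, P, L)
    print(np.round(x_hat - x_true, 3))               # parameter error, near zero
    # implied group variances, roughly recovering (0.01, 0.25)
    print([round(1.0 / p[0, 0], 4) for p in P_fin])

Because the weights are rescaled group by group, a group whose residuals are larger than its assumed variance is automatically down-weighted in the next iteration, which is what lets the modified bundle adjustment balance the image residuals against the parameter pseudo-observations.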