## Abstract

By combining a fringe projection setup with a telecentric lens, a fringe pattern can be projected and imaged within a small area, making it possible to measure the three-dimensional (3D) surfaces of micro-components. This paper focuses on the flexible calibration of a fringe projection profilometry (FPP) system using a telecentric lens. An analytical telecentric projector-camera calibration model is introduced, in which the rig structure parameters remain invariant for all views, and the 3D calibration target can be located on the projector image plane with sub-pixel precision. Based on the presented calibration model, a two-step calibration procedure is proposed. First, the initial parameters, e.g., the projector-camera rig, projector intrinsic matrix, and coordinates of the control points of a 3D calibration target, are estimated using the affine camera factorization calibration method. Second, a bundle adjustment algorithm with various simultaneous views is applied to refine the calibrated parameters, especially the rig structure parameters and the coordinates of the control points of the 3D target. Because the control points are determined during the calibration, there is no need for an accurate 3D reference target, whose fabrication is costly and extremely difficult, particularly for the tiny objects used to calibrate the telecentric FPP system. Real experiments were performed to validate the performance of the proposed calibration method. The test results showed that the proposed approach is accurate and reliable.

© 2017 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction

With recent advances in precision manufacturing, micro-level three-dimensional (3D) metrology has become increasingly important. With the development of digital projectors based on liquid crystal display (LCD) and digital light processing (DLP) technologies, fringe projection profilometry (FPP) has become one of the most widely used techniques in 3D shape measurement because of its inherent advantages such as its full-field data acquisition, high measurement accuracy, portability, and flexibility [1]. A stereoscopic microscope is generally adopted as the basic optical system for microscopic FPP, with a projector and camera installed on the two cylinders of the stereoscopic microscope, i.e., the projection end and imaging end, respectively [2]. The main drawback of such a system is that the depth of field (DOF) is limited to the sub-millimeter order, which is insufficient to measure a 3D object with height variations of several millimeters. Furthermore, the distance between the optical system and object must be small for micro-level 3D metrology. Other drawbacks of conventional lenses are the perspective effect and lens distortion, which cause objects to appear distorted at short distances [3]. Compared to conventional microscopic lenses, telecentric lenses feature orthographic projection and have many advantages such as high resolution, nearly zero distortion, constant magnification, and an increased DOF [4–6]. Because of these features, researchers have given attention to telecentric FPP [7], which replaces the lenses of a micro-projector and camera with telecentric lenses (such a camera is called a telecentric camera or an affine camera).

For any FPP measurement system, the projector-camera calibration is one of the most important and challenging issues, because the measurement accuracy largely depends on it. The calibration approaches for a projector-camera with conventional lenses have been extensively studied over a long period of time, and the existing methods can be mainly divided into two categories: phase-height-based methods and stereovision-based methods [8]. In a phase-height-based method, the relations between the absolute phase values of the fringe pattern and the height of an object are identified and constructed using a lookup table [9, 10] or parametric polynomial models [11–13]. This kind of method can be directly applied to the calibration of a telecentric projector-camera, because it avoids the calibration of the system parameters [14]. However, it generally requires a reference plane to compute the phase difference and relative height, which limits the measurement volume [15]. Moreover, to achieve higher accuracy, the calibration target must move a certain distance along the optical axis of the camera or projector, which means a high-precision translation stage or gauge block is inevitable [16]. Therefore, the existing phase-height-based methods are complicated and hard to implement in practical environments because of the requirement of a precise translating stage or gauge block [9]. In contrast, the stereovision-based method is based on the binocular vision theory, in which the projector can be regarded as an inverse camera and described with the same mathematical model [17]. The projector can observe the calibration targets with the help of the camera of the FPP system. Thus, the projector-camera system can also be flexibly calibrated using the stereovision technique without practical limitations. It has become one of the most popular approaches and has been widely studied because of its accuracy and flexibility.
For example, there are many flexible and practical approaches to exploit both the intrinsic and extrinsic parameters of a perspective projector-camera system using a planar checkerboard or circular calibration board [17–22].

Unlike the perspective projection of a pinhole camera, a telecentric camera is insensitive to changes in the depth along the optical axis. Thus, the existing stereovision-based calibration methods for perspective FPP systems cannot be straightforwardly applied. Espino et al. described a general model that included the intrinsic and extrinsic parameters of a vision system with a telecentric lens [3]. Based on a similar model, Li et al. developed a flexible method to calibrate telecentric cameras [23], and then successfully employed it for an FPP system [24]. Peng et al. adapted this method to calibrate a microscopic fringe projection system with a Scheimpflug telecentric lens [25], in which the effect caused by the Scheimpflug condition was modeled as a “tangential distortion” model. Li et al. proposed a flexible calibration algorithm for a telecentric FPP system [26], for which the traditional 2D planar calibration method and an additional two-step refining process were proposed. Both of the aforementioned calibration methods can effectively accomplish the calibration task for a telecentric FPP system. However, a pose ambiguity problem is inevitable for a telecentric camera if a planar calibration target is adopted [27, 28]. Yin et al. employed a general imaging model to calibrate a telecentric FPP system and achieved good reconstruction accuracy [29]. Chen et al. proposed a closed-form solution for a telecentric stereo micro-vision system [30, 31]. In these two calibration methods, the problem of the sign ambiguity induced by the planar-object-based calibration technique was successfully solved with the help of a positioning stage device. However, the additional positioning stage device increased the hardware cost of the whole system and made the calibration process complicated and laborious. Li and Zhang proposed a framework to calibrate such a microscopic FPP system using a telecentric camera and perspective projector [32].
In this method, the 3D coordinates of the planar target points in the calibrated projector coordinate system could be found, and then used to calibrate the fixed telecentric camera. The sign ambiguity problem could be overcome using 3D control points for telecentric camera calibration, which meant their Z coordinates were not all zeros. As previously mentioned, this calibration framework is problematic for an FPP system consisting of a telecentric projector and telecentric camera, because it cannot determine the pose of a unique planar target for the telecentric camera/projector without additional information.

Generally, the translation or Z-direction of the telecentric camera can be recovered using some assumptions such as moving the origins of the camera coordinate systems along their respective optical axes to a sphere with a fixed radius [33]. The attitude ambiguity problem can also be solved using 3D calibration reference targets. However, it should be noted that the calibration accuracy largely depends on the fabrication quality of the reference target, such as the accuracy of the feature points' locations. The fabrication of a high-quality 3D calibration reference target is costly and extremely difficult, particularly for tiny objects used to calibrate the telecentric FPP system. So far, many calibration methods for affine cameras have been proposed [34–36], in which the 3D structure of the scene and the camera motion can be achieved simultaneously. Inspired by these methods, this paper proposes a calibration approach for telecentric FPP systems without accurate 3D reference targets. The 3D coordinates of the feature points of the reference target can be determined using the proposed approach.

The rest of this paper is organized as follows. Section 2 introduces some basic principles such as the rig model of the telecentric projector-camera, local planar homography, and sub-pixel mapping model. Based on these principles, the calibration method is proposed in Section 3. The experimental verification of the proposed calibration method is reported in Section 4, and Section 5 concludes this work.

## 2. Calibration model of telecentric FPP system

This section introduces some basic principles and deductions and provides theoretical support for the proposed calibration method.

#### 2.1 Telecentric projector-camera model

The measurement model of the telecentric FPP system is shown in Fig. 1, and includes a telecentric projector and telecentric camera. Rather than capturing images, the projector is used to project coded patterns (structured light), which are in turn captured by the camera and decoded for correspondence. As previously mentioned, the telecentric projector can be regarded as an inverse telecentric camera and described using the same mathematical model.

As shown in Fig. 1, the bi-telecentric lens simply performs a magnification in both the X and Y directions of the camera coordinate system, while it is not sensitive to the depth in the Z direction. Suppose that $\left(R,t\right)$ are the rotation matrix and translation vector that relate the world coordinate system to the camera coordinate system. Then, the projection of an arbitrary point $P{\left[\begin{array}{ccc}{x}_{w}& {y}_{w}& {z}_{w}\end{array}\right]}^{T}$ in the 3D world coordinate system to the undistorted image plane in pixel units is expressed by the following equation:

In Eqs. (1) and (2), $K$ is the camera’s intrinsic matrix, ${R}_{2\times 3}$ represents the first two rows of rotation matrix $R$, ${t}_{s}={\left[\begin{array}{cc}{t}_{x}& {t}_{y}\end{array}\right]}^{T}$ is the truncated translation of $t={\left[\begin{array}{ccc}{t}_{x}& {t}_{y}& {t}_{\text{z}}\end{array}\right]}^{T}$, $m={\left[\begin{array}{cc}u& v\end{array}\right]}^{T}$ is the image coordinate of $P{\left[\begin{array}{ccc}{x}_{w}& {y}_{w}& {z}_{w}\end{array}\right]}^{T}$ in pixels, $m$ is the effective magnification of the telecentric camera, and $\left({u}_{0},{v}_{0}\right)$ are the coordinates of the image plane’s center.
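To make the orthographic model concrete, the following sketch evaluates the projection of Eqs. (1) and (2) for a single point; the function and parameter names are ours, not from the paper:

```python
import numpy as np

def telecentric_project(P_w, R, t_s, m, u0, v0):
    """Project a world point with the telecentric (affine) camera model.

    P_w : (3,) world point; R : (3,3) rotation; t_s : (2,) truncated
    translation (t_x, t_y); m : effective magnification (pixel/mm);
    (u0, v0) : image-plane centre.  Only the first two rows of R are
    used, so depth along the optical axis does not affect the image.
    """
    xy = R[:2, :] @ P_w + t_s            # point in camera X-Y coordinates
    return np.array([m * xy[0] + u0, m * xy[1] + v0])
```

Moving `P_w` along the optical axis leaves the image point unchanged, which is exactly the depth insensitivity discussed above.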

According to the above model, the relationship between the object space (object point$P{\left[\begin{array}{ccc}{x}_{w}& {y}_{w}& {z}_{w}\end{array}\right]}^{T}$) and image spaces of the projector and camera (correspondence points ${m}_{\text{P}}{\left[\begin{array}{cc}{u}_{p}& {v}_{p}\end{array}\right]}^{T}$ and ${m}_{\text{C}}{\left[\begin{array}{cc}{u}_{c}& {v}_{c}\end{array}\right]}^{T}$) in Fig. 1 are described as follows:

Combining Eqs. (3) and (4), if ${m}_{\text{P}}{\left[\begin{array}{cc}{u}_{p}& {v}_{p}\end{array}\right]}^{T}\leftrightarrow {m}_{\text{C}}{\left[\begin{array}{cc}{u}_{c}& {v}_{c}\end{array}\right]}^{T}$ is an image point correspondence of $P{\left[\begin{array}{ccc}{x}_{w}& {y}_{w}& {z}_{w}\end{array}\right]}^{T}$, and both the projector and camera are calibrated, then the world point $P{\left[\begin{array}{ccc}{x}_{w}& {y}_{w}& {z}_{w}\end{array}\right]}^{T}$ can be obtained using the linear least-square method.
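As an illustration of this linear least-squares triangulation, the sketch below stacks the two affine constraints contributed by each view into a single 4 × 3 system. For brevity it assumes a common magnification and principal point for both devices; all names are ours:

```python
import numpy as np

def triangulate(uv_c, uv_p, R_c, t_cs, R_p, t_ps, mag, c0):
    """Recover a world point from a camera/projector correspondence.

    Each calibrated view gives two linear equations
        mag * R[:2] @ P = (uv - c0) - mag * t_s,
    so the two views together form a 4x3 system solved for P.
    """
    A = np.vstack([mag * R_c[:2, :], mag * R_p[:2, :]])
    b = np.hstack([uv_c - c0 - mag * t_cs,
                   uv_p - c0 - mag * t_ps])
    P, *_ = np.linalg.lstsq(A, b, rcond=None)
    return P
```

Note that the system is only well-conditioned when the two optical axes are not parallel, since each telecentric view constrains the point only in its own X-Y directions.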

Generally, the projector coordinate system is defined as the world coordinate system. Then, Eqs. (3) and (4) can be expressed as follows:

According to Eq. (7), ${R}_{P}^{C}$ of the telecentric FPP system can be directly achieved if ${R}_{P}$ and ${R}_{C}$ are both obtained in a common world coordinate system. However, there are some problems in recovering ${t}_{P}^{C}$ because only the truncated translations ${t}_{Ps}={\left[\begin{array}{cc}{t}_{Px}& {t}_{Py}\end{array}\right]}^{T}$ and ${t}_{Cs}={\left[\begin{array}{cc}{t}_{Cx}& {t}_{Cy}\end{array}\right]}^{T}$, but not the completed translations ${t}_{P}={\left[\begin{array}{ccc}{t}_{Px}& {t}_{Py}& {t}_{Pz}\end{array}\right]}^{T}$ and ${t}_{C}={\left[\begin{array}{ccc}{t}_{Cx}& {t}_{Cy}& {t}_{Cz}\end{array}\right]}^{T}$, can be recovered for a telecentric FPP system. Generally, the projector and camera are independently calibrated. Then, $\left({R}_{P},{t}_{Ps}\right)$ and $\left({R}_{C},{t}_{Cs}\right)$, which are independent calibration results from a pattern image simultaneously captured in one orientation, are selected as the calibration results [31, 37]. Obviously, the calibration results will be significantly improved if non-linear optimization can be implemented with a greater number of images captured from different observed orientations.

In order to avoid degeneracy, the rig calibration model of the telecentric FPP system is adopted, which has only four independent variables, as shown in Fig. 2. Suppose that ${{X}^{\prime}}_{C}{{Y}^{\prime}}_{C}{{Z}^{\prime}}_{C}$ and ${{X}^{\prime}}_{P}{{Y}^{\prime}}_{P}{{Z}^{\prime}}_{P}$ denote the determined coordinate systems of the camera and projector according to the 3D calibration target, respectively. ${{o}^{\prime}}_{C}$ and ${{o}^{\prime}}_{P}$ are their corresponding coordinate origins. Because the coordinate origins of the projector-camera coordinates can be selected anywhere along the associated optical axis, we suppose that (1) the coordinate origin of coordinate system ${X}_{C}{Y}_{C}{Z}_{C}$, which is denoted as ${o}_{C}$, is the intersection point of its ${Z}_{C}$ axis and ${Y}_{P}{Z}_{P}$ plane, as shown in Fig. 2(a), or the ${Z}_{P}{X}_{P}$ plane, as shown in Fig. 2(b); (2) the coordinate origin of coordinate system ${X}_{P}{Y}_{P}{Z}_{P}$, which is denoted as ${o}_{P}$, is the foot of the perpendicular from ${o}_{C}$ to the ${Z}_{P}$ axis. Under the above assumptions, ${R}_{P}^{C}$ and ${t}_{P}^{C}$, which are the extrinsic parameters of the telecentric FPP system, can be derived from the following formula:

#### 2.2 Sub-pixel mapping model based on local homographies

The projector cannot capture images like the camera. However, the projector can indirectly capture images by establishing a relationship between itself and the camera. Inspired by [38], we estimate the coordinates of the calibration points in the projector image plane using local homographies.

According to the telecentric camera model, the coordinates of a target point on the plane and its image point on the camera image plane satisfy the following relationship [39]:

Equation (10) holds approximately, even for a 3D calibration target, provided the local region around the control point is sufficiently small.

Suppose that ${H}_{T}^{C}$ and ${H}_{T}^{P}$ are the homography matrices from the local target plane to the image planes of the camera and projector, respectively. Then, the homography matrix from the camera image plane to the projector image plane can be calculated as follows:

In order to obtain the projector pixel coordinates of the control points, two groups of gray code patterns, one vertical and the other horizontal, are generated with a computer and projected onto the surface of the 3D calibration target using the projector. Then, a dense set of correspondences between the projector and camera pixels is found using the captured camera images. Finally, the set of the correspondences in small subareas is used to compute a local homography that makes it possible to find the projection of any of the points of the 3D calibration target onto the projector image plane with sub-pixel precision. A more complete discussion of this can be found in [39].
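A minimal sketch of this step, assuming the local patch around a control point is small enough to be treated as planar: a local homography is fitted with the direct linear transform (DLT) from the dense camera-projector correspondences in the patch, and then used to map the control point into the projector image with sub-pixel precision (function names are ours, not from [38] or [39]):

```python
import numpy as np

def fit_homography(src, dst):
    """DLT fit of a 3x3 homography mapping src -> dst (>= 4 point pairs)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # the homography is the right null vector of A (smallest singular value)
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 3)

def map_point(H, p):
    """Apply a homography to a 2D point (homogeneous normalization)."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```

In practice the homography is fitted from many correspondences in the patch, so the noise in individual gray-code decodings is averaged out.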

Ideally, every pixel on the camera image plane corresponds to a unique gray code value after being decoded. However, because the resolution of the camera often does not exactly match that of the projector (it is usually much higher), ambiguity occurs, and a few pixels of the captured image correspond to the same projector image pixel, as shown in Fig. 3. Note that even using a camera with the same resolution cannot guarantee pixel-to-pixel correspondence because of the different perspectives. In order to determine a precise mapping relationship, the barycenter of these “black” pixels must be calculated (as shown in Fig. 3); this barycenter is a sub-pixel location in the camera’s captured image and corresponds to the pixel on the projector image associated with the decoded gray codes [40].
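The barycenter computation can be sketched as follows, assuming the camera image has already been decoded into per-pixel projector column and row indices (an illustrative implementation, not the authors' code):

```python
import numpy as np

def subpixel_correspondences(col_code, row_code):
    """Barycenter of all camera pixels that decode to the same projector pixel.

    col_code, row_code : HxW integer arrays of decoded gray-code column and
    row indices.  Returns a dict (proj_col, proj_row) -> (cam_x, cam_y),
    where (cam_x, cam_y) is a sub-pixel camera coordinate.
    """
    H, W = col_code.shape
    ys, xs = np.mgrid[0:H, 0:W]
    groups = {}
    for pc, pr, x, y in zip(col_code.ravel(), row_code.ravel(),
                            xs.ravel(), ys.ravel()):
        groups.setdefault((int(pc), int(pr)), []).append((x, y))
    # average the grouped pixel coordinates -> barycenter per projector pixel
    return {k: tuple(np.mean(v, axis=0)) for k, v in groups.items()}
```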

## 3. Calibration method

The proposed calibration method for the telecentric FPP system consists of two parts: parameter initialization with factorization, and optimization with bundle adjustment. In this method, the 3D coordinates of the control points and the projector-camera rig parameters are found simultaneously.

#### 3.1 Parameter initialization with factorization

This section introduces the estimation method for the initial parameters (e.g., the control point coordinates, projector-camera rig, and projector intrinsic matrix), which was inspired by the affine camera factorization calibration method.

##### 3.1.1 Factorization method

After successfully finding the control points in the image planes of the telecentric projector-camera system, the factorization approach can be implemented to solve the observed control points’ 3D structure and the projector-camera motion from these observations.

We arrange all the observed coordinates of the projector-camera in matrix form:

These matrices are called *observation matrices*, where $l$ is the total number of control points in the 3D calibration target and $n$ is the number of images observed from different views.

In Eqs. (14) and (15), ${m}_{\text{P}ij}{\left[\begin{array}{cc}{u}_{Pij}& {v}_{Pij}\end{array}\right]}^{T}\leftrightarrow {m}_{\text{C}ij}{\left[\begin{array}{cc}{u}_{Cij}& {v}_{Cij}\end{array}\right]}^{T}$ are the observed correspondence coordinates of control point $P{\left[\begin{array}{ccc}{x}_{j}& {y}_{j}& {z}_{j}\end{array}\right]}^{T}$ on the image planes of the projector and camera, respectively, where $j=1,2,\cdots ,l$ indexes the control points in the 3D calibration target and $i=1,2,\cdots ,n$ indexes the observed images.

On the other hand, we can arrange the first two rows of each attitude matrix and all the 3D positions $P{\left[\begin{array}{ccc}{x}_{j}& {y}_{j}& {z}_{j}\end{array}\right]}^{T}$ in the matrix form:

The 2*n* × 3 matrices ${M}_{C}$ and ${M}_{P}$ are called the *motion matrices*, and the 3 × *l* matrix $S$ is the *shape matrix*. Note that we chose to make the world coordinate origin the centroid of the points $P{\left[\begin{array}{ccc}{x}_{j}& {y}_{j}& {z}_{j}\end{array}\right]}^{T}$.

Under orthographic projection, we have,

In Eq. (20), the motion matrices and shape matrix $S$ can be found using the factorization method if the camera’s effective magnification $m$ is known or assumed to be 1. More complete discussions of the complete algorithm can be found in [41] and [35].

Points $P{\left[\begin{array}{ccc}{x}_{j}& {y}_{j}& {z}_{j}\end{array}\right]}^{T}$ and ${K}_{P}$ can be directly determined using the procedure shown above. The attitude matrices ${R}_{Pi}$ and ${R}_{Ci}$ associated with the captured images $i$ can be obtained by orthogonality. The truncated translations ${t}_{Psi}$ and ${t}_{Csi}$ can be obtained according to Eqs. (3) and (4). For example, ${R}_{Pi}$ and ${t}_{Psi}$ can be obtained using Eqs. (21) and (22), respectively.
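The factorization step itself can be sketched as a rank-3 SVD truncation of the centred observation matrix. The metric upgrade that enforces the orthonormality constraints of Eqs. (21) and (22) is omitted here, so this sketch recovers the motion and shape only up to an affine ambiguity (an illustration, not the authors' implementation):

```python
import numpy as np

def affine_factorize(W):
    """Tomasi-Kanade-style rank-3 factorization of an observation matrix.

    W : (2n, l) stacked image coordinates (two rows per view, one column
    per control point).  Centring each row moves the world origin to the
    centroid of the points and removes the translations.  Returns motion
    M (2n, 3) and shape S (3, l), defined up to an affine transformation.
    """
    Wc = W - W.mean(axis=1, keepdims=True)       # remove per-view translation
    U, s, Vt = np.linalg.svd(Wc, full_matrices=False)
    d = np.sqrt(s[:3])                           # split singular values evenly
    M = U[:, :3] * d
    S = d[:, None] * Vt[:3]
    return M, S
```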

##### 3.1.2 Initial rig parameter estimation

As previously mentioned, ${R}_{P}^{C}$ of the telecentric FPP system can be directly found if ${R}_{P}$ and ${R}_{C}$ are both obtained. In the following, we will only discuss how to obtain ${y}_{oC}$ and ${x}_{oC}$. Suppose that $\left({R}_{P},{\left[\begin{array}{cc}{t}_{Ps}^{T}& 0\end{array}\right]}^{T}\right)$ and $\left({R}_{C},{\left[\begin{array}{cc}{t}_{Cs}^{T}& 0\end{array}\right]}^{T}\right)$ are the rotation matrices and translation vectors that relate the world coordinate system to the coordinate systems ${{X}^{\prime}}_{P}{{Y}^{\prime}}_{P}{{Z}^{\prime}}_{P}$ and ${{X}^{\prime}}_{C}{{Y}^{\prime}}_{C}{{Z}^{\prime}}_{C}$ in Fig. 2(a), respectively, where the last components of the translations of the camera and projector are all set to zero. Then, the coordinate origin of ${{X}^{\prime}}_{C}{{Y}^{\prime}}_{C}{{Z}^{\prime}}_{C}$ in coordinate system ${{X}^{\prime}}_{P}{{Y}^{\prime}}_{P}{{Z}^{\prime}}_{P}$ satisfies the following equation:

Because it is the Y coordinate of the intersection point of the ${Z}_{C}$ axis and ${{Z}^{\prime}}_{P}{{Y}^{\prime}}_{P}$ plane, ${y}_{oC}$ satisfies the following:

Similarly, ${x}_{oC}$ in Fig. 2(b) satisfies the following:

In Eqs. (24) and (25), ${n}_{C}^{}={\left[\begin{array}{ccc}{n}_{Cx}& {n}_{Cy}& {n}_{Cz}\end{array}\right]}^{T}$ is the direction vector of the ${Z}_{C}$ axis on ${{X}^{\prime}}_{P}{{Y}^{\prime}}_{P}{{Z}^{\prime}}_{P}$, which is equal to the last row of ${R}_{P}^{C}$.

Therefore, the translation vector that relates the projector coordinate system to the camera coordinate system is determined.

#### 3.2 Optimization with bundle adjustment

To further improve the calibration accuracy, a bundle adjustment algorithm for the different views is used to refine all of the parameters, e.g., the camera poses, projector-camera rig parameters, and 3D coordinates of the control points, by minimizing function $e$:

This non-linear least-squares problem is solved using the *Levenberg-Marquardt* algorithm.

The optimization with bundle adjustment can also be applied to calibrate the projector-camera distortions. However, we found that the influence of distortion was negligible for our projector-camera setup. Thus, distortion was not considered in this work.
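As an illustration of the refinement step, the following is a minimal damped Gauss-Newton (Levenberg-Marquardt) loop with a forward-difference Jacobian. It is a toy sketch: a production bundle adjustment would use analytic derivatives and exploit the sparsity of the Jacobian over the many views and points, which this version does not.

```python
import numpy as np

def levenberg_marquardt(residual, x0, iters=50, lam=1e-3):
    """Minimize sum(residual(x)**2) with a basic Levenberg-Marquardt loop.

    residual : callable mapping parameters (k,) to a residual vector (m,).
    The damping factor lam interpolates between Gauss-Newton (small lam)
    and gradient descent (large lam), shrinking on success.
    """
    x = np.asarray(x0, float)
    for _ in range(iters):
        r = residual(x)
        # forward-difference Jacobian, one column per parameter
        eps = 1e-6
        J = np.empty((r.size, x.size))
        for j in range(x.size):
            dx = np.zeros_like(x)
            dx[j] = eps
            J[:, j] = (residual(x + dx) - r) / eps
        A = J.T @ J
        g = J.T @ r
        step = np.linalg.solve(A + lam * np.diag(np.diag(A))
                               + 1e-12 * np.eye(x.size), -g)
        if np.sum(residual(x + step) ** 2) < np.sum(r ** 2):
            x = x + step          # accept step, trust the model more
            lam *= 0.5
        else:
            lam *= 5.0            # reject step, increase damping
    return x
```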

Note that the obtained 3D coordinates of points on the reference target were determined up to a scale factor if the camera’s effective magnification $m$ was assumed to be one.

#### 3.3 Summary

The complete calibration procedure with the proposed method can be summarized in the following steps:

Step 1, Optionally, determine the camera’s effective magnification by our planar-object-based calibration technique [39], as the fabrication of a high-quality 2D calibration target is much cheaper and easier than that of a 3D calibration target.

Step 2, Fix a 3D calibration target within the working volume and take an image using the camera. Then, project two sets of gray code patterns, one horizontal and the other vertical, onto the 3D calibration target and capture the images of these fringe patterns.

Step 3, Randomly change the pose of the 3D calibration target within the working volume, and repeat step 2 to acquire at least three groups of images.

Step 4, For each group of images, extract the camera image coordinates of the control points in the 3D calibration target image and decode the gray code patterns into projector row and column correspondences. Then, compute the projector sub-pixel coordinates of the control points according to the local homographies obtained in small projector image patches (e.g., a 3 × 3 pixel square).

Step 5, Estimate the initial parameters of the control points’ coordinates, projector-camera rig, and projector intrinsic matrix using the factorization method.

Step 6, All of the parameters, intrinsic and extrinsic, can be bundle-adjusted together to minimize the total re-projection error.

Step 7, Optionally, determine the scale factor using the methods proposed in [42] or [43], if the camera’s effective magnification is unknown.
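Step 4 above hinges on decoding each camera pixel's sequence of gray code bits into a projector row or column index. Assuming the patterns are projected most-significant bit first, the decoding reduces to a running XOR (an illustrative sketch):

```python
def gray_to_index(bits):
    """Decode one pixel's gray-code bit sequence (MSB first) into an index.

    Each binary bit is the XOR of all gray bits seen so far, so a single
    accumulator recovers the projector row or column number.
    """
    index, acc = 0, 0
    for b in bits:
        acc ^= b
        index = (index << 1) | acc
    return index
```

The inverse (encoding, used to generate the patterns) is simply `n ^ (n >> 1)`.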

Note that the use of gray code patterns to estimate sub-pixel projector coordinates for the centers of circles may limit the applicability of the technique if the projector is defocused (i.e., the defocused binary pattern projection technique is implemented). In this case, phase shifting patterns could also be used for the calibration.

## 4. Experiment and discussion

This section reports some experiments and analyses that were conducted to determine the validity and performance of the proposed calibration method.

#### 4.1 Test setup

The test system included a digital CCD camera (IGV-B2520M, IMPERX) with a pixel resolution of 2456 × 2058 and a projector (VPL-EX250, SONY) with a pixel resolution of 1024 × 768. The telecentric lens used for the camera was a bi-telecentric lens (TCSM036, OPTO) with a designed magnification of 0.2430. It had a designed working distance of 102.5 mm, with a depth of field of 6 mm. The original lens of the projector was removed, and a bi-telecentric lens (TCSM036, OPTO) was installed. The measurement volume of this designed FPP system was approximately 34.6 mm × 29.0 mm × 6 mm. Because the ${Z}_{C}$-axis was approximately parallel with the ${Y}_{P}{Z}_{P}$ plane, the rig model shown in Fig. 2(b) was adopted in the following tests.

In the tests, a 3D object target with 88 circular dots serving as calibration points was adopted, as shown in Fig. 4. The control points were designed to be the centers of the circular dots. The control points on the camera images could be identified in a fully automatic way and extracted with sub-pixel precision using the method proposed in [44]. Because the dot area was white, the patterns near the centers could be directly and robustly calculated.

#### 4.2 Calibration results

In the experiment, 17 images observed from different views were captured for the projector-camera calibration. In our first calibration test, the magnification of the telecentric camera was calibrated using our single bi-telecentric camera calibration approach [39]. We also tested a calibration process in which the scale factor was determined with a step master using a method similar to that proposed in [42]. We found that the two calibration processes achieved almost the same results. Therefore, we only provide the results of our first calibration test for brevity.

Figure 5 shows the target at one of the calibrated positions. The completely illuminated camera image is shown in Fig. 5(a). The projector “image” achieved by the sub-pixel mapping approach described in section 2.2 is shown in Figs. 5(b) and 5(c), in which projector pixels with the same color correspond to the same camera column (b) and same camera row (c), respectively.

The estimated 3D control points and calibration results for the intrinsic parameters are presented in Fig. 6 and Table 1, respectively. We calculated the points’ re-projection errors based on the optimal extrinsic parameters of the FPP system. The re-projection errors after bundle adjustment are presented in Fig. 7. As shown in Fig. 7, the RMS values of the re-projection errors are (0.21, 0.16) pixels and (0.17, 0.10) pixels for the camera and projector, respectively. Considering the image resolution, we can say that the proposed method can provide sufficiently accurate calibration results.

In addition, in order to evaluate the quality of the estimated 3D control points, we reconstructed the 3D coordinates of the control points; then, we compared these 3D coordinates with the estimated 3D control points after implementing a transformation, which is described in Eq. (27).

where ${P}_{con}$ and ${{P}^{\prime}}_{con}$ are the coordinates of the reconstructed control point and its associated transformation result, respectively, and $\left({R}_{t},{t}_{t}\right)$ is the transformation between the two Cartesian coordinate systems, which satisfies the following equation:

Figure 8 shows the reconstruction errors of the control points for all of the calibration images. From this figure, the RMS values of the errors are (3.0, 4.5, 7.8) $\mu m$ for the calibrated control points.
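The rigid transformation $\left({R}_{t},{t}_{t}\right)$ between the two point sets can be estimated in closed form with the standard SVD-based Kabsch/Umeyama alignment; the following is a sketch of that procedure (not the authors' code):

```python
import numpy as np

def fit_rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q.

    P, Q : (N, 3) arrays of corresponding 3D points.  The rotation comes
    from the SVD of the cross-covariance of the centred sets, with a sign
    correction to exclude reflections; t then aligns the centroids.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```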

#### 4.3 3D reconstruction with calibrated setup

First, a step master, which is widely used in surface metrology evaluations, was employed to evaluate the performance of the designed system. The step master used in the experiment was the Mitutoyo 516-499 Cera Step Master 300C, which has four designed steps with nominal values of 20, 50, 100, and 300 $\mu m$. The uncertainty of these nominal steps is 0.20 $\mu m$, while the variation of each step is within 0.05 $\mu m$.

The measured results for the step master are shown in Fig. 9 as a depth map. In addition, the measured data for the steps were used to fit planes with a robust estimation, and the results are listed in Table 2, which indicates that the RMS error of the measured nominal steps is 2.6 $\mu m$. Considering the uncertainty of the plane fitting, a measurement accuracy of 10 $\mu m$ could be achieved within a measurement volume of 34.6 mm × 29.0 mm × 6.0 mm.

To further validate the performance of the proposed method, we reconstructed the 3D geometry of the calibration target from the captured images. We also applied the calibrated FPP system to measure individual objects. The results are shown in Figs. 10–12. According to the results, we can observe that the complete models of the objects were reconstructed, and the surface topography of the raised characters is clear. Thus, these results further validate the performance of the calibration method.

## 5. Conclusion

This paper presented a new calibration method for a telecentric FPP system comprising a telecentric camera and a telecentric projector. The projector-camera rig parameters and the 3D coordinates of the control points on the 3D target were all determined using the proposed method. Thus, there was no need for an accurate 3D reference target, whose fabrication is costly and extremely difficult, particularly for the tiny objects used to calibrate a telecentric FPP system. The experimental results demonstrated the success of our calibration framework by achieving high measurement accuracy. It is worth noting that this method could also be applied to calibrate the distortions of the projector-camera system.

## Funding

National Natural Science Foundation of China (NSFC) (No. 51509251).

## References and links

**1. **S. Van der Jeught and J. J. J. Dirckx, “Real-time structured light profilometry: a review,” Opt. Lasers Eng. **87**, 18–31 (2016).

**2. **Y. Hu, Q. Chen, T. Tao, H. Li, and C. Zuo, “Absolute three-dimensional micro surface profile measurement based on a Greenough-type stereomicroscope,” Meas. Sci. Technol. **28**, 45004 (2017).

**3. **J. G. Rico Espino, J. Gonzalez-Barbosa, R. A. Gomez Loenzo, D. M. Cordova Esparza, and R. Gonzalez-Barbosa, “Vision system for 3D reconstruction with telecentric lens,” in Mexican Conference on Pattern Recognition (Springer-Verlag, 2012), pp. 127–136.

**4. **A. Mikš and J. Novák, “Design of a double-sided telecentric zoom lens,” Appl. Opt. **51**(24), 5928–5935 (2012). [PubMed]

**5. **J. Zhang, X. Chen, J. Xi, and Z. Wu, “Aberration correction of double-sided telecentric zoom lenses using lens modules,” Appl. Opt. **53**(27), 6123–6132 (2014). [PubMed]

**6. **J. S. Kim and T. Kanade, “Multiaperture telecentric lens for 3D reconstruction,” Opt. Lett. **36**(7), 1050–1052 (2011). [PubMed]

**7. **B. Li and S. Zhang, “Microscopic structured light 3D profilometry: Binary defocusing technique vs. sinusoidal fringe projection,” Opt. Lasers Eng. **96**, 117–123 (2017).

**8. **Y. Yin, X. Peng, A. Li, X. Liu, and B. Z. Gao, “Calibration of fringe projection profilometry with bundle adjustment strategy,” Opt. Lett. **37**(4), 542–544 (2012). [PubMed]

**9. **Z. Zhang, S. Huang, S. Meng, F. Gao, and X. Jiang, “A simple, flexible and automatic 3D calibration method for a phase calculation-based fringe projection imaging system,” Opt. Express **21**(10), 12218–12227 (2013). [PubMed]

**10. **H. Luo, J. Xu, N. Hoa Binh, S. Liu, C. Zhang, and K. Chen, “A simple calibration procedure for structured light system,” Opt. Lasers Eng. **57**, 6–12 (2014).

**11. **J. Lu, R. Mo, H. Sun, and Z. Chang, “Flexible calibration of phase-to-height conversion in fringe projection profilometry,” Appl. Opt. **55**(23), 6381–6388 (2016). [PubMed]

**12. **F. Zhu, H. Shi, P. Bai, D. Lei, and X. He, “Nonlinear calibration for generalized fringe projection profilometry under large measuring depth range,” Appl. Opt. **52**(32), 7718–7723 (2013). [PubMed]

**13. **L. Merner, Y. Wang, and S. Zhang, “Accurate calibration for 3D shape measurement system using a binary defocusing technique,” Opt. Lasers Eng. **51**, 514–519 (2013).

**14. **F. Zhu, W. Liu, H. Shi, and X. He, “Accurate 3D measurement system and calibration for speckle projection method,” Opt. Lasers Eng. **48**, 1132–1139 (2010).

**15. **Z. Cai, X. Liu, A. Li, Q. Tang, X. Peng, and B. Z. Gao, “Phase-3D mapping method developed from back-projection stereovision model for fringe projection profilometry,” Opt. Express **25**(2), 1262–1277 (2017). [PubMed]

**16. **P. Lu, C. Sun, B. Liu, and P. Wang, “Accurate and robust calibration method based on pattern geometric constraints for fringe projection profilometry,” Appl. Opt. **56**(4), 784–794 (2017). [PubMed]

**17. **S. Zhang and P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. **45**, 083601 (2006).

**18. **Y. An, T. Bell, B. Li, J. Xu, and S. Zhang, “Method for large-range structured light system calibration,” Appl. Opt. **55**(33), 9563–9572 (2016). [PubMed]

**19. **B. Li and S. Zhang, “Structured light system calibration method with optimal fringe angle,” Appl. Opt. **53**(33), 7942–7950 (2014). [PubMed]

**20. **S. Yang, M. Liu, J. Song, S. Yin, Y. Guo, Y. Ren, and J. Zhu, “Flexible digital projector calibration method based on per-pixel distortion measurement and correction,” Opt. Lasers Eng. **92**, 29–38 (2017).

**21. **Z. Huang, J. Xi, Y. Yu, and Q. Guo, “Accurate projector calibration based on a new point-to-point mapping relationship between the camera and projector images,” Appl. Opt. **54**, 347 (2015).

**22. **W. Zhang, W. Li, L. Yu, H. Luo, H. Zhao, and H. Xia, “Sub-pixel projector calibration method for fringe projection profilometry,” Opt. Express **25**(16), 19158–19169 (2017). [PubMed]

**23. **D. Li and J. Tian, “An accurate calibration method for a camera with telecentric lenses,” Opt. Lasers Eng. **51**, 538–541 (2013).

**24. **D. Li, C. Liu, and J. Tian, “Telecentric 3D profilometry based on phase-shifting fringe projection,” Opt. Express **22**(26), 31826–31835 (2014). [PubMed]

**25. **J. Peng, M. Wang, D. Deng, X. Liu, Y. Yin, and X. Peng, “Distortion correction for microscopic fringe projection system with Scheimpflug telecentric lens,” Appl. Opt. **54**(34), 10055–10062 (2015). [PubMed]

**26. **L. Rao, F. Da, W. Kong, and H. Huang, “Flexible calibration method for telecentric fringe projection profilometry systems,” Opt. Express **24**(2), 1222–1237 (2016). [PubMed]

**27. **H. Tanaka, Y. Sumi, and Y. Matsumoto, “A solution to pose ambiguity of visual markers using Moiré patterns,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IEEE, 2014), pp. 3129–3134.

**28. **T. Collins and A. Bartoli, “Planar Structure-from-Motion with Affine Camera Models: Closed-Form Solutions, Ambiguities and Degeneracy Analysis,” IEEE Trans. Pattern Anal. Mach. Intell. **39**(6), 1237–1255 (2017). [PubMed]

**29. **Y. Yin, M. Wang, B. Z. Gao, X. Liu, and X. Peng, “Fringe projection 3D microscopy with the general imaging model,” Opt. Express **23**(5), 6846–6857 (2015). [PubMed]

**30. **L. Huiyang, C. Zhong, and Z. Xianmin, “Calibration of camera with small FOV and DOF telecentric lens,” in IEEE International Conference on Robotics and Biomimetics (IEEE, 2013), pp. 498–503.

**31. **Z. Chen, H. Liao, and X. Zhang, “Telecentric stereo micro-vision system: Calibration method and experiments,” Opt. Lasers Eng. **57**, 82–92 (2014).

**32. **B. Li and S. Zhang, “Flexible calibration method for microscopic structured light system using telecentric lens,” Opt. Express **23**(20), 25795–25803 (2015). [PubMed]

**33. **C. Steger, “A Comprehensive and Versatile Camera Model for Cameras with Tilt Lenses,” Int. J. Comput. Vis. **123**(2), 1–39 (2017).

**34. **A. Habed, A. Amintabar, and B. Boufama, “Affine camera calibration from homographies of parallel planes,” in IEEE International Conference on Image Processing (IEEE, 2010), pp. 4249–4252.

**35. **K. Kanatani, Y. Sugaya, and Y. Kanazawa, “Self-calibration of Affine Cameras,” in Guide to 3D Vision Computation (Springer International Publishing, 2016), pp. 163–182.

**36. **L. Quan, “Self-calibration of an affine camera from multiple views,” Int. J. Comput. Vis. **19**, 93–105 (1996).

**37. **Q. Mei, J. Gao, H. Lin, Y. Chen, H. Yunbo, W. Wang, G. Zhang, and X. Chen, “Structure light telecentric stereoscopic vision 3D measurement system based on Scheimpflug condition,” Opt. Lasers Eng. **86**, 83–91 (2016).

**38. **D. Moreno and G. Taubin, “Simple, Accurate, and Robust Projector-Camera Calibration,” in International Conference on 3D Imaging (IEEE, 2012), pp. 464–471.

**39. **L. Yao and H. Liu, “A flexible calibration approach for cameras with double-sided telecentric lenses,” Int. J. Adv. Robot. Syst. **13**, 82 (2016).

**40. **H. Lin, H. Liu, and L. Yao, “3D-shape reconstruction based on a sub-pixel-level mapping relationship between the camera and projector,” Proc. SPIE **10255**, 1025504 (2017).

**41. **C. Tomasi and T. Kanade, “Shape and motion from image streams under orthography: a factorization method,” Int. J. Comput. Vis. **9**, 137–154 (1992).

**42. **X. Liu, Z. Cai, Y. Yin, H. Jiang, D. He, W. He, Z. Zhang, and X. Peng, “Calibration of fringe projection profilometry using an inaccurate 2D reference target,” Opt. Lasers Eng. **89**, 131–137 (2017).

**43. **R. Chen, J. Xu, S. Zhang, H. Chen, Y. Guan, and K. Chen, “A self-recalibration method based on scale-invariant registration for structured light measurement systems,” Opt. Lasers Eng. **88**, 75–81 (2017).

**44. **Y. Oyamada, P. Fallavollita, and N. Navab, “Single Camera Calibration using partially visible calibration objects based on Random Dots Marker Tracking Algorithm,” in IEEE and ACM International Symposium on Mixed and Augmented Reality (IEEE, 2012).

**45. **B. K. P. Horn, “Closed-form solution of absolute orientation using unit quaternions,” J. Opt. Soc. Am. A **4**, 629–642 (1987).