
Calibration method for projector-camera-based telecentric fringe projection profilometry system

Open Access

Abstract

By combining a fringe projection setup with a telecentric lens, a fringe pattern can be projected and imaged within a small area, making it possible to measure the three-dimensional (3D) surfaces of micro-components. This paper focuses on the flexible calibration of a fringe projection profilometry (FPP) system using a telecentric lens. An analytical telecentric projector-camera calibration model is introduced, in which the rig structure parameters remain invariant for all views, and the 3D calibration target can be located on the projector image plane with sub-pixel precision. Based on the presented calibration model, a two-step calibration procedure is proposed. First, the initial parameters, e.g., the projector-camera rig, projector intrinsic matrix, and coordinates of the control points of a 3D calibration target, are estimated using the affine camera factorization calibration method. Second, a bundle adjustment algorithm with various simultaneous views is applied to refine the calibrated parameters, especially the rig structure parameters and the coordinates of the control points of the 3D target. Because the control points are determined during the calibration, there is no need for an accurate 3D reference target, whose fabrication is costly and extremely difficult, particularly for the tiny objects used to calibrate the telecentric FPP system. Real experiments were performed to validate the performance of the proposed calibration method. The test results showed that the proposed approach is accurate and reliable.

© 2017 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

With recent advances in precision manufacturing, micro-level three-dimensional (3D) metrology has become increasingly important. With the development of digital projectors based on liquid crystal displays (LCD) and digital light processing (DLP), fringe projection profilometry (FPP) has become one of the most widely used techniques in 3D shape measurement because of its inherent advantages such as full-field data acquisition, high measurement accuracy, portability, and flexibility [1]. A stereoscopic microscope is generally adopted as the basic optical system for microscopic FPP, with a projector and camera installed on the two cylinders of the stereoscopic microscope, i.e., the projection end and imaging end, respectively [2]. The main drawback of such a system is that the depth of field (DOF) is limited to the sub-millimeter order, which is insufficient to measure a 3D object with height variations of several millimeters. Furthermore, the distance between the optical system and object must be small for micro-level 3D metrology. Other drawbacks of conventional lenses are the perspective effect and lens distortion, which cause objects to appear distorted at short distances [3]. Compared to conventional microscopic lenses, telecentric lenses feature orthographic projection and have many advantages such as high resolution, nearly zero distortion, constant magnification, and an increased DOF [4–6]. Because of these features, researchers have given attention to telecentric FPP [7], which replaces the lenses of a micro-projector and camera with telecentric lenses (called a telecentric camera or an affine camera).

For any FPP measurement system, the projector-camera calibration is one of the most important and challenging issues, because the measurement accuracy largely depends on it. The calibration approaches for a projector-camera with conventional lenses have been extensively studied over a long period of time, and the existing methods can be mainly divided into two categories: phase-height-based methods and stereovision-based methods [8]. In a phase-height-based method, the relations between the absolute phase values of the fringe pattern and the height of an object are identified and constructed using a lookup table [9, 10] or parametric polynomial models [11–13]. This kind of method can be directly applied to the calibration of a telecentric projector-camera, because it avoids the calibration of the system parameters [14]. However, it generally requires a reference plane to compute the phase difference and relative height, which limits the measurement volume [15]. Moreover, to achieve higher accuracy, the calibration target must move a certain distance along the optical axis of the camera or projector, which means a high-precision translation stage or gauge block is inevitable [16]. Therefore, the existing phase-height-based methods are complicated and hard to implement in practical environments because of the requirement of a precise translating stage or gauge block [9]. In contrast, the stereovision-based method is based on the binocular vision theory, in which the projector can be regarded as an inverse camera and described with the same mathematical model [17]. The projector can observe the calibration targets with the help of the camera of the FPP system. Thus, the projector-camera system can also be flexibly calibrated using the stereovision technique without practical limitations. It has become one of the most popular approaches and has been widely studied because of its accuracy and flexibility. For example, there are many flexible and practical approaches to exploit both the intrinsic and extrinsic parameters of a perspective projector-camera system using a planar checkerboard or circular calibration board [17–22].

Unlike the perspective projection of a pinhole camera, a telecentric camera is insensitive to changes in the depth along the optical axis. Thus, the existing stereovision-based calibration methods for perspective FPP systems cannot be straightforwardly applied. Espino et al. described a general model that included the intrinsic and extrinsic parameters of a vision system with a telecentric lens [3]. Based on a similar model, Li et al. developed a flexible method to calibrate telecentric cameras [23], and then successfully employed it for an FPP system [24]. Peng et al. adapted this method to calibrate a microscopic fringe projection system with a Scheimpflug telecentric lens [25], in which the effect caused by the Scheimpflug condition was modeled as a “tangential distortion” model. Rao et al. proposed a flexible calibration algorithm for a telecentric FPP system [26], for which the traditional 2D planar calibration method and an additional two-step refining process were proposed. Both of the aforementioned calibration methods can effectively accomplish the calibration task for a telecentric FPP system. However, a pose ambiguity problem is inevitable for a telecentric camera if a planar calibration target is adopted [27, 28]. Yin et al. employed a general imaging model to calibrate a telecentric FPP system and achieved good reconstruction accuracy [29]. Chen et al. proposed a closed-form solution for a telecentric stereo micro-vision system [30, 31]. In these two calibration methods, the problem of the sign ambiguity induced by the planar-object-based calibration technique was successfully solved with the help of a positioning stage device. However, the additional positioning stage device increased the hardware cost of the whole system and made the calibration process complicated and laborious. Li and Zhang proposed a framework to calibrate such a microscopic FPP system using a telecentric camera and perspective projector [32]. In this method, the 3D coordinates of the planar target points in the calibrated projector coordinate system could be found, and then used to calibrate the fixed telecentric camera. The sign ambiguity problem could be overcome using 3D control points for telecentric camera calibration, which meant that their Z coordinates were not all zeros. As previously mentioned, this calibration framework is problematic for an FPP system consisting of a telecentric projector and telecentric camera, because it cannot determine the pose of a unique planar target for the telecentric camera/projector without additional information.

Generally, the translation or Z-direction of the telecentric camera can be recovered using some assumptions such as moving the origins of the camera coordinate systems along their respective optical axes to a sphere with a fixed radius [33]. The attitude ambiguity problem can also be solved using 3D calibration reference targets. However, it should be noted that the calibration accuracy largely depends on the fabrication quality of the reference target, such as the accuracy of the feature points' locations. The fabrication of a high-quality 3D calibration reference target is costly and extremely difficult, particularly for tiny objects used to calibrate the telecentric FPP system. So far, many calibration methods for affine cameras have been proposed [34–36], in which the 3D structure of the scene and the camera motion can be obtained simultaneously. Inspired by these methods, this paper proposes a calibration approach for telecentric FPP systems without accurate 3D reference targets. The 3D coordinates of the feature points of the reference target can be determined using the proposed approach.

The rest of this paper is organized as follows. Section 2 introduces some basic principles such as the rig model of the telecentric projector-camera, local planar homography, and sub-pixel mapping model. Based on these principles, the calibration method is proposed in section 3. The experimental verification of the proposed calibration method is reported in section 4, and section 5 concludes this work.

2. Calibration model of telecentric FPP system

This section introduces some basic principles and deductions and provides theoretical support for the proposed calibration method.

2.1 Telecentric projector-camera model

The measurement model of the telecentric FPP system is shown in Fig. 1, and includes a telecentric projector and telecentric camera. Rather than capturing images, the projector is used to project coded patterns (structured light), which are in turn captured by the camera and decoded for correspondence. As previously mentioned, the telecentric projector can be regarded as an inverse telecentric camera and described using the same mathematical model.

Fig. 1 Measurement model of FPP system with bi-telecentric lens.

As shown in Fig. 1, the bi-telecentric lens simply performs a magnification in both the X and Y directions of the camera coordinate system, while it is not sensitive to the depth in the Z direction. Suppose that $(R, \mathbf{t})$ are the rotation matrix and translation vector that relate the world coordinate system to the camera coordinate system. Then, the projection of an arbitrary point $P = [x_w\ y_w\ z_w]^T$ in the 3D world coordinate system to the undistorted image plane in pixel units is expressed by the following equation:

$$\begin{bmatrix} \mathbf{m} \\ 1 \end{bmatrix} = K \begin{bmatrix} R_{2\times3} & \mathbf{t}_s \\ \mathbf{0}_{1\times3} & 1 \end{bmatrix} \begin{bmatrix} P \\ 1 \end{bmatrix}, \tag{1}$$
where

$$K = \begin{bmatrix} m & 0 & u_0 \\ 0 & m & v_0 \\ 0 & 0 & 1 \end{bmatrix}. \tag{2}$$

In Eqs. (1) and (2), $K$ is the camera's intrinsic matrix, $R_{2\times3}$ represents the first two rows of the rotation matrix $R$, $\mathbf{t}_s = [t_x\ t_y]^T$ is the truncated translation of $\mathbf{t} = [t_x\ t_y\ t_z]^T$, $\mathbf{m} = [u\ v]^T$ is the image coordinate of $P = [x_w\ y_w\ z_w]^T$ in pixels, $m$ is the effective magnification of the telecentric camera, and $(u_0, v_0)$ are the coordinates of the image plane's center.
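As a concrete illustration of Eqs. (1) and (2), the following minimal Python sketch projects a world point through the telecentric model; the function and its argument layout are our own, not code from the paper:

```python
import numpy as np

def telecentric_project(P_w, R, t, m, u0, v0):
    """Project a 3D world point using the telecentric model of Eqs. (1)-(2).

    P_w: (3,) world point; R: (3, 3) rotation; t: (3,) translation;
    m: effective magnification; (u0, v0): image-plane center.
    Note that the depth component t_z never enters the result.
    """
    # Only the first two rows of (R, t) matter under orthographic projection.
    Pc = R[:2, :] @ P_w + t[:2]
    return m * Pc + np.array([u0, v0])
```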

According to the above model, the relationship between the object space (object point $P = [x_w\ y_w\ z_w]^T$) and the image spaces of the projector and camera (corresponding points $\mathbf{m}_P = [u_p\ v_p]^T$ and $\mathbf{m}_C = [u_c\ v_c]^T$) in Fig. 1 is described as follows:

$$\begin{bmatrix} \mathbf{m}_P \\ 1 \end{bmatrix} = K_P \begin{bmatrix} R_{P2\times3} & \mathbf{t}_{Ps} \\ \mathbf{0}_{1\times3} & 1 \end{bmatrix} \begin{bmatrix} P \\ 1 \end{bmatrix}, \tag{3}$$
$$\begin{bmatrix} \mathbf{m}_C \\ 1 \end{bmatrix} = K_C \begin{bmatrix} R_{C2\times3} & \mathbf{t}_{Cs} \\ \mathbf{0}_{1\times3} & 1 \end{bmatrix} \begin{bmatrix} P \\ 1 \end{bmatrix}, \tag{4}$$
where $(R_P, \mathbf{t}_P)$ and $(R_C, \mathbf{t}_C)$ are the rotation matrices and translation vectors that relate the world coordinate system to the projector coordinate system and camera coordinate system, respectively.

Combining Eqs. (3) and (4), if $\mathbf{m}_P = [u_p\ v_p]^T \leftrightarrow \mathbf{m}_C = [u_c\ v_c]^T$ is an image point correspondence of $P = [x_w\ y_w\ z_w]^T$, and both the projector and camera are calibrated, then the world point $P$ can be obtained using the linear least-squares method.
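As an illustrative sketch of this triangulation (our own helper, assuming both intrinsic matrices have the form of Eq. (2)), Eqs. (3) and (4) can be stacked into a 4 × 3 linear system and solved by least squares:

```python
import numpy as np

def triangulate_affine(mP, mC, KP, RP, tPs, KC, RC, tCs):
    """Recover the world point P from a projector/camera correspondence
    (mP, mC) by stacking Eqs. (3) and (4) into a 4x3 linear system.

    Each view contributes m - (mag * ts + c) = mag * R[:2, :] @ P, where
    mag is the effective magnification and c = (u0, v0)."""
    A, b = [], []
    for mm, K, R, ts in ((mP, KP, RP, tPs), (mC, KC, RC, tCs)):
        mag, c = K[0, 0], K[:2, 2]
        A.append(mag * R[:2, :])          # 2x3 block of the design matrix
        b.append(mm - (mag * ts + c))     # measurement minus affine offset
    P, *_ = np.linalg.lstsq(np.vstack(A), np.hstack(b), rcond=None)
    return P
```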

Generally, the projector coordinate system is defined as the world coordinate system. Then, Eqs. (3) and (4) can be expressed as follows:

$$\begin{bmatrix} \mathbf{m}_P \\ 1 \end{bmatrix} = K_P \begin{bmatrix} I_{2\times3} & \mathbf{0}_{2\times1} \\ \mathbf{0}_{1\times3} & 1 \end{bmatrix} \begin{bmatrix} P \\ 1 \end{bmatrix}, \tag{5}$$
$$\begin{bmatrix} \mathbf{m}_C \\ 1 \end{bmatrix} = K_C \begin{bmatrix} R^C_{P2\times3} & \mathbf{t}^C_{Ps} \\ \mathbf{0}_{1\times3} & 1 \end{bmatrix} \begin{bmatrix} P \\ 1 \end{bmatrix}, \tag{6}$$
where $I_{2\times3}$ represents the first two rows of the 3 × 3 identity matrix $I_{3\times3}$. $R_P^C$ and $\mathbf{t}_P^C$ are the extrinsic parameters of the FPP system. For a perspective FPP system with a conventional lens, $R_P^C$ and $\mathbf{t}_P^C$ can be derived from the following formula:

$$\begin{cases} R_P^C = R_C R_P^{-1} \\ \mathbf{t}_P^C = \mathbf{t}_C - R_C R_P^{-1} \mathbf{t}_P. \end{cases} \tag{7}$$

According to Eq. (7), $R_P^C$ of the telecentric FPP system can be directly obtained if $R_P$ and $R_C$ are both expressed in a common world coordinate system. However, there are problems in recovering $\mathbf{t}_P^C$, because only the truncated translations $\mathbf{t}_{Ps} = [t_{Px}\ t_{Py}]^T$ and $\mathbf{t}_{Cs} = [t_{Cx}\ t_{Cy}]^T$, but not the complete translations $\mathbf{t}_P = [t_{Px}\ t_{Py}\ t_{Pz}]^T$ and $\mathbf{t}_C = [t_{Cx}\ t_{Cy}\ t_{Cz}]^T$, can be recovered for a telecentric FPP system. Generally, the projector and camera are independently calibrated. Then, $(R_P, \mathbf{t}_{Ps})$ and $(R_C, \mathbf{t}_{Cs})$, which are independent calibration results from a pattern image simultaneously captured in one orientation, are selected as the calibration results [31, 37]. Obviously, the calibration results will be significantly improved if non-linear optimization can be implemented with a greater number of images captured from different observed orientations.

In order to avoid degeneracy, the rig calibration model of the telecentric FPP system is adopted, which has only four independent variables, as shown in Fig. 2. Suppose that $X_CY_CZ_C$ and $X_PY_PZ_P$ denote the coordinate systems of the camera and projector determined from the 3D calibration target, respectively, and $o_C$ and $o_P$ are their corresponding coordinate origins. Because the coordinate origins of the projector-camera coordinates can be selected anywhere along the associated optical axis, we suppose that (1) the coordinate origin of coordinate system $X_CY_CZ_C$, denoted as $o_C$, is the intersection point of its $Z_C$ axis and the $Y_PZ_P$ plane, as shown in Fig. 2(a), or the $Z_PX_P$ plane, as shown in Fig. 2(b); (2) the coordinate origin of coordinate system $X_PY_PZ_P$, denoted as $o_P$, is the foot of the perpendicular from $o_C$ to the $Z_P$ axis. Under the above assumptions, $R_P^C$ and $\mathbf{t}_P^C$, which are the extrinsic parameters of the telecentric FPP system, can be derived from the following formula:

$$\begin{cases} R_P^C = R_C R_P^{-1} \\ \mathbf{t}_P^C = R_P^C \times [0\ y_{oC}\ 0]^T, \end{cases} \tag{8}$$
or
$$\begin{cases} R_P^C = R_C R_P^{-1} \\ \mathbf{t}_P^C = R_P^C \times [x_{oC}\ 0\ 0]^T, \end{cases} \tag{9}$$
where $[0\ y_{oC}\ 0]^T$ and $[x_{oC}\ 0\ 0]^T$ are respectively the coordinate origins of $X_CY_CZ_C$ in $X_PY_PZ_P$, as shown in Figs. 2(a) and 2(b), and are invariant because the projector and camera are fixed. In this case, the structure parameters, i.e., $R_P^C$ and $y_{oC}$ (or $x_{oC}$), remain invariant for all views during the optimization procedure. Note that if the $Z_C$ axis is approximately parallel with the $Z_PX_P$ plane, the rig model described in Fig. 2(a) should be adopted; otherwise, if the $Z_C$ axis is approximately parallel with the $Y_PZ_P$ plane, the rig model described in Fig. 2(b) should be adopted.

Fig. 2 Selected projector-camera coordinates. (a) Rig model when the $Z_C$ axis is approximately parallel with the $Z_PX_P$ plane. (b) Rig model when the $Z_C$ axis is approximately parallel with the $Y_PZ_P$ plane.

2.2 Sub-pixel mapping model based on local homographies

The projector cannot capture images like the camera. However, the projector can indirectly capture images by establishing a relationship between itself and the camera. Inspired by [38], we estimate the coordinates of the calibration points in the projector image plane using local homographies.

According to the telecentric camera model, the coordinates of a target point on the plane and its image point on the camera image plane satisfy the following relationship [39]:

$$\begin{bmatrix} u_c \\ v_c \\ 1 \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ 1 \end{bmatrix} = H \begin{bmatrix} x_w \\ y_w \\ 1 \end{bmatrix}. \tag{10}$$

Equation (10) is always approximately true if the local region near the control point is small enough even for a 3D calibration target.

Suppose that $H_T^C$ and $H_T^P$ are the homography matrices from the local target plane to the image planes of the camera and projector, respectively. Then, the homography matrix from the camera image plane to the projector image plane can be calculated as follows:

$$H_C^P = H_T^P (H_T^C)^{-1}. \tag{11}$$

In order to obtain the projector pixel coordinates of the control points, two groups of gray code patterns, one vertical and the other horizontal, are generated with a computer and projected onto the surface of the 3D calibration target using the projector. Then, a dense set of correspondences between the projector and camera pixels is found using the captured camera images. Finally, the set of the correspondences in small subareas is used to compute a local homography that makes it possible to find the projection of any of the points of the 3D calibration target onto the projector image plane with sub-pixel precision. A more complete discussion of this can be found in [39].
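The sketch below illustrates the local-homography mapping with the affine form of Eq. (10); fitting the camera-to-projector transform directly from the local correspondences is equivalent to composing $H_T^P (H_T^C)^{-1}$ when both fits are exact. The helper names are our own:

```python
import numpy as np

def fit_affine_homography(src, dst):
    """Fit an affine homography (last row [0 0 1], as in Eq. (10)) from
    point correspondences src -> dst, each of shape (N, 2), by least squares."""
    A = np.hstack([src, np.ones((len(src), 1))])    # N x 3
    X, *_ = np.linalg.lstsq(A, dst, rcond=None)     # 3 x 2
    H = np.eye(3)
    H[:2, :] = X.T
    return H

def map_to_projector(q_cam, cam_pts, proj_pts):
    """Map a camera sub-pixel point to the projector image using a local
    homography fitted from nearby camera/projector correspondences."""
    H = fit_affine_homography(cam_pts, proj_pts)    # local H_C^P
    q = H @ np.array([q_cam[0], q_cam[1], 1.0])
    return q[:2]
```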

Ideally, every pixel on the camera image plane corresponds to a unique gray code value after being decoded. However, because the resolution of the camera often does not exactly match that of the projector (it is usually much higher), ambiguity occurs, and a few pixels of the captured image correspond to the same projector image pixel, as shown in Fig. 3. Note that even using a camera with the same resolution cannot guarantee pixel-to-pixel correspondence because of the different perspectives. In order to determine a precise mapping relationship, the barycenter of these “black” pixels must be calculated (as shown in Fig. 3); this barycenter is a sub-pixel location on the camera image that corresponds to the projector pixel associated with the decoded gray codes [40].
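A minimal sketch of this barycenter computation, assuming the gray codes have already been decoded into per-pixel projector column/row maps; the array names are hypothetical:

```python
import numpy as np
from collections import defaultdict

def camera_subpixels_for_projector(decoded_cols, decoded_rows, mask):
    """Average the camera pixels that decode to the same projector pixel.

    decoded_cols/decoded_rows: integer maps the size of the camera image;
    mask: boolean map flagging reliably decoded pixels.
    Returns {(proj_col, proj_row): (u_c, v_c)} with sub-pixel camera coords."""
    buckets = defaultdict(list)
    vs, us = np.nonzero(mask)
    for u, v in zip(us, vs):
        buckets[(decoded_cols[v, u], decoded_rows[v, u])].append((u, v))
    # The barycenter of each bucket is the sub-pixel camera correspondence.
    return {key: tuple(np.mean(pix, axis=0)) for key, pix in buckets.items()}
```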

Fig. 3 Sub-pixels on camera image plane correspond to pixels on projector image plane.

3. Calibration method

The proposed calibration method for the telecentric FPP system consists of two parts, i.e., the parameter initialization with factorization part and optimization with bundle adjustment part. In this method, the 3D coordinates of the control points and the projector-camera rig parameters are found simultaneously.

3.1 Parameter initialization with factorization

This section introduces the estimation method for the initial parameters (e.g., the control point coordinates, projector-camera rig, and projector intrinsic matrix), which was inspired by the affine camera factorization calibration method.

3.1.1 Factorization method

After successfully finding the control points in the image planes of the telecentric projector-camera system, the factorization approach can be implemented to solve the observed control points’ 3D structure and the projector-camera motion from these observations.

We arrange all the observed coordinates of the projector-camera in matrix form:

$$W_C = \begin{bmatrix} \bar{u}_{C11} & \bar{u}_{C12} & \cdots & \bar{u}_{C1l} \\ \bar{v}_{C11} & \bar{v}_{C12} & \cdots & \bar{v}_{C1l} \\ \vdots & \vdots & & \vdots \\ \bar{u}_{Cn1} & \bar{u}_{Cn2} & \cdots & \bar{u}_{Cnl} \\ \bar{v}_{Cn1} & \bar{v}_{Cn2} & \cdots & \bar{v}_{Cnl} \end{bmatrix}_{2n\times l}, \tag{12}$$
$$W_P = \begin{bmatrix} \bar{u}_{P11} & \bar{u}_{P12} & \cdots & \bar{u}_{P1l} \\ \bar{v}_{P11} & \bar{v}_{P12} & \cdots & \bar{v}_{P1l} \\ \vdots & \vdots & & \vdots \\ \bar{u}_{Pn1} & \bar{u}_{Pn2} & \cdots & \bar{u}_{Pnl} \\ \bar{v}_{Pn1} & \bar{v}_{Pn2} & \cdots & \bar{v}_{Pnl} \end{bmatrix}_{2n\times l}. \tag{13}$$

These matrices are called observation matrices, where $l$ is the total number of control points observed in each image and $n$ is the number of images observed from different views. Their entries are the centered image coordinates:

$$\begin{cases} \bar{u}_{Cij} = u_{Cij} - \frac{1}{l}\sum_{j=1}^{l} u_{Cij} \\ \bar{v}_{Cij} = v_{Cij} - \frac{1}{l}\sum_{j=1}^{l} v_{Cij}, \end{cases} \tag{14}$$
$$\begin{cases} \bar{u}_{Pij} = u_{Pij} - \frac{1}{l}\sum_{j=1}^{l} u_{Pij} \\ \bar{v}_{Pij} = v_{Pij} - \frac{1}{l}\sum_{j=1}^{l} v_{Pij}. \end{cases} \tag{15}$$

In Eqs. (14) and (15), $\mathbf{m}_{Pij} = [u_{Pij}\ v_{Pij}]^T$ and $\mathbf{m}_{Cij} = [u_{Cij}\ v_{Cij}]^T$ are the observed corresponding coordinates of control point $P_j = [x_j\ y_j\ z_j]^T$ on the image planes of the projector and camera, respectively. $j = 1, 2, \ldots, l$ indexes the control points of the 3D calibration target, and $i = 1, 2, \ldots, n$ indexes the observed images.

On the other hand, we can arrange the first two rows of each attitude matrix and all the 3D positions $P_j = [x_j\ y_j\ z_j]^T$ in matrix form:

$$M_C = \begin{bmatrix} r_{C1}^{11} & r_{C1}^{12} & r_{C1}^{13} \\ r_{C1}^{21} & r_{C1}^{22} & r_{C1}^{23} \\ \vdots & \vdots & \vdots \\ r_{Cn}^{11} & r_{Cn}^{12} & r_{Cn}^{13} \\ r_{Cn}^{21} & r_{Cn}^{22} & r_{Cn}^{23} \end{bmatrix}_{2n\times3}, \tag{16}$$
$$M_P = \begin{bmatrix} r_{P1}^{11} & r_{P1}^{12} & r_{P1}^{13} \\ r_{P1}^{21} & r_{P1}^{22} & r_{P1}^{23} \\ \vdots & \vdots & \vdots \\ r_{Pn}^{11} & r_{Pn}^{12} & r_{Pn}^{13} \\ r_{Pn}^{21} & r_{Pn}^{22} & r_{Pn}^{23} \end{bmatrix}_{2n\times3}, \tag{17}$$
$$S = \begin{bmatrix} x_1 & x_2 & \cdots & x_l \\ y_1 & y_2 & \cdots & y_l \\ z_1 & z_2 & \cdots & z_l \end{bmatrix}_{3\times l}. \tag{18}$$

The $2n \times 3$ matrices $M_C$ and $M_P$ are called the motion matrices, and the $3 \times l$ matrix $S$ is the shape matrix. Note that we chose to make the world coordinate origin the centroid of the points $P_j = [x_j\ y_j\ z_j]^T$.

Under orthographic projection, we have,

$$\begin{cases} W_C = M_C S \\ W_P = M_P S. \end{cases} \tag{19}$$

In Eq. (19), the motion matrices and the shape matrix $S$ can be found using the factorization method if the camera's effective magnification $m$ is known or assumed to be 1. A more complete discussion of the algorithm can be found in [41] and [35].
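A minimal sketch of the rank-3 factorization step (the helper is ours; the affine ambiguity left by the SVD is subsequently removed by the metric constraints, as discussed in [41] and [35]):

```python
import numpy as np

def factorize(W):
    """Rank-3 factorization of a centered observation matrix W (2n x l)
    into motion M (2n x 3) and shape S (3 x l), as in Eq. (19).

    The square-root split of the singular values is one common convention;
    M and S are determined only up to an affine transformation that the
    orthogonality (metric) constraints resolve afterwards."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])
    S = np.sqrt(s[:3])[:, None] * Vt[:3, :]
    return M, S
```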

The points $P_j = [x_j\ y_j\ z_j]^T$ and $K_P$ can be directly determined using the procedure shown above. The attitude matrices $R_{Pi}$ and $R_{Ci}$ associated with the captured images $i$ can be obtained by orthogonality, and the truncated translations $\mathbf{t}_{Psi}$ and $\mathbf{t}_{Csi}$ can be obtained according to Eqs. (3) and (4). For example, $R_{Pi}$ and $\mathbf{t}_{Psi}$ can be obtained using Eqs. (20) and (21), respectively.

$$R_{Pi} = \begin{bmatrix} \mathbf{r}_{Pxi} & \mathbf{r}_{Pyi} & \mathbf{r}_{Pzi} \end{bmatrix}^T, \tag{20}$$
$$\mathbf{t}_{Psi} = [t_{Pxi}\ t_{Pyi}]^T, \tag{21}$$
where $\mathbf{r}_{Pxi} = [r_{Pi}^{11}\ r_{Pi}^{12}\ r_{Pi}^{13}]^T$, $\mathbf{r}_{Pyi} = [r_{Pi}^{21}\ r_{Pi}^{22}\ r_{Pi}^{23}]^T$, $\mathbf{r}_{Pzi} = \mathbf{r}_{Pxi} \times \mathbf{r}_{Pyi}$, $t_{Pxi} = \frac{1}{l}\sum_{j=1}^{l} u_{Pij}$, and $t_{Pyi} = \frac{1}{l}\sum_{j=1}^{l} v_{Pij}$.
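Because the factorized rows are only approximately orthonormal in the presence of noise, it helps to project the assembled matrix onto the nearest rotation; a small sketch of this orthogonalization (our own helper):

```python
import numpy as np

def rotation_from_motion_rows(rx, ry):
    """Assemble a rotation from the two rows recovered per Eq. (20) and
    project it onto SO(3) via SVD so that noise does not break orthogonality."""
    R = np.vstack([rx, ry, np.cross(rx, ry)])
    U, _, Vt = np.linalg.svd(R)
    R_orth = U @ Vt                      # nearest orthogonal matrix
    if np.linalg.det(R_orth) < 0:        # enforce a proper rotation
        R_orth = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
    return R_orth
```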

3.1.2 Initial rig parameter estimation

As previously mentioned, $R_P^C$ of the telecentric FPP system can be directly found once $R_P$ and $R_C$ are both obtained. In the following, we only discuss how to obtain $y_{oC}$ and $x_{oC}$. Suppose that $(R_P, [\mathbf{t}_{Ps}^T\ 0]^T)$ and $(R_C, [\mathbf{t}_{Cs}^T\ 0]^T)$ are the rotation matrices and translation vectors that relate the world coordinate system to the coordinate systems $X_PY_PZ_P$ and $X_CY_CZ_C$ in Fig. 2(a), respectively, where the last components of the translations of the camera and projector are all set to zero. Then, the coordinate origin of $X_CY_CZ_C$ in the coordinate system $X_PY_PZ_P$ satisfies the following equation:

$$\mathbf{o}_C = [o_{Cx}\ o_{Cy}\ o_{Cz}]^T = R_P^C \begin{bmatrix} \mathbf{t}_{Cs} \\ 0 \end{bmatrix} + \begin{bmatrix} \mathbf{t}_{Ps} \\ 0 \end{bmatrix}. \tag{22}$$

Because $o_C$ is the intersection point of the $Z_C$ axis and the $Y_PZ_P$ plane, its Y coordinate $y_{oC}$ satisfies the following:

$$y_{oC} = o_{Cy} - \frac{n_{Cy}}{n_{Cx}} o_{Cx}. \tag{23}$$

Similarly, $x_{oC}$ in Fig. 2(b) satisfies the following:

$$x_{oC} = o_{Cx} - \frac{n_{Cx}}{n_{Cy}} o_{Cy}. \tag{24}$$

In Eqs. (23) and (24), $\mathbf{n}_C = [n_{Cx}\ n_{Cy}\ n_{Cz}]^T$ is the direction vector of the $Z_C$ axis in $X_PY_PZ_P$, which is equal to the last row of $R_P^C$.

Therefore, the translation vector that relates the projector coordinate system to the camera coordinate system is determined.
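A sketch of this initialization, following Eqs. (7), (8)/(9), and (22)-(24) as printed; the function and its conventions (zero-padded truncated translations, a flag selecting the rig model of Fig. 2(a) or 2(b)) are our own:

```python
import numpy as np

def initial_rig(RP, tPs, RC, tCs, model="b"):
    """Initial rig parameters from one view's independently calibrated poses.

    tPs, tCs: truncated 2-vector translations (z components set to zero)."""
    RPC = RC @ RP.T                                          # Eq. (7)
    oC = RPC @ np.append(tCs, 0.0) + np.append(tPs, 0.0)     # Eq. (22)
    n = RPC[2, :]                  # direction of the Z_C axis in X_P Y_P Z_P
    if model == "a":                                         # Fig. 2(a)
        y_oC = oC[1] - n[1] / n[0] * oC[0]                   # Eq. (23)
        return RPC, RPC @ np.array([0.0, y_oC, 0.0])         # Eq. (8)
    x_oC = oC[0] - n[0] / n[1] * oC[1]                       # Eq. (24), Fig. 2(b)
    return RPC, RPC @ np.array([x_oC, 0.0, 0.0])             # Eq. (9)
```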

3.2 Optimization with bundle adjustment

To further improve the calibration accuracy, a bundle adjustment algorithm for the different views is used to refine all of the parameters, e.g., the camera poses, projector-camera rig parameters, and 3D coordinates of the control points, by minimizing function e:

$$e = \sum_{i=1}^{n}\sum_{j=1}^{l}\left[ \left\| \mathbf{m}_{Pij} - \tilde{\mathbf{p}}_{Pij}(R_{Pi}, \mathbf{t}_{Psi}, K_P, x_j, y_j, z_j) \right\|^2 + \left\| \mathbf{m}_{Cij} - \tilde{\mathbf{p}}_{Cij}(R_{Pi}, \mathbf{t}_{Psi}, R_P^C, \mathbf{t}_P^C, x_j, y_j, z_j) \right\|^2 \right], \tag{25}$$
where $\mathbf{m}_{Pij}$ and $\mathbf{m}_{Cij}$ are the image coordinates of control point $P_j = [x_j\ y_j\ z_j]^T$ on projector-camera image plane pair $i$, and $\tilde{\mathbf{p}}_{Pij}$ and $\tilde{\mathbf{p}}_{Cij}$ are the projections of point $P_j$ on projector-camera image plane pair $i$, respectively. The rotation matrices $R_{Pi}$ and $R_P^C$ are parameterized as three-parameter vectors via Rodrigues' formula. Minimizing the function $e$ is a nonlinear minimization problem, which is solved using the Levenberg-Marquardt algorithm.
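A sketch of the residual function handed to a Levenberg-Marquardt solver. The parameter packing, the use of scipy, and holding the intrinsics fixed are our own simplifications; following the origin convention of section 2.1, the z components of the per-view projector translations are pinned to zero:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(x, obs_P, obs_C, n_views, n_pts, KP, KC):
    """Stacked re-projection residuals of Eq. (25).

    Assumed layout of x: [rotvec(3) + t_Ps(2)] per view, then rotvec of
    R_P^C (3) and t_P^C (3), then the control points (3 * n_pts).
    obs_P, obs_C: (n_views, n_pts, 2) observed image coordinates."""
    rig = 5 * n_views
    RPC = Rotation.from_rotvec(x[rig:rig + 3]).as_matrix()
    tPC = x[rig + 3:rig + 6]
    pts = x[rig + 6:].reshape(n_pts, 3)
    res = []
    for i in range(n_views):
        RPi = Rotation.from_rotvec(x[5 * i:5 * i + 3]).as_matrix()
        tP = np.append(x[5 * i + 3:5 * i + 5], 0.0)   # t_Pz pinned to zero
        XP = pts @ RPi.T + tP                         # points in projector frame
        res.append((XP[:, :2] * KP[0, 0] + KP[:2, 2] - obs_P[i]).ravel())
        XC = XP @ RPC.T + tPC                         # points in camera frame
        res.append((XC[:, :2] * KC[0, 0] + KC[:2, 2] - obs_C[i]).ravel())
    return np.concatenate(res)

# Refinement from the factorization initial guess x0:
# sol = least_squares(residuals, x0, method="lm",
#                     args=(obs_P, obs_C, n, l, KP, KC))
```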

The optimization with bundle adjustment can also be applied to calibrate the projector-camera distortions. However, we found that their influence was negligible for our projector-camera setup. Thus, the distortion was not considered in this work.

Note that the obtained 3D coordinates of the points on the reference target are determined only up to a scale factor if the camera's effective magnification $m$ is assumed to be one.

3.3 Summary

The complete calibration procedure with the proposed method can be summarized in the following steps:

Step 1, Optionally, determine the camera's effective magnification using our planar-object-based calibration technique [39], as the fabrication of a high-quality 2D calibration target is much cheaper and easier than that of a 3D calibration target.

Step 2, Fix a 3D calibration target within the working volume and take an image using the camera. Then, project two sets of gray code patterns, one horizontal and the other vertical, onto the 3D calibration target and capture the images of these fringe patterns.

Step 3, Randomly change the pose of the 3D calibration target within the working volume, and repeat step 2 to acquire at least three groups of images.

Step 4, For each group of images, extract the camera image coordinates of the control points in the 3D calibration target image and decode the gray code patterns into projector row and column correspondences. Then, compute the projector sub-pixel coordinates of the control points according to the local homographies obtained in small projector image patches (e.g., a 3 × 3 pixel square).

Step 5, Estimate the initial parameters of the control points’ coordinates, projector-camera rig, and projector intrinsic matrix using the factorization method.

Step 6, Bundle-adjust all of the parameters, intrinsic and extrinsic, together to minimize the total re-projection error.

Step 7, Optionally, determine the scale factor using the methods proposed in [42] or [43], if the camera’s effective magnification is unknown.

Note that the use of gray code patterns to estimate sub-pixel projector coordinates for the centers of circles may limit the applicability of the technique if the projector is defocused (i.e., the defocused binary pattern projection technique is implemented). In this case, phase shifting patterns could also be used for the calibration.
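For reference, a minimal sketch of binary-reflected gray code generation and decoding as used in steps 2 and 4; the helpers, and the assumption that the captured images have already been thresholded into bit maps, are our own:

```python
import numpy as np

def gray_code_columns(width, n_bits):
    """One row of each vertical gray code pattern for a projector of the
    given width (tile vertically to obtain the full patterns of step 2)."""
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)            # binary-reflected gray code
    return [((gray >> b) & 1).astype(np.uint8) * 255
            for b in reversed(range(n_bits))]

def decode_gray(bits):
    """Decode thresholded captured bit images back to projector column
    indices (step 4). bits: (n_bits, H, W) array of 0/1, MSB first."""
    gray = np.zeros(bits.shape[1:], dtype=np.int64)
    for b in bits:
        gray = (gray << 1) | b
    binary, shift = gray.copy(), gray >> 1
    while shift.any():                   # gray -> binary conversion
        binary ^= shift
        shift >>= 1
    return binary
```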

4. Experiment and discussion

This section reports some experiments and analyses that were conducted to determine the validity and performance of the proposed calibration method.

4.1 Test setup

The test system included a digital CCD camera (IGV-B2520M, IMPERX) with a pixel resolution of 2456 × 2058 and a projector (VPL-EX250, SONY) with a pixel resolution of 1024 × 768. The telecentric lens used for the camera was a bi-telecentric lens (TCSM036, OPTO) with a designed magnification of 0.2430. It had a designed working distance of 102.5 mm and a field depth of 6 mm. The original lens of the projector was removed, and a bi-telecentric lens (TCSM036, OPTO) was installed. The measurement volume of this designed FPP system was approximately 34.6 mm × 29.0 mm × 6 mm. Because the $Z_C$ axis was approximately parallel with the $Y_PZ_P$ plane, the rig model shown in Fig. 2(b) was adopted in the following tests.

In the tests, a 3D object target with 88 circular dots serving as calibration points was adopted, as shown in Fig. 4. The control points were designed to be the centers of the circular dots. The control points on the camera images could be identified in a fully automatic way and extracted with sub-pixel precision using the method proposed in [44]. Because the dot areas were white, the projected patterns near the centers could be directly and robustly decoded.

Fig. 4 3D object target for calibration. (a) Designed calibration object with circular dots. (b) Photograph of 3D calibration target.

4.2 Calibration results

In the experiment, 17 images observed from different views were captured for the projector-camera calibration. In our first calibration test, the magnification of the telecentric camera was calibrated using our single bi-telecentric camera calibration approach [39]. We also tested the calibration process where the scale factor was determined with a step master using a method similar to that proposed in [42]. We found that the two calibration processes achieved almost the same results. Therefore, we only provide the results of our first calibration test for brevity.

Figure 5 shows the target at one of the calibrated positions. The completely illuminated camera image is shown in Fig. 5(a). The projector “image” achieved by the sub-pixel mapping approach described in section 2.2 is shown in Figs. 5(b) and 5(c), in which projector pixels with the same color correspond to the same camera column (b) and same camera row (c), respectively.

Fig. 5 Examples of calibration images. (a) Completely illuminated camera image. (b) and (c) Projector images achieved by sub-pixel mapping, where pixels with the same color correspond respectively to the same camera column (b) and same camera row (c).

The estimated 3D control points and calibration results for the intrinsic parameters are presented in Fig. 6 and Table 1, respectively. We calculated the points’ re-projection errors based on the optimal extrinsic parameters of the FPP system. The re-projection errors after bundle adjustment are presented in Fig. 7. As shown in Fig. 7, the RMS values of the re-projection errors are (0.21, 0.16) pixels and (0.17, 0.10) pixels for the camera and projector, respectively. Considering the image resolution, we can say that the proposed method can provide sufficiently accurate calibration results.

Fig. 6 Estimated 3D control points.

Table 1. Estimated intrinsic parameters

Fig. 7 Re-projection error after bundle adjustment. (a) Re-projection error of camera images. (b) Re-projection error of projector images.

In addition, in order to evaluate the quality of the estimated 3D control points, we reconstructed the 3D coordinates of the control points and then compared them with the estimated 3D control points after applying the transformation described in Eq. (26).

$$P'_{con} = R_t P_{con} + \mathbf{t}_t, \tag{26}$$
where $P_{con}$ and $P'_{con}$ are the coordinates of a reconstructed control point and its associated transformation result, respectively. $(R_t, \mathbf{t}_t)$ is the transformation between the two Cartesian coordinate systems, which satisfies the following equation:
$$(R_t, \mathbf{t}_t) = \arg\min \sum_{j=1}^{l} \left\| R_t P_{conj} + \mathbf{t}_t - P_{estj} \right\|^2, \tag{27}$$
where $P_{estj}$ and $P_{conj}$ are the coordinates of control point $j$ among the estimated 3D control points obtained during calibration and its associated reconstructed result, respectively. Eq. (27) can be solved with the closed-form solution of the absolute orientation problem [45].
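A compact sketch of this closed-form alignment; it uses the SVD-based (Kabsch) solution, which gives the same least-squares optimum as Horn's quaternion method [45]:

```python
import numpy as np

def absolute_orientation(P_src, P_dst):
    """Rigid transform (R_t, t_t) minimizing sum ||R_t p_src + t_t - p_dst||^2
    for paired point sets of shape (N, 3), per Eq. (27)."""
    mu_s, mu_d = P_src.mean(axis=0), P_dst.mean(axis=0)
    H = (P_src - mu_s).T @ (P_dst - mu_d)       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                          # proper rotation
    return R, mu_d - R @ mu_s
```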

Figure 8 shows the reconstructed errors of the control points for all of the calibration images. From this figure, the RMS values of the errors are (3.0, 4.5, 7.8) μm for the calibrated control points.

Fig. 8 Reconstructed errors of calibrated control points.

4.3 3D reconstruction with calibrated setup

First, a step master, which is widely used in surface metrology evaluations, was employed to evaluate the performance of the designed system. The step master used in the experiment was the Mitutoyo 516-499 Cera Step Master 300C, which has four designed steps with nominal values of 20, 50, 100, and 300 μm. The uncertainty of these nominal steps is 0.20 μm, while the variation of each step is within 0.05 μm.

The measured results for the step master are shown in Fig. 9 as a depth map. In addition, the measured data for the steps were used to fit planes with a robust estimation, and the results are listed in Table 2, which indicates that the RMS error of the measured nominal steps is 2.6 μm. Considering the uncertainty of the plane fitting, a measurement accuracy of 10 μm could be achieved within a measurement volume of 34.6 mm × 29.0 mm × 6.0 mm.
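The text does not specify the robust estimator used for the plane fits, so the following stand-in simply iterates a least-squares fit while discarding large residuals; it is illustrative only. The step heights then follow from the separation of the fitted, nominally parallel planes.

```python
import numpy as np

def fit_plane(pts):
    """Least-squares fit of z = a*x + b*y + c to points of shape (N, 3)."""
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coef, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coef

def robust_plane(pts, n_iter=3, k=2.0):
    """Refit after discarding points farther than k sigma from the plane."""
    inliers = pts
    for _ in range(n_iter):
        a, b, c = fit_plane(inliers)
        r = inliers[:, 2] - (a * inliers[:, 0] + b * inliers[:, 1] + c)
        inliers = inliers[np.abs(r) < k * r.std()]
    return fit_plane(inliers)
```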

Fig. 9 Measured depth map of step master (mm).

Table 2. Measured results of step master (μm)

To further validate the performance of the proposed method, we reconstructed the 3D geometry of the calibration target from the captured images. We also applied the calibrated FPP system to measure single objects. The results are shown in Figs. 10–12. According to the results, we can observe that the complete models of the objects were reconstructed, and the surface topography of raised characters is clear. Thus, these results further validate the performance of the calibration method.

Fig. 10 Reconstruction of calibration target. (a) Reconstructed calibration target. (b) Reconstructed calibration target rendered with texture. (c) Region on surface of reconstructed calibration target with color depth.

Fig. 11 Reconstruction of coin. (a) Photo of reconstructed coin. (b) Whole view of reconstructed coin. (c) Region on surface of reconstructed coin rendered with color depth.

Fig. 12 Reconstruction of BGA solder balls. (a) Photo of reconstructed BGA solder balls. (b) and (c) Views of reconstructed BGA solder balls.

5. Conclusion

This paper presented a new calibration method for a telecentric FPP system comprised of a telecentric camera and telecentric projector. The projector-camera rig parameters and the 3D coordinates of the control points on the 3D target were all determined using the proposed method. Thus, there is no need for an accurate 3D reference target, whose fabrication is costly and extremely difficult, particularly for the tiny objects used to calibrate a telecentric FPP system. The experimental results demonstrated the success of our calibration framework by achieving high measurement accuracy. It is worth noting that this method could also be applied to calibrate the distortions of the projector-camera system.

Funding

National Natural Science Foundation of China (NSFC) (No. 51509251).

References and links

1. S. Van der Jeught and J. J. J. Dirckx, “Real-time structured light profilometry: a review,” Opt. Lasers Eng. 87, 18–31 (2016).

2. Y. Hu, Q. Chen, T. Tao, H. Li, and C. Zuo, “Absolute three-dimensional micro surface profile measurement based on a Greenough-type stereomicroscope,” Meas. Sci. Technol. 28, 45004 (2017).

3. J. G. Rico Espino, J. Gonzalez-Barbosa, R. A. Gomez Loenzo, D. M. Cordova Esparza, and R. Gonzalez-Barbosa, “Vision system for 3D reconstruction with telecentric lens,” in Mexican Conference on Pattern Recognition (Springer-Verlag, 2012), pp. 127–136.

4. A. Mikš and J. Novák, “Design of a double-sided telecentric zoom lens,” Appl. Opt. 51(24), 5928–5935 (2012). [PubMed]  

5. J. Zhang, X. Chen, J. Xi, and Z. Wu, “Aberration correction of double-sided telecentric zoom lenses using lens modules,” Appl. Opt. 53(27), 6123–6132 (2014). [PubMed]  

6. J. S. Kim and T. Kanade, “Multiaperture telecentric lens for 3D reconstruction,” Opt. Lett. 36(7), 1050–1052 (2011). [PubMed]  

7. B. Li and S. Zhang, “Microscopic structured light 3D profilometry: Binary defocusing technique vs. sinusoidal fringe projection,” Opt. Lasers Eng. 96, 117–123 (2017).

8. Y. Yin, X. Peng, A. Li, X. Liu, and B. Z. Gao, “Calibration of fringe projection profilometry with bundle adjustment strategy,” Opt. Lett. 37(4), 542–544 (2012). [PubMed]  

9. Z. Zhang, S. Huang, S. Meng, F. Gao, and X. Jiang, “A simple, flexible and automatic 3D calibration method for a phase calculation-based fringe projection imaging system,” Opt. Express 21(10), 12218–12227 (2013). [PubMed]  

10. H. Luo, J. Xu, N. Hoa Binh, S. Liu, C. Zhang, and K. Chen, “A simple calibration procedure for structured light system,” Opt. Lasers Eng. 57, 6–12 (2014).

11. J. Lu, R. Mo, H. Sun, and Z. Chang, “Flexible calibration of phase-to-height conversion in fringe projection profilometry,” Appl. Opt. 55(23), 6381–6388 (2016). [PubMed]  

12. F. Zhu, H. Shi, P. Bai, D. Lei, and X. He, “Nonlinear calibration for generalized fringe projection profilometry under large measuring depth range,” Appl. Opt. 52(32), 7718–7723 (2013). [PubMed]  

13. L. Merner, Y. Wang, and S. Zhang, “Accurate calibration for 3D shape measurement system using a binary defocusing technique,” Opt. Lasers Eng. 51, 514–519 (2013).

14. F. Zhu, W. Liu, H. Shi, and X. He, “Accurate 3D measurement system and calibration for speckle projection method,” Opt. Lasers Eng. 48, 1132–1139 (2010).

15. Z. Cai, X. Liu, A. Li, Q. Tang, X. Peng, and B. Z. Gao, “Phase-3D mapping method developed from back-projection stereovision model for fringe projection profilometry,” Opt. Express 25(2), 1262–1277 (2017). [PubMed]  

16. P. Lu, C. Sun, B. Liu, and P. Wang, “Accurate and robust calibration method based on pattern geometric constraints for fringe projection profilometry,” Appl. Opt. 56(4), 784–794 (2017). [PubMed]  

17. S. Zhang and P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. 45, 83601 (2006).

18. Y. An, T. Bell, B. Li, J. Xu, and S. Zhang, “Method for large-range structured light system calibration,” Appl. Opt. 55(33), 9563–9572 (2016). [PubMed]  

19. B. Li and S. Zhang, “Structured light system calibration method with optimal fringe angle,” Appl. Opt. 53(33), 7942–7950 (2014). [PubMed]  

20. S. Yang, M. Liu, J. Song, S. Yin, Y. Guo, Y. Ren, and J. Zhu, “Flexible digital projector calibration method based on per-pixel distortion measurement and correction,” Opt. Lasers Eng. 92, 29–38 (2017).

21. Z. Huang, J. Xi, Y. Yu, and Q. Guo, “Accurate projector calibration based on a new point-to-point mapping relationship between the camera and projector images,” Appl. Opt. 54, 347 (2015).

22. W. Zhang, W. Li, L. Yu, H. Luo, H. Zhao, and H. Xia, “Sub-pixel projector calibration method for fringe projection profilometry,” Opt. Express 25(16), 19158–19169 (2017). [PubMed]  

23. D. Li and J. Tian, “An accurate calibration method for a camera with telecentric lenses,” Opt. Lasers Eng. 51, 538–541 (2013).

24. D. Li, C. Liu, and J. Tian, “Telecentric 3D profilometry based on phase-shifting fringe projection,” Opt. Express 22(26), 31826–31835 (2014). [PubMed]  

25. J. Peng, M. Wang, D. Deng, X. Liu, Y. Yin, and X. Peng, “Distortion correction for microscopic fringe projection system with Scheimpflug telecentric lens,” Appl. Opt. 54(34), 10055–10062 (2015). [PubMed]  

26. L. Rao, F. Da, W. Kong, and H. Huang, “Flexible calibration method for telecentric fringe projection profilometry systems,” Opt. Express 24(2), 1222–1237 (2016). [PubMed]  

27. H. Tanaka, Y. Sumi, and Y. Matsumoto, “A solution to pose ambiguity of visual markers using Moiré patterns,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IEEE, 2014), pp. 3129–3134.

28. T. Collins and A. Bartoli, “Planar Structure-from-Motion with Affine Camera Models: Closed-Form Solutions, Ambiguities and Degeneracy Analysis,” IEEE Trans. Pattern Anal. Mach. Intell. 39(6), 1237–1255 (2017). [PubMed]  

29. Y. Yin, M. Wang, B. Z. Gao, X. Liu, and X. Peng, “Fringe projection 3D microscopy with the general imaging model,” Opt. Express 23(5), 6846–6857 (2015). [PubMed]  

30. L. Huiyang, C. Zhong, and Z. Xianmin, “Calibration of camera with small FOV and DOF telecentric lens,” in IEEE International Conference on Robotics and Biomimetics (IEEE, 2013), pp. 498–503.

31. Z. Chen, H. Liao, and X. Zhang, “Telecentric stereo micro-vision system: Calibration method and experiments,” Opt. Lasers Eng. 57, 82–92 (2014).

32. B. Li and S. Zhang, “Flexible calibration method for microscopic structured light system using telecentric lens,” Opt. Express 23(20), 25795–25803 (2015). [PubMed]  

33. C. Steger, “A Comprehensive and Versatile Camera Model for Cameras with Tilt Lenses,” Int. J. Comput. Vis. 123(2), 1–39 (2017).

34. A. Habed, A. Amintabar, and B. Boufama, “Affine camera calibration from homographies of parallel planes,” in IEEE International Conference on Image Processing (IEEE 2010), pp. 4249–4252.

35. K. Kanatani, Y. Sugaya, and Y. Kanazawa, “Self-calibration of Affine Cameras,” in Guide to 3D Vision Computation (Springer International Publishing, 2016), pp. 163–182.

36. L. Quan, “Self-calibration of an affine camera from multiple views,” Int. J. Comput. Vis. 19, 93–105 (1996).

37. Q. Mei, J. Gao, H. Lin, Y. Chen, H. Yunbo, W. Wang, G. Zhang, and X. Chen, “Structure light telecentric stereoscopic vision 3D measurement system based on Scheimpflug condition,” Opt. Lasers Eng. 86, 83–91 (2016).

38. D. Moreno and G. Taubin, “Simple, Accurate, and Robust Projector-Camera Calibration,” in International Conference on 3D Imaging (IEEE, 2012), pp. 464–471.

39. L. Yao and H. Liu, “A flexible calibration approach for cameras with double-sided telecentric lenses,” Int. J. Adv. Robot. Syst. 13, 82 (2016).

40. H. Lin, H. Liu, and L. Yao, “3D-shape reconstruction based on a sub-pixel-level mapping relationship between the camera and projector,” Proc. SPIE 10255, 1025504 (2017).

41. C. Tomasi and T. Kanade, “Shape and motion from image streams under orthography: a factorization method,” Int. J. Comput. Vis. 9, 137–154 (1992).

42. X. Liu, Z. Cai, Y. Yin, H. Jiang, D. He, W. He, Z. Zhang, and X. Peng, “Calibration of fringe projection profilometry using an inaccurate 2D reference target,” Opt. Lasers Eng. 89, 131–137 (2017).

43. R. Chen, J. Xu, S. Zhang, H. Chen, Y. Guan, and K. Chen, “A self-recalibration method based on scale-invariant registration for structured light measurement systems,” Opt. Lasers Eng. 88, 75–81 (2017).

44. Y. Oyamada, P. Fallavollita, and N. Navab, “Single Camera Calibration using partially visible calibration objects based on Random Dots Marker Tracking Algorithm,” in IEEE and ACM International Symposium on Mixed and Augmented Reality (IEEE, 2012).

45. B. K. P. Horn, “Closed-form solution of absolute orientation using unit quaternions,” J. Opt. Soc. Am. A 4, 629–642 (1987).

