## Abstract

Three-dimensional (3D) imaging and metrology of microstructures is a critical task for the design, fabrication, and inspection of microelements. A newly developed fringe projection 3D microscope is presented in this paper. The system is configured with a camera-projector layout and long-working-distance lenses. The Scheimpflug principle is employed to make full use of the limited depth of field. For this specific system, a general imaging model is introduced to achieve full 3D reconstruction, and a dedicated calibration procedure is developed to realize quantitative 3D imaging. Experiments with a prototype demonstrate the feasibility of the proposed configuration, model, and calibration approach.

© 2015 Optical Society of America

## 1. Introduction

Fringe projection profilometry (FPP), which can obtain 3D surface topography with high accuracy and high point density by means of phase information [1], has been widely employed in industrial inspection, reverse engineering, plastic surgery, assessment of cultural heritage, entertainment, etc. [2,3]. As a measurement method based on geometrical optics, FPP is used for measurements on object scales ranging from meters to millimeters. FPP at the micro scale, also referred to as fringe projection 3D microscopy (FP-3DM), plays an increasingly important role in micro manufacturing and roughness measurement. Although the literature includes a large number of reports on the use of FPP for testing objects in the midrange, few report on FP-3DM.

General 3D microscopy may be realized based on various mechanisms [4–8], while an FP-3DM prototype is normally developed according to one of two frameworks. One is the modification of a stereo microscope [9–13], and the other employs long working distance (LWD) lenses [14–17]. Because different types of LWD lens may be used for projection and imaging, the latter framework is more flexible in magnification matching, working distance adjustment, and structure design. Various techniques are used to generate the fringe pattern, including gratings [9,12], liquid crystal displays (LCD) [14], digital micromirror devices (DMD) [10,16], liquid crystal on silicon (LCoS) [11,13], organic light-emitting diodes (OLED) [18], and interferometry [17]. Of these, the DMD-based digital light processing (DLP) projector offers the best controllability, speed, linearity, and efficiency, and is increasingly used for fringe projection [19].

Although most FP-3DM systems have a similar working principle, the system formulation and the corresponding calibration approach may differ. For telecentric and nearly telecentric setups, a synthetic wavelength can be defined by the fringe period and the angle between illumination and detection, which determines the linear relationship between height and phase distribution [9,11,12]. This model is concise but limited to the telecentric optical path. For typical non-telecentric configurations, the classical triangulation model inherited from medium-scale FPP [20] is effective [10,14,15]. If the working distance is sufficiently large, height has a linear relationship with the phase difference between the object's surface and the reference plane. It should be noted that most system formulations focus on phase-height mapping and out-of-plane calibration rather than in-plane calibration. Projection-based models are another choice for describing the system: a non-telecentric system can be described with perspective projection [13,21] and a telecentric system with orthographic projection [22]. Moreover, lens distortion can be formulated with additional polynomials.

This paper presents a newly developed FP-3DM. A projector illuminates the object's surface with fringe patterns through a vertically fixed LWD lens, while a camera captures the deformed fringe patterns through a tilted, fixed LWD lens. To make full use of the limited depth of field (DOF) in microscopy, the angle between the camera's sensor and the imaging LWD lens is designed according to the Scheimpflug principle, which leads to a larger common focus area. The general imaging model is employed to describe such a system: projection and imaging (including lens distortion) are regarded as a black box connecting pixels with their corresponding rays, and thus no longer depend on the specific optical layout. Three-dimensional reconstruction relies on the intersection between outgoing rays from the projector and incoming rays from the camera. A dedicated calibration approach is proposed to achieve quantitative 3D imaging. To the best of our knowledge, this is the first FP-3DM established using the Scheimpflug principle and the general imaging model. The validity of the proposed FP-3DM system is demonstrated through a series of experiments.

## 2. System configuration and principle

#### 2.1 System configuration

A traditional binocular setup of the fringe projection 3D microscope (FP-3DM) is shown schematically in Fig. 1(a). In general, the FP-3DM consists of two modules, the projection branch and the imaging branch. In the projection branch, the fringe pattern is generated from the digital projection chip (e.g., the digital micromirror device, or DMD) of the projector, and then projected onto the object’s surface through an LWD lens. In the imaging branch, the deformed fringe pattern is reflected from the surface and then imaged on the sensor chip (e.g., the complementary metal-oxide-semiconductor (CMOS)) of the camera through another LWD lens. Such a system can work well in medium scale, but in micro scale, it will be constrained by the limited common focus area. Even for low-power micro imaging, the field of view (FOV) can reach several millimeters, but the DOF is usually a fraction of a millimeter, which causes a flat focus area. For ordinary projection or imaging systems, the flat focus area is parallel to the projection or sensor chip. The angle between the projection and imaging branches dramatically reduces the common focus area, as shown in Fig. 1(b).

To address these limitations, we employed the Scheimpflug principle to modify the system configuration, as shown in Fig. 2. First, the projection branch is designed perpendicular to the object plane so that the focus area of the projection and the object overlap. Then the orientation and position of the sensor chip are carefully fitted so that the sensor plane, the principal plane of the LWD imaging lens, and the object plane meet in a Scheimpflug intersection. In this way, the imaging focus area overlaps the object, and thus the focus areas of projection and imaging overlap, making full use of the limited DOF.

#### 2.2 Working principle

The working process of the proposed system is as follows: standard fringe patterns are projected onto the object's surface, and the fringe patterns deformed by the surface topography are captured by the camera. From these, the surface topography can be reconstructed with an appropriate system model and phase retrieval algorithm. The FPP system is described by numerous models, which can generally be divided into two categories: the phase-height model [23–29] and the stereoscopy model [30–33]. Whereas the former requires in-plane calibration, in addition to the out-of-plane calibration that determines the phase-height mapping, to realize full 3D reconstruction, the latter formulates the 3D coordinates *x*, *y*, *z* with one model simultaneously. Thus, the stereoscopy model is the better choice for full 3D reconstruction. However, due to the specific configuration of the proposed system, well-developed active stereoscopy based on the pinhole camera model cannot be used. Here the general imaging model is introduced to describe the proposed system.

The basis of the general imaging model is that any imaging system collects incoming rays from the scene onto a photosensitive element (e.g., a CMOS sensor) [34,35]. Each pixel of the photosensitive element corresponds to a specific pencil of rays. Within the imaging system's focus area, a pencil of rays can be considered well focused, and can thus be represented by its single chief ray, especially when studying the geometric properties of the imaging system. In this sense, the imaging system can be regarded as a black box that gives the correspondence between a pixel **m** and the incoming (chief) ray **l**, as shown in Fig. 3. Using a known point **A** on the ray and the direction vector **D**, the imaging system can be expressed as follows

$$\mathbf{m} \leftrightarrow \mathbf{l}(\mathbf{A},\mathbf{D}), \qquad (1)$$

where any point **X** on the ray **l** is determined by $\mathbf{X}=\mathbf{A}-s\mathbf{D}$, and *s* is the distance factor. The symbol ↔ denotes the implicit correspondence between **m** and **l**. When lens distortion is considered, the exact correspondence changes slightly, but the symbolic expression in Eq. (1) remains unchanged. For the digital projection system, the correspondence between a pixel on the DMD and the outgoing ray can be determined using the absolute phase *φ* [33], which implies that projection is simply the inverse process of imaging. Consequently, the projection system can also be expressed with Eq. (1).
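As a concrete illustration of this pixel-ray correspondence, the calibrated black box can be stored as a lookup from pixel coordinates to ray parameters. The NumPy sketch below follows the sign convention $\mathbf{X}=\mathbf{A}-s\mathbf{D}$ used above; the pixel index and ray values are hypothetical placeholders, not calibrated data:

```python
import numpy as np

def ray_point(A, D, s):
    # Point X on the ray l(A, D) at distance factor s, following the
    # sign convention of the text: X = A - s*D.
    A, D = np.asarray(A, float), np.asarray(D, float)
    D = D / np.linalg.norm(D)          # unit direction keeps s metric
    return A - s * D

# Hypothetical calibrated correspondence: one pixel m -> one chief ray.
pixel_to_ray = {(640, 512): (np.array([0.0, 0.0, 10.0]),
                             np.array([0.0, 0.0, -1.0]))}
A, D = pixel_to_ray[(640, 512)]
X = ray_point(A, D, 4.0)               # 4 units along the chief ray
```

In a full system the lookup would hold one `(A, D)` pair per camera and projector pixel, filled in by the calibration of Sec. 3.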

Based on the above considerations, 3D imaging with the FP-3DM relies on the intersection between outgoing rays from the DMD and corresponding rays coming into the CMOS, as shown in Fig. 4. When the FP-3DM is working, the absolute phase map $\phi_p$ on the DMD is known a priori, and the absolute phase map $\phi_i$ on the CMOS can be calculated with a phase retrieval algorithm. For an arbitrary point $\mathbf{m}_i$ on the CMOS, if there exists a point $\mathbf{m}_p$ on the DMD that satisfies the constraint ${\phi}_{i}({m}_{i})={\phi}_{p}({m}_{p})$, then $\mathbf{m}_i$ and $\mathbf{m}_p$ must correspond to a particular 3D point **X** and are usually called a homologous point pair. The reconstruction of the 3D range image is simply the calculation of the intersection of the rays corresponding to each homologous point pair. Hence the FP-3DM can be formulated with the following expression

$$\mathbf{X} = \mathbf{l}_{i}(\mathbf{A}_{i},\mathbf{D}_{i}) \cap \mathbf{l}_{p}(\mathbf{A}_{p},\mathbf{D}_{p}). \qquad (2)$$

Here the subscripts *i* and *p* denote the imaging and projection branches of the FP-3DM, respectively. The proposed general imaging model is distinctly different from existing models based on lookup tables (LUT) [24,27], although both regard the system as a black box and have a pixel-by-pixel expression. In the LUT model, phases are used as variables that directly determine heights, while in the proposed model phases serve merely as markers that identify the homologous point pairs; this implies that the proposed model can also be applied to an FPP system configured with two cameras and one projector. Regarding lens distortion, the LUT model must derive a nonlinear expression between phase and height, while the proposed model keeps an identical expression. The LUT model requires separate out-of-plane and in-plane calibrations; the proposed model can be calibrated with a dedicated approach (see the following section).
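The intersection step itself can be sketched as follows. Since noisy calibrated rays are generally skew, a common least-squares choice (an assumption here, not a method stated in the text) is the midpoint of the shortest segment between the two rays:

```python
import numpy as np

def intersect_rays(Ai, Di, Ap, Dp):
    # Triangulate the 3D point of a homologous point pair from the
    # camera ray (Ai, Di) and the projector ray (Ap, Dp).  Noisy rays
    # are skew, so return the midpoint of their shortest connecting
    # segment (a least-squares intersection).
    Ai, Di, Ap, Dp = (np.asarray(v, float) for v in (Ai, Di, Ap, Dp))
    # Points on the rays: Ai - si*Di and Ap - sp*Dp (text convention).
    # Equating them gives [-Di | Dp] @ [si, sp]^T = Ap - Ai.
    M = np.stack([-Di, Dp], axis=1)                 # 3 x 2 system
    s, *_ = np.linalg.lstsq(M, Ap - Ai, rcond=None)
    Xi, Xp = Ai - s[0] * Di, Ap - s[1] * Dp         # closest points
    return 0.5 * (Xi + Xp)
```

Applying this per homologous pair yields the full 3D range image.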

## 3. System calibration

Because the accuracy of 3D reconstruction with the proposed model depends on the accuracy of the ray parameters, a dedicated calibration approach is required. For general nonmicroscopic imaging, a generic calibration method based on at least 3 different poses of a planar target has been developed [36]. Since there are numerous intermediate variables to be estimated, a sufficient orientation difference among the target poses is important for stable calibration. However, due to the microscope's limited DOF, the orientation of the planar target can be changed only slightly, which reduces the numerical stability of the parameter estimation. To decrease the number of variables, we used a simplified calibration approach.

#### 3.1 Principle of calibration

We employed a planar target with known benchmarks as the calibration target, as shown in Fig. 5(a). The target is placed in 3 different poses during calibration. We require that the 1st and the 3rd poses have the same orientation (parallel to each other) with a known translation distance between them; the 2nd pose may be chosen arbitrarily but cannot be parallel to the 1st pose, as shown in Fig. 5(b). The target coordinate system (TCS) is defined as shown in Fig. 5(c), where the *x-o-y* plane is the target plane. Consider the 3 points corresponding to an arbitrary pixel **m** that result from the intersection between the ray **l** and the target planes in the different poses. In its own TCS, each of these 3 points, indicated by **Q**, **Q′**, and **Q″**, respectively, lies on the target plane and thus has a homogeneous coordinate of the form

$$\mathbf{Q}=(x,\; y,\; 0,\; 1)^{T}. \qquad (3)$$

Taking the 1st TCS as the world frame, the 3 points are expressed as

$$\tilde{\mathbf{Q}}=\mathbf{Q},\qquad \tilde{\mathbf{Q}}'=\begin{bmatrix}\mathbf{R}' & \mathbf{t}'\\ \mathbf{0}^{T} & 1\end{bmatrix}\mathbf{Q}',\qquad \tilde{\mathbf{Q}}''=\begin{bmatrix}\mathbf{I} & \mathbf{t}''\\ \mathbf{0}^{T} & 1\end{bmatrix}\mathbf{Q}'', \qquad (4)$$

where **R′** is the rotation matrix from the 2nd to the 1st TCS, **t′** and **t″** are the translation vectors from the 2nd and 3rd TCS to the 1st TCS, and **I** is the 3 × 3 identity matrix. If the pose parameters are known, Eq. (4) gives 3 known points on the ray **l**, which is sufficient to establish the expression of this ray. The next step is therefore the estimation of the target's pose parameters.
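A minimal NumPy sketch of the pose transforms of Eq. (4); the pose parameters below (a 10° tilt for the 2nd pose, a pure 0.5 mm translation for the 3rd) are hypothetical values for illustration only:

```python
import numpy as np

def to_world(Q, R=np.eye(3), t=np.zeros(3)):
    # Map a homogeneous target point from its own TCS into the 1st TCS
    # (the world frame): Q_w = [[R, t], [0, 1]] @ Q, as in Eq. (4).
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T @ np.asarray(Q, float)

# Hypothetical pose parameters: 2nd pose tilted 10 degrees about y,
# 3rd pose a pure 0.5 mm z-shift (parallel poses -> identity rotation).
th = np.deg2rad(10)
R2 = np.array([[np.cos(th), 0, np.sin(th)],
               [0, 1, 0],
               [-np.sin(th), 0, np.cos(th)]])
Q  = to_world([1.2, 0.6, 0.0, 1.0])                           # 1st pose
Q2 = to_world([1.2, 0.6, 0.0, 1.0], R2, np.array([0, 0, 0.2]))
Q3 = to_world([1.2, 0.6, 0.0, 1.0], t=np.array([0, 0, 0.5]))
```

The three returned points are the world-frame samples of one ray **l** once the pose parameters are known.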

For the general imaging model, the only constraint comes from the collinearity of points on the same ray. The 3 points in Fig. 5(b) are obviously collinear; thus, their coordinates in Eq. (4) can be stacked into the following 4 × 3 matrix

$$\mathbf{M}=\left[\tilde{\mathbf{Q}}\;\; \tilde{\mathbf{Q}}'\;\; \tilde{\mathbf{Q}}''\right]. \qquad (5)$$

The collinearity implies that the rank of **M** is smaller than 3, which means the determinants of all 3 × 3 submatrices of **M** must vanish. Removing any one row of **M** leaves a 3 × 3 submatrix. However, the 3rd row of **M** reflects the planarity of the target and should always be retained; hence only three submatrices can be utilized. Calculating the determinant of each submatrix and rearranging the terms yields

$$\sum_{i} C_{i} T_{i}^{k}=0, \qquad (6)$$

where *k* indicates the row removed when constructing the submatrix, $C_i$ denotes the coupled pose parameters, and ${T}_{i}^{k}$ the corresponding coefficients, the details of which are given in Table 1.
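The rank constraint can be verified numerically: stacking three collinear homogeneous points as columns gives a 4 × 3 matrix of rank 2, so every 3 × 3 submatrix is singular. A small NumPy check with arbitrary synthetic points:

```python
import numpy as np

# Three collinear 3D points in homogeneous coordinates, stacked as the
# columns of the 4 x 3 matrix M of Eq. (5).  Collinearity forces
# rank(M) < 3, i.e. every 3 x 3 submatrix of M has zero determinant.
A, D = np.array([0.0, 0.0, 0.0]), np.array([1.0, 2.0, 3.0])
pts = [np.append(A + s * D, 1.0) for s in (0.0, 0.7, 1.5)]
M = np.stack(pts, axis=1)                      # shape (4, 3)

rank = np.linalg.matrix_rank(M)                # 2, i.e. < 3 as predicted
dets = [np.linalg.det(np.delete(M, k, axis=0)) for k in range(4)]
```

Every entry of `dets` vanishes (to machine precision), which is exactly the constraint exploited in Eq. (6).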

The least-squares solution for the $C_i$ can be obtained from Eq. (6) up to a scale factor *λ*. The elements of **R′** can then be expressed as combinations of the $C_i$, *λ*, and the elements of **t″**. Exploiting the orthonormality of the rotation matrix and the known translation distance, **t″** and *λ* can be determined, after which **R′** and **t′** are easily calculated [37]. Once the pose parameters are estimated, the collinear points taken from the different target poses can be transformed into the WCS with Eq. (4). With a scattered-data interpolation algorithm, the collinear points for each pixel can be calculated from the collinear points of the benchmarks, and ray equations of the form of Eq. (1) can then be established for each pixel in the WCS. After the above calibration procedure, the 3D range image of the FP-3DM can be reconstructed directly according to Eq. (2).

#### 3.2 Details of calibration

The proposed calibration approach includes three major stages: data acquisition, pose estimation, and ray expression, as shown in Fig. 6. The middle and last stages are consistent with the principle discussed in Sec. 3.1, so here we explain the first stage. In the first stage, the calibration target is placed in 3 different poses; a sequence of specific patterns is projected onto the target in each pose, and the camera captures the corresponding images. Because the first pattern is a homogeneous bright field, the camera captures an image of the target under uniform illumination, which is then used to extract and locate the benchmarks. Next, the phase-shifted fringes plus the complementary Gray code [38] in the two orthogonal directions (vertical and horizontal) are projected sequentially, and the corresponding target images are captured synchronously. From the captured images, the absolute phase in the orthogonal directions can be calculated with a phase retrieval algorithm. With the absolute phase, the projector can be treated as an inverse camera [33], so we can calibrate the FP-3DM in a unified framework without treating the projector and camera separately.
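The phase-retrieval step can be sketched with the standard N-step least-squares estimator for fringes $I_n = a + b\cos(\phi + 2\pi n/N)$; the Gray code of Ref. [38] would then unwrap the result to absolute phase (that part is omitted here):

```python
import numpy as np

def wrapped_phase(images):
    # Standard N-step phase-shifting estimator:
    #   phi = atan2(-sum_n I_n sin(d_n), sum_n I_n cos(d_n)),
    # with equally spaced shifts d_n = 2*pi*n/N.
    I = np.asarray(images, float)              # shape (N, H, W)
    d = 2 * np.pi * np.arange(I.shape[0]) / I.shape[0]
    num = -(I * np.sin(d)[:, None, None]).sum(axis=0)
    den = (I * np.cos(d)[:, None, None]).sum(axis=0)
    return np.arctan2(num, den)                # wrapped into (-pi, pi]

# 16-step synthetic check with a known phase value.
phi = 1.0
imgs = [5 + 2 * np.cos(phi + 2 * np.pi * n / 16) * np.ones((1, 1))
        for n in range(16)]
```

With real data, `images` would be the 16 captured frames per fringe direction; the same routine serves both camera and (inverse-camera) projector calibration.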

In practice, obtaining the coordinates of the 3 points corresponding to the same pixel is a seemingly simple but important issue. To address this problem, each benchmark on the target is assigned a unique index (in our experiments, we use the target pattern shown in Fig. 10, where the index of each benchmark is determined by its location relative to the four rings), with which the coordinates of each benchmark in the TCS are directly known. For the 1st pose, the benchmarks on the target are naturally chosen as the points **Q**; thus the coordinate of **Q** in its TCS is determined by its index. The coordinate ($m_u$, $m_v$) of the corresponding pixel **m** in the pixel coordinate system (PCS) of the 1st pose is determined with a feature-location algorithm (for pixels on the DMD, coordinate evaluation requires the absolute phase maps). For the 2nd pose, however, the point **Q′** corresponding to the same pixel **m** is generally not located on any benchmark. To determine the exact location of **Q′**, the benchmarks surrounding it are first found by searching for the 4 benchmarks nearest to pixel **m** in the PCS of the 2nd pose; the coordinates of these surrounding benchmarks in the TCS are then known once their indexes are recognized. At this point, the coordinates of the 4 benchmarks surrounding **Q′** are known in both the PCS and the TCS, with which the local homography matrix **H′** can be estimated. **H′** describes the transformation between the pixel plane of the 2nd pose in the neighborhood of pixel **m** and the standard target plane in the neighborhood of point **Q′**. Thus the coordinate of **Q′** in its TCS can be calculated with the following equation

$$\mathbf{Q}' \cong \mathbf{H}'\,\tilde{\mathbf{m}}, \qquad (7)$$

where $\tilde{\mathbf{m}}$ denotes the homogeneous coordinate of pixel **m**. A similar procedure exists for point **Q″**.
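A minimal sketch of this local-homography interpolation, using the standard direct linear transform (DLT) on the 4 correspondences; the benchmark coordinates below are hypothetical:

```python
import numpy as np

def homography_4pt(src, dst):
    # Direct linear transform: estimate H mapping src -> dst from 4
    # point correspondences; h is the null vector of the 8x9 system.
    rows = []
    for (u, v), (x, y) in zip(src, dst):
        rows.append([-u, -v, -1, 0, 0, 0, u * x, v * x, x])
        rows.append([0, 0, 0, -u, -v, -1, u * y, v * y, y])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    return Vt[-1].reshape(3, 3)

def apply_h(H, m):
    # Q' ~ H' m: apply the homography and dehomogenize.
    q = H @ np.array([m[0], m[1], 1.0])
    return q[:2] / q[2]

# Hypothetical data: 4 benchmark pixels (PCS) and their TCS positions.
H = homography_4pt([(0, 0), (1, 0), (1, 1), (0, 1)],
                   [(0, 0), (2, 0), (2, 2), (0, 2)])
Qp = apply_h(H, (0.5, 0.5))        # interpolated target-plane point
```

In the calibration this is repeated per pixel, using its own 4 nearest benchmarks in the 2nd (and 3rd) pose.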

## 4. Experiments

To verify the proposed imaging model and calibration approach, a prototype of the FP-3DM was established on a specially designed framework. A projection module (based on a TI 0.45″ WXGA DMD, resolution 1280 × 800) combined with an LWD lens of 0.8 × magnification serves as the projection branch, while a CMOS camera (China Daheng (Group) Co., Ltd., MER-130-30UM, resolution 1280 × 1024) with an LWD lens of 1 × magnification serves as the imaging branch. The pattern of the calibration target is a white 9 × 11 dot (benchmark) array distributed uniformly on a black background, as shown in Fig. 8(a). The distance between adjacent benchmarks is 0.6 mm (see Fig. 8 for the relative size of the target and a coin). A motorized 5-axis stage (x-y-z translation, rotation, tilt) is utilized for quantitative translation and orientation changes. A fifty-cent coin, shown in Fig. 8(b), is chosen as the test sample, and a ceramic flat, shown in Fig. 8(c), is taken as the standard plane to evaluate the accuracy of 3D imaging.

The first experiment is designed to demonstrate the common focus area of the proposed system. A white flat is first placed parallel to the *x-o-y* plane but slightly lower than the common focus area, and then translated 20 times along the *z*-axis, as shown in Fig. 9(a). The translation distance between adjacent positions is 50 µm. A binary fringe pattern with a period of 6 pixels is projected onto the flat and captured by the camera, as shown in Fig. 9(b). Due to the limited fitting space of the designed framework, the DMD chip had to be assembled with a horizontal rotation of 45°, which leads to the sloping fringes in the captured image. Since the binary fringe passes through the projection lens and the imaging lens sequentially, the contrast of the captured image is influenced by the degree of focus of both the projection and imaging branches, and can thus be regarded as an indication of the common focus area. Within the FOV, 5 areas (A-E) at distributed locations are selected, as shown in Fig. 9(c). Their contrast at each translated position is plotted in Fig. 9(d). For a given translation index, the contrast of the 5 areas is approximately equal, which means the common focus area is approximately parallel to the *x-o-y* plane. For each selected area, the contrast curve has a single maximum and declines on both sides, which means the common focus area has a certain depth; the depth of the common focus area depends on the tolerance to contrast decrease. From this information, it can be inferred that the common focus area is a flat volume parallel to the *x-o-y* plane, which coincides with our design goal shown in Fig. 2.
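The contrast evaluation can be sketched as follows; Michelson contrast is assumed here as the metric (the text does not name one), and the capture loop is indicated only in comments with hypothetical names:

```python
import numpy as np

def fringe_contrast(patch):
    # Michelson contrast (Imax - Imin) / (Imax + Imin) of a fringe
    # patch; it peaks where both the projection and the imaging branch
    # are in focus, so it serves as the focus indicator.
    patch = np.asarray(patch, float)
    lo, hi = patch.min(), patch.max()
    return (hi - lo) / (hi + lo + 1e-12)   # guard against a dark patch

# Sketch of the scan (hypothetical capture/area names): evaluate one
# area over the 21 z-positions and take the argmax as best focus.
# curve = [fringe_contrast(camera_image(z)[yA:yA+50, xA:xA+50])
#          for z in z_positions]
# best_z = z_positions[int(np.argmax(curve))]
```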

Phase shifting and complementary Gray code [38] are employed to obtain the absolute phase map. To suppress the noise arising from the projection illumination in microscopic imaging, a large number of steps (16) and a small spatial period (16 pixels) are selected for phase shifting. Because of the rotated DMD chip, the standard fringe patterns are also rotated 45° from the horizontal and vertical directions, as shown in Fig. 10. The rotation of the chip and the fringe pattern does not give rise to errors in 3D reconstruction: because the general imaging model describes the system through the implicit correspondence between pixels and rays, any change in the structure (such as the rotation of the chip) or in the coordinate system (such as the rotation of the fringe pattern) appears only as a change of the correspondence relationship. The exact correspondence relationship is determined accurately by the system calibration, which guarantees the accuracy of the 3D reconstruction.

Figure 11 shows a typical set of target data for calibration, including the benchmark locations and the horizontal and vertical absolute phase maps of the three poses. The red crosses indicate the center locations of the benchmarks, and the intensity change from dark to bright in the absolute phase maps reflects the increasing trend of the absolute phase values. The translation distance between the 1st and 3rd poses is 0.5 mm, and the pose parameters estimated with the above target data are

To evaluate the accuracy of 3D reconstruction, the ceramic flat is first placed parallel to the *x-o-y* plane and then translated 4 times along the *z*-axis. The translation distance between adjacent positions is 100 µm. The 3D range image of the ceramic flat is reconstructed with the calibrated prototype at each position. 9 points within the FOV of the system are randomly selected, and for each selected point, a plane is fitted to its neighborhood in every range image; the translation distance is then calculated from the fitted plane parameters. The measurement results of the 4 translation distances at the 9 selected points are shown in Table 2. The standard deviation (Std.) of the distance measurement is less than 4 µm. The mean value (Mean) of the distance measurement shows relatively large variation, perhaps partly because of the translation error of the motorized stage.
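The plane-fitting evaluation can be sketched as below: fit a total-least-squares plane to each neighborhood and take the offset between the (nearly) parallel planes as the measured translation. This is one plausible implementation, not necessarily the authors' exact procedure:

```python
import numpy as np

def fit_plane(points):
    # Total-least-squares plane n.X + d = 0: n is the singular vector
    # of the centered points with the smallest singular value.
    P = np.asarray(points, float)
    c = P.mean(axis=0)
    n = np.linalg.svd(P - c)[2][-1]
    return n, -n @ c

def plane_distance(n1, d1, n2, d2):
    # Offset between two (nearly) parallel fitted planes, as used to
    # check the stage translations against the range images.
    if n1 @ n2 < 0:                # make the normals point the same way
        n2, d2 = -n2, -d2
    return abs(d2 - d1)
```

For each selected point, the neighborhood of the range image at every stage position is passed to `fit_plane`, and adjacent positions are compared with `plane_distance`.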

The 3D range images of different regions on the surface of the coin (marked with red ellipses in Fig. 8(b)), acquired with the calibrated prototype, are shown in Fig. 12; they clearly demonstrate the surface topography of the raised characters.

## 5. Conclusion

In conclusion, a newly developed FP-3DM is presented in this paper. The Scheimpflug principle is applied in the system design to make full use of the limited DOF in microscopy. The general imaging model and a dedicated calibration approach are established to realize quantitative 3D imaging. The general imaging model is independent of the optical layout and thus has the potential to describe various FPP systems. For system calibration, only a planar target and three poses are needed, which makes the calibration easy to implement. The effectiveness of the proposed system and methods has been demonstrated by experiments on the common focus area, system calibration, accuracy evaluation, and 3D range imaging.

## Acknowledgments

The financial support from the Natural Science Foundation of China (NSFC) under grants 61201355, 61377017, and 61405122, and from the Sino-German Center for Research Promotion (SGCRP) under grant GZ 760, is gratefully acknowledged. The Scientific and Technological Project of the Shenzhen government (JCYJ20140509172609158, JCYJ20140828163633999) and the grant established by the State Key Laboratory of Precision Measuring Technology and Instruments (Tianjin University) are also acknowledged. The authors would like to thank Dr. Fan Wang and Mr. Yan Cheng of Shenzhen Anhua Optoelectronics Technology Co., Ltd., for their help in establishing the experimental prototype.

## References and links

**1. **S. S. Gorthi and P. Rastogi, “Fringe projection techniques: Whither we are?” Opt. Lasers Eng. **48**(2), 133–140 (2010). [CrossRef]

**2. **G. Sansoni, M. Trebeschi, and F. Docchio, “State-of-the-art and applications of 3D imaging sensors in industry, cultural heritage, medicine, and criminal investigation,” Sensors (Basel) **9**(1), 568–601 (2009). [CrossRef] [PubMed]

**3. **Y. Yin, D. He, Z. Liu, X. Liu, and X. Peng, “Phase aided 3D imaging and modeling: dedicated systems and case studies,” Proc. SPIE **9132**, 91320Q (2014).

**4. **A. Anand, A. Faridian, V. Chhaniwal, G. Pedrini, W. Osten, and B. Javidi, “High-resolution quantitative phase microscopic imaging in deep UV with phase retrieval,” Opt. Lett. **36**(22), 4362–4364 (2011). [CrossRef] [PubMed]

**5. **C. Li, Z. Liu, and H. Xie, “A measurement method for micro 3D shape based on grids-processing and stereovision technology,” Meas. Sci. Technol. **24**(4), 045401 (2013). [CrossRef]

**6. **J. Notbohm, A. Rosakis, S. Kumagai, S. Xia, and G. Ravichandran, “Three-dimensional displacement and shape measurement with a diffraction-assisted grid method,” Strain **49**(5), 399–408 (2013).

**7. **C. Zuo, Q. Chen, W. Qu, and A. Asundi, “Noninterferometric single-shot quantitative phase microscopy,” Opt. Lett. **38**(18), 3538–3541 (2013). [CrossRef] [PubMed]

**8. **D. Wu, H. Xie, C. Li, and R. Wang, “Application of the digital phase-shifting method in 3D deformation measurement at micro-scale by SEM,” Meas. Sci. Technol. **25**(12), 125002 (2014). [CrossRef]

**9. **R. Windecker, M. Fleischer, and H. J. Tiziani, “Three-dimensional topometry with stereo microscopes,” Opt. Eng. **36**(12), 3372–3377 (1997). [CrossRef]

**10. **C. Zhang, P. S. Huang, and F.-P. Chiang, “Microscopic phase-shifting profilometry based on digital micromirror device technology,” Appl. Opt. **41**(28), 5896–5904 (2002). [CrossRef] [PubMed]

**11. **K.-P. Proll, J.-M. Nivet, K. Körner, and H. J. Tiziani, “Microscopic three-dimensional topometry with ferroelectric liquid-crystal-on-silicon displays,” Appl. Opt. **42**(10), 1773–1778 (2003). [CrossRef] [PubMed]

**12. **R. Rodriguez-Vera, K. Genovese, J. A. Rayas, and F. Mendoza-Santoyo, “Vibration analysis at microscale by Talbot fringe projection method,” Strain **45**(3), 249–258 (2009). [CrossRef]

**13. **A. Li, X. Peng, Y. Yin, X. Liu, Q. Zhao, K. Körner, and W. Osten, “Fringe projection based quantitative 3D microscopy,” Optik **124**(21), 5052–5056 (2013). [CrossRef]

**14. **C. Quan, X. Y. He, C. F. Wang, C. J. Tay, and H. M. Shang, “Shape measurement of small objects using LCD fringe projection with phase shifting,” Opt. Commun. **189**(1–3), 21–29 (2001). [CrossRef]

**15. **C. Quan, C. J. Tay, X. Y. He, X. Kang, and H. M. Shang, “Microscopic surface contouring by fringe projection method,” Opt. Laser Technol. **34**(7), 547–552 (2002). [CrossRef]

**16. **J. Chen, T. Guo, L. Wang, Z. Wu, X. Fu, and X. Hu, “Microscopic fringe projection system and measuring method,” Proc. SPIE **8759**, 87594U (2013). [CrossRef]

**17. **D. S. Mehta, M. Inam, J. Prakash, and A. M. Biradar, “Liquid-crystal phase-shifting lateral shearing interferometer with improved fringe contrast for 3D surface profilometry,” Appl. Opt. **52**(25), 6119–6125 (2013). [CrossRef] [PubMed]

**18. **G. Notni, S. Riehemann, P. Kuehmstedt, L. Heidler, and N. Wolf, “OLED microdisplays: a new key element for fringe projection setups,” Proc. SPIE **5532**, 170–177 (2004). [CrossRef]

**19. **G. Frankowski and R. Hainich, “DLP-based 3D metrology by structured light or projected fringe technology for life sciences and industrial metrology,” Proc. SPIE **7210**, 72100C (2009). [CrossRef]

**20. **X. Su and W. Chen, “Fourier transform profilometry,” Opt. Lasers Eng. **35**(5), 263–284 (2001). [CrossRef]

**21. **S. Ma, R. Zhu, C. Quan, L. Chen, C. J. Tay, and B. Li, “Flexible structured-light-based three-dimensional profile reconstruction method considering lens projection-imaging distortion,” Appl. Opt. **51**(13), 2419–2428 (2012). [CrossRef] [PubMed]

**22. **D. Li, C. Liu, and J. Tian, “Telecentric 3D profilometry based on phase-shifting fringe projection,” Opt. Express **22**(26), 31826–31835 (2014). [CrossRef] [PubMed]

**23. **Q. Hu, P. S. Huang, Q. Fu, and F.-P. Chiang, “Error compensation for a three-dimensional shape measurement system,” Opt. Eng. **42**(2), 482–493 (2003). [CrossRef]

**24. **H. Liu, W.-H. Su, K. Reichard, and S. Yin, “Calibration-based phase-shifting projected fringe profilometry for accurate absolute 3D surface profile measurement,” Opt. Commun. **216**(1–3), 65–80 (2003). [CrossRef]

**25. **H. Guo, H. He, Y. Yu, and M. Chen, “Least-squares calibration method for fringe projection profilometry,” Opt. Eng. **44**(3), 033603 (2005). [CrossRef]

**26. **H. Du and Z. Wang, “Three-dimensional shape measurement with an arbitrarily arranged fringe projection profilometry system,” Opt. Lett. **32**(16), 2438–2440 (2007). [CrossRef] [PubMed]

**27. **M. Fujigaki, A. Takagishi, T. Matui, and Y. Morimoto, “Development of real-time shape measurement system using whole-space tabulation method,” Proc. SPIE **7066**, 706606 (2008). [CrossRef]

**28. **L. Huang, P. S. K. Chua, and A. Asundi, “Least-squares calibration method for fringe projection profilometry considering camera lens distortion,” Appl. Opt. **49**(9), 1539–1548 (2010). [CrossRef] [PubMed]

**29. **Z. Zhang, H. Ma, T. Guo, S. Zhang, and J. Chen, “Simple, flexible calibration of phase calculation-based three-dimensional imaging system,” Opt. Lett. **36**(7), 1257–1259 (2011). [PubMed]

**30. **R. Legarda-Sáenz, T. Bothe, and W. P. Jüptner, “Accurate procedure for the calibration of a structured light system,” Opt. Eng. **43**(2), 464–471 (2004). [CrossRef]

**31. **S. Zhang and P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. **45**(8), 083601 (2006). [CrossRef]

**32. **Z. Li, Y. Shi, C. Wang, and Y. Wang, “Accurate calibration method for a structured light system,” Opt. Eng. **47**(5), 053604 (2008). [CrossRef]

**33. **Y. Yin, X. Peng, A. Li, X. Liu, and B. Z. Gao, “Calibration of fringe projection profilometry with bundle adjustment strategy,” Opt. Lett. **37**(4), 542–544 (2012). [CrossRef] [PubMed]

**34. **M. D. Grossberg and S. K. Nayar, “A general imaging model and a method for finding its parameters,” in Proceedings of Eighth IEEE International Conference on Computer Vision (ICCV 2001), (2001), 108–115. [CrossRef]

**35. **T. Bothe, W. Li, M. Schulte, C. von Kopylow, R. B. Bergmann, and W. P. O. Jüptner, “Vision ray calibration for the quantitative geometric description of general imaging and projection optics in metrology,” Appl. Opt. **49**(30), 5851–5860 (2010). [CrossRef] [PubMed]

**36. **S. Ramalingam, P. Sturm, and S. K. Lodha, “Towards complete generic camera calibration,” in Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), (2005), 1093–1098. [CrossRef]

**37. **Y. Yin, M. Wang, A. Li, X. Liu, and X. Peng, “Ray-based calibration for the micro optical metrology system,” Proc. SPIE **9132**, 91320K (2014).

**38. **Q. Zhang, X. Su, L. Xiang, and X. Sun, “3-D shape measurement based on complementary Gray-code light,” Opt. Lasers Eng. **50**(4), 574–579 (2012). [CrossRef]