## Abstract

This research presents a novel method to calibrate a microscopic structured light system using a camera with a telecentric lens. The pin-hole projector calibration follows the standard pin-hole camera calibration procedures. With the calibrated projector, the 3D coordinates of the feature points used for projector calibration are then estimated through iterative Levenberg-Marquardt optimization. These 3D feature points are further used to calibrate the camera with a telecentric lens. We describe the mathematical model of a telecentric lens, and demonstrate that the proposed calibration framework can achieve very high accuracy: approximately 10 *μ*m within a volume of approximately 10(H) mm × 8(W) mm × 5(D) mm.

© 2015 Optical Society of America

## 1. Introduction

With recent advances in precision manufacturing, there has been an increasing demand for efficient and accurate micro-level 3D metrology approaches. A structured light (SL) system with digital fringe projection technology is regarded as a potential solution for micro-scale 3D profilometry owing to its capability of high-speed, high-resolution measurement [1]. To migrate this 3D imaging technology to the micro-scale, a variety of approaches have been explored, either by modifying one channel of a stereo microscope with different projection technologies [2–6], or by using small field-of-view (FOV), non-telecentric lenses with long working distance (LWD) [7–11].

Apart from the technologies mentioned above, an alternative approach for microscopic 3D imaging is to use telecentric lenses because of their unique properties of orthographic projection, low distortion and invariant magnification over a specific distance range [12]. However, the calibration of such an optical system is not straightforward, especially in the *Z* direction, since telecentricity makes the system insensitive to depth change along the optical axis. Zhu et al. [13] proposed to use a camera with a telecentric lens and a speckle projector to perform deformation measurement with digital image correlation (DIC). Essentially, the *Z* direction in this system was calibrated using a translation stage and a simple polynomial fitting method. To improve the calibration flexibility and accuracy, Li and Tian [12] formulated the orthographic projection of a telecentric lens into an intrinsic and an extrinsic matrix, and successfully employed this model in an SL system with two telecentric lenses in their later research [14]. This work has shown the possibility of calibrating a telecentric SL system analogously to a regular pin-hole SL system.

These aforementioned approaches for telecentric lens calibration have proven successful at different accuracy levels. However, Zhu's approach [13] is based on a polynomial fitting method along the *Z* direction using a high-accuracy translation stage, and such a stage is usually difficult to set up (e.g., the moving direction must be perfectly perpendicular to the *Z* axis) and expensive. While Li's method [12, 14] increases the calibration flexibility and simplifies the calibration setup, it is difficult for such a method to achieve high accuracy in extrinsic parameter calibration due to the strong constraint requirements (e.g., orthogonality of rotation matrices). Moreover, the magnification ratio and extrinsic parameters are calibrated separately, further complicating the calibration process and increasing the modeling uncertainty, since the magnification and extrinsic parameters are naturally coupled and difficult to separate.

To address the aforementioned limitations of state-of-the-art system calibration, we propose to use an LWD pin-hole lens to calibrate a telecentric lens. Namely, we developed a system that includes a camera with a telecentric lens and a projector with a small-FOV, LWD pin-hole lens. Since the pin-hole imaging model is well established and its calibration is well studied, we can use a calibrated pin-hole projector to assist the calibration of a camera with a telecentric lens. To the best of our knowledge, there is no flexible and accurate approach to calibrate an SL system using a camera with a telecentric lens and a projector with a pin-hole lens.

In this research, we propose a novel framework to calibrate such a structured light system composed of a camera with a telecentric lens and a projector with a pin-hole lens. The pin-hole projector calibration follows the flexible and standard pin-hole camera calibration procedures, enabled by making the projector *capture* images like a camera, a method developed by Zhang and Huang [15]. With the calibrated projector, the 3D coordinates of the feature points used for projector calibration are then estimated through iterative Levenberg-Marquardt optimization. These reconstructed 3D feature points are further used to calibrate the camera with a telecentric lens. Since the same set of points is used for both projector and camera calibration, the calibration process is quite fast; and because a standard flat board with circle patterns is used and posed flexibly for the whole system calibration, the proposed calibration approach is very flexible.

Section 2 introduces the principles of the telecentric and pin-hole imaging systems. Section 3 illustrates the procedures of the proposed calibration framework. Section 4 demonstrates the experimental validation of the proposed framework. Section 5 summarizes the contributions of this research.

## 2. Principle

The camera model with a telecentric lens is demonstrated in Fig. 1. Basically, the telecentric lens simply performs a magnification in both the *X* and *Y* directions, while it is not sensitive to depth in the *Z* direction. By carrying out ray transfer matrix analysis for such an optical system, the relationship between the camera coordinate (*o^{c}*; *x^{c}*, *y^{c}*, *z^{c}*) and the image coordinate ( ${o}_{0}^{c}$; *u^{c}*, *v^{c}*) can be described as follows:

$$\left[\begin{array}{c}{u}^{c}\\ {v}^{c}\\ 1\end{array}\right]=\left[\begin{array}{cccc}m& 0& 0& {u}_{0}^{c}\\ 0& m& 0& {v}_{0}^{c}\\ 0& 0& 0& 1\end{array}\right]\left[\begin{array}{c}{x}^{c}\\ {y}^{c}\\ {z}^{c}\\ 1\end{array}\right],\tag{1}$$

where *m* is the magnification of the telecentric lens in the *X* and *Y* directions. The transformation from the world coordinate (*o^{w}*; *x^{w}*, *y^{w}*, *z^{w}*) to the camera coordinate can be formulated as follows:

$$\left[\begin{array}{c}{x}^{c}\\ {y}^{c}\\ {z}^{c}\end{array}\right]={\mathbf{R}}_{\mathbf{3}\times \mathbf{3}}^{\mathbf{c}}\left[\begin{array}{c}{x}^{w}\\ {y}^{w}\\ {z}^{w}\end{array}\right]+{\mathbf{t}}_{\mathbf{3}\times \mathbf{1}}^{\mathbf{c}}.\tag{2}$$

Combining Eq. (1) and Eq. (2) yields the overall projection from the world coordinate to the image coordinate,

$$\left[\begin{array}{c}{u}^{c}\\ {v}^{c}\\ 1\end{array}\right]=\left[\begin{array}{cccc}{m}_{11}^{c}& {m}_{12}^{c}& {m}_{13}^{c}& {m}_{14}^{c}\\ {m}_{21}^{c}& {m}_{22}^{c}& {m}_{23}^{c}& {m}_{24}^{c}\\ 0& 0& 0& 1\end{array}\right]\left[\begin{array}{c}{x}^{w}\\ {y}^{w}\\ {z}^{w}\\ 1\end{array}\right],\tag{3}$$

where ${m}_{ij}^{c}$ are the combined telecentric camera parameters.
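As a concrete numerical illustration of the combined orthographic model Eq. (3), the sketch below builds a 2 × 4 telecentric projection from made-up magnification, principal-point and pose values (none of which are calibrated values from this work), projects synthetic world points, and recovers the parameters ${m}_{ij}^{c}$ by least squares, mirroring the linear solve used for camera calibration in Step 7:

```python
import numpy as np

# Illustrative (assumed) telecentric parameters: magnification m (pixels/mm),
# principal point (u0, v0), and a camera pose (R, t).
m = 110.0
u0, v0 = 800.0, 600.0
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, -2.0, 50.0])

# Combined projection of Eq. (3): only two rows matter, and the camera-frame
# depth z^c never reaches the image, so z-dependence enters only through R.
M = np.zeros((2, 4))
M[:, :3] = m * R[:2, :]
M[:, 3] = m * t[:2] + np.array([u0, v0])

# Synthetic correspondences: project random world points orthographically.
rng = np.random.default_rng(0)
Xw = rng.uniform(-5.0, 5.0, size=(100, 3))
Xh = np.hstack([Xw, np.ones((100, 1))])
uv = Xh @ M.T

# Recover the m_ij in the least-square sense from the correspondences.
M_est, *_ = np.linalg.lstsq(Xh, uv, rcond=None)
```

With noise-free synthetic points the recovered `M_est.T` matches `M` to machine precision; real calibration adds the circle-center extraction noise discussed in Section 4.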

The projector model follows the well-known pin-hole imaging model illustrated in Fig. 2. The projection from 3D object points (*o^{w}*; *x^{w}*, *y^{w}*, *z^{w}*) to 2D projector sensor points ( ${o}_{0}^{p}$; *u^{p}*, *v^{p}*) can be described as follows:

$${s}^{p}\left[\begin{array}{c}{u}^{p}\\ {v}^{p}\\ 1\end{array}\right]=\left[\begin{array}{ccc}\alpha & \gamma & {u}_{0}^{p}\\ 0& \beta & {v}_{0}^{p}\\ 0& 0& 1\end{array}\right]\left[{\mathbf{R}}_{\mathbf{3}\times \mathbf{3}}^{\mathbf{p}},{\mathbf{t}}_{\mathbf{3}\times \mathbf{1}}^{\mathbf{p}}\right]\left[\begin{array}{c}{x}^{w}\\ {y}^{w}\\ {z}^{w}\\ 1\end{array}\right],\tag{4}$$

where *s^{p}* represents the scaling factor; *α* and *β* are respectively the effective focal lengths in the *u* and *v* directions, and *γ* is the skew factor. The effective focal length is defined as the distance from the pupil center to the imaging sensor plane. Since most software (e.g., the OpenCV camera calibration toolbox) gives these two parameters in pixels, the actual effective focal lengths can be computed by multiplying them by the pixel size in the *u* and *v* directions, respectively. ( ${u}_{0}^{p}$, ${v}_{0}^{p}$) are the coordinates of the principal point; ${\mathbf{R}}_{\mathbf{3}\times \mathbf{3}}^{\mathbf{p}}$ and ${\mathbf{t}}_{\mathbf{3}\times \mathbf{1}}^{\mathbf{p}}$ are the rotation and translation parameters.
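For comparison with the telecentric model, the pin-hole projection of Eq. (4) can be exercised numerically. The sketch below uses assumed intrinsic values (not the calibrated ones from this system) and a world frame aligned with the projector (R = I, t = 0):

```python
import numpy as np

# Assumed pin-hole intrinsics in pixels (illustrative, not calibrated values).
alpha, beta, gamma = 1400.0, 1400.0, 0.0
u0p, v0p = 456.0, 570.0
A = np.array([[alpha, gamma, u0p],
              [0.0,   beta,  v0p],
              [0.0,   0.0,   1.0]])
R = np.eye(3)                       # world frame aligned with the projector
t = np.zeros(3)

def project(Xw):
    """Eq. (4): s^p [u^p, v^p, 1]^T = A [R | t] [x^w, y^w, z^w, 1]^T."""
    uvw = A @ (R @ Xw + t)
    return uvw[:2] / uvw[2]         # dividing by uvw[2] removes the scale s^p

u, v = project(np.array([1.0, 2.0, 700.0]))
print(u, v)                         # 458.0 574.0
```

As the text notes, a focal length reported in pixels converts to a physical length by multiplying by the pixel pitch; e.g., 1400 px at an assumed 7.6 *μ*m pitch would be roughly 10.6 mm.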

## 3. Procedures

The calibration framework includes seven major steps:

*Step 1: Image capture.* Use a 9 × 9 circle board [see Fig. 3(a)] as the calibration target. Put the calibration target at different spatial orientations. Capture a set of images for each target pose, which is composed of the projection of horizontal patterns, vertical patterns and a pure white frame.

*Step 2: Camera circle center determination.* Pick the captured image with pure white (no pattern) projection, and extract the circle centers (*u^{c}*, *v^{c}*) as the feature points. An example is shown in Fig. 3(b).

*Step 3: Absolute phase retrieval.* To calibrate the projector, we need to generate a "captured" image for the projector since the projector cannot capture images by itself. This is achieved by mapping a camera point to a projector point using absolute phase. To obtain the phase information, we use a least-square phase-shifting algorithm with 9 steps (*N* = 9). The *k*-th projected fringe image can be expressed as follows:

$${I}_{k}(x,y)={I}^{\prime}(x,y)+{I}^{″}(x,y)\text{cos}\left[\varphi (x,y)-2k\pi /N\right],\tag{5}$$

where *I′*(*x*, *y*) represents the average intensity, *I″*(*x*, *y*) the intensity modulation, and *ϕ*(*x*, *y*) the phase to be solved for,

$$\varphi (x,y)={\text{tan}}^{-1}\left[\frac{{\sum}_{k=1}^{N}{I}_{k}\text{sin}(2k\pi /N)}{{\sum}_{k=1}^{N}{I}_{k}\text{cos}(2k\pi /N)}\right].\tag{6}$$

This equation produces a wrapped phase map ranging within [−*π*, +*π*). Using a multi-frequency phase-shifting technique as described in [16], we can extract the absolute phase maps Φ_{ha} and Φ_{va} without 2*π* discontinuities respectively from the captured images with horizontal [see Fig. 3(c)] and vertical pattern projection [see Fig. 3(d)].

*Step 4: Projector circle center determination.* Using the absolute phases obtained from *Step 3*, the projector circle centers (*u^{p}*, *v^{p}*) can be uniquely determined from the camera circle centers obtained from *Step 2*:

$${u}^{p}={\Phi}_{va}({u}^{c},{v}^{c})\times {P}_{2}/(2\pi ),\tag{7}$$

$${v}^{p}={\Phi}_{ha}({u}^{c},{v}^{c})\times {P}_{1}/(2\pi ),\tag{8}$$

where *P*_{1} and *P*_{2} are respectively the fringe periods of the horizontal and vertical patterns used in *Step 3* for absolute phase recovery, which are 18 and 36 pixels in our experiments. This step simply converts the absolute phase values into projector pixels.

*Step 5: Projector intrinsic calibration.* Using the projector circle centers extracted from the previous step, the projector intrinsic parameters (i.e., *α*, *β*, *γ*, ${u}_{0}^{p}$, ${v}_{0}^{p}$) can be estimated using the standard OpenCV camera calibration toolbox.

*Step 6: Estimation of 3D target points.* Align the world coordinate (*o^{w}*; *x^{w}*, *y^{w}*, *z^{w}*) with the projector coordinate (*o^{p}*; *x^{p}*, *y^{p}*, *z^{p}*); the projector extrinsic matrix [ ${\mathbf{R}}_{\mathbf{3}\times \mathbf{3}}^{\mathbf{p}}$, ${\mathbf{t}}_{\mathbf{3}\times \mathbf{1}}^{\mathbf{p}}$] then becomes [**I**_{3×3}, **0**_{3×1}], which is composed of a 3 × 3 identity matrix **I**_{3×3} and a 3 × 1 zero vector **0**_{3×1}. To obtain the 3D world coordinates of the target points, we first define the target coordinate (*o^{t}*; *x^{t}*, *y^{t}*, 0) by assuming its *Z* coordinate to be zero, and assign the upper left circle center [point A in Fig. 3(a)] to be the origin. For each target pose, we estimate the transformation matrix [**R**_{t}, **t**_{t}] from the target coordinate to the world coordinate using the iterative Levenberg-Marquardt optimization method provided by the OpenCV toolbox. Essentially, this optimization approach iteratively minimizes the difference between the observed projections and the projected object points, which can be formulated as the following functional:

$$\underset{{\mathbf{R}}_{t},{\mathbf{t}}_{t}}{\text{min}}\Vert \left[\begin{array}{c}{u}^{p}\\ {v}^{p}\\ 1\end{array}\right]-{\mathbf{M}}_{p}[{\mathbf{R}}_{t},{\mathbf{t}}_{t}]\left[\begin{array}{c}{x}^{t}\\ {y}^{t}\\ 0\\ 1\end{array}\right]\Vert ,\tag{9}$$

where ‖ · ‖ denotes the least-square difference and **M**_{p} denotes the projection from the world coordinate to the image coordinate. After this step, the 3D coordinates of the target points (i.e., circle centers) can be obtained by applying this transformation to the points in the target coordinate.

*Step 7: Camera calibration.* Once the 3D coordinates of the circle centers on each target pose are determined, the camera parameters ${m}_{ij}^{c}$ can be solved in the least-square sense using Eq. (3).
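The phase computations of Steps 3–4 can be sketched compactly. The snippet below evaluates the least-square wrapped phase at a single camera pixel with a made-up true phase value and then converts a (here single-period, for illustration) phase to a projector pixel; all numeric values are invented for the demonstration:

```python
import numpy as np

N = 9                                # phase-shifting steps, as in Step 3
P2 = 36                              # assumed vertical-fringe period in projector pixels

def wrapped_phase(I):
    """Least-square wrapped phase from the N shifted intensities I[k]."""
    k = np.arange(1, N + 1)
    num = np.sum(I * np.sin(2 * k * np.pi / N))
    den = np.sum(I * np.cos(2 * k * np.pi / N))
    return np.arctan2(num, den)      # two-argument arctan covers [-pi, pi)

# Synthetic fringe signal at one camera pixel with a known phase of 1.2 rad:
# I_k = I' + I'' cos(phi - 2 k pi / N)
k = np.arange(1, N + 1)
I = 0.5 + 0.4 * np.cos(1.2 - 2 * k * np.pi / N)
phi = wrapped_phase(I)               # recovers the 1.2 rad phase

# Step 4 mapping: an absolute phase value converts to a projector pixel.
up = phi * P2 / (2 * np.pi)
```

In practice the pose estimation of Step 6 corresponds to OpenCV's `cv2.solvePnP` with its default iterative (Levenberg-Marquardt) solver, although the exact calls used by the authors are not specified here.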

After calibrating both the camera and the projector, we can compute the 3D coordinates (*x^{w}*, *y^{w}*, *z^{w}*) of a real-world object using Eq. (3) and Eq. (4) based on the calibration.

^{w}## 4. Experiments

We have conducted experiments to validate the accuracy of our calibration model. The test system includes a digital CCD camera (Imaging Source DMK 23U274) with a resolution of 1600 × 1200 pixels, and a DLP projector (LightCrafter PRO4500) with a resolution of 1140 × 912 pixels. The telecentric lens used for the camera is an Opto Engineering TC4MHR036-C with a magnification of 0.487, a working distance of 102.56 mm and a field depth of 5 mm. The LWD lens used for the projector has a working distance of 700 mm and an FOV of 400 mm × 250 mm. Since both the camera and the projector lenses have a distortion ratio of less than 0.1%, we ignored lens distortion for simplicity. To validate the accuracy of our model, we examined the reprojection error for both the camera and the projector, as shown in Fig. 4. The errors for both devices are mostly within ±5 *μ*m, indicating that our model is sufficient to describe both the camera and the projector imaging. The root-mean-square (RMS) errors are respectively 1.8 *μ*m and 1.2 *μ*m. Although the camera calibration was based on the calibrated projector, one may notice that the projector calibration has slightly higher accuracy than the camera calibration. This could be a result of the coupling error from the mapping in addition to the optimization error, or because the camera has a smaller pixel size than the projector.

We first measured the lengths of the two diagonals $\overline{\mathit{AC}}$ and $\overline{\mathit{BD}}$ [see Fig. 3(a)] of the calibration target under 10 different orientations. The two diagonals are formed by the circle centers on the corners. The circle centers on this calibration target were precisely manufactured with a spacing *d* of 1.0000 ± 0.0010 mm. Therefore, the actual length of $\overline{\mathit{AC}}$ and $\overline{\mathit{BD}}$ can be expressed as $8\sqrt{2}d$, or 11.3137 mm. We reconstructed the 3D geometry for each target pose, and then extracted the 3D coordinates of the four corner points (i.e., *A*, *B*, *C*, *D*). Finally, the Euclidean distances $\overline{\mathit{AC}}$ and $\overline{\mathit{BD}}$ were computed and compared with the actual value (i.e., 11.3137 mm). The results are shown in Table 1, from which we can see that the measurement results are consistently accurate for different target poses. On average, the measurement error is around 9 *μ*m, with the maximum being 16 *μ*m. Considering the length of the measured diagonal, the percentage error is quite small (on the order of 0.1%), which is even comparable to the manufacturing uncertainty. The major error sources could be the error introduced by circle center extraction or the bilinear interpolation of 3D coordinates.
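The expected diagonal length is a one-line arithmetic check (the 9 × 9 grid spans 8 circle spacings per side):

```python
import math

d = 1.0000                          # circle-center spacing in mm
diagonal = 8 * math.sqrt(2) * d     # corner-to-corner distance A-C (or B-D)
print(round(diagonal, 4))           # 11.3137
```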

We then put this calibration target on a precision vertical translation stage (Model: Newport M-MVN80, sensitivity: 50 nm) and translated it to different stage heights (spacing: 50 *μ*m). We measured the 3D coordinates of the circle center point *D* [see Fig. 3(a)] at each stage height, denoted as *D_{i}*(*x*, *y*, *z*), where *i* denotes the *i*-th stage position. Then we computed the rigid translation **t**_{i} from *D*_{1}(*x*, *y*, *z*) to *D_{i}*(*x*, *y*, *z*). The magnitude ‖**t**_{i}‖ was then compared with the actual stage translation. The results are shown in Table 2, and indicate that our calibration is able to provide quite accurate estimation of a rigid translation. On average, the error is around 1.7 *μ*m. Considering the spacing of the stage translation (i.e., 50 *μ*m), this error is quite small.
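This translation check reduces to comparing vector norms against the commanded stage motion. A sketch with hypothetical measured coordinates (the positions below are invented for illustration, not data from Table 2):

```python
import numpy as np

step = 0.050                         # stage spacing in mm (50 um)
# Hypothetical measured positions of point D at successive stage heights.
D = np.array([[4.10, 3.20, 1.000],
              [4.10, 3.20, 1.050],
              [4.10, 3.20, 1.100]])
t = D - D[0]                         # rigid translations t_i from the first pose
mag = np.linalg.norm(t, axis=1)      # compare ||t_i|| against i * step
err = mag - step * np.arange(len(D))
```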

To further examine the measurement uncertainty, we measured the 3D geometry of a flat plane and compared it with an ideal plane obtained through least-square fitting. Figure 5(a) shows the 2D color-coded error map, and Fig. 5(b) shows one of its cross sections. The root-mean-square (RMS) error for the measured plane is 4.5 *μ*m, which is very small compared with the random noise level (approximately 20 *μ*m). The major sources of error could be the roughness of the measured surface and/or the random noise of the camera. This result indicates that our calibration method provides good accuracy for 3D geometry reconstruction.
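The flatness evaluation above amounts to a least-square plane fit followed by an RMS residual; the following sketch runs it on synthetic data with an assumed 4.5 *μ*m Gaussian noise level (all values in mm, chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 10.0, 500)      # points spread over a ~10 x 8 mm field
y = rng.uniform(0.0, 8.0, 500)
z = 0.01 * x - 0.02 * y + 3.0 + rng.normal(0.0, 0.0045, 500)  # tilted plane + noise

# Least-square fit of z = a*x + b*y + c, then RMS of the fit residuals.
A = np.column_stack([x, y, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(A, z, rcond=None)
rms = np.sqrt(np.mean((A @ coef - z) ** 2))   # ~0.0045 mm for this noise level
```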

To visually demonstrate the success of our calibration method, we measured two different types of objects with complex geometry: a ball grid array [Fig. 6(a)] and a flat surface with octagonal grooves [Fig. 6(d)]. The reconstructed 3D geometries and the corresponding cross sections are shown in Figs. 6(b)–(c) and Figs. 6(e)–(f). These results indicate that our calibration algorithm works well for different types of geometry (e.g., spheres, ramps, planes), further confirming the success of our calibration framework. One may notice a small slope in the cross section shown in Fig. 6(c); this is because the bottom surface of the ball grid array sample is slightly tilted, deviating slightly from the *Z* plane.

## 5. Conclusion

In this research, we have presented a novel calibration method for a unique type of microscopic SL system, which is comprised of a camera with a telecentric lens and a regular pin-hole projector using an LWD lens with a small FOV. The proposed calibration approach is flexible since only a standard flat board with circle patterns is used, and the calibration target can be posed freely for the whole system calibration. The experimental results have demonstrated the success of our calibration framework, achieving very high measurement accuracy: approximately 10 *μ*m within a calibration volume of 10(H) mm × 8(W) mm × 5(D) mm.

## Acknowledgments

This study was sponsored by the National Science Foundation (NSF) under Grant No. CMMI-1300376. Any opinion, findings, and conclusions or recommendations expressed in this article are those of the authors and do not necessarily reflect the views of the NSF.

## References and links

**1. **S. S. Gorthi and P. Rastogi, “Fringe projection techniques: whither we are?” Opt. Lasers Eng. **48**, 133–140 (2010). [CrossRef]

**2. **R. Windecker, M. Fleischer, and H. J. Tiziani, “Three-dimensional topometry with stereo microscopes,” Opt. Eng. **36**, 3372–3377 (1997). [CrossRef]

**3. **C. Zhang, P. S. Huang, and F.-P. Chiang, “Microscopic phase-shifting profilometry based on digital micromirror device technology,” Appl. Opt. **41**, 5896–5904 (2002). [CrossRef] [PubMed]

**4. **K.-P. Proll, J.-M. Nivet, K. Körner, and H. J. Tiziani, “Microscopic three-dimensional topometry with ferroelectric liquid-crystal-on-silicon displays,” Appl. Opt. **42**, 1773–1778 (2003). [CrossRef] [PubMed]

**5. **R. Rodriguez-Vera, K. Genovese, J. Rayas, and F. Mendoza-Santoyo, “Vibration analysis at microscale by Talbot fringe projection method,” Strain **45**, 249–258 (2009). [CrossRef]

**6. **A. Li, X. Peng, Y. Yin, X. Liu, Q. Zhao, K. Körner, and W. Osten, “Fringe projection based quantitative 3D microscopy,” Optik **124**, 5052–5056 (2013). [CrossRef]

**7. **C. Quan, X. Y. He, C. F. Wang, C. J. Tay, and H. M. Shang, “Shape measurement of small objects using LCD fringe projection with phase shifting,” Opt. Commun. **189**, 21–29 (2001). [CrossRef]

**8. **C. Quan, C. J. Tay, X. Y. He, X. Kang, and H. M. Shang, “Microscopic surface contouring by fringe projection method,” Opt. Laser Technol. **34**, 547–552 (2002). [CrossRef]

**9. **J. Chen, T. Guo, L. Wang, Z. Wu, X. Fu, and X. Hu, “Microscopic fringe projection system and measuring method,” in “Proc. SPIE,” (Chengdu, China, 2013), pp. 87594U. [CrossRef]

**10. **D. S. Mehta, M. Inam, J. Prakash, and A. Biradar, “Liquid-crystal phase-shifting lateral shearing interferometer with improved fringe contrast for 3D surface profilometry,” Appl. Opt. **52**, 6119–6125 (2013). [CrossRef]

**11. **Y. Yin, M. Wang, B. Z. Gao, X. Liu, and X. Peng, “Fringe projection 3D microscopy with the general imaging model,” Opt. Express **23**, 6846–6857 (2015). [CrossRef] [PubMed]

**12. **D. Li and J. Tian, “An accurate calibration method for a camera with telecentric lenses,” Opt. Lasers Eng. **51**, 538–541 (2013). [CrossRef]

**13. **F. Zhu, W. Liu, H. Shi, and X. He, “Accurate 3D measurement system and calibration for speckle projection method,” Opt. Lasers Eng. **48**, 1132–1139 (2010). [CrossRef]

**14. **D. Li, C. Liu, and J. Tian, “Telecentric 3D profilometry based on phase-shifting fringe projection,” Opt. Express **22**, 31826–31835 (2014). [CrossRef]

**15. **S. Zhang and P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. **45**, 083601 (2006). [CrossRef]

**16. **Y. Wang and S. Zhang, “Superfast multifrequency phase-shifting technique with optimal pulse width modulation,” Opt. Express **19**, 5149–5155 (2011).