## Abstract

Digital projectors are now standard components in fringe projection profilometry systems, projecting structured-light patterns onto the object surface to be measured, and the distortion of the projector lens must be calibrated and compensated accurately to meet the accuracy requirements of industrial applications. A novel method is proposed to accurately determine the projector pixel coordinates of the marker points of a calibration target by means of projective transformation. With this method, the projector can be calibrated with sub-pixel accuracy. The method is applicable to calibration targets with either a chessboard pattern or a circle pattern, and the calibration result is independent of the results of camera calibration. Experimental results demonstrate the effectiveness and validity of the proposed method.

© 2017 Optical Society of America

## 1. Introduction

Fringe projection profilometry (FPP) is an optical 3D scanning technology of interest to both researchers and engineers due to its rapid measurement, high spatial resolution and high point density [1–3]. Digital light processing (DLP) projectors are low-cost and easy to program, and are now used as standard components in FPP systems to project structured-light patterns onto the object surface to be measured [4]. As the lens in a DLP projector is not ideal in practice, the projected patterns are distorted and additional phase errors are introduced into the absolute phase maps of the images of the projected patterns [5]; projector distortion is thus one of the main sources of uncertainty of the FPP system. Therefore, the lens distortion of the DLP projector must be calibrated and compensated accurately in order to obtain precise 3D profiles with the FPP system. The DLP projector can be considered as the inverse of a camera and described with the same mathematical model. As we cannot capture an image with the DLP projector directly, we must calibrate it with the help of the camera of the FPP system.

Several projector calibration methods have been proposed, and they can be classified into two categories. In the first category, projector parameters are determined with a calibrated camera [6,7], so the calibration error of the camera is accumulated and amplified in the calibration of the projector.

In the second category, fringe patterns are generated and projected onto a calibration target to determine the projector pixel coordinates of the marker points, i.e., corner points for chessboard patterns or circle centers for circle patterns [5,8–11]. The accuracy of these methods mainly relies on the mapping accuracy between the locations of the marker points in the image plane of the camera and the corresponding projector pixel coordinates, and is independent of the camera calibration results. The mapping is pixel-to-pixel (P2P) in the methods proposed in [5,8,9], which limits the calibration accuracy. Huang et al. [10] proposed a method to improve the precision of the mapping to sub-pixel level, but it can only be applied to calibration targets with circle patterns. In this method, pixels on the circle edges of the image of the calibration target are extracted and mapped onto the digital micro-mirror device (DMD) of the projector, and the locations of the circle centers on the DMD are computed with least-squares fitting to achieve sub-pixel precision. However, it is well known that the projection of a circular target suffers from an eccentricity error [12]; in other words, there is a theoretical flaw in this method. Zhang et al. [11] proposed a projector calibration method using a camera optically coaxial with the projector. Similar to the methods proposed in [5,8,9], P2P mapping is utilized; besides, it is difficult to align the camera and the projector in practice.

Different from the above two categories, Liu et al. [13] recently proposed a method that generates adaptive fringe patterns to compensate the lens distortion of the projector without projector calibration. Similar to the first category, the camera must be calibrated in advance, so the compensation result cannot be isolated from the influence of camera calibration errors.

In this paper, a novel method is proposed to improve the accuracy of projector calibration. The principles of projective geometry, including the projective invariance of the cross ratio, are employed to achieve sub-pixel mapping between the location of a marker point in the camera image and its corresponding projector pixel coordinates. The method places no restriction on the pattern type of the calibration target. Besides, camera calibration is not a prerequisite for projector calibration, and the accuracy of camera calibration does not affect that of the method. Projective invariance of the cross ratio has been employed in calibrating cameras [14,15] but, to the best of our knowledge, not in calibrating projectors.

The rest of the paper is organized as follows. Section 2 introduces the principle and procedure of the proposed calibration method. Experimental results are given in Section 3 to demonstrate the effectiveness of the method. The paper is summarized in Section 4.

## 2. Principle

In this section, the projector calibration model and the P2P method to determine the projector pixel coordinates of the marker points of the calibration target are introduced briefly; our sub-pixel method is then presented in detail.

#### 2.1 Projector calibration model

Similar to the camera, the projector can be described with the pinhole model together with radial and tangential lens distortion. The pinhole models of the camera and the projector, along with their coordinate systems and a calibration target, are shown in Fig. 1. Let *P* be a corner point on the calibration target, ${P}_{\text{w}}=\left(x,y,z,1\right)$ denote the 3D homogeneous coordinates of the point in the world coordinate system, and ${P}_{\text{p}}=\left({u}_{p},{v}_{p},1\right)$ denote the corresponding 2D homogeneous coordinates of the point in the projector pixel coordinate system. The relationship between ${P}_{\text{w}}$ and ${P}_{\text{p}}$ can be described as

$$s{P}_{\text{p}}^{\text{T}}=\begin{bmatrix}{f}_{u}&0&{u}_{p0}\\0&{f}_{v}&{v}_{p0}\\0&0&1\end{bmatrix}\left[\begin{array}{cc}R&T\end{array}\right]{P}_{\text{w}}^{\text{T}},\qquad(1)$$

where *s* is a scaling factor; ${f}_{u}$ and ${f}_{v}$ are the focal lengths measured in width and height of the pixels of the projector, respectively; (${u}_{p0}$, ${v}_{p0}$) are the projector pixel coordinates of the principal point; and *R* and *T* represent the rotation matrix and translation vector, respectively.

Considering radial and tangential distortion [16], the distorted projector pixel coordinates ${P}_{\text{d}}=\left({u}_{d},{v}_{d},1\right)$ can be expressed as

$$\begin{cases}{u}_{d}={u}_{p}\left(1+{k}_{1}{r}^{2}+{k}_{2}{r}^{4}\right)+2{p}_{1}{u}_{p}{v}_{p}+{p}_{2}\left({r}^{2}+2{u}_{p}^{2}\right)\\{v}_{d}={v}_{p}\left(1+{k}_{1}{r}^{2}+{k}_{2}{r}^{4}\right)+{p}_{1}\left({r}^{2}+2{v}_{p}^{2}\right)+2{p}_{2}{u}_{p}{v}_{p}\end{cases},\qquad(2)$$

where ${r}^{2}={u}_{p}^{2}+{v}_{p}^{2}$ (with the coordinates taken in the normalized image plane); ${k}_{1}$ and ${k}_{2}$ are coefficients of radial distortion, and ${p}_{1}$ and ${p}_{2}$ are coefficients of tangential distortion. Thus, the projector calibration model can be described with Eqs. (1) and (2). In this paper, only the intrinsic parameters (${f}_{u}$, ${f}_{v}$, ${u}_{p0}$, ${v}_{p0}$) and the distortion coefficients (${k}_{1}$, ${k}_{2}$, ${p}_{1}$, ${p}_{2}$) are calibrated.
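For illustration, the radial plus tangential distortion model can be sketched as follows. This is a minimal sketch assuming the standard Brown-Conrady formulation, with normalization by the focal lengths and principal point; the function name and numeric values are ours, not taken from the paper.

```python
def distort_projector_pixel(u, v, fu, fv, u0, v0, k1, k2, p1, p2):
    """Apply radial and tangential lens distortion to ideal projector
    pixel coordinates (u, v) and return the distorted pixel coordinates."""
    # Normalize pixel coordinates with the intrinsic parameters.
    x = (u - u0) / fu
    y = (v - v0) / fv
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    # Radial plus tangential terms (Brown-Conrady model).
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    # De-normalize back to pixel coordinates.
    return xd * fu + u0, yd * fv + v0

# With all coefficients zero the mapping reduces to the identity:
print(distort_projector_pixel(500.0, 600.0, 1100.0, 1100.0, 456.0, 570.0,
                              0.0, 0.0, 0.0, 0.0))  # ≈ (500.0, 600.0)
```

In calibration, the optimizer searches for the coefficients that make this forward model best explain the observed marker-point coordinates.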

The projector can be calibrated with a planar chessboard calibration target or a calibration target with a circle pattern following Zhang’s method [17]. As mentioned in Section 1, the projector must be calibrated with the help of a camera: the camera image coordinates of the marker points of the calibration target are mapped onto the projector DMD to determine the projector pixel coordinates of the marker points. Then the intrinsic parameters and the distortion coefficients of the projector can be determined with Zhang’s method.

#### 2.2 P2P mapping method

In order to obtain the projector pixel coordinates of the marker points, two groups of sinusoidal fringe patterns, one vertical and the other horizontal, are generated with a computer and projected onto the surface of the calibration target with the projector. The images of these projected fringe patterns are then captured. The intensity of the *n*-th captured vertical fringe pattern can be expressed as

$${I}_{V,n}\left({u}_{c},{v}_{c}\right)=A\left({u}_{c},{v}_{c}\right)+B\left({u}_{c},{v}_{c}\right)\cos\left[{\Phi}_{V}\left({u}_{c},{v}_{c}\right)+\frac{2\pi n}{N}\right],\quad n=0,1,\ldots,N-1,\qquad(3)$$

where $\left({u}_{c},{v}_{c}\right)$ are the camera pixel coordinates, $A$ is the average intensity, $B$ is the intensity modulation, $N$ is the number of phase-shifting steps and ${\Phi}_{V}$ is the absolute phase of the vertical fringes. The intensity of the horizontal fringe patterns is expressed analogously with ${\Phi}_{H}$:

$${I}_{H,n}\left({u}_{c},{v}_{c}\right)=A\left({u}_{c},{v}_{c}\right)+B\left({u}_{c},{v}_{c}\right)\cos\left[{\Phi}_{H}\left({u}_{c},{v}_{c}\right)+\frac{2\pi n}{N}\right].\qquad(4)$$

The phases ${\Phi}_{V}\left({u}_{c},{v}_{c}\right)$ and ${\Phi}_{H}\left({u}_{c},{v}_{c}\right)$ can be retrieved as follows:

$${\Phi}_{V}\left({u}_{c},{v}_{c}\right)=-\arctan\frac{\sum_{n=0}^{N-1}{I}_{V,n}\left({u}_{c},{v}_{c}\right)\sin\left(2\pi n/N\right)}{\sum_{n=0}^{N-1}{I}_{V,n}\left({u}_{c},{v}_{c}\right)\cos\left(2\pi n/N\right)},\qquad(5)$$

and analogously for ${\Phi}_{H}$; the wrapped phases are then unwrapped to obtain the absolute phase maps.

The image coordinates of the marker points can be extracted from the image of the calibration target. Based on the extracted image coordinates, the absolute phases of the marker points can be obtained from the phase distributions ${\Phi}_{V}\left({u}_{c},{v}_{c}\right)$ and ${\Phi}_{H}\left({u}_{c},{v}_{c}\right)$. Then the projector pixel coordinates of a marker point are calculated as follows:

$${u}_{p}=\frac{{\Phi}_{V}\left({u}_{c},{v}_{c}\right)}{2\pi}{T}_{V},\qquad {v}_{p}=\frac{{\Phi}_{H}\left({u}_{c},{v}_{c}\right)}{2\pi}{T}_{H},\qquad(6)$$

where ${T}_{V}$ and ${T}_{H}$ are the periods, in projector pixels, of the vertical and horizontal fringe patterns, respectively.

In Eq. (6), the projector pixel coordinates of the marker point are computed from absolute phases that are only available at integral pixel locations in the camera image. Therefore, the projector pixel coordinates of the marker point are determined only with pixel-level accuracy, although the image coordinates of the marker point can be extracted with sub-pixel accuracy. This yields inaccurate projector pixel coordinates and ultimately decreases the accuracy of projector calibration.
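As a concrete sketch, the phase retrieval and the P2P mapping of Eq. (6) can be implemented as follows. This is a minimal sketch assuming standard N-step phase shifting and a single-frequency absolute phase; the function names and the fringe-period variable `T` are illustrative, not taken from the paper.

```python
import numpy as np

def retrieve_phase(images):
    """N-step phase shifting: images[n] = A + B*cos(phi + 2*pi*n/N).
    `images` has shape (N, H, W); returns the wrapped phase in (-pi, pi]."""
    N = len(images)
    n = np.arange(N).reshape(-1, 1, 1)
    delta = 2 * np.pi * n / N
    num = np.sum(images * np.sin(delta), axis=0)
    den = np.sum(images * np.cos(delta), axis=0)
    return -np.arctan2(num, den)

def p2p_map(phi_v, phi_h, T):
    """Eq. (6)-style mapping: scale the absolute phases at integral camera
    pixels to projector pixel coordinates with fringe period T."""
    return phi_v / (2 * np.pi) * T, phi_h / (2 * np.pi) * T
```

Because `phi_v` and `phi_h` are sampled only at integer camera pixels, the coordinates returned by `p2p_map` are exactly the pixel-level values that the sub-pixel method of the next subsection refines.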

#### 2.3 A novel sub-pixel mapping method

In this paper, the principles of projective geometry [18] are employed to determine the projector pixel coordinates of the marker points with sub-pixel accuracy. As shown in Fig. 1, the camera image and the projector “image” are both projective transforms of the same calibration target if the distortions of the camera and the projector are ignored. The cross ratio is essentially the only projective invariant of a quadruple of collinear points, and the cross ratio of a quadruple of collinear points $\left({P}_{1},{P}_{2},{P}_{3},{P}_{4}\right)$ is defined as

$$\left({P}_{1},{P}_{2};{P}_{3},{P}_{4}\right)=\frac{\overline{{P}_{1}{P}_{3}}}{\overline{{P}_{2}{P}_{3}}}\bigg/\frac{\overline{{P}_{1}{P}_{4}}}{\overline{{P}_{2}{P}_{4}}},\qquad(7)$$

where $\overline{{P}_{i}{P}_{j}}$ denotes the signed distance from ${P}_{i}$ to ${P}_{j}$.

In Fig. 2(a), the quadruples of points $\left({P}_{1},{P}_{2},{P}_{3},{P}_{4}\right)$, $\left({P}_{c1},{P}_{c2},{P}_{c3},{P}_{c4}\right)$ and $\left({P}_{p1},{P}_{p2},{P}_{p3},{P}_{p4}\right)$ are each collinear, and $\left({P}_{c1},{P}_{c2},{P}_{c3},{P}_{c4}\right)$ and $\left({P}_{p1},{P}_{p2},{P}_{p3},{P}_{p4}\right)$ are the projective transforms of $\left({P}_{1},{P}_{2},{P}_{3},{P}_{4}\right)$ on the camera image plane and the projector DMD, respectively. Therefore, $\left({P}_{1},{P}_{2};{P}_{3},{P}_{4}\right)=\left({P}_{c1},{P}_{c2};{P}_{c3},{P}_{c4}\right)$ and $\left({P}_{1},{P}_{2};{P}_{3},{P}_{4}\right)=\left({P}_{p1},{P}_{p2};{P}_{p3},{P}_{p4}\right)$, so that $\left({P}_{c1},{P}_{c2};{P}_{c3},{P}_{c4}\right)=\left({P}_{p1},{P}_{p2};{P}_{p3},{P}_{p4}\right)$.
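The invariance used above is easy to verify numerically. The sketch below (the point values and the homography are hypothetical, not from the paper) computes the cross ratio of four collinear points from signed positions along their common line and checks that an arbitrary projective transform preserves it:

```python
import numpy as np

def cross_ratio(p1, p2, p3, p4):
    """Cross ratio (p1,p2;p3,p4) of four collinear 2D points, computed
    from signed positions along the common line."""
    p1, p2, p3, p4 = (np.asarray(p, float) for p in (p1, p2, p3, p4))
    d = p4 - p1                     # direction of the common line
    t1, t2, t3, t4 = (np.dot(p - p1, d) for p in (p1, p2, p3, p4))
    return ((t3 - t1) / (t3 - t2)) / ((t4 - t1) / (t4 - t2))

def apply_homography(H, p):
    """Apply a 3x3 homography to a 2D point and dehomogenize."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Four collinear points and an arbitrary projective transform.
pts = [np.array([0.0, 0.0]), np.array([1.0, 0.5]),
       np.array([2.0, 1.0]), np.array([4.0, 2.0])]
H = np.array([[1.1, 0.05, 2.0],
              [0.02, 0.95, -1.0],
              [1e-3, 2e-3, 1.0]])
mapped = [apply_homography(H, p) for p in pts]
# The cross ratio is a projective invariant:
assert np.isclose(cross_ratio(*pts), cross_ratio(*mapped))
```

The same check motivates transferring cross ratios measured in the camera image to the projector DMD, as done in the construction below.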

In projective geometry, the intersection of the projective transforms of two lines is the projective transform of the intersection of the two lines. As shown in Fig. 2(b), two lines intersect at *P* on the surface of the object to be measured. The camera images of the two lines intersect at ${P}_{c}$, and the projective transforms of the two lines on the projector DMD intersect at ${P}_{p}$. ${P}_{c}$ and ${P}_{p}$ are also the projective transforms of *P* on the camera image plane and the projector DMD, respectively; thus ${P}_{c}$ and ${P}_{p}$ are corresponding points.

Generally, seven auxiliary points are utilized to obtain the projector pixel coordinates of the marker point. The marker point ${P}_{c}$ and the auxiliary points (${A}_{c}$ to ${G}_{c}$) in the camera image plane are shown in Fig. 3(a). The auxiliary points ${A}_{c}$ to ${D}_{c}$ are the four nearest integral pixels surrounding ${P}_{c}$, and ${E}_{c}$ is the intersection of line ${A}_{c}{C}_{c}$ and line ${B}_{c}{D}_{c}$. Lines ${A}_{c}{C}_{c}$ and ${B}_{c}{D}_{c}$ divide the area ${A}_{c}{B}_{c}{C}_{c}{D}_{c}$ into four subareas, i.e., I-IV. Suppose that ${P}_{c}$ lies inside subarea IV or on line ${C}_{c}{D}_{c}$, but not on line ${E}_{c}{C}_{c}$ or line ${E}_{c}{D}_{c}$. Then the coordinates of ${F}_{c}$ and ${G}_{c}$ can be determined by intersecting line ${A}_{c}{C}_{c}$ with line ${B}_{c}{P}_{c}$, and line ${B}_{c}{D}_{c}$ with line ${A}_{c}{P}_{c}$, respectively. Since ${A}_{c}$, ${B}_{c}$, ${C}_{c}$ and ${D}_{c}$ are integral pixel points, their mapping points on the projector DMD, i.e., ${A}_{p}$, ${B}_{p}$, ${C}_{p}$ and ${D}_{p}$ shown in Fig. 3(b), can be determined by substituting the phases of the four points into Eq. (6).

The mapping of a line in the camera image plane onto the projector DMD is distorted, as shown in Fig. 3(b). However, such distortion can be ignored locally if the region is sufficiently small. The region bounded by ${A}_{p}{B}_{p}{C}_{p}{D}_{p}$ is indeed quite small, as ${A}_{c}$ to ${D}_{c}$ are the four nearest integral pixels surrounding ${P}_{c}$; therefore, the region bounded by ${A}_{p}{B}_{p}{C}_{p}{D}_{p}$ is considered to be a projective transform of the region bounded by ${A}_{c}{B}_{c}{C}_{c}{D}_{c}$. Accordingly, the intersection of line ${A}_{p}{C}_{p}$ and line ${B}_{p}{D}_{p}$, i.e., point ${E}_{p}$, is considered to be the mapping point of ${E}_{c}$. The mapping points of ${F}_{c}$ and ${G}_{c}$, i.e., ${F}_{p}$ and ${G}_{p}$, lie on line ${A}_{p}{C}_{p}$ and line ${B}_{p}{D}_{p}$, respectively, with $\left({A}_{p},{E}_{p};{F}_{p},{C}_{p}\right)=\left({A}_{c},{E}_{c};{F}_{c},{C}_{c}\right)$ and $\left({B}_{p},{E}_{p};{G}_{p},{D}_{p}\right)=\left({B}_{c},{E}_{c};{G}_{c},{D}_{c}\right)$. The intersection of line ${A}_{p}{G}_{p}$ and line ${B}_{p}{F}_{p}$ is considered to be the mapping point of ${P}_{c}$.

Let $a={A}_{p}$, $b={E}_{p}-{A}_{p}$, $\overline{a}={B}_{p}$ and $\overline{b}={E}_{p}-{B}_{p}$, where ${A}_{p}$ to ${G}_{p}$ here denote the projector pixel coordinate vectors of the corresponding points. The points on lines ${A}_{p}{C}_{p}$ and ${B}_{p}{D}_{p}$ can then be described as

$$\begin{cases}{A}_{p}=a,\quad {E}_{p}=a+b,\quad {F}_{p}=a+{\lambda}_{3}b,\quad {C}_{p}=a+{\lambda}_{4}b,\\{B}_{p}=\overline{a},\quad {E}_{p}=\overline{a}+\overline{b},\quad {G}_{p}=\overline{a}+{\tau}_{3}\overline{b},\quad {D}_{p}=\overline{a}+{\tau}_{4}\overline{b},\end{cases}\qquad(8)$$

where ${\lambda}_{3}$ and ${\tau}_{3}$ are unknown coefficients, and ${\lambda}_{4}$ and ${\tau}_{4}$ can be computed directly from the known coordinates of ${C}_{p}$ and ${D}_{p}$.

Substituting Eq. (8) into Eq. (7), the cross ratios are reformulated as

$$\left({A}_{p},{E}_{p};{F}_{p},{C}_{p}\right)=\frac{{\lambda}_{3}\left({\lambda}_{4}-1\right)}{\left({\lambda}_{3}-1\right){\lambda}_{4}},\qquad \left({B}_{p},{E}_{p};{G}_{p},{D}_{p}\right)=\frac{{\tau}_{3}\left({\tau}_{4}-1\right)}{\left({\tau}_{3}-1\right){\tau}_{4}}.\qquad(9)$$

As $\left({A}_{p},{E}_{p};{F}_{p},{C}_{p}\right)=\left({A}_{c},{E}_{c};{F}_{c},{C}_{c}\right)$ and $\left({B}_{p},{E}_{p};{G}_{p},{D}_{p}\right)=\left({B}_{c},{E}_{c};{G}_{c},{D}_{c}\right)$, and the cross ratios $\left({A}_{c},{E}_{c};{F}_{c},{C}_{c}\right)$ and $\left({B}_{c},{E}_{c};{G}_{c},{D}_{c}\right)$ are known, it is straightforward to determine the coefficients ${\lambda}_{3}$ and ${\tau}_{3}$ with Eq. (9). The coordinates of ${F}_{p}$ and ${G}_{p}$ can then be computed with Eq. (8). Finally, the coordinates of ${P}_{p}$ are determined as the intersection of lines ${A}_{p}{G}_{p}$ and ${B}_{p}{F}_{p}$, since ${P}_{c}$ lies on both line ${A}_{c}{G}_{c}$ and line ${B}_{c}{F}_{c}$ by construction.

Two kinds of degenerate cases, as shown in Fig. 4, must be considered in computing the mapping of ${P}_{c}$: (i) ${P}_{c}$ lies on ${E}_{c}{C}_{c}$; (ii) ${P}_{c}$ lies on ${E}_{c}{D}_{c}$. In the first case, the coordinates of ${P}_{p}$ can be determined from $\left({A}_{p},{E}_{p};{P}_{p},{C}_{p}\right)=\left({A}_{c},{E}_{c};{P}_{c},{C}_{c}\right)$; in the second case, from $\left({B}_{p},{E}_{p};{P}_{p},{D}_{p}\right)=\left({B}_{c},{E}_{c};{P}_{c},{D}_{c}\right)$.

If the marker point ${P}_{c}$ lies in one of the other subareas, i.e., I-III, the corresponding projector pixel coordinates can be determined in a similar manner. As the projector pixel coordinates of the marker point are computed from those of the four nearest integral pixels, sub-pixel accuracy is achieved. After the projector pixel coordinates of all the marker points are determined, the projector can be calibrated accurately.
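The construction of Section 2.3 can be sketched end-to-end as follows. This is a hypothetical reconstruction (function names and the test homography are ours): in the distortion-free case the local camera-to-DMD mapping is exactly projective, so the recovered point should coincide with the true mapping of ${P}_{c}$.

```python
import numpy as np

def line(p, q):
    """Homogeneous line through two 2D points."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersect(l1, l2):
    """Intersection of two homogeneous lines as a 2D point."""
    x = np.cross(l1, l2)
    return x[:2] / x[2]

def param(p, origin, direction):
    """Signed parameter of a collinear point p along a line."""
    return np.dot(p - origin, direction) / np.dot(direction, direction)

def cross_ratio_t(t1, t2, t3, t4):
    return ((t3 - t1) / (t3 - t2)) / ((t4 - t1) / (t4 - t2))

def solve_t3(cr, t1, t2, t4):
    """Solve cross_ratio_t(t1, t2, t3, t4) == cr for t3."""
    k = cr * (t4 - t1) / (t4 - t2)
    # (t3 - t1)/(t3 - t2) = k  ->  t3 = (t1 - k*t2) / (1 - k)
    return (t1 - k * t2) / (1 - k)

def subpixel_map(Pc, Ac, Bc, Cc, Dc, Ap, Bp, Cp, Dp):
    """Map camera point Pc to the projector DMD, given the mappings of
    the four surrounding integral pixels A..D."""
    # Camera side: diagonal intersection and auxiliary points.
    Ec = intersect(line(Ac, Cc), line(Bc, Dc))
    Fc = intersect(line(Ac, Cc), line(Bc, Pc))
    Gc = intersect(line(Bc, Dc), line(Ac, Pc))
    # Cross ratios on the camera side (projective invariants).
    dAC = Cc - Ac
    cr1 = cross_ratio_t(0.0, param(Ec, Ac, dAC), param(Fc, Ac, dAC), 1.0)
    dBD = Dc - Bc
    cr2 = cross_ratio_t(0.0, param(Ec, Bc, dBD), param(Gc, Bc, dBD), 1.0)
    # Projector side: transfer the cross ratios to locate Fp and Gp.
    Ep = intersect(line(Ap, Cp), line(Bp, Dp))
    dApCp = Cp - Ap
    Fp = Ap + solve_t3(cr1, 0.0, param(Ep, Ap, dApCp), 1.0) * dApCp
    dBpDp = Dp - Bp
    Gp = Bp + solve_t3(cr2, 0.0, param(Ep, Bp, dBpDp), 1.0) * dBpDp
    # Pc lies on lines Bc-Fc and Ac-Gc, so Pp is the intersection of
    # their projector-side counterparts.
    return intersect(line(Bp, Fp), line(Ap, Gp))
```

Because only the four nearest integral pixels and two cross ratios are needed, the per-point cost is a handful of line intersections, which is negligible next to phase retrieval.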

Since the projector pixel coordinates of the marker points are computed directly from the image coordinates of the marker points and the absolute phases, camera calibration is not a prerequisite of our proposed method, as mentioned in Section 1. This means that the calibration accuracy of our method is not influenced by the result of camera calibration.

#### 2.4 Procedures of projector calibration

The complete calibration procedure with the proposed method can be summarized into the following steps:

- 1) Fix a calibration target with a fixture and take an image using the camera.
- 2) Fix a white plate in the same position. Then project two sets of fringe patterns, one horizontal and the other vertical, onto the white plate and capture the images of these fringe patterns.
- 3) Randomly change the pose of the calibration target, and repeat steps 1 and 2 to acquire at least three groups of images.
- 4) For each group of images, extract the camera image coordinates of the marker points in the calibration target image and calculate the absolute phase maps from the images of the fringe patterns.
- 5) Compute the projector pixel coordinates of each marker point with the method given in Section 2.3.
- 6) Estimate the intrinsic parameters and the distortion coefficients of the projector with Zhang’s method.

## 3. Experiments and results

Several experiments have been carried out to demonstrate the validity of our proposed method. The experimental system mainly consists of a DLP projector (Lightcrafter 4500 with a resolution of $912\times 1140$ pixels), a CCD camera (Guppy PRO F-125B/C with a resolution of $1292\times 964$ pixels), two planar calibration targets (one with a chessboard pattern and the other with a circle pattern) and a white ceramic plate.

In an arbitrary position, one image of the calibration target with a chessboard pattern and 32 images of the projected fringe patterns, 16 horizontal and 16 vertical, are captured. The image of the calibration target and two images of the fringe patterns, one vertical and the other horizontal, are shown in Figs. 5(a), 5(b) and 5(c), respectively. The corner points in the image of the calibration target are extracted as marker points; 336 marker points are extracted in total. The absolute phase maps of the vertical and horizontal fringe patterns are also computed, and the results are shown in Figs. 5(d) and 5(e). In total, three groups of images of the calibration target and the fringe patterns in three different poses are captured for the projector calibration, and the above computation is repeated for each group.

The intrinsic parameters and distortion coefficients of the projector are calibrated with our proposed method and with the P2P mapping based method, respectively. The calibration results are listed in Table 1 and Table 2, and it is obvious that the standard errors of the calibration results with our method are much smaller than those of the P2P mapping based method.

In projector calibration, the re-projection error (RPE) is generally adopted to measure how well the calibrated parameters fit the calibration data [8–11]. The RPE distributions of the P2P method and our proposed method are shown in Figs. 6(a) and 6(b), respectively, and Fig. 6(c) shows a partial enlargement of Fig. 6(b). Compared with the P2P method, the RPE of our proposed method is reduced from (0.3927, 0.2868) to (0.0550, 0.0344). The experimental results demonstrate that the accuracy of projector calibration can be improved significantly with our method.

In order to compare the calibration accuracy with the circle fitting method proposed in [10], the projector is also calibrated with a calibration target with a circle pattern as shown in Fig. 7(a), and the circles labeled with crosses at their centers are utilized in calibrating the projector. The circle centers, circle edges and the absolute phase maps in two different directions are shown in Figs. 7(b) and 7(c). The RPE distributions of the P2P method, the circle fitting method and our method are shown in Figs. 7(d), 7(e) and 7(f). The projector calibration accuracy can be improved with both the circle fitting method and our method, and our proposed method slightly outperforms the circle fitting method.

Based on the calibrated distortion coefficients, the compensation phase can be calculated and added to the initial fringe patterns to reduce the phase errors caused by lens distortion of the projector. After compensating the phase errors, the residual phase errors of the absolute phase map on the white ceramic plate are evaluated to compare the accuracy of the P2P method, the circle fitting method and our method. The compensation result of the adaptive fringe pattern method proposed in [13], which can reduce projector distortion without projector calibration, is also shown for comparison.

The image of the white ceramic plate with white illumination is shown in Fig. 8(a), and the absolute phase map is shown in Fig. 8(b). In this experiment, Zhang’s method [17] has been employed to compensate the phase error caused by the camera lens distortion, and the remaining phase errors are mainly caused by the distortion of the projector lens.

The phase error distribution shown in Fig. 8(c) is acquired without compensation, and the phase error distributions which are compensated with the P2P method, the circle fitting method, the adaptive fringe pattern method and our proposed method are shown in Fig. 8(d), 8(e), 8(f) and 8(g), respectively.

After compensation with our method, the maximum phase error is reduced from 0.1127 rad (without compensation) to 0.019 rad. By contrast, the maximum phase errors of the P2P method, the circle fitting method and the adaptive fringe pattern method are reduced to 0.055 rad, 0.024 rad and 0.037 rad, respectively. Hence, the phase errors caused by projector lens distortion can be reduced more effectively with our method.

As for the method proposed in [11], it is difficult to align the camera and the projector in practice, so no result is given for comparison.

## 4. Conclusion

In this paper, a novel method with sub-pixel accuracy has been proposed to calibrate the projector of a fringe projection profilometry system. Experimental results demonstrate that the phase errors caused by projector lens distortion can be reduced effectively with the proposed method, and that the method outperforms existing methods. Besides, there is no constraint on the pattern style of the calibration target, and the calibration result is independent of the result of camera calibration.

## Funding

Introducing Talents of Discipline to Universities (B12019); National Natural Science Foundation of China (NSFC) (51375137); National Key Scientific Apparatus Development Project (2013YQ220893).

## References and links

**1. **F. Chen, G. W. Brown, and M. Song, “Overview of the three-dimensional shape measurement using optical methods,” Opt. Eng. **39**(1), 10–22 (2000). [CrossRef]

**2. **S. S. Gorthi and P. Rastogi, “Fringe projection techniques: Whither we are?” Opt. Lasers Eng. **48**(2), 133–140 (2010). [CrossRef]

**3. **Z. Y. Wang, D. A. Nguyen, and J. C. Barnes, “Some practical considerations in fringe projection profilometry,” Opt. Lasers Eng. **48**(2), 218–225 (2010). [CrossRef]

**4. **D. Li, C. Liu, and J. Tian, “Telecentric 3D profilometry based on phase-shifting fringe projection,” Opt. Express **22**(26), 31826–31835 (2014). [CrossRef] [PubMed]

**5. **Z. W. Li, Y. S. Shi, C. J. Wang, and Y. Y. Wang, “Accurate calibration method for a structured light system,” Opt. Eng. **47**(5), 053604 (2008). [CrossRef]

**6. **S. Zhang and R. Chung, “Use of LCD panel for calibrating structured-light-based range sensing system,” IEEE Trans. Instrum. Meas. **57**(11), 2623–2630 (2008). [CrossRef]

**7. **J. Lu, R. Mo, H. Sun, and Z. Chang, “Flexible calibration of phase-to-height conversion in fringe projection profilometry,” Appl. Opt. **55**(23), 6381–6388 (2016). [CrossRef] [PubMed]

**8. **S. Zhang, “Novel method for structured light system calibration,” Opt. Eng. **45**(8), 083601 (2006). [CrossRef]

**9. **H. Anwar, I. Din, and K. Park, “Projector calibration for 3D scanning using virtual target images,” Int. J. Precis. Eng. Manuf. **13**(1), 125–131 (2012). [CrossRef]

**10. **Z. R. Huang, J. T. Xi, Y. G. Yu, and Q. H. Guo, “Accurate projector calibration based on a new point-to-point mapping relationship between the camera and projector images,” Appl. Opt. **54**(3), 347–356 (2015). [CrossRef]

**11. **S. Huang, L. Xie, Z. Wang, Z. Zhang, F. Gao, and X. Jiang, “Accurate projector calibration method by using an optical coaxial camera,” Appl. Opt. **54**(4), 789–795 (2015). [CrossRef] [PubMed]

**12. **D. He, X. Liu, X. Peng, Y. Ding, and B. Z. Gao, “Eccentricity error identification and compensation for high-accuracy 3D optical measurement,” Meas. Sci. Technol. **24**(7), 075402 (2013). [CrossRef] [PubMed]

**13. **J. Peng, X. Liu, D. Deng, H. Guo, Z. Cai, and X. Peng, “Suppression of projector distortion in phase-measuring profilometry by projecting adaptive fringe patterns,” Opt. Express **24**(19), 21846–21860 (2016). [CrossRef] [PubMed]

**14. **L. Xu, L. Chen, X. Li, and T. He, “Projective rectification of infrared images from air-cooled condenser temperature measurement by using projection profile features and cross-ratio invariability,” Appl. Opt. **53**(28), 6482–6493 (2014). [CrossRef] [PubMed]

**15. **D. Li, G. Wen, B. W. Hui, S. Qiu, and W. Wang, “Cross-ratio invariant based line scan camera geometric calibration with static linear data,” Opt. Lasers Eng. **62**(6), 119–125 (2014). [CrossRef]

**16. **R. Y. Tsai, “A versatile camera calibration technique for high accuracy 3D machine vision metrology using off-the shelf TV cameras and lenses,” IEEE J. Robot. Autom. **3**(4), 323–344 (1987). [CrossRef]

**17. **Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. **22**(11), 1330–1334 (2000). [CrossRef]

**18. **E. Casas-Alvero, *Analytic Projective Geometry* (European Mathematical Society, 2014).