## Abstract

Projector calibration is one of the most essential steps for structured light systems. Some methods achieve high precision but require a complicated calibration procedure, such as those based on phase shifting; others are simple to implement but cannot meet the accuracy requirement, for example, those based on homography. In this paper, we propose a compensation method for flexible and accurate projector calibration. To make the calibration procedure easy to operate, the homographic matrix between the projector and camera is established through projected feature points. Then, a 2D image point compensation method based on a reprojection error iteration algorithm is carried out, and a modified bundle adjustment (BA) algorithm is put forward to refine the calibration parameters of the system. Finally, a feature point reconstruction experiment is implemented to verify the flexibility and accuracy of the proposed method.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction

The structured light method has shown enormous potential in rapidly growing application areas, including robotic vision, industrial inspection, and virtual reality, because it is non-contact, low cost, highly accurate, capable of full-field acquisition, and easy to implement [1–5]. However, system calibration is still a challenging issue, as the measurement accuracy is mainly determined by the system calibration, especially the projector calibration. The main ideas behind existing projector calibration methods can be classified into two categories [6,7]: those based on the mapping from phase to height [8–12] and those based on the concept of stereo vision [13–17]. In the second category, the projector is considered a “reverse camera”, so the system can be flexibly calibrated with mature stereo vision techniques.

One of the most vital steps in projector calibration is to establish the mapping between the 3D feature points and their corresponding 2D image points efficiently and accurately. Many methods have been proposed to solve this issue, such as the phase-shifting method [18–20], the homography method [21–23], and the digital image correlation method [27]. In the phase-shifting method, many vertical and horizontal structured light patterns must be projected onto the calibration board at each position. Moreover, the method often needs heavy calculation to recover the absolute phase map, which makes it inflexible and time-consuming. Recently, many efforts have been made to establish the homographic matrix between the coordinates of the projector and the camera, since it simplifies the projector calibration procedure. For example, Anwar [21] proposed a method that moves the projector while fixing both the camera and the screen, then uses the constant transformation to obtain the 3D coordinates of the projected points; this facilitates projector calibration but with low precision. Huang *et al*. [22] developed a correspondence algorithm based on De Bruijn patterns to set up the homography matrix robustly and adopted a bundle adjustment algorithm to optimize the estimated camera and projector models. It is useful in applications that need frequent re-calibration, although the calibration accuracy is affected by using imperfect planar structured light nodes. Juarez-Salazar *et al*. [23] utilized superposed color checkerboards to build the homography matrix flexibly.

However, no matter which mapping method is adopted, the accuracy of the 2D image points in the projector remains an urgent problem, and recent literature has proposed different solutions. Liu *et al*. [24] presented a curve fitting method based on analysis of the photoelectric module to obtain accurate pixel coordinates, whose residuals can be reduced by a polynomial distortion representation. Wang *et al*. [25] utilized interpolation techniques to keep the pixel coordinates at sub-pixel accuracy and a full-field phase map to evaluate the effectiveness of the non-linear distortion procedure. Zhou *et al*. [26] focused on the calibration errors of the principal point and focal length and proposed a systematic recalibration method. Zhang *et al*. [19] adopted the projective invariance of the cross ratio to improve the accuracy of the phase mapping to the sub-pixel level. Ren *et al*. [27] further presented a two-dimensional digital image correlation (DIC) method based on affine transformation theory to refine the mapping to sub-pixel matching precision. The DIC method can refine the 2D image points obtained by either the phase-shifting method or the homographic method, but it is time-consuming because of its complex algorithm and extra image processing. Focusing on the projector distortion residual, the literature [20,28,29] proposed compensation methods based on adaptive, distorted, or pre-distorted fringe patterns to compensate for the residual distortion map error effectively. Regarding the accuracy of projector calibration, the bundle adjustment (BA) algorithm, widely applied in close-range photogrammetry for its efficient optimization, has proved useful and suitable for projector calibration [7,30,31].

In this paper, using the homography-based method, the projector obtains the feature points only once at each calibration position without any sophisticated ancillary equipment or complicated procedure. Furthermore, a reprojection error iteration algorithm is presented to improve the accuracy of the image points and to provide the initial values for the BA algorithm. Then, the camera and the projector are optimized jointly by the BA algorithm with the same high-precision calibration pattern board.

The rest of the paper is organized as follows: Section 2 introduces the overall scheme of projector calibration. The projector calibration method is detailed in Section 3. Section 4 gives the experimental verification of the proposed method, and Section 5 presents the conclusion.

## 2. Overall scheme of projector calibration

The projector calibration method first regards the projector as a “reverse camera”. The 2D image points on the projector are obtained by establishing the homographic relationship between the image planes of the projector and camera. The initial calibration of the projector is realized by Zhang's method [32], and the reprojection error of each point is obtained. Then the accuracy of the 2D image points is improved by the reprojection error iteration algorithm. Finally, based on the high-precision 3D feature points on the calibration board, the 2D image points and the internal and external parameters of the projector are optimized via the BA algorithm, and accurate calibration results can be obtained. The presented method has the following obvious advantages:

- (1) By projecting feature points, the homographic relationship between the image plane of the projector and camera is established, and only one image needs to be taken for each position, which can increase the flexibility and rapidity of the calibration procedure.
- (2) The bundle adjustment algorithm is adopted for further calibration optimization, which will improve the calibration accuracy of the projector. In addition, the initial value of the BA is achieved via the simple reprojection errors iteration algorithm.
- (3) The same accurate 3D feature points on the calibration board are used for the projector and camera simultaneously, so the projector and camera can be bundle-optimized as a whole. The flowchart of the overall calibration method is shown in Fig. 1.

## 3. Calibration of projector

#### 3.1 Mathematical model of projector calibration

From the perspective of working principle and optical imaging, the projector can be regarded as a “reverse camera”, so the model representation of the camera is suitable for the projector as well. Ideally, the projector can also be considered a pinhole model, so the relationship between the 3D feature points ${{\boldsymbol P}_w}({{X_w},{Y_w},{Z_w},1} )$ and the 2D image points ${{\boldsymbol P}_p}({{u_p},{v_p},1} )$ of the projector can be expressed as:

$$s\,{{\boldsymbol P}_p} = \begin{bmatrix} {f_{pu}} & 0 & {u_{p0}} \\ 0 & {f_{pv}} & {v_{p0}} \\ 0 & 0 & 1 \end{bmatrix}[\begin{array}{cc} {{{\boldsymbol R}_p}}&{{{\boldsymbol T}_p}} \end{array}]{{\boldsymbol P}_w} \tag{1}$$

where $s$ is a scale factor.

Here $d{x_p}$ and $d{y_p}$ represent the physical size of a single pixel along the *X* and *Y* axes, respectively, ${f_{pu}}$ and ${f_{pv}}$ are the effective focal lengths of the projector, $({{u_{p0}},{v_{p0}}} )$ is the coordinate of the principal point of the image, and $[\begin{array}{{cc}} {{{\boldsymbol R}_p}}&{{{\boldsymbol T}_p}} \end{array}]$ is the external parameter matrix of the projector.
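As a concrete sketch of the pinhole model above, the projection can be written in a few lines of NumPy. All numeric values below (focal lengths, principal point, pose) are illustrative assumptions, not the calibrated parameters of the system described in this paper.

```python
import numpy as np

# Illustrative projector intrinsics (all values are assumptions).
f_pu, f_pv = 1800.0, 1800.0      # effective focal lengths in pixels
u_p0, v_p0 = 960.0, 540.0        # principal point
K_p = np.array([[f_pu, 0.0, u_p0],
                [0.0, f_pv, v_p0],
                [0.0, 0.0, 1.0]])

# Illustrative extrinsics: identity rotation and a small translation.
R_p = np.eye(3)
T_p = np.array([[10.0], [5.0], [500.0]])

def project(P_w):
    """Project a 3D point (X_w, Y_w, Z_w) to projector pixel (u_p, v_p)."""
    P = K_p @ (R_p @ np.asarray(P_w, float).reshape(3, 1) + T_p)
    return P[:2, 0] / P[2, 0]    # divide out the scale factor s

u_p, v_p = project([0.0, 0.0, 0.0])
```

In practice $K_p$, $R_p$ and $T_p$ are the unknowns recovered by the calibration, not inputs.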

In order to obtain higher calibration accuracy, lens distortion must be corrected. After considering the distortion error, the image point is denoted as ${{\boldsymbol P^{\prime}}_p}({{{u^{\prime}}_p},{{v^{\prime}}_p},1} )$, so
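The paper's specific distortion model is not reproduced here; as a hedged illustration, a common choice is the Brown radial-tangential model, sketched below with hypothetical coefficients $k_1$, $k_2$ (radial) and $p_1$, $p_2$ (tangential):

```python
# Hypothetical distortion coefficients (radial k1, k2; tangential p1, p2).
k1, k2, p1, p2 = -0.08, 0.012, 1e-4, -2e-4

def distort(x, y):
    """Apply the Brown radial/tangential model to normalized coords (x, y)."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d

# The distorted pixel is then u' = f_pu * x_d + u_p0, v' = f_pv * y_d + v_p0.
x_d, y_d = distort(0.1, -0.05)
```

Note that the model operates on normalized image coordinates; the intrinsic matrix maps the distorted coordinates back to pixels.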

#### 3.2 Establishment of homographic relationship

According to projective geometry, without considering distortion, the imaging of the 3D feature points on the image planes of the camera and the projector can both be regarded as simple projective transformations. Therefore, the approximate projective transformation between the image planes of the camera and the projector can be established and described by a homographic matrix [33].

Suppose any point in the set of 3D feature points on the calibration board is denoted as ${\boldsymbol P}_w^k$. The corresponding 2D image point on the image plane of the projector is denoted as ${\boldsymbol P}_p^k$; after being captured by the camera, the 2D image point on the camera image plane is denoted as ${\boldsymbol P}_c^k$. The plane of the calibration board is set to satisfy the equation:

where *n* represents the normal vector of the calibration plane and $d$ is the translation term. Then, combining with Eq. (1), we can obtain:

Introducing the homographic matrix ${\boldsymbol H}$, Eq. (4) can be rewritten as ${\gamma }{\boldsymbol P}_p^k = {\boldsymbol H}{\boldsymbol P}_c^k$, where ${\gamma }$ is a scaling factor.

The homographic matrix ${\boldsymbol H}$ can be represented as

From the above equation, it can be seen that each pair of corresponding points provides two constraint equations, so only four pairs of corresponding points are needed to obtain the homographic matrix. However, due to noise and other interference factors, a large error usually exists when the homographic matrix is obtained from only four points. In this paper, weights are introduced to improve the accuracy of the homographic matrix. That is, a large number of marker points are projected onto the calibration board; four marker points at a time are taken to calculate a homographic matrix, and the distance between the sampling center and the center of the whole image is calculated so as to assign a weight to the corresponding homographic matrix. Therefore, the optimized homographic matrix can be expressed as:

where ${w^i}$ represents the weight of each homographic matrix, calculated as ${w^i} = {d_i}/\sum\limits_{i = 1}^n {d{}_i} $, and ${d_i}$ is the distance between the sampling center and the image center. The image points of the feature points on the projector image plane can then be calculated by the homographic matrix, so the projector can “capture” the feature points on the calibration board for Zhang's calibration method.
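The weighted construction above can be sketched as follows. The four-point direct linear transform (DLT) and the distance weighting $w_i = d_i/\sum d_i$ follow the text; the sampled point sets and centers in any usage are supplied by the caller and are assumptions here.

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate H (dst ~ H @ src in homogeneous coords) from >= 4 pairs."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    H = Vt[-1].reshape(3, 3)      # null vector = vectorized homography
    return H / H[2, 2]

def weighted_homography(samples, centers, image_center):
    """Combine per-sample homographies H^i with weights w_i = d_i / sum(d_i),
    where d_i is the distance of each sampling center from the image center."""
    d = np.array([np.linalg.norm(np.subtract(c, image_center)) for c in centers])
    w = d / d.sum()
    Hs = [dlt_homography(s, t) for s, t in samples]
    return sum(wi * Hi for wi, Hi in zip(w, Hs))
```

Each `samples` entry is a `(src, dst)` pair of four matched points; `centers` holds the centroid of each four-point sample.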

#### 3.3 Principle of reprojection error iteration algorithm

According to the principle of Zhang's calibration method, when the input 2D image points change, the internal and external parameters of the projector change accordingly. The calibration based on the above method has a large reprojection error because of the low precision of the image points obtained only through the homographic relationship. Some methods, such as the DIC method [27] or compensation methods [28,29,34], have been adopted to improve the accuracy of the 2D image points, but their calculations are time-consuming. Therefore, a method based on error iteration is proposed. Its essence is to adjust the uncertain 2D image coordinates according to the value and direction of the reprojection errors.

Suppose that ${P_w}$ is a corner point on the calibration board whose accuracy is high enough, and ${{\boldsymbol P}_P}({u_p},{v_p})$ is the image point obtained through the homographic relationship described in Section 3.2. The projector can then be calibrated by Zhang's method. The reprojected image point, calculated from the internal and external parameters, is denoted as ${{\boldsymbol P^{\prime}}_P}({u^{\prime}_p},{v^{\prime}_p})$; the error between ${{\boldsymbol P}_P}$ and ${{\boldsymbol P^{\prime}}_P}$, represented as $(\Delta u,\Delta v)$, is the reprojection error, as shown in Fig. 3. The reprojection error of every corner point at each checkerboard placement position can be obtained. Usually the reprojection error is used as the evaluation standard of calibration accuracy, but here it is used as the basis for iteration. The specific iterative algorithm steps are as follows:

- (1) Carry out the projector calibration using the image points ${{\boldsymbol P}_P}$ obtained by the homographic matrix. The initial internal and external parameters of the projector and reprojection errors can be obtained.
- (2) Express the reprojection error as $\Delta u = {u^{\prime}_p} - {u_p}$ and $\Delta v = {v^{\prime}_p} - {v_p}$, which includes the error’s direction and value.
- (3) Replace the image points ${{\boldsymbol P}_p}({u_p},{v_p})$ by ${{\boldsymbol P}_{pi}}({u_{pi}},{v_{pi}})$, where ${u_{pi}} = {u_p} + \Delta u/2$ and ${v_{pi}} = {v_p} + \Delta v/2$.
- (4) Calibrate the projector again with the adjusted points ${{\boldsymbol P}_{pi}}({u_{pi}},{v_{pi}})$; the new internal and external parameters of the projector and reprojection errors can be obtained.
- (5) Repeat steps (2)–(4) until the set calibration accuracy is reached.

After iteration, high-precision internal and external parameters and image point coordinates under the calibration model can be obtained. The method is easy to implement, and theoretically the reprojection error can converge to zero. In actual measurement, the projector must form a binocular system with the camera, so the iterated result is used as the initial value of the BA algorithm.
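To illustrate the iteration on synthetic data, the sketch below stands in for Zhang's method with a simple least-squares affine fit (purely an assumption for demonstration); the half-step update of step (3) then halves the reprojection residual at every pass.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic planar corner points and noisy projector image points; the true
# mapping is a homography, but the stand-in "calibration" below is a
# least-squares affine fit -- a deliberate simplification of Zhang's method.
world = np.array([[x, y] for x in range(8) for y in range(6)], float)
H_true = np.array([[40.0, 2.0, 100.0], [-1.5, 38.0, 80.0],
                   [1e-4, -2e-4, 1.0]])
hom = np.c_[world, np.ones(len(world))] @ H_true.T
points = hom[:, :2] / hom[:, 2:] + rng.normal(0.0, 0.5, (len(world), 2))

def reproject(img_pts):
    """Fit the stand-in model to img_pts and return the reprojected points."""
    A = np.c_[world, np.ones(len(world))]
    X, *_ = np.linalg.lstsq(A, img_pts, rcond=None)
    return A @ X

rms0 = np.sqrt(np.mean((reproject(points) - points) ** 2))
for _ in range(7):                      # repeat steps (2)-(4)
    err = reproject(points) - points    # reprojection error (du, dv)
    points = points + err / 2.0         # half-step correction, step (3)
rms = np.sqrt(np.mean((reproject(points) - points) ** 2))
# Each pass halves the residual, so rms is about rms0 / 2**7 here.
```

Because the fit is linear in the image points, the residual contracts geometrically, which mirrors the fast convergence reported for the real calibration.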

#### 3.4 Calibration parameters optimization based on the BA algorithm

The BA algorithm can effectively optimize the 2D image points, the projector parameters, and the 3D feature points either separately or simultaneously. The 3D feature points on the calibration board are accurate by manufacture, so only the image point coordinates and the internal and external parameters are optimized simultaneously in the proposed BA process. However, when there are different types of observed values in the adjustment problem, it is necessary to estimate the prior unit-weight variance. In order to improve the accuracy of the variance estimation, a posterior variance estimation method is proposed to calculate the variances of the various observed values and determine the weights according to their values.

##### 3.4.1. Establishment of the bundle adjustment model

According to Eq. (1), the collinearity equations are established as follows:

##### 3.4.2. Posterior variance estimation

According to the free extremum principle, the following equation can be obtained from Eq. (14):

The weight of each observation can be determined by empirical formulas, but practice shows that this is not accurate enough in many cases. A reasonable ratio among the weights of the different observations is the key, and this ratio depends on a reasonable determination of their variances. Obviously, the correctness of the prior variance estimate has a direct impact on the bundle adjustment results. Therefore, the prior variance should be estimated during the adjustment to determine the weights. The prior variance is usually checked against the posterior variance; when the two are inconsistent, the prior variance is considered inappropriate and the observations need to be re-weighted according to the posterior variance.

There are many ways to use the posterior variance to determine the weights; the least squares estimation method is presented in this paper. First, initial weights are assigned to the various observations and a pre-adjustment is performed. The correction vector *V* of each observation obtained by the pre-adjustment is used to estimate the prior variance. The weights ${P_i}$ of the various observations given in the first adjustment are generally unreasonable, that is, their corresponding unit-weight variances are not equal. Therefore, if the unit-weight variances are unequal or differ greatly, the weighting is unreasonable and needs to be reset.

Denote the unit-weight variances of the observation types as $\sigma _1^2$, $\sigma _2^2$, $\sigma _3^2$ and $\sigma _4^2$, respectively, so that ${{\boldsymbol D}_{{L_i}}} = {\sigma _i}^2{{\boldsymbol P}_i}^{ - 1}$ $({i = 1,2,3,4} )$. The values of $\sigma _1^2$ to $\sigma _4^2$ can be estimated from the weighted sum of squares of the correction vector *V* of each adjustment, that is ${{\boldsymbol V}_i}^T{{\boldsymbol P}_i}{{\boldsymbol V}_i}$, until all $\sigma _i^2$ are equal. According to the mathematical expectation of the quadratic form, the following can be obtained:

where *C* is a constant and the first $\sigma _i^2$ is selected as $\sigma _1^2$. The calculation steps of the variance estimation are as follows:

- (1) According to the errors of the different parameter types, carry out the pre-test weight estimation and determine the initial weights ${{\boldsymbol P}_{\boldsymbol i}}$ of all kinds of errors.
- (2) Make the first adjustment and obtain ${{\boldsymbol V}_i}^T{{\boldsymbol P}_i}{{\boldsymbol V}_i}$.
- (3) Estimate the unit-weight variance $\sigma _i^2$ of each observation type from ${{\boldsymbol V}_i}^T{{\boldsymbol P}_i}{{\boldsymbol V}_i}$ and reset the weights ${{\boldsymbol P}_i}$ accordingly.
- (4) Repeat steps (2) and (3), namely adjustment – variance component estimation – adjustment after re-weighting.
- (5) If $({{\sigma_1}^2 = {\sigma_2}^2 = {\sigma_3}^2 = {\sigma_4}^2} )$ holds, complete the adjustment; otherwise repeat steps (2) and (3).
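A toy sketch of this re-weighting loop is given below, under the simplifying assumption that the observation types can be represented by independent groups (two here, for brevity) observing a common quantity; the adjustment is reduced to a weighted mean, which is not the paper's bundle adjustment but shows the variance-component mechanics.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two groups of observations of the same quantity with different (unknown)
# noise levels -- a toy stand-in for the different observation types.
groups = [10.0 + rng.normal(0.0, 0.2, 50),    # precise observations
          10.0 + rng.normal(0.0, 2.0, 50)]    # noisy observations
weights = [1.0, 1.0]                          # step (1): equal initial weights

for _ in range(20):
    # Step (2): weighted adjustment (here: weighted mean of all observations).
    num = sum(w * g.sum() for w, g in zip(weights, groups))
    den = sum(w * len(g) for w, g in zip(weights, groups))
    x_hat = num / den
    # Step (3): estimate each group's unit-weight variance from V^T P V
    # (redundancy approximated by the group size) and reset the weights,
    # using the first group's variance as the reference.
    sigma2 = [w * np.sum((g - x_hat) ** 2) / len(g)
              for w, g in zip(weights, groups)]
    weights = [w * sigma2[0] / s2 for w, s2 in zip(weights, sigma2)]

# After convergence the precise group dominates the estimate.
```

At the fixed point the group weights become inversely proportional to the group variances, which is exactly the condition that all unit-weight variances agree.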

#### 3.5 System calibration

Since the projector calibration can be accomplished with the same mathematical model and calibration board as the camera, a system composed of a camera and a projector is developed, and it can be regarded as a classic “stereo vision” system. Therefore, all mature stereo calibration technologies can be adopted in the proposed system.
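As a hedged illustration of the stereo reconstruction this enables, a standard linear (DLT) triangulation from the camera-projector pair can be sketched as follows; the projection matrices below are illustrative assumptions, not the calibrated system's.

```python
import numpy as np

def triangulate(P_c, P_p, uv_c, uv_p):
    """Linear (DLT) triangulation of one 3D point from a matched pair:
    P_c, P_p are the 3x4 camera/projector projection matrices and
    uv_c, uv_p the corresponding 2D image points."""
    A = np.vstack([uv_c[0] * P_c[2] - P_c[0],
                   uv_c[1] * P_c[2] - P_c[1],
                   uv_p[0] * P_p[2] - P_p[0],
                   uv_p[1] * P_p[2] - P_p[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                    # homogeneous solution of A X = 0
    return X[:3] / X[3]

# Illustrative projection matrices: camera at the origin, projector
# translated 100 units along X (all values are assumptions).
K = np.array([[1000.0, 0.0, 500.0], [0.0, 1000.0, 400.0], [0.0, 0.0, 1.0]])
P_c = K @ np.c_[np.eye(3), np.zeros(3)]
P_p = K @ np.c_[np.eye(3), np.array([-100.0, 0.0, 0.0])]
```

Given a feature point seen by the camera and its homography-mapped counterpart on the projector plane, `triangulate` returns its 3D coordinates in the world frame.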

## 4. Experiment results

#### 4.1 System composition

In order to verify the effectiveness of the proposed method, a structured light vision system is developed as shown in Fig. 4. The hardware consists of the following parts: (1) an industrial CCD camera (The Imaging Source DMK 23U274) with a resolution of 1600 × 1200 pixels and a frame rate of 25 fps; (2) a projector (Optoma OSX808) with a resolution of 1920 × 1080 pixels; (3) a circular-dot calibration board with a white background, with a size of 400 mm × 300 mm, a spacing between circular feature points of 30 mm, a manufacturing accuracy of about 0.0025 mm, and an 11 × 8 grid; and (4) a desktop computer for the software.

The software of the system uses Microsoft Visual Studio 2013 as the development platform in combination with OpenCV, Eigen, SBA, and other libraries. It mainly includes a system calibration module and a structured light measurement module. The proposed projector calibration method is implemented in the system calibration module.

#### 4.2 Experiment of projector calibration

According to Zhang's calibration principle, it is necessary to obtain 2D images of the calibration board at different positions. In this method, the feature points are projected onto the white background of the calibration pattern board so that the projector and camera can “capture” the image simultaneously; thus, only one image needs to be taken at each calibration position. The ellipse fitting method is then applied to extract the circle centers, which are sorted according to slope, as shown in Fig. 5. Next, the projected feature point image is separated from the original image of the calibration board; one of the resulting images is shown in Fig. 6. The mapping relationship between camera and projector can be established through the projected feature points using the method proposed in Section 3.2, and the homographic matrix can be calculated. Based on the homographic mapping, the image points on the projector formed by the original feature points can be obtained simply; the result is shown in Fig. 7. Then both the camera and the projector can be calibrated by Zhang's calibration method based on the same calibration pattern board.

The reprojection errors of the camera and projector are obtained after their respective calibrations, and the reprojection errors at some positions are shown in Fig. 8. Figure 9 shows the error comparison between the camera and projector, and the detailed calibration error results are listed in Table 1. The results show that, firstly, the calibration accuracy of the camera is much higher than that of the projector. Secondly, the projector calibration results fluctuate greatly across different positions, and malformed distributions are often present, as shown at positions 2 and 3. The main reasons are that the lens distortion of the projector is larger and, more importantly, that the projector cannot capture images directly, so the low-precision 2D image points obtained by the homographic matrix cause the poor calibration result.

In order to compare the results before and after the optimization, stereo calibration is carried out for the measurement system, and the feature points at an arbitrary position of the calibration board are reconstructed. Plane fitting is performed on the reconstructed results, and the Z-direction error is estimated by the flatness error. One of the 3D results is shown in Fig. 10; it shows that the calibration result is of low precision, with a plane fitting error in the range of ±0.2 mm. The distances between the points on the calibration board were also measured, as shown in Fig. 11; the maximum deviation is about 0.04 mm in the X direction and 0.06 mm in the Y direction.

To improve the accuracy, the reprojection error iteration method detailed in Section 3.3 was applied to refine the 2D image points. In this procedure, the accurate 3D feature points are kept unchanged while the internal and external parameters are optimized by reducing the deviation of the 2D image points. The reprojection errors over seven iterations for the camera and projector are shown in Fig. 12. The results show that the reprojection error can be reduced greatly by the presented method, down to about 0.002 pixels for the camera and 0.008 pixels for the projector after only seven iterations, as shown in Fig. 13. The 3D reconstruction of the same feature points is calculated as well; the fitting error of the reconstructed feature points is shown in Fig. 14. Figure 15 compares the results before and after iteration, and the distance errors in the X and Y directions are shown in Fig. 16. It is obvious that the accuracy of the 3D points is improved dramatically.

The bundle adjustment algorithm introduced in Section 3.4 is then adopted to further optimize the calibration parameters. The reprojection error distributions of the camera and projector are both further improved, as shown in Fig. 17. Figure 18 shows that the malformed error distribution of the projector is reduced greatly and the accuracy of the projector becomes almost the same as that of the camera. Figures 19 and 20 illustrate that highly accurate 3D feature points can be obtained by the proposed method.

In Ref. [34], a quadrature function fitting algorithm is adopted to compensate the error of the 2D image points gained by the phase-shifting method. For comparison, that compensation method is applied to our calibration procedure. First, the homographic relationship is used to obtain the 2D image points, and the error at each position is then fitted by the quadrature function fitting algorithm; one of the results is shown in Fig. 21. The 2D image points are improved by this compensation method, and the BA algorithm is implemented to optimize the calibration parameters. Finally, the 3D coordinates of the feature points are reconstructed in the same way as in our proposed method. Figures 22 and 23 show the comparison between the two compensation methods. It is obvious that our iteration-based method obtains more accurate calibration results, even though the error from the homography method is larger than that from the phase-shifting method.

The calibration method has been used in our structured light system, which can measure 3D data successfully. For example, a car fender was measured by the system using the four-step phase-shifting method; one step of the process is shown in Fig. 24, and the 3D measurement data are shown in Fig. 25.

## 5. Conclusion

In this paper, an accurate projector calibration method that follows the same procedure as camera calibration, without any restriction or ancillary equipment, was proposed. To this end, the 2D image points of the projector are obtained flexibly based on homography and compensated easily by the error iteration algorithm. The parameters of both the camera and the projector are then reliably obtained by the bundle adjustment algorithm.

The application of the homographic matrix makes the calibration procedure easy to implement, since only one image needs to be “captured” at each calibration position. High-accuracy projector parameters can be obtained by the error iteration and bundle adjustment methods, which were shown to be effective and robust. The proposed method is useful for online or in-situ structured light system calibration. In order to assess the calibration accuracy itself, this paper does not consider the influence of various measurement-process factors, such as phase accuracy and ambient light. Future work will focus on the measurement accuracy of dynamic objects with consideration of other factors, such as phase error and stereo matching error.

## Funding

National Science and Technology Planning Project (2015BAF24B00); Fujian Province Industry-University-Research Program (2017H6012, 2019H6016); Key (Guiding) Projects in Fujian Province (2017H0019, 2018H0020); China Scholarship Council (201807540008).

## Disclosures

The authors declare no conflicts of interest.

## References

**1. **C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Lasers Eng. **109**, 23–59 (2018). [CrossRef]

**2. **Y. He and S. Chen, “Advances in sensing and processing methods for three-dimensional robot vision,” Int. J. Adv. Robot. Syst. **15**(2), 172988141876062 (2018). [CrossRef]

**3. **G. Sansoni, M. Trebeschi, and F. Docchio, “State-of-the-art and applications of 3D imaging sensors in industry, cultural heritage, medicine, and criminal investigation,” Sensors **9**(1), 568–601 (2009). [CrossRef]

**4. **S. S. Gorthi and P. Rastogi, “Fringe projection techniques: Whither we are?” Opt. Lasers Eng. **48**(2), 133–140 (2010). [CrossRef]

**5. **C. Portalés, P. Casanova-Salas, S. Casas, J. Gimeno, and M. Fernández, “An interactive cameraless projector calibration method,” Virtual Real. **24**(1), 109–121 (2020). [CrossRef]

**6. **M. Vo, Z. Wang, T. Hoang, and D. Nguyen, “Flexible calibration technique for fringe-projection-based three-dimensional imaging,” Opt. Lett. **35**(19), 3192 (2010). [CrossRef]

**7. **X. Liu, Z. Cai, Y. Yin, H. Jiang, D. He, W. He, Z. Zhang, and X. Peng, “Calibration of fringe projection profilometry using an inaccurate 2D reference target,” Opt. Lasers Eng. **89**, 131–137 (2017). [CrossRef]

**8. **F. J. Cuevas, M. Servin, O. N. Stavroudis, and R. Rodriguez-Vera, “Multi-layer neural network applied to phase and depth recovery from fringe patterns,” Opt. Commun. **181**(4–6), 239–259 (2000). [CrossRef]

**9. **J. Villa, M. Araiza, D. Alaniz, R. Ivanov, and M. Ortiz, “Transformation of phase to (x,y,z)-coordinates for the calibration of a fringe projection profilometer,” Opt. Lasers Eng. **50**(2), 256–261 (2012). [CrossRef]

**10. **J. Lu, R. Mo, H. Sun, and Z. Chang, “Flexible calibration of phase-to-height conversion in fringe projection profilometry,” Appl. Opt. **55**(23), 6381 (2016). [CrossRef]

**11. **W. Zhao, X. Su, and W. Chen, “Discussion on accurate phase–height mapping in fringe projection profilometry,” Opt. Eng. **56**(10), 1 (2017). [CrossRef]

**12. **W. Guo, Z. Wu, R. Xu, Q. Zhang, and M. Fujigaki, “A fast reconstruction method for three-dimensional shape measurement using dual-frequency grating projection and phase-to-height lookup table,” Opt. Laser Technol. **112**, 269–277 (2019). [CrossRef]

**13. **B. Li, N. Karpinsky, and S. Zhang, “Novel calibration method for structured-light system with an out-of-focus projector,” Appl. Opt. **53**(16), 3415 (2014). [CrossRef]

**14. **Z. Li, “Accurate calibration method for a structured light system,” Opt. Eng. **47**(5), 053604 (2008). [CrossRef]

**15. **P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. **45**(8), 083601 (2006). [CrossRef]

**16. **S. Huang, L. Xie, Z. Wang, Z. Zhang, F. Gao, and X. Jiang, “Accurate projector calibration method by using an optical coaxial camera,” Appl. Opt. **54**(4), 789 (2015). [CrossRef]

**17. **S. Yang, M. Liu, J. Song, S. Yin, Y. Guo, Y. Ren, and J. Zhu, “Projector calibration method based on stereo vision system,” Opt. Rev. **24**(6), 727–733 (2017). [CrossRef]

**18. **R. Chen, J. Xu, H. Chen, J. Su, Z. Zhang, and K. Chen, “Accurate calibration method for camera and projector in fringe patterns measurement system,” Appl. Opt. **55**(16), 4293 (2016). [CrossRef]

**19. **W. Zhang, W. Li, L. Yu, H. Luo, H. Zhao, and H. Xia, “Sub-pixel projector calibration method for fringe projection profilometry,” Opt. Express **25**(16), 19158 (2017). [CrossRef]

**20. **S. Yang, M. Liu, J. Song, S. Yin, Y. Ren, J. Zhu, and S. Chen, “Projector distortion residual compensation in fringe projection system,” Opt. Lasers Eng. **114**, 104–110 (2019). [CrossRef]

**21. **H. Anwar, “Calibrating projector flexibly for a real-time active 3D scanning system,” Optik (Munich, Ger.) **158**, 1088–1094 (2018). [CrossRef]

**22. **B. Huang, S. Ozdemir, Y. Tang, C. Liao, and H. Ling, “A single-shot-per-pose camera-projector calibration system for imperfect planar targets,” in *Adjunct Proceedings of the 2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)*, 15–20 (2018).

**23. **R. Juarez-Salazar and V. H. Diaz-Ramirez, “Flexible camera-projector calibration using superposed color checkerboards,” Opt. Lasers Eng. **120**, 59–65 (2019). [CrossRef]

**24. **M. Liu, C. Sun, S. Huang, and Z. Zhang, “An accurate projector calibration method based on polynomial distortion representation,” Sensors **15**(10), 26567–26582 (2015). [CrossRef]

**25. **Z. Wang, M. Liu, S. Yang, S. Huang, X. Bai, X. Liu, J. Zhu, X. Liu, and Z. Zhang, “Precise full-field distortion rectification and evaluation method for a digital projector,” Opt. Rev. **23**(5), 746–752 (2016). [CrossRef]

**26. **P. Zhou, Y. Yu, G. Cai, and S. Huang, “Projector recalibration of three-dimensional profilometry system,” Appl. Opt. **55**(9), 2294 (2016). [CrossRef]

**27. **M. Ren, J. Liang, B. Wei, and W. Pai, “Novel projector calibration method for monocular structured light system based on digital image correlation,” Optik (Munich, Ger.) **132**, 337–347 (2017). [CrossRef]

**28. **K. Li, J. Bu, and D. Zhang, “Lens distortion elimination for improving measurement accuracy of fringe projection profilometry,” Opt. Lasers Eng. **85**, 53–64 (2016). [CrossRef]

**29. **A. Gonzalez and J. Meneses, “Accurate calibration method for a fringe projection system by projecting an adaptive fringe pattern,” Appl. Opt. **58**(17), 4610 (2019). [CrossRef]

**30. **H. Liu, H. Lin, and L. Yao, “Calibration method for projector-camera-based telecentric fringe projection profilometry system,” Opt. Express **25**(25), 31492 (2017). [CrossRef]

**31. **M. E. Deetjen and D. Lentink, “Automated calibration of multi-camera-projector structured light systems for volumetric high-speed 3D surface reconstructions,” Opt. Express **26**(25), 33278 (2018). [CrossRef]

**32. **Z. Zhang, “Flexible camera calibration by viewing a plane from unknown orientations,” Proc. IEEE Int. Conf. Comput. Vis. **1**, 666–673 (1999). [CrossRef]

**33. **R. Hartley and A. Zisserman, *Multiple View Geometry in Computer Vision*, 2nd ed. (Cambridge University Press, 2004).

**34. **Z. Wang, J. Huang, J. Gao, and Q. Xue, “Calibration of the structured light measurement system with bundle adjustment,” J. Mech. Eng. **49**, 4–13 (2013).