
Flexible structured light system calibration method with all digital features

Open Access

Abstract

We propose an innovative method for calibrating a single-camera, single-projector structured light system that eliminates the need for calibration targets with physical features. Instead, a digital display such as a liquid crystal display (LCD) screen presents a digital feature pattern for camera intrinsic calibration, while a flat surface such as a mirror is used for projector intrinsic and extrinsic calibration. To carry out this calibration, a secondary camera is required to facilitate the entire process. Because no specially made calibration targets with real physical features are required, our method offers greater flexibility and simplicity in achieving accurate calibration for structured light systems. Experimental results demonstrate the success of the proposed method.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Three-dimensional (3D) optical metrology has grown in importance from the traditional manufacturing industry to numerous disciplines ranging from law enforcement to healthcare [1]. Due to its low cost, flexibility, and ease of implementation, the structured light technique is one of the most extensively adopted methods [2]. Despite rapid progress and recent advancements, accurately calibrating a structured light system remains one of the most critical and challenging issues [3].

Structured light system calibration methods can be broadly classified into three categories: reference-plane-based methods, mathematical model methods, and hybrid methods. The reference-plane-based methods calibrate the system by measuring a flat surface without features and then developing, for each pixel, the relationship between depth and the deformation of the structured pattern [4–19]. These methods work well for structured light systems using telecentric lenses. However, they often require a high-precision translation stage to calibrate a system with a pinhole lens.

To simplify the calibration process, Zhang [20] developed a flexible camera calibration method that allows freely moving a calibration target with distinct features. This method essentially describes the camera lens as a pinhole model, and the calibration process estimates the linear transformations and lens distortions. This simple camera calibration method was extended to structured light system calibration by enabling the projector to “capture” images like a camera [21–23]. Such a lens distortion model works reasonably well for a typical imaging system whose optical axis is near the center of the lens, but can be problematic for an off-axis imaging system such as a typical projector [24,25].

The hybrid methods attempt to achieve higher calibration accuracy or flexibility by combining the two aforementioned approaches. Marrugo et al. [26] took advantage of the lens model and the well-established calibration approach for initial calibration, measured a flat reference surface to determine the pixel-wise error functions that correct each coordinate for each pixel, and then established the pixel-wise functions between each coordinate and the phase. This method substantially improved the conventional calibration methods, especially for large field of view (FOV) systems. Yet, if the initial calibration is not good enough, or the calibration target and the flat surface have different quality, it is difficult to determine the accurate error from the flat surface measurement. Zhang [3,27] developed a method that estimates the pixel-wise functions between each coordinate and the phase by determining the 3D coordinates of the calibration target and capturing the phase map simultaneously. Since calibration feature points are used in the process, the 3D reconstruction of the calibration target is only as accurate as the target is made. One drawback of this method, though, is that the high-contrast edges of the calibration features cause phase errors, and thus calibration errors, if such areas are not properly pre-treated.

The existing methods for calibrating single-camera, single-projector structured light systems with pinhole lenses require either a precisely made calibration target with real physical features or an expensive high-precision translation stage. This paper proposes a method that addresses these challenges using commonly available hardware. Specifically, it uses a digital display such as an LCD screen to show a calibration feature image for camera calibration, and the backside of a mirror as a flat reference surface for projector and projector-to-camera transformation calibration. Furthermore, a second camera is used to facilitate the calibration process, but it is detached after calibration. To enable the use of the existing camera and stereo calibration toolboxes available in open-source software packages such as OpenCV, we also developed a computational framework. Experimental results demonstrate that our proposed method accurately calibrates structured light systems.

Section 2 explains the principles. Section 3 presents experimental results. Section 4 discusses practical considerations of the proposed calibration method, and Sec. 5 summarizes this work.

2. Principle

This section explains the principle behind the proposed method. Specifically, we will explain the pinhole lens model, the standard stereo vision 3D reconstruction method, the phase-shifting algorithm, the ideal flat plane reconstruction, the virtual feature encoding and determination, and the overall computational framework.

2.1 Pinhole lens model

The pinhole lens is typically modeled mathematically as linear transformations and nonlinear distortions. The linear transformation describes the mapping from 3D world coordinates $(x^w, y^w, z^w)$ to 2D image coordinates $(u, v)$ as

$$s[u, v, 1]^T = \mathbf{A} \cdot \left[ \mathbf{R}, \mathbf{t}\right] \cdot [ x^w, y^w, z^w,1]^T,$$
where $s$ denotes the scaling factor, $\mathbf {A}$ the $3\times 3$ intrinsic matrix, $\mathbf {R}$ the $3\times 3$ rotation matrix, $\mathbf {t}$ the $3\times 1$ translation vector, and $T$ the matrix transpose. $\mathbf {R}$ and $\mathbf {t}$ are often regarded as the extrinsic parameter matrices.

The nonlinear lens distortions consider the radial and tangential distortions as

$$\begin{bmatrix} \tilde{u}\\\tilde{v} \end{bmatrix} = (1+k_1r^2+k_2r^4+k_3r^6)\begin{bmatrix} \bar{u} \\ \bar{v} \end{bmatrix} +\begin{bmatrix} 2p_1\bar{u}\bar{v} + p_2(r^2+2\bar{u}^2) \\ 2p_2\bar{u}\bar{v} + p_1(r^2+2\bar{v}^2) \\ \end{bmatrix} \enspace,$$
with
$$r^2 = \bar{u}^2 + \bar{v}^2 \enspace,$$
where $[k_1, k_2, k_3]$ are the radial distortion coefficients, $[p_1, p_2]$ the tangential distortion coefficients, $[\tilde {u}, \tilde {v}]^T$ the distorted image points, and $[\bar {u}, \bar {v}]^T$ the normalized image coordinates. Once the lens distortion coefficients are calibrated, all captured images are undistorted before 3D reconstruction, where only the linear model is necessary.
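As a concrete illustration, the distortion model of Eqs. (2)–(3) can be written in a few lines of Python/NumPy. This is a minimal sketch with our own variable names; in practice, the calibrated coefficients are simply passed to an undistortion routine such as OpenCV's cv2.undistort.

```python
import numpy as np

def distort_normalized(u_bar, v_bar, k1, k2, k3, p1, p2):
    """Apply the radial + tangential distortion of Eqs. (2)-(3)
    to normalized image coordinates (u_bar, v_bar)."""
    r2 = u_bar**2 + v_bar**2
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    du = 2 * p1 * u_bar * v_bar + p2 * (r2 + 2 * u_bar**2)
    dv = 2 * p2 * u_bar * v_bar + p1 * (r2 + 2 * v_bar**2)
    return radial * u_bar + du, radial * v_bar + dv

# In practice, captured images are undistorted with the calibrated
# coefficients before reconstruction, e.g. with OpenCV (coefficient
# order k1, k2, p1, p2, k3):
#   undistorted = cv2.undistort(img, A, np.array([k1, k2, p1, p2, k3]))
```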

2.2 Stereo-vision 3D reconstruction

Since all nonlinear lens distortions are removed before 3D reconstruction, it is straightforward to determine the 3D coordinates of a matched pair from two calibrated cameras under the same world coordinate system. After calibration, $\mathbf {A}$, $\mathbf {R}$, and $\mathbf {t}$ in Eq. (1) are known for each camera, leaving four unknowns $(x^w, y^w, z^w)$ and $s$ for three linear equations; thus at least one constraint equation is required to solve for $(x^w, y^w, z^w)$. For a given pixel $(u^1, v^1)$ on the first camera, if the corresponding pixel $(u^2, v^2)$ on the second camera is known, the 3D coordinates can be calculated by least squares with 5 unknowns and 6 equations. It is well known that a stereo-vision system fails if the captured images do not have distinctive features, which is the case for a typical reference plane (e.g., a flat white surface). To accurately and reliably establish corresponding pairs, a phase-shifting algorithm is employed.
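The least-squares triangulation described above can be sketched as follows. This is a minimal illustration, not the paper's exact implementation; P1 and P2 denote the $3\times 4$ matrices $\mathbf{A}\cdot[\mathbf{R}, \mathbf{t}]$ of the two calibrated cameras, and the pixel pair is assumed to be already undistorted.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Least-squares triangulation of a matched (undistorted) pixel pair.
    P1, P2: 3x4 projection matrices A.[R|t] of the two cameras;
    uv1, uv2: pixel coordinates (u, v) in Camera 1 and Camera 2.
    Solves the 6-equation, 5-unknown system in (x, y, z, s1, s2)."""
    A = np.zeros((6, 5))
    b = np.zeros(6)
    for i, (P, uv) in enumerate(((P1, uv1), (P2, uv2))):
        rows = slice(3 * i, 3 * i + 3)
        A[rows, 0:3] = P[:, 0:3]                          # M part of P = [M | p4]
        A[rows, 3 + i] = -np.array([uv[0], uv[1], 1.0])   # -s_i * [u, v, 1]^T
        b[rows] = -P[:, 3]                                # move p4 to the right-hand side
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3]                                        # (x, y, z) in the world frame
```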

2.3 Phase-shifting algorithm

Phase-shifting algorithms are widely used for high-accuracy 3D shape measurement because of their speed, accuracy, and resolution. The $k$th fringe image ($k = 0, 1, \ldots, N-1$) in an $N$-step phase-shifting algorithm with equal phase shifts can be written as

$$I_k = I' + I^{\prime\prime} \cos(\phi + 2k\pi/N),$$
where $I'$ denotes the average intensity and $I''$ the intensity modulation. The phase $\phi$ can be calculated by simultaneously solving these $N$ ($N > 2$) equations in the least-squares sense,
$$\phi ={-}\tan^{{-}1}\left[\frac{\sum_{k} I_k \sin (2k\pi/N)}{ \sum_{k} I_k \cos (2k\pi/N)}\right].$$

The arctangent function yields a wrapped phase with $2\pi$ discontinuities that can be resolved by employing a phase unwrapping algorithm [28]. The phase unwrapping process determines the appropriate number $\kappa$ of $2\pi$ to be added for each point,

$$\Phi = \phi + 2\pi \times \kappa,$$
where $\Phi$ denotes the unwrapped phase without $2\pi$ discontinuities. In this research, we use a Gray-coding method to determine $\kappa$ for each pixel.
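A minimal NumPy sketch of Eqs. (5)–(6) is given below. The fringe order $\kappa$ is assumed to be available per pixel (e.g., decoded from the Gray-code patterns); the function names are ours.

```python
import numpy as np

def wrapped_phase(images):
    """Compute the wrapped phase from N equally shifted fringe images, Eq. (5).
    `images` is a list (or array) of N grayscale frames of identical size."""
    N = len(images)
    I = np.asarray(images, dtype=np.float64)
    k = np.arange(N).reshape(-1, 1, 1)
    num = np.sum(I * np.sin(2 * np.pi * k / N), axis=0)
    den = np.sum(I * np.cos(2 * np.pi * k / N), axis=0)
    return -np.arctan2(num, den)            # wrapped to (-pi, pi]

def unwrap(phi, kappa):
    """Remove the 2*pi discontinuities given the per-pixel fringe order
    kappa (here recovered from the Gray-code patterns), Eq. (6)."""
    return phi + 2 * np.pi * kappa
```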

2.4 Ideal plane 3D reconstruction

To accurately determine corresponding pairs between the two cameras, we project fringe patterns along two orthogonal (i.e., horizontal and vertical) directions. Since the phase value of each projected point is uniquely defined, the corresponding point pairs $(u^1, v^1)$ and $(u^2, v^2)$ between the two camera images are determined by minimizing the phase difference. It is then straightforward to calculate the 3D coordinates $(x, y, z)$ for each corresponding pair. This method allows us to accurately measure a flat surface without features.

As discussed in Sec. 1, accurately calibrating a camera is never trivial, and thus the reconstructed 3D shape may have errors, i.e., the measured points may not lie on a plane for planar surface measurements. Since we know that the calibration target must be planar, we fit the measured data with an ideal plane function,

$$a x + b y + c = z,$$
and we denote the plane normal as a vector $\mathbf {n} = [a, b, -1]^T$.

Assume the world coordinate system is defined on the main camera (Camera 1) lens, and the reconstructed 3D points $(x^w, y^w, z^w)$ are in the same coordinate system, i.e., $(x^w, y^w, z^w) = (x, y, z)$. From the pinhole lens model defined in Eq. (1), for a given pixel $(u_i, v_i)$, the 3D coordinates $(x_i, y_i, z_i)$ can be solved as

$$[ x_i, y_i, z_i]^T = \mathbf{A^{{-}1}}\cdot[u_i, v_i, 1]^T.$$

Here $^{-1}$ denotes the matrix inverse. Since the reconstructed $(x_i, y_i, z_i)$ may not be on the fitted plane, one additional step is required to find the corresponding 3D point on the flat plane defined by Eq. (7). This point is obtained as the intersection of the line through (0, 0, 0) and $(x_i, y_i, z_i)$ with the plane of Eq. (7). Once the ideal 3D plane is reconstructed, we then encode feature points on the plane from those feature points defined on the camera image, which will be discussed in the next subsection.

It is worth noting that the 3D points reconstructed from the stereo cameras were not used directly to determine the 3D calibration target points. Instead, they were used to estimate the plane function of the calibration target, which is assumed to be flat. Fitting a plane function does not require the reconstruction of 3D points for every pixel. To expedite the calibration process, approximately 2500 uniformly distributed sample points were reconstructed to estimate the plane function for each pose.
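The plane fitting of Eq. (7) and the subsequent ray-plane intersection can be sketched as follows. This is an illustrative implementation under the stated assumption that the world frame coincides with the Camera 1 lens; the function names are ours.

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of a*x + b*y + c = z (Eq. 7) to Nx3 sample points."""
    X = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    (a, b, c), *_ = np.linalg.lstsq(X, points[:, 2], rcond=None)
    return a, b, c

def pixel_to_plane_point(A_cam, u, v, a, b, c):
    """Intersect the Camera 1 ray through pixel (u, v) with the fitted plane.
    A_cam is the 3x3 intrinsic matrix; the ray starts at the lens origin."""
    d = np.linalg.inv(A_cam) @ np.array([u, v, 1.0])  # ray direction, Eq. (8)
    t = c / (d[2] - a * d[0] - b * d[1])              # ray-plane intersection scale
    return t * d                                      # ideal 3D point on the plane
```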

2.5 Virtual feature encoding and determination

To use the flexible camera calibration method developed by Zhang [20] and the corresponding open-source software packages, the calibration target must be flat and the $z$ axis must align with the calibration target surface normal. It is also preferable to use the same number of feature points for all calibration poses. To meet these requirements, we transform the coordinate system such that the $\hat {x}-\hat {y}$ plane of the world coordinate system lies on the calibration target plane for each pose. It can be proven that the transformation between the transformed world coordinates $(\hat {x}, \hat {y}, \hat {z})$ and the original coordinates $(x, y, z)$ on the ideal plane is

$$\begin{bmatrix} \hat{x}\\ \hat{y}\\ \hat{z} \end{bmatrix} = \begin{bmatrix} n_{11} & n_{12} & n_{13}\\ n_{21} & n_{22} & n_{23}\\ n_{31} & n_{32} & n_{33} \end{bmatrix} \begin{bmatrix} x-x_0\\ y-y_0\\ z-z_0 \end{bmatrix} ,$$
where $(x_0, y_0, z_0)$ is the origin of the new coordinate system on the plane, the $\hat {z}$-axis is defined as the normalized normal of the fitted plane Eq. (7)
$$\mathbf{n_{\hat{z}}} = \begin{bmatrix} n_{31} & n_{32} & n_{33} \end{bmatrix}^T = \frac{\mathbf{n}}{||\mathbf{n}||},$$
and the $\hat {x}$-axis is defined as
$$\mathbf{n_{\hat{x}}} = \begin{bmatrix} n_{11} & n_{12} & n_{13} \end{bmatrix}^T = \frac{\mathbf{N_{\hat{x}}}}{||\mathbf{N_{\hat{x}}}||},$$
where $||\cdot ||$ denotes the length of a vector, and
$$\mathbf{N_{\hat{x}}} = \begin{bmatrix} x_1-x_0 & y_1-y_0 & z_1-z_0 \end{bmatrix}^T,$$
where $(x_1, y_1, z_1)$ is a point on the plane along the $\hat {x}$-axis, from which we can determine the $\hat {y}$-axis as
$$\mathbf{n_{\hat{y}}} = \begin{bmatrix} n_{21} & n_{22} & n_{23} \end{bmatrix}^T = \mathbf{n_{\hat{z}}}\times \mathbf{n_{\hat{x}}},$$
where $\times$ denotes vector cross product.

For each feature point defined on the camera image, the corresponding 3D coordinates $(x, y, z)$ on the measured plane (i.e., the calibration target) can be transformed to the object feature coordinates $(\hat {x}, \hat {y}, \hat {z})$. Since the $\hat {x}-\hat {y}$ plane aligns with the measurement plane, $\hat {z}\equiv 0$. Thus, the desired object feature point coordinates are $(\hat {x}, \hat {y}, 0)$. Meanwhile, for each feature point, we find the corresponding projector point $(u^p, v^p)$ from the captured horizontal and vertical phase maps.

For each plane measurement by the stereo cameras, we reconstruct an ideal 3D plane and compute both horizontal and vertical phase maps. We define a set of points on the camera image as feature points, determine the corresponding transformed object feature points using the transformed coordinates, and determine the corresponding projector feature points from the phase maps. Once these feature points are determined, the structured light system calibration follows exactly the same procedure as a standard structured light system calibration. However, the standard calibration process requires a flat calibration target with well-defined physical features (e.g., circle patterns), whereas our method only requires a flat plane without features.
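The coordinate transformation of Eqs. (9)–(13) amounts to building a rotation matrix whose rows are the new axes and subtracting the chosen origin. Below is a minimal sketch, assuming (as defined above) that both the origin and the $\hat{x}$-axis point lie on the fitted plane; the function names are ours.

```python
import numpy as np

def plane_frame(origin, x_point, n):
    """Build the rotation of Eq. (9) from the plane normal n = [a, b, -1]
    and two points on the fitted plane: the chosen origin and a point
    along the desired x-hat direction (both in-plane, so nx is
    orthogonal to nz)."""
    nz = n / np.linalg.norm(n)             # Eq. (10)
    nx = x_point - origin                  # Eq. (12)
    nx = nx / np.linalg.norm(nx)           # Eq. (11)
    ny = np.cross(nz, nx)                  # Eq. (13)
    return np.vstack((nx, ny, nz))         # rows are the new axes

def to_plane_coords(R, origin, p):
    """Transform a 3D point on the plane to (x-hat, y-hat, z-hat);
    z-hat is ~0 for points lying on the fitted plane."""
    return R @ (p - origin)
```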

2.6 Overall computational framework

Figure 1 summarizes the proposed calibration framework. A secondary camera (Camera 2) is first added to the structured light system to form a stereo vision system with the structured light system camera (Camera 1). The stereo cameras are then calibrated using feature points shown on a digital display such as an LCD screen. It is preferable to define the Camera 1 lens coordinate system as the world coordinate system.


Fig. 1. Proposed computational framework for single-camera and single-projector structured light system calibration. To calibrate the stereo cameras, we first used an iPad LCD screen displaying a calibration circle pattern and followed the standard stereovision calibration process. For each pose, the projector projected horizontal and vertical fringe patterns onto the backside of a mirror surface. Phase constraints were then used to establish corresponding pairs between the stereo cameras, which were subsequently used to reconstruct 3D points. These points were sampled for plane fitting, and for each feature point selected on the camera, we calculated the corresponding projector feature point using the phase constraints. The light ray intersection point on the fitted plane was then regarded as the object feature point. We used these feature points to perform standard camera-projector calibration using a stereo calibration software package such as OpenCV. After system calibration, the secondary camera was detached.


The second step is to use the structured light system projector to project both horizontal and vertical fringe patterns onto the backside of a mirror surface. For each pose, both cameras capture all projected fringe patterns, from which two phase maps, $(\Phi ^1_h, \Phi ^1_v)$ and $(\Phi ^2_h, \Phi ^2_v)$, are generated for the two cameras. These phase maps are first used to find corresponding pairs between the two cameras for 3D reconstruction. The reconstructed 3D points are fitted with an ideal plane function, from which the ideal 3D point for each pixel on Camera 1 is calculated using the method described in Subsection 2.4.

Next, a number of feature points are defined on the Camera 1 image for each pose. These feature points are called “camera points”. For each camera point $(u^c, v^c)$, the absolute phase maps, $\Phi _h^1$ and $\Phi _v^1$, give the corresponding projector point $(u^p, v^p)$ as

$$\begin{bmatrix} u^p\\ v^p \end{bmatrix} = \begin{bmatrix} \frac{\Phi^1_v(u, v)\, T_v}{2\pi}\\ \frac{\Phi^1_h(u, v)\, T_h}{2\pi} \end{bmatrix},$$
where $T_h$ and $T_v$ denote the fringe periods (in pixels) of the horizontal and vertical patterns, respectively. These feature points are called “projector points”.

For each camera feature point, the $(x, y, z)$ coordinates on the ideal calibration plane are then computed. These coordinates are transformed into the calibration target plane coordinate system to obtain $(\hat {x}, \hat {y})$, the corresponding point in the object space, also called the “object point”. Once the corresponding camera, projector, and object feature points are known for a number of poses, standard stereo calibration procedures can be used to estimate both the intrinsic and extrinsic parameters and the distortion coefficients of the structured light system (i.e., Camera 1 and the projector). Once the structured light system is calibrated, Camera 2 is detached from the system.
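With the object, camera, and projector feature points assembled, the final step can reuse a standard stereo calibration routine. The sketch below shows one possible way to do this with OpenCV; the specific flags and initialization strategy are our assumptions, since the text only specifies "standard stereo calibration procedures".

```python
import cv2

def calibrate_camera_projector(obj_pts, cam_pts, proj_pts,
                               A_cam, dist_cam, cam_size, proj_size):
    """Standard camera-projector (stereo) calibration from the virtual
    feature points. obj_pts: per-pose Nx3 float32 arrays (x_hat, y_hat, 0);
    cam_pts / proj_pts: matching Nx2 float32 camera / projector points;
    A_cam, dist_cam: Camera 1 intrinsics from the stereo-camera stage."""
    # Initial projector intrinsics: treat the projector as a camera that
    # "captured" the object points at the projector feature locations.
    _, A_proj, dist_proj, _, _ = cv2.calibrateCamera(
        obj_pts, proj_pts, proj_size, None, None)
    # Joint refinement and projector-to-camera extrinsics (flag choice
    # is an assumption on our part).
    ret, A_cam, dist_cam, A_proj, dist_proj, R, T, E, F = cv2.stereoCalibrate(
        obj_pts, cam_pts, proj_pts, A_cam, dist_cam, A_proj, dist_proj,
        cam_size, flags=cv2.CALIB_USE_INTRINSIC_GUESS)
    return A_cam, dist_cam, A_proj, dist_proj, R, T
```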

3. Experimental results

In this research, we assembled a structured light system to experimentally verify the performance of our proposed method. The hardware system includes a digital-light-processing (DLP) projector (model: Lightcrafter 4500), a complementary metal oxide semiconductor (CMOS) camera, Camera 1 (model: FLIR Blackfly BFS-U3-50S5M), and an Arduino Uno board. We then added a secondary CMOS camera, Camera 2 (model: FLIR Blackfly BFS-U3-23S3M), to facilitate the calibration process. Each camera was fitted with an 8 mm lens (model: Computar M0814-MP2). The projector resolution is $912 \times 1140$, the resolution of both cameras was set to $900 \times 1200$, and the Arduino generates the same 15 Hz signal to trigger the projector and both cameras to ensure synchronization.

To calibrate the stereo cameras, an 11-inch iPad Pro (3rd generation) was used to display $12 \times 16$ circle dots. The circle center distance was set to 90 display pixels. Since the iPad resolution is 264 dots per inch (dpi), the distance between circle centers is 8.6591 mm. Sixty poses were used to calibrate the stereo cameras. The OpenCV 4.20 stereo vision calibration package was used to estimate all camera and projector parameters.
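For reference, the circle-grid detection and the construction of the corresponding planar object points could look as follows. This is a hedged sketch: the use of OpenCV's symmetric circle-grid detector and the (columns, rows) ordering of the pattern size are our assumptions.

```python
import cv2
import numpy as np

PATTERN = (16, 12)            # circles per row x per column of the 12 x 16 grid
SPACING = 90 / 264 * 25.4     # 90 display pixels at 264 dpi ~= 8.6591 mm

# Planar object points (z = 0) for one pose of the displayed circle grid.
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SPACING

def detect_circles(gray):
    """Detect the displayed circle-grid centers in one camera image
    (default symmetric-grid detection is assumed)."""
    found, centers = cv2.findCirclesGrid(gray, PATTERN)
    return centers if found else None
```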

For the structured light system, an 18-step phase-shifting algorithm was used to create high-quality horizontal and vertical phase maps, and Gray coding was used to unwrap the phase. For each pose, approximately 2500 matching pairs were detected from the phase maps for 3D reconstruction using the estimated camera parameters. Figure 2(a) shows that the reconstructed points are apparently not perfectly planar. These data points were fitted with an ideal plane function. For this example, the plane function is

$$0.5018x -0.0014 y + 270.6360 = z.$$

Figure 2(b) shows the plane fitting error, demonstrating that the 3D reconstructed points do not all lie on the ideal plane. From the plane function, the 3D coordinates for each Camera 1 pixel are reconstructed. Figure 2(c) shows those sampled pixels.


Fig. 2. Ideal 3D plane reconstruction example. (a) 3D reconstructed points using stereo calibration data; (b) plane fitting error; (c) reconstructed ideal 3D plane.


On Camera 1, we detected a region containing fringes and sampled it with $12 \times 16$ uniformly distributed feature points, as shown in Fig. 3(a). These feature points were then mapped to the projector to create the projector feature points shown in Fig. 3(b). The 3D coordinates of each pixel were then transformed to plane coordinates to create the object feature points. Figure 3(c) shows the corresponding object feature points.


Fig. 3. Representative feature points in different spaces. (a) Sampled feature points on the camera image; (b) mapped feature points on the projector image; (c) mapped feature points on the object plane.


We captured 30 different poses for our system calibration. It is important to note that since the camera has already been calibrated during the stereo calibration stage, the camera does not have to be calibrated again. In this research, we consider all distortions for both the camera and the projector.

We first evaluated the performance of our calibration method by measuring the flat surface at 19 different poses. For each measurement, an ideal plane was fitted and the point-wise measurement error was calculated as the shortest distance from the measured point to the fitted plane; the sign is negative for points below the ideal plane and positive for points above it.
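The signed point-to-plane error can be computed directly from the fitted coefficients of Eq. (7). Below is a small sketch using the sign convention stated above; it assumes the plane normal is oriented so that "above" corresponds to larger $z$ than the plane.

```python
import numpy as np

def signed_plane_error(points, a, b, c):
    """Signed shortest distance from each measured Nx3 point to the fitted
    plane a*x + b*y + c = z; points above the plane are positive."""
    num = points[:, 2] - (a * points[:, 0] + b * points[:, 1] + c)
    return num / np.linalg.norm([a, b, -1.0])

# rms flatness error for one pose:
#   err = signed_plane_error(points, a, b, c)
#   sigma = np.sqrt(np.mean(err**2))
```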

Figure 4(a) shows one example from the standard structured light system 3D reconstruction method that uses fringe patterns in only one direction. Figure 4(b) shows the corresponding pixel-by-pixel error map. Figure 4(c) shows that the error distribution is close to a normal distribution. The root-mean-square (rms) error ($\sigma$) for this pose is $\sigma = 0.039$ mm, which is quite small considering that the overall measurement area is approximately $165 (x) \times 127 (y)$ mm$^2$. Note that the measured data here and in the rest of this paper were filtered with a $5\times 5$ Gaussian filter to reduce the most significant random noise.


Fig. 4. Example flat surface measurement results before considering projector lens distortions. (a) 3D reconstructed shape; (b) flatness measurement error map; (c) error histogram.


Table 1 summarizes the data for 19 plane measurements with different orientations and positions. Both the camera and the projector lens distortion models were used. The measurement volume is approximately $193 (x) \times 143 (y) \times 148 (z)$ mm$^3$. The data show that all rms errors are below 0.050 mm, with a mean of 0.034 mm and a standard deviation of 0.007 mm. These experimental results further demonstrate that the proposed method achieves high measurement accuracy for a standard structured light system using pinhole models.


Table 1. Testing planes and the corresponding measurement rmse $\sigma$ (mm) after considering projector lens distortions.

We also measured a sphere with a radius of approximately 76 mm to further evaluate the performance of our proposed method. Figure 5 shows the measurement data from the pinhole model considering both projector and camera lens distortions. We then fitted the data with an ideal sphere. For each point, the measurement error was calculated as the difference between the distance from the measured point to the fitted sphere center and the fitted radius. Figure 5(a) shows the overlap between the fitted sphere and the raw measured data, and Figs. 5(b) and 5(c) respectively show the error map and its histogram. The rms error is $\sigma =$ 0.038 mm, which is small considering that the sphere radius is approximately 76 mm.
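The sphere evaluation can be reproduced with an algebraic least-squares sphere fit followed by a radial error computation. The following is a minimal sketch of one such formulation; it is our own illustration, not necessarily the fitting routine used by the author.

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit to Nx3 points: solves
    x^2+y^2+z^2 = 2a*x + 2b*y + 2c*z + d for center (a,b,c) and radius."""
    A = np.c_[2 * points, np.ones(len(points))]
    f = np.sum(points**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, f, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

def sphere_error(points, center, radius):
    """Per-point error: distance to the fitted center minus the fitted radius."""
    return np.linalg.norm(points - center, axis=1) - radius
```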


Fig. 5. Sphere measurement results. (a) 3D reconstructed sphere; (b) fitted sphere overlay with measured sphere; (c) pixel-wise error map ($\sigma$ = 0.038 mm).


Finally, we measured a more complex surface to visually demonstrate the success of our proposed method. Figure 6(a) shows a photograph of the object, Fig. 6(b) shows one of the fringe images, and Fig. 6(c) shows the 3D reconstructed shape. This experiment demonstrates that the proposed method can reconstruct complex 3D shapes with fine details, as expected.


Fig. 6. Measurement result of a complex statue. (a) Photograph; (b) one of the phase-shifted fringe patterns; (c) 3D reconstruction.


4. Discussion

For the proposed calibration method to be successful, three assumptions must be met: 1) the LCD screen must be flat; 2) the pixel size of the LCD screen must be uniform; and 3) the mirror surface must be flat. In our research, we found that the iPad satisfied the first two assumptions. However, when selecting other LCD screens, caution should be exercised, as some are designed to be curved. Additionally, since not all mirror surfaces are flat, one should carefully choose a suitable mirror. We recommend the use of a high-quality thick glass mirror that is resistant to deformation caused by temperature or orientation changes, especially for calibrating systems with a large field of view.

5. Summary

This paper has presented a novel method for calibrating a single-camera, single-projector structured light system without using any physically made features. Compared to existing methods, our proposed method offers two significant advantages: 1) it does not require new optimization algorithms for estimating lens intrinsic parameters, extrinsic parameters, or distortions, making it easy to implement; and 2) it does not require a highly precise calibration target with physical features, making it more flexible. We have demonstrated the feasibility of our proposed method with a standard stereo reconstruction model. It should be straightforward to adapt our proposed method to any of the pixel-wise models for potentially more accurate structured light system calibration.

Funding

National Institute of Justice (2019-R2-CX-0069).

Acknowledgments

The author would like to thank his graduate student Wang Xiang for helping to derive the coordinate transformation equations. This work was sponsored by National Institute of Justice (NIJ) under grant No. 2019-R2-CX-0069. Views expressed here are those of the author and not necessarily those of the NIJ.

Disclosures

SZ: Ori Inc (C), Orbbec 3D (C), Vision Express Optics Inc (I).

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the author upon reasonable request.

References

1. A. G. Marrugo, F. Gao, and S. Zhang, “State-of-the-art active optical techniques for three-dimensional surface metrology: a review,” J. Opt. Soc. Am. A 37(9), B60–B77 (2020). [CrossRef]  

2. S. Zhang, “High-speed 3D shape measurement with structured light methods: a review,” Opt. Laser Eng. 106, 119–131 (2018). [CrossRef]

3. S. Zhang, “Pixel-wise structured light calibration method with a color calibration target,” Opt. Express 30(20), 35817–35827 (2022). [CrossRef]  

4. M. Vo, Z. Wang, T. Hoang, and D. Nguyen, “Flexible calibration technique for fringe-projection-based three-dimensional imaging,” Opt. Lett. 35(19), 3192–3194 (2010). [CrossRef]  

5. W.-S. Zhou and X.-Y. Su, “A direct mapping algorithm for phase-measuring profilometry,” J. Mod. Opt. 41, 89–94 (1994). [CrossRef]  

6. Y. Wen, S. Li, H. Cheng, X. Su, and Q. Zhang, “Universal calculation formula and calibration method in Fourier transform profilometry,” Appl. Opt. 49(34), 6563–6569 (2010). [CrossRef]

7. Y. Xiao, Y. Cao, and Y. Wu, “Improved algorithm for phase-to-height mapping in phase measuring profilometry,” Appl. Opt. 51(8), 1149–1155 (2012). [CrossRef]  

8. H. Guo, H. He, Y. Yu, and M. Chen, “Least-squares calibration method for fringe projection profilometry,” Opt. Eng. 44, 033603 (2005). [CrossRef]  

9. W. Zhao, X. Su, and W. Chen, “Whole-field high precision point to point calibration method,” Opt. Laser Eng. 111, 71–79 (2018). [CrossRef]  

10. X. Su, W. Song, Y. Cao, and L. Xiang, “Phase-height mapping and coordinate calibration simultaneously in phase-measuring profilometry,” Opt. Eng. 43(3), 708–712 (2004). [CrossRef]  

11. S. Cui and X. Zhu, “A generalized reference-plane-based calibration method in optical triangular profilometry,” Opt. Express 17(23), 20735–20746 (2009). [CrossRef]  

12. A. Asundi and Z. Wensen, “Unified calibration technique and its applications in optical triangular profilometry,” Appl. Opt. 38(16), 3556–3561 (1999). [CrossRef]  

13. J. Huang and Q. Wu, “A new reconstruction method based on fringe projection of three-dimensional measuring system,” Opt. Laser Eng. 52, 115–122 (2014). [CrossRef]  

14. Y. Li, X. Su, and Q. Wu, “Accurate phase-height mapping algorithm for PMP,” J. Mod. Opt. 53(14), 1955–1964 (2006). [CrossRef]

15. M. Fujigaki, T. Sakaguchi, and Y. Murata, “Development of a compact 3D shape measurement unit using the light-source-stepping method,” Opt. Laser Eng. 85, 9–17 (2016). [CrossRef]

16. W. Zhao, X. Su, and W. Chen, “Discussion on accurate phase–height mapping in fringe projection profilometry,” Opt. Eng. 56(10), 1–11 (2017). [CrossRef]  

17. H. Luo, J. Xu, N. H. Binh, S. Liu, C. Zhang, and K. Chen, “A simple calibration procedure for structured light system,” Opt. Laser Eng. 57, 6–12 (2014). [CrossRef]  

18. J. Xu, J. Douet, J. Zhao, L. Song, and K. Chen, “A simple calibration method for structured light-based 3D profile measurement,” Opt. Laser Technol. 48, 187–193 (2013). [CrossRef]

19. J. Xu and S. Zhang, “Status, challenges, and future perspectives of fringe projection profilometry,” Opt. Laser Eng. 135, 106193 (2020). [CrossRef]  

20. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Machine Intell. 22(11), 1330–1334 (2000). [CrossRef]  

21. S. Zhang and P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. 45(8), 083601 (2006). [CrossRef]  

22. R. Vargas, A. G. Marrugo, S. Zhang, and L. A. Romero, “Hybrid calibration procedure for fringe projection profilometry based on stereo vision and polynomial fitting,” Appl. Opt. 59(13), D163–D167 (2020). [CrossRef]  

23. X. Liu, Z. Cai, Y. Yin, H. Jiang, D. He, W. He, Z. Zhang, and X. Peng, “Calibration of fringe projection profilometry using an inaccurate 2D reference target,” Opt. Laser Eng. 89, 131–137 (2017). [CrossRef]

24. S. Yang, M. Liu, J. Song, S. Yin, Y. Ren, J. Zhu, and S. Chen, “Projector distortion residual compensation in fringe projection system,” Opt. Laser Eng. 114, 104–110 (2019). [CrossRef]  

25. S. Lv, Q. Sun, Y. Zhang, Y. Jiang, J. Yang, J. Liu, and J. Wang, “Projector distortion correction in 3D shape measurement using a structured-light system by deep neural networks,” Opt. Lett. 45(1), 204–207 (2020). [CrossRef]  

26. A. G. Marrugo, R. Vargas, L. A. Romero, and S. Zhang, “Method for large-scale structured-light system calibration,” Opt. Express 29(11), 17316–17329 (2021). [CrossRef]  

27. S. Zhang, “Flexible and high-accuracy method for uni-directional structured light system calibration,” Opt. Laser Eng. 143, 106637 (2021). [CrossRef]  

28. S. Zhang, “Absolute phase retrieval methods for digital fringe projection profilometry: a review,” Opt. Laser Eng. 107, 28–37 (2018). [CrossRef]  
