
Pixel-wise structured light calibration method with a color calibration target

Open Access

Abstract

We propose to use a calibration target with a narrow spectral color range for the background (e.g., blue) and a broader spectral color range for the feature points (e.g., blue + red circles), along with fringe patterns matching the background color for accurate phase extraction. Since the captured fringe patterns are not affected by the high contrast of the calibration target, phase information can be extracted accurately without edge artifacts. The feature points can be clearly “seen” by the camera when the illumination matches the color unique to the features rather than the background color. We reconstruct each calibration pose to determine three-dimensional coordinates for each pixel, and then establish the pixel-wise relationship between each coordinate and the phase. Compared with our previously published method, this method fundamentally simplifies and improves the algorithm by eliminating the computational framework needed to estimate a smooth phase near high-contrast feature edges. Experimental results demonstrate the success of our proposed calibration method.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

High-accuracy three-dimensional (3D) optical measurements play an increasingly important role in industries and academia, ranging from quality control to robotics [1]. Structured light techniques are among the most popular 3D optical metrology methods for surface metrology, primarily because of their low cost and easy implementation [2]. Over the years, tremendous progress has been made thanks to efforts contributed by a large number of researchers in both academia and industry. Yet, structured light system calibration remains one of the most important and challenging issues.

Accurate structured light calibration methods have evolved from directly establishing the relationship between 3D coordinates and the phase using a reference plane, to estimating lens model functions, to obtaining camera pixel light ray functions. The reference-plane-based method is difficult for pinhole lens calibration because it often requires the use of an accurate translation stage, albeit it could work well for telecentric lens calibration [3–18].

To simplify system calibration, linear matrix operations are used to describe the imaging process, from 3D to 2D projection, and lens distortions are further modeled as smooth functions with radial and tangential components. Zhang [19] developed a flexible camera calibration method that estimates those model parameters by freely moving a flat calibration target. This development drastically simplified camera calibration, yet did not address problems associated with calibrating the projector that is part of the structured light system. Zhang and Huang [20] developed a structured light system calibration that allows the projector to “capture” images like a camera, simplifying projector calibration. Various methods have been developed to further improve this calibration method [21–24]. One major drawback of this calibration approach is that the mathematical model only considers the smooth distortions of a typical symmetrical pinhole lens. For a non-symmetrical lens, a non-pinhole lens, or a lens with artifacts, this calibration approach may not work well, at least in certain areas.

If each camera pixel is calibrated precisely, without mathematical constraints imposed by neighboring pixels, it is possible to achieve the best possible accuracy. Yet, calibrating pixel ray functions remains difficult. Instead of directly calibrating each pixel ray, Marrugo et al. [25] took advantage of the mathematical model and the well-established calibration approach to build accurate pixel ray functions. This method follows these procedures: 1) calibrate the system using the standard calibration approach; 2) place a flat surface at different locations; 3) reconstruct 3D coordinates for each camera pixel; 4) determine the measurement error for each pixel; 5) refine the 3D coordinates for each pixel after considering measurement errors; and 6) establish camera pixel ray functions with respect to the phase. This method has demonstrated significant improvements over conventional calibration methods, especially for large-scale measurements. However, the refined measurements may deviate from the true values if the initial measurements have large errors.

Instead of relying on measuring another flat surface, Zhang [26] developed a method that directly estimates the 3D shape of the calibration target and further establishes the pixel ray functions with respect to the phase, since phase information is also available. Because calibration feature points are used in the process, the calibration target 3D reconstruction is accurate. However, the phase values are not accurate near the edges of the feature points due to high contrast. A poly-surface fitting method was able to alleviate the problem caused by these phase artifacts, but since the phase value used for calibration is no longer the true value, this method has inherent limitations.

To address the problem associated with our previous method [26], we propose to use a color calibration target. The calibration target has a narrow spectral color background (e.g., blue) and broader spectral color feature points (e.g., blue + red circles). The color of the projected fringe patterns matches the background color for phase extraction. Since the captured fringe patterns do not “see” the high-contrast feature points, the extracted phase information does not have edge artifacts. Because the proposed approach follows the same principle as our previously developed method but does not require poly-surface fitting to remove phase artifacts, the calibration process is simpler and more accurate. Our experimental results demonstrate that our proposed method achieves high measurement accuracy for a structured light system.

Section 2 explains the principles behind the proposed work. Section 3 presents experimental results to evaluate the performance of our proposed method, and Section 4 summarizes this paper.

2. Principle

This section explains the principles behind the proposed method and the rationale of using a color calibration target.

2.1 Phase-shifting algorithm

Phase-shifting methods have been extensively adopted in optical metrology due to their speed, accuracy, and resolution. For an $N$-step phase-shifting algorithm with equal phase shifts, the $k$th fringe pattern can be described as

$$I_k = I' + I'' \cos(\phi + 2k\pi/N),$$
where $I'$ is the average intensity, $I''$ is the intensity modulation, and $\phi$ is the phase to be solved for. Simultaneously solving these equations leads to
$$\phi = -\tan^{-1}\left[\frac{\sum_{k=1}^{N} I_k \sin (2k\pi/N)}{\sum_{k=1}^{N} I_k \cos (2k\pi/N)}\right].$$

The arctangent function gives a phase value with a modulus of $2\pi$, which has to be resolved before 3D reconstruction [27]. The phase unwrapping step essentially adds an integer number $\kappa$ of $2\pi$'s to each point accordingly,

$$\Phi = \phi + 2\pi \times \kappa,$$
where $\Phi$ is the unwrapped phase without $2\pi$ discontinuities, and $\kappa$ is often regarded as the fringe order.
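As an illustration, the following minimal Python sketch computes the wrapped phase of Eq. (2) and applies the unwrapping of Eq. (3). The function names are our own, and the fringe-order map $\kappa$ is assumed to come from a separate coding method (e.g., the gray coding used in Sec. 3).

```python
import numpy as np

def wrapped_phase(images):
    """Compute the wrapped phase from N equally shifted fringe images.

    images: array of N fringe images I_k, k = 1..N, following
    I_k = I' + I'' cos(phi + 2k*pi/N) (Eq. 1).
    """
    I = np.asarray(images, dtype=np.float64)
    N = I.shape[0]
    k = np.arange(1, N + 1).reshape(-1, 1, 1)
    num = np.sum(I * np.sin(2 * k * np.pi / N), axis=0)
    den = np.sum(I * np.cos(2 * k * np.pi / N), axis=0)
    return -np.arctan2(num, den)   # wrapped phase in (-pi, pi], Eq. (2)

def unwrap_with_fringe_order(phi, kappa):
    """Temporal unwrapping, Eq. (3): Phi = phi + 2*pi*kappa, with kappa
    the per-pixel fringe order decoded separately (e.g., gray coding)."""
    return phi + 2 * np.pi * kappa
```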

2.2 Pinhole lens model

A standard linear pinhole lens model can be mathematically described as

$$s[u, v, 1]^{T} = \mathbf{A} \cdot \left[ \mathbf{R}, \mathbf{t}\right] \cdot [ x^{w}, y^{w}, z^{w},1]^{T},$$
where $s$ denotes the scaling factor, $(u, v)$ the camera pixel coordinates, $\mathbf{A}$ the $3\times 3$ intrinsic matrix, $\mathbf{R}$ the $3\times 3$ rotation matrix, $\mathbf{t}$ the $3\times 1$ translation vector, $(x^{w}, y^{w}, z^{w})$ the world coordinates, and $T$ the matrix transpose.

The nonlinear lens distortion can be mathematically modeled as

$$\begin{bmatrix} \hat{u}\\\hat{v} \end{bmatrix} = (1+k_1r^{2}+k_2r^{4}+k_3r^{6})\begin{bmatrix} \bar{u} \\ \bar{v} \end{bmatrix} +\begin{bmatrix} 2p_1\bar{u}\bar{v} + p_2(r^{2}+2\bar{u}^{2}) \\ 2p_2\bar{u}\bar{v} + p_1(r^{2}+2\bar{v}^{2}) \\ \end{bmatrix} \enspace,$$
with
$$r^{2} = \bar{u}^{2} + \bar{v}^{2} \enspace,$$
where $[k_1, k_2, k_3]$ denotes the radial distortion coefficients, $[p_1, p_2]$ the tangential distortion coefficients, $[\hat {u}, \hat {v}]^{T}$ the distorted image points, and $[\bar {u}, \bar {v}]^{T}$ the normalized image coordinates. The overall distortion parameters are $\mathbf {D} = [k_1, k_2, k_3, p_1, p_2]$.

Such a lens distortion model works reasonably well for a typical imaging system where the optical axis is near the center of the lens, but has problems for an off-axis imaging system such as a typical projector [23,24]. In this research, we employed a linear model to describe our projector and a nonlinear model with $\mathbf{D} = [k_1, k_2, 0, 0, 0]$ to describe our camera.
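For clarity, a short sketch of the forward model of Eqs. (4)–(6) is given below. It is a generic implementation of the stated equations rather than code from the paper; the default distortion vector is zero, and the paper's own camera uses $\mathbf{D} = [k_1, k_2, 0, 0, 0]$.

```python
import numpy as np

def project_point(Xw, A, R, t, D=(0, 0, 0, 0, 0)):
    """Project a world point through the pinhole model of Eq. (4),
    applying the radial/tangential distortion of Eqs. (5)-(6).
    D = [k1, k2, k3, p1, p2]."""
    k1, k2, k3, p1, p2 = D
    x, y, z = R @ np.asarray(Xw, float) + np.asarray(t, float)
    ub, vb = x / z, y / z                     # normalized image coordinates
    r2 = ub**2 + vb**2
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    uh = radial * ub + 2 * p1 * ub * vb + p2 * (r2 + 2 * ub**2)
    vh = radial * vb + 2 * p2 * ub * vb + p1 * (r2 + 2 * vb**2)
    u, v, _ = A @ np.array([uh, vh, 1.0])     # distorted pixel coordinates
    return u, v
```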

2.3 Calibration target 3D reconstruction

Matrix $\mathbf{A}$ in Eq. (4) is also called the camera intrinsic matrix; it describes the focal length and the principal point. These intrinsic parameters can be estimated through a standard camera calibration procedure such as the flat-plane-based method [19].

Given the known intrinsic parameter matrix $\mathbf{A}$, the rotation matrix $\mathbf{R}$ and the translation vector $\mathbf{t}$ of a calibration target pose can be estimated using the feature points on the target. The standard flat-plane-based camera calibration method assumes that the world coordinate system is defined on the calibration target with $z^{w} = 0$, so $x^{w}$ and $y^{w}$ for each pixel $(u, v)$ can be solved for using Eq. (4). Once the world coordinates for each pixel are known, they can be transformed to the camera lens coordinate system as

$$[x, y, z]^{T} = \mathbf{R}[x^{w}, y^{w},0]^{T} + \mathbf{t}.$$

The camera lens coordinate system is attached to the lens, with the $z$-axis aligned with the optical axis and the $x$-$y$ plane on the lens plane.
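Because $z^{w} = 0$, Eq. (4) reduces to a plane-to-image homography $\mathbf{H} = \mathbf{A}[\mathbf{r}_1, \mathbf{r}_2, \mathbf{t}]$, which can be inverted pixel by pixel. The following sketch (our own, assuming the pixel coordinates have already been undistorted) illustrates this step together with the transform of Eq. (7).

```python
import numpy as np

def pixel_to_lens_coords(u, v, A, R, t):
    """Back-project an (undistorted) pixel onto the target plane z_w = 0
    and express the result in the camera lens coordinate system, Eq. (7)."""
    H = A @ np.column_stack((R[:, 0], R[:, 1], t))  # plane-to-image homography
    w = np.linalg.solve(H, np.array([u, v, 1.0]))   # ~ [x_w, y_w, 1] up to scale
    xw, yw = w[0] / w[2], w[1] / w[2]
    return R @ np.array([xw, yw, 0.0]) + t          # lens-frame (x, y, z)
```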

Figure 1 shows an example of reconstructing the 3D shape of a calibration pose. In this example, the calibrated camera intrinsic parameters are

$$\mathbf{A}= \begin{bmatrix} 2428.153330 & 0 & 650.575354 \\ 0 & 2429.402994 & 491.263602 \\ 0 & 0 & 1 \\ \end{bmatrix}$$
and $\mathbf{D} = [-0.105535, 0.196205, 0, 0, 0]$. For the pose image shown in Fig. 1(a), once the circle dots are detected, the extrinsic parameters can be obtained as
$$\mathbf{R}= \begin{bmatrix} 0.999915 & -0.011569 & -0.006030 \\ -0.011614 & -0.999905 & -0.007430 \\ -0.005943 & 0.007500 & -0.999954 \\ \end{bmatrix}$$
$$\mathbf{t} = [-74.969971, 50.660998, 318.685583]^{T}.$$
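As a sketch of how such extrinsics can be obtained in practice, the snippet below uses OpenCV's circle-grid detector and `solvePnP`. The $16 \times 12$ grid size follows the target described in Sec. 3, but the circle pitch value is a placeholder assumption, as is the function name.

```python
import cv2
import numpy as np

def estimate_pose(gray, A, D, grid=(16, 12), pitch=10.0):
    """Estimate [R, t] of a calibration pose from its circle-grid image.
    `pitch` (circle spacing, mm) is an assumed placeholder value."""
    found, centers = cv2.findCirclesGrid(gray, grid,
                                         flags=cv2.CALIB_CB_SYMMETRIC_GRID)
    assert found, "circle grid not detected"
    # World coordinates of grid centers on the target plane (z_w = 0)
    objp = np.zeros((grid[0] * grid[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:grid[0], 0:grid[1]].T.reshape(-1, 2) * pitch
    _, rvec, tvec = cv2.solvePnP(objp, centers, A, D)
    Rmat, _ = cv2.Rodrigues(rvec)       # rotation vector -> rotation matrix
    return Rmat, tvec
```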

Fig. 1. 3D reconstruction for a typical calibration target pose. (a) Circle pattern image with feature points being detected; (b) reconstructed 3D shape.

Figure 1(b) shows the reconstructed 3D shape of the pose when all these calibration parameters are known.

2.4 Pixel-wise phase-to-coordinate conversion

As discussed in Subsec. 2.1, the pixel-wise phase can be calculated once a set of fringe patterns is projected. As such, for a calibration target pose, we can obtain the pixel-wise phase $\Phi$ and the $(x, y, z)$ coordinates in the lens coordinate system, and then build the pixel-wise relationship between the 3D coordinates and the phase $\Phi$ as polynomial functions

$$ x(u, v) = \sum_{n = 0}^{m} a_n(u, v) [\Phi(u, v)]^{n}, $$
$$y(u, v) = \sum_{n = 0}^{m} b_n(u, v) [\Phi(u, v)]^{n}, $$
$$ z(u, v) = \sum_{n = 0}^{m} c_n(u, v) [\Phi(u, v)]^{n}, $$
where $a_n(u, v)$, $b_n(u, v)$, and $c_n(u, v)$ are constants for each pixel; these constants can be calibrated by capturing the calibration target at different locations and poses. Our research found that third-order polynomials are often sufficient to describe these pixel-wise relationships for a typical structured light system.
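A straightforward (if unoptimized) per-pixel least-squares fit of Eqs. (11)–(13) might look like the following sketch; the function names are ours, and in practice the pixel loop can be vectorized.

```python
import numpy as np

def fit_pixelwise_polynomials(Phi_stack, X_stack, order=3):
    """Fit x(u,v) = sum_n a_n(u,v) [Phi(u,v)]^n per pixel (Eqs. 11-13).

    Phi_stack: (P, H, W) unwrapped phases over P target poses.
    X_stack:   (P, H, W) one coordinate (x, y, or z) at the same pixels.
    Returns coefficients of shape (order+1, H, W), ordered a_0 ... a_m."""
    P, H, W = Phi_stack.shape
    coeffs = np.zeros((order + 1, H, W))
    for i in range(H):
        for j in range(W):
            # Least-squares fit of a third-order polynomial in Phi
            c = np.polyfit(Phi_stack[:, i, j], X_stack[:, i, j], order)
            coeffs[:, i, j] = c[::-1]   # polyfit returns highest power first
    return coeffs

def apply_polynomials(coeffs, Phi):
    """Evaluate the calibrated pixel-wise mapping for a new phase map."""
    powers = np.stack([Phi**n for n in range(coeffs.shape[0])])
    return np.sum(coeffs * powers, axis=0)
```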

2.5 Use of color calibration target

To calibrate the phase-to-coordinate functions described in Subsec. 2.4, high-quality phase maps are required. Unfortunately, a standard calibration target often creates phase artifacts near high-contrast edges [28]. For example, Fig. 2(a) shows the image of a standard high-contrast calibration target, and Fig. 2(b) shows one of the fringe images. The reconstructed phase map is shown in Fig. 2(c). The phase map has obvious artifacts near the high-contrast edges. Figure 2(e) shows one cross section of the phase map after removing the overall slope. It is important to note that this type of phase error cannot be eliminated by standard filtering such as a Gaussian filter. Figures 2(d) and 2(f) show the phase after applying an $11 \times 11$ Gaussian filter.

Fig. 2. Phase artifacts near high-contrast edges on a standard calibration target. (a) Circle pattern image; (b) one of the phase-shifted fringe patterns; (c) phase map; (d) phase map after applying a Gaussian filter; (e) one of cross sections of (c) after removing overall slope; (f) one of cross sections of (d) after removing overall slope.

Various methods [28–30] have been proposed to remove such phase artifacts, but they are either computationally intensive or the final phase quality is not sufficient for our proposed calibration method. We also attempted to take care of such artifacts by fitting phase maps with poly surfaces [26]. Since the fitted phase maps are not the true phase maps, the process could fail under certain situations. As a result, we had to check the fitting quality of each pose to ensure calibration accuracy, which is not desirable for fully automated structured light system calibration.

In this research, we propose to use a color calibration target to fundamentally eliminate the problem associated with the standard calibration target. Figure 3 illustrates the basic idea behind the proposed method. In this case, the background color is RGB (0, 0, 255), and the circle foreground color is RGB (255, 0, 255). If the projector projects a pattern with the color RGB (255, 0, 0), the calibration target appears like a normal calibration target with a dark background and bright foreground. The standard feature detection algorithm can then be used to extract feature points for intrinsic or extrinsic parameter estimation. In contrast, if the projector projects fringe patterns with the color RGB (0, 0, 255), the calibration target appears like a uniform flat board without circle features. The captured fringe patterns can then be analyzed to obtain a high-quality phase map. As a result, the proposed color calibration target can be used to calibrate a structured light system accurately without worrying about phase artifacts near high-contrast edges.
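To make the idea concrete, the sketch below generates the two kinds of projected patterns: a uniform red image for feature detection and blue phase-shifted fringes for phase extraction. The fringe period and the RGB channel ordering are assumptions for illustration; only the projector resolution is taken from Sec. 3.

```python
import numpy as np

W_P, H_P, pitch = 912, 1140, 18   # projector resolution; pitch is assumed

# Red uniform pattern, RGB (255, 0, 0): the magenta circles reflect it while
# the blue background does not, so the camera sees a high-contrast circle grid.
red_flat = np.zeros((H_P, W_P, 3), np.uint8)
red_flat[..., 0] = 255

# Blue phase-shifted fringes, RGB (0, 0, fringe): both background and circles
# reflect blue, so the camera sees smooth fringes with no high-contrast edges.
N = 9                              # 9-step phase shifting, as in Sec. 3
x = np.arange(W_P)
blue_fringes = []
for k in range(1, N + 1):
    fringe = 127.5 + 127.5 * np.cos(2 * np.pi * x / pitch + 2 * k * np.pi / N)
    pattern = np.zeros((H_P, W_P, 3), np.uint8)
    pattern[..., 2] = fringe.astype(np.uint8)   # broadcast one row to all rows
    blue_fringes.append(pattern)
```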

Fig. 3. Reduce phase error caused by standard calibration target by using color.

Figure 4 shows the result with our proposed calibration target. Figure 4(c) shows the phase map, and Fig. 4(e) shows one of its cross sections. These results demonstrate that the phase artifacts near the circle edge regions are not obvious. An $11 \times 11$ Gaussian filter can further be used to reduce random noise; Figs. 4(d) and 4(f) show the final results. The high-quality phase can then be used for high-accuracy pixel-by-pixel system calibration.

Fig. 4. Phase artifacts near high-contrast edges on the proposed color calibration target. (a) Circle pattern image; (b) one of the phase-shifted fringe patterns; (c) phase map; (d) phase map after applying a Gaussian filter; (e) one of cross sections of (c) after removing overall slope; (f) one of cross sections of (d) after removing overall slope.

2.6 Proposed calibration framework

Figure 5 summarizes our proposed high-accuracy system calibration framework. The camera intrinsic parameters (i.e., the $\mathbf{A}$ and $\mathbf{D}$ matrices) are first calibrated using the standard camera calibration approach by illuminating the calibration target with red light and moving the calibration target to different positions and orientations. The calibration target is then translated to $N$ different positions along the depth direction. At each position, blue fringe patterns are captured to extract a phase map $\Phi$; red uniform-light circle pattern images are used to extract the extrinsic parameters (i.e., the $\mathbf{R}$ and $\mathbf{t}$ matrices) of the calibration target; the combined intrinsic and extrinsic parameters are used to reconstruct the 3D coordinates $(x, y, z)$ of the calibration target pixel by pixel; and the relationships between each coordinate and the corresponding phase value are established using Eqs. (11)–(13). Finally, the mapping between 3D coordinates and phase is established by optimizing the relationships between the 3D coordinates and the phase over all poses.
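Putting the pieces together, a hypothetical top-level script for this framework could read as follows. Here `calibrate_intrinsics`, `reconstruct_target`, and the `pose` container are placeholder names, not functions from the paper; the other helpers are the sketches given earlier.

```python
import numpy as np

# Step 1: standard intrinsic calibration under red illumination (placeholder)
A, D = calibrate_intrinsics(red_circle_images)

Phi_stack, x_stack, y_stack, z_stack = [], [], [], []
for pose in poses:                                   # N target positions
    # Step 2: phase map from blue fringe patterns (Eqs. 2-3)
    Phi = unwrap_with_fringe_order(wrapped_phase(pose.blue_fringe_images),
                                   pose.fringe_order)
    # Step 3: extrinsics from the red-lit circle image
    R, t = estimate_pose(pose.red_circle_image, A, D)
    # Step 4: per-pixel (x, y, z) of the target via Eq. (7) (placeholder)
    xyz = reconstruct_target(A, R, t)
    Phi_stack.append(Phi)
    x_stack.append(xyz[..., 0]); y_stack.append(xyz[..., 1]); z_stack.append(xyz[..., 2])

# Step 5: pixel-wise polynomial coefficients, Eqs. (11)-(13)
Phi_stack = np.stack(Phi_stack)
a = fit_pixelwise_polynomials(Phi_stack, np.stack(x_stack))
b = fit_pixelwise_polynomials(Phi_stack, np.stack(y_stack))
c = fit_pixelwise_polynomials(Phi_stack, np.stack(z_stack))
```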

Fig. 5. The framework of our calibration method.

3. Experimental results

We developed a structured light prototype comprising a digital-light-processing (DLP) projector (model: Lightcrafter 4500), a complementary metal-oxide-semiconductor (CMOS) camera (model: FLIR Blackfly BFS-U3-28S5M-C), and an Arduino board. The camera uses an 8 mm lens (model: Computar M0814-MP2). The projector operated at 15 Hz with a full resolution of $912 \times 1140$ pixels, the camera resolution was set to $1280 \times 960$ pixels, and the Arduino board generated trigger signals to synchronize the camera and the projector.

In this research, the calibration target has a blue background with magenta (red + blue) circle patterns. Our study found that simply creating an image with RGB (0, 0, 255) and RGB (255, 0, 255) and printing it out did not work well because 1) the printing process is not spectrally linear for all colors, and 2) the camera sensor's sensitivity differs across the light spectrum. To make a nearly ideal calibration target for this work, we had to fabricate the calibration target several times, adjusting the red/blue ratios to accommodate the color spectral responses of the printer and the camera sensor.

The camera intrinsic parameters were first calibrated with the calibration target being moved to 60 different positions and orientations. The calibration target has $16 \times 12$ circle dots. For all these poses, we also projected and captured 9-step phase-shifted fringe patterns and 6 gray-coded binary patterns to extract phase maps in both the horizontal and vertical directions. For the conventional method, the circle images and all fringe images of all 60 poses were used to estimate the camera and projector parameters. For our proposed calibration approach, we only used the circle images to calibrate the camera intrinsic parameters, and then used 20 of these 60 poses to optimize the pixel-wise relationships between the $(x, y, z)$ coordinates and the phase described in Eqs. (11)–(13). It is important to note that our proposed method only requires fringe patterns along one direction for those 20 poses.

We first evaluated the performance of our proposed method by measuring a flat plane (the back of a mirror) using the calibration data from the proposed calibration method and from the traditional structured light system calibration method. For each measurement, we fit the point-cloud data with an ideal plane and determine the measurement error as the shortest distance from the measured point to the ideal plane, with points below the ideal plane counted as negative and points above as positive. Figure 6(a) shows one raw 3D reconstruction using our proposed method. The pixel-by-pixel error map is shown in Fig. 6(b). The root-mean-square (rms) error for this pose measurement is $\sigma = 0.035$ mm, which is small considering that the overall measurement area is approximately $150 \times 100$ mm$^{2}$. Figure 6(c) shows the histogram of the error map, which is close to a normal distribution, as expected.
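A common way to implement this evaluation is an SVD-based plane fit with signed point-to-plane distances, as in the sketch below (our own implementation, not necessarily the one used by the authors).

```python
import numpy as np

def plane_fit_error(points):
    """Fit an ideal plane to an (M, 3) point cloud by least squares and
    return the signed point-to-plane distances and the rms error."""
    c = points.mean(axis=0)
    # The singular vector of the centered cloud with the smallest singular
    # value is the plane normal (total least squares).
    _, _, Vt = np.linalg.svd(points - c)
    n = Vt[-1]
    d = (points - c) @ n          # signed shortest distance to the plane
    return d, np.sqrt(np.mean(d**2))
```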

Fig. 6. Example flat surface measurement results (associated with Visualization 1). (a) 3D reconstructed shape using our proposed method; (b) flatness error map of (a); (c) corresponding histogram of (b); (d) 3D reconstructed shape using traditional method; (e) flatness error map of (d); (f) corresponding histogram of (e).

We then reconstructed the 3D shape of the same pose using the calibration data from a traditional calibration method. Figures 6(d)–6(f) show the corresponding results. Compared with the results obtained from our method, the traditional method produced a much larger error ($\sigma = 0.118$ mm), and the measured surface bends more in the outer areas. Clearly, the error distribution does not follow a normal distribution, indicating that the traditional calibration method does not calibrate all pixels equally well.

To further evaluate the performance of the calibration method, we measured the same flat surface at 39 different poses. Figure 7 summarizes the rms error of our proposed method compared against the traditional calibration method. Visualization 1 shows the measurement results visually. Overall, compared with the traditional calibration method, 1) our proposed method achieved higher measurement accuracy: approximately 1/3 of the rms error; and 2) our proposed method performed more consistently regardless of the pose and location of the evaluation plane.

Fig. 7. Measurement rms error for all testing planes (associated with Visualization 1).

We also measured a more complex surface. Figure 8(a) shows a photograph of the object being measured, and Fig. 8(b) shows one of the phase-shifted fringe patterns. The 3D reconstructions using our proposed calibration method and the traditional method are shown in Figs. 8(c) and 8(d), respectively. Figure 9 shows the corresponding cross sections of these 3D shapes along the 480th column. Both reconstructed 3D results show a similar level of detail, demonstrating that our proposed method does not reduce measurement resolution even though it involves pixel-wise fitting.

Fig. 8. Measurement result of a complex statue. (a) Photograph; (b) one of the phase-shifted fringe patterns; (c) 3D reconstruction using our proposed method; (d) 3D reconstruction using the traditional calibration method.

Fig. 9. Cross sections of the 3D shapes shown in Fig. 8. (a) One cross section of the 3D result reconstructed from our proposed method, shown in Fig. 8(c); (b) one cross section of the 3D result reconstructed from the traditional calibration method, shown in Fig. 8(d).

4. Summary

This paper has presented a novel pixel-wise structured light system calibration method using a color calibration target. Compared with the traditional calibration method, our proposed method achieved higher accuracy without losing detail: 1) the rms error is approximately 1/3 of that of the traditional method for flat surface measurements; and 2) the residual error distribution follows a normal distribution. Compared with our previously developed method using a standard calibration target, the proposed color calibration target eliminates the need for complex phase artifact reduction and maintains high calibration accuracy because true, rather than fitted, phase values are used. In addition, this method only requires one-directional fringe patterns for calibration, making it easier to implement for calibrating a broader range of structured light systems.

Funding

National Institute of Justice (2019-R2-CX-0069).

Acknowledgments

This work was sponsored by National Institute of Justice (NIJ) under grant No. 2019-R2-CX-0069. Views expressed here are those of the author and not necessarily those of the NIJ.

Disclosures

SZ: ORI LLC (C), Orbbec 3D (C), Vision Express Optics Inc (I).

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the author upon reasonable request.

References

1. A. G. Marrugo, F. Gao, and S. Zhang, “State-of-the-art active optical techniques for three-dimensional surface metrology: a review,” J. Opt. Soc. Am. A 37(9), B60–B77 (2020). [CrossRef]  

2. S. Zhang, “High-speed 3D shape measurement with structured light methods: a review,” Opt. Laser Eng. 106, 119–131 (2018). [CrossRef]

3. M. Vo, Z. Wang, T. Hoang, and D. Nguyen, “Flexible calibration technique for fringe-projection-based three-dimensional imaging,” Opt. Lett. 35(19), 3192–3194 (2010). [CrossRef]  

4. W.-S. Zhou and X.-Y. Su, “A direct mapping algorithm for phase-measuring profilometry,” J. Mod. Opt. 41(1), 89–94 (1994). [CrossRef]  

5. Y. Wen, S. Li, H. Cheng, X. Su, and Q. Zhang, “Universal calculation formula and calibration method in Fourier transform profilometry,” Appl. Opt. 49(34), 6563–6569 (2010). [CrossRef]

6. Y. Xiao, Y. Cao, and Y. Wu, “Improved algorithm for phase-to-height mapping in phase measuring profilometry,” Appl. Opt. 51(8), 1149–1155 (2012). [CrossRef]  

7. H. Guo, H. He, Y. Yu, and M. Chen, “Least-squares calibration method for fringe projection profilometry,” Opt. Eng. 44(3), 033603 (2005). [CrossRef]  

8. W. Zhao, X. Su, and W. Chen, “Whole-field high precision point to point calibration method,” Opt. Laser Eng. 111, 71–79 (2018). [CrossRef]  

9. X. Su, W. Song, Y. Cao, and L. Xiang, “Phase-height mapping and coordinate calibration simultaneously in phase-measuring profilometry,” Opt. Eng. 43(3), 708–712 (2004). [CrossRef]  

10. S. Cui and X. Zhu, “A generalized reference-plane-based calibration method in optical triangular profilometry,” Opt. Express 17(23), 20735–20746 (2009). [CrossRef]  

11. A. Asundi and Z. Wensen, “Unified calibration technique and its applications in optical triangular profilometry,” Appl. Opt. 38(16), 3556–3561 (1999). [CrossRef]  

12. J. Huang and Q. Wu, “A new reconstruction method based on fringe projection of three-dimensional measuring system,” Opt. Laser Eng. 52, 115–122 (2014). [CrossRef]  

13. Y. Li, X. Su, and Q. Wu, “Accurate phase-height mapping algorithm for PMP,” J. Mod. Opt. 53(14), 1955–1964 (2006). [CrossRef]

14. M. Fujigaki, T. Sakaguchi, and Y. Murata, “Development of a compact 3D shape measurement unit using the light-source-stepping method,” Opt. Laser Eng. 85, 9–17 (2016). [CrossRef]

15. W. Zhao, X. Su, and W. Chen, “Discussion on accurate phase–height mapping in fringe projection profilometry,” Opt. Eng. 56, 1 (2017). [CrossRef]  

16. H. Luo, J. Xu, N. H. Binh, S. Liu, C. Zhang, and K. Chen, “A simple calibration procedure for structured light system,” Opt. Laser Eng. 57, 6–12 (2014). [CrossRef]  

17. J. Xu, J. Douet, J. Zhao, L. Song, and K. Chen, “A simple calibration method for structured light-based 3D profile measurement,” Opt. Laser Technol. 48, 187–193 (2013). [CrossRef]

18. J. Xu and S. Zhang, “Status, challenges, and future perspectives of fringe projection profilometry,” Opt. Laser Eng. 135, 106193 (2020). [CrossRef]  

19. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000). [CrossRef]  

20. S. Zhang and P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. 45(8), 083601 (2006). [CrossRef]  

21. R. Vargas, A. G. Marrugo, S. Zhang, and L. A. Romero, “Hybrid calibration procedure for fringe projection profilometry based on stereo vision and polynomial fitting,” Appl. Opt. 59(13), D163 (2020). [CrossRef]  

22. X. Liu, Z. Cai, Y. Yin, H. Jiang, D. He, W. He, Z. Zhang, and X. Peng, “Calibration of fringe projection profilometry using an inaccurate 2D reference target,” Opt. Laser Eng. 89, 131–137 (2017). [CrossRef]

23. S. Yang, M. Liu, J. Song, S. Yin, Y. Ren, J. Zhu, and S. Chen, “Projector distortion residual compensation in fringe projection system,” Opt. Laser Eng. 114, 104–110 (2019). [CrossRef]  

24. S. Lv, Q. Sun, Y. Zhang, Y. Jiang, J. Yang, J. Liu, and J. Wang, “Projector distortion correction in 3D shape measurement using a structured-light system by deep neural networks,” Opt. Lett. 45(1), 204–207 (2020). [CrossRef]  

25. A. G. Marrugo, R. Vargas, L. A. Romero, and S. Zhang, “Method for large-scale structured-light system calibration,” Opt. Express 29(11), 17316–17329 (2021). [CrossRef]  

26. S. Zhang, “Flexible and high-accuracy method for uni-directional structured light system calibration,” Opt. Laser Eng. 143, 106637 (2021). [CrossRef]  

27. S. Zhang, “Absolute phase retrieval methods for digital fringe projection profilometry: a review,” Opt. Laser Eng. 107, 28–37 (2018). [CrossRef]  

28. H. Yue, H. G. Dantanarayana, Y. Wu, and J. M. Huntley, “Reduction of systematic errors in structured light metrology at discontinuities in surface reflectivity,” Opt. Laser Eng. 112, 68–76 (2019). [CrossRef]  

29. Y. Wu, X. Cai, J. Zhu, H. Yue, and X. Shao, “Analysis and reduction of the phase error caused by the non-impulse system PSF in fringe projection profilometry,” Opt. Laser Eng. 127, 105987 (2020). [CrossRef]

30. J. Burke and L. Zhong, “Suppression of contrast-related artefacts in phase-measuring structured light techniques,” Proc. SPIE 10329, 103290T (2017). [CrossRef]  

Supplementary Material (1)

Visualization 1: Flat surface measurement.




