Abstract

Object motion can introduce phase error and thus measurement error in phase-shifting profilometry. This paper proposes a generic motion-error compensation method based on two findings: the dominant motion-introduced phase error has twice the frequency of the projected fringes, and the Hilbert transform shifts the phase of a fringe pattern by π/2. We apply the Hilbert transform to the phase-shifted fringe patterns to generate another set of fringe patterns, calculate one phase map using the original fringe patterns and another phase map using the Hilbert-transformed fringe patterns, and then use the average of these two phase maps for three-dimensional reconstruction. Both simulations and experiments demonstrate that the proposed method can substantially reduce motion-introduced measurement error.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Three-dimensional (3D) shape measurement using the digital fringe projection (DFP) technique has been extensively studied and widely applied owing to its simple setup and its high-speed, high-resolution measurement capabilities [1–3].

The conventional DFP technique typically projects 8-bit sinusoidal fringe patterns, and its measurement speed is limited by the maximum refresh rate of the projector, typically 120 Hz. It is therefore challenging for the conventional DFP technique to measure rapidly moving objects, because the object can move too much while the phase-shifted fringe patterns required for phase retrieval are being acquired. To overcome this limitation, digital binary defocusing techniques have been developed to generate quasi-sinusoidal fringes from 1-bit binary patterns through projector lens defocusing [4,5], and advanced digital-light-processing (DLP) projection platforms have allowed researchers to achieve speed breakthroughs [6–8]. However, the binary defocusing method still suffers from measurement error if the object moves too quickly. Phase-shifting profilometry works well under the assumption that the object stays quasi-static while the required phase-shifted fringe patterns are captured. Therefore, any motion of the object can introduce phase error and thus measurement error; we refer to this type of error as motion-introduced phase error.

The motion-introduced phase error is similar to the phase-shift error that arises in interferometry systems when the phase shift cannot be precisely generated. However, the motion-introduced phase error is more complex because it may not be homogeneous, whereas the phase-shift error in interferometry is homogeneous. In an interferometry system, the major phase-shift error is introduced by the displacement error of the mirror driven by a piezoelectric device, and this error is overall homogeneous (i.e., it remains the same across the entire measurement surface). In contrast, the phase error introduced by motion can be nonhomogeneous because the motion of a dynamically deformable object varies from one point to another.

To eliminate motion-introduced error, Lu et al. [9] proposed a method to reduce the error caused by planar motion parallel to the imaging plane. By placing a few markers on the object and analyzing their movement, the rigid-body motion of the object can be estimated. Later, they improved their method to handle the error induced by translation along the object height direction [10]. The motion is estimated using the arbitrary phase-shift estimation method developed by Wang and Han [11]. However, this method is limited to homogeneous background intensity and modulation amplitude. Feng et al. [12] proposed to solve the nonhomogeneous motion artifact problem by segmenting objects with different rigid shifts. However, this method still assumes that the phase-shift error within a single segmented object is homogeneous, and thus it may not work well for dynamically deformable objects where the phase-shift error varies from one pixel to another. Cong et al. [13] proposed a Fourier-assisted approach that corrects the phase-shift error by differentiating the phase maps of two successive fringe images. However, its accuracy is limited by the use of Fourier transform profilometry (FTP) for phase-shift estimation. Liu et al. [14] developed a method that finds the phase shift more precisely through an iterative process, based on the assumption that the motion is uniform between two adjacent 3D frames. However, such a method requires the acquisition of another set of fringe patterns for motion estimation, and thus it may not work well if the object moves so rapidly that the motion cannot be precisely estimated.

We propose a motion-induced phase error compensation method based on our finding that the dominant motion-introduced phase error has twice the frequency of the projected fringes and that the Hilbert transform shifts the phase of a fringe pattern by π/2. Our proposed method includes four major steps: 1) apply the Hilbert transform to the phase-shifted fringe patterns to generate another set of fringe patterns; 2) calculate one phase map ϕ using the original fringe patterns and another phase map ϕ^H using the Hilbert-transformed fringe patterns; 3) generate the final phase map ϕ_f by averaging ϕ and ϕ^H, i.e., ϕ_f = (ϕ + ϕ^H)/2; and 4) reconstruct the 3D shape using the averaged phase map ϕ_f. This method can substantially reduce motion-introduced phase error for objects undergoing rigid uniform motion as well as for dynamically deformable objects with non-uniform motion.

Section 2 explains the principle of the proposed method; Section 3 presents experimental results to verify its performance; and Section 4 summarizes the paper.

2. Principle

2.1. Multi-step phase-shifting algorithm

Phase-shifting methods are widely used in optical metrology because of their speed and accuracy [15]. The intensity distribution of the n-th fringe pattern for an N-step phase-shifting algorithm with a phase shift of δ_n can be described as,

$$I_n(x,y) = A(x,y) + B(x,y)\cos(\Phi + \delta_n), \tag{1}$$
where A(x, y) is the average intensity, B(x, y) the intensity modulation, and Φ(x, y) the phase to be solved for. If N ≥ 3, the wrapped phase can be calculated by
$$\phi(x,y) = -\tan^{-1}\left[\frac{\sum_{n=0}^{N-1} I_n(x,y)\sin\delta_n}{\sum_{n=0}^{N-1} I_n(x,y)\cos\delta_n}\right], \tag{2}$$
where the arctangent function results in a value ranging from −π to +π with 2π discontinuities. The continuous phase map Φ(x, y) can be obtained by applying a phase unwrapping algorithm to determine the fringe order k(x, y),
$$\Phi(x,y) = \phi(x,y) + k(x,y)\times 2\pi. \tag{3}$$
The unwrapped phase can be used for 3D reconstruction once the system is calibrated.
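For readers who want to experiment with this step, below is a minimal NumPy sketch of the wrapped-phase computation of Eq. (2) for equally spaced phase shifts; the function name and the synthetic check are ours and not part of the original work.

```python
import numpy as np

def wrapped_phase(patterns):
    """Wrapped phase (Eq. (2)) from N >= 3 phase-shifted fringe images
    I_n = A + B*cos(Phi + delta_n) with delta_n = 2*pi*n/N."""
    N = len(patterns)
    deltas = 2 * np.pi * np.arange(N) / N           # equally spaced phase shifts
    I = np.stack(patterns).astype(float)            # shape (N, H, W)
    num = np.tensordot(np.sin(deltas), I, axes=1)   # sum_n I_n sin(delta_n)
    den = np.tensordot(np.cos(deltas), I, axes=1)   # sum_n I_n cos(delta_n)
    return -np.arctan2(num, den)                    # wrapped phase with 2*pi jumps

# Synthetic check: recover a known phase map from three ideal patterns.
phi_true = np.random.uniform(-3, 3, (4, 6))
pats = [128 + 100 * np.cos(phi_true + 2 * np.pi * n / 3) for n in range(3)]
print(np.max(np.abs(wrapped_phase(pats) - phi_true)))   # ~1e-15
```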

2.2. Motion-introduced phase error analysis

A phase-shifting algorithm determines the phase ϕ(x, y) accurately only if the phase shift δ_n is precisely known. For DFP systems, the phase shift can be accurately controlled by the digital projector. However, when the object moves, the actual phase shift δ′_n can differ from the projected value δ_n,

$$\delta_n'(x,y) = \delta_n + \epsilon_n(x,y), \tag{4}$$
where ε_n(x, y) is caused by motion and could vary from one pixel to another.

Figure 1 illustrates the phase-shift error caused by the motion of a deformable object. For an image point C_1, the corresponding surface point changes from S_1 to S_1′ if the object moves; thus the phase-shift error for the adjacent fringes will be ΔΦ(S_1′P_1). Similarly, for another image point C_2, the phase-shift error will be ΔΦ(S_2′P_2). Apparently, these two values can be different for different image points, and thus the phase-shift errors caused by object motion are non-homogeneous.

Fig. 1 Illustration of non-homogeneous phase-shift error introduced by the motion of a deformable object.

For the N-step phase-shifting algorithm, the phase-shift error ε_n(x, y) distorts the captured fringe patterns and thus the calculated phase as follows,

$$I_n'(x,y) = A(x,y) + B(x,y)\cos[\phi(x,y) + \delta_n'], \tag{5}$$
$$\phi'(x,y) = -\tan^{-1}\left[\frac{\sum_{n=0}^{N-1} I_n'(x,y)\sin\delta_n}{\sum_{n=0}^{N-1} I_n'(x,y)\cos\delta_n}\right]. \tag{6}$$

Therefore, we can obtain the motion-introduced phase error by

$$\Delta\phi(x,y) = \phi'(x,y) - \phi(x,y) \tag{7}$$
$$= \tan^{-1}\left[\frac{\cos\phi\sum_{n=0}^{N-1} I_n'(x,y)\sin\delta_n + \sin\phi\sum_{n=0}^{N-1} I_n'\cos\delta_n}{\sin\phi\sum_{n=0}^{N-1} I_n'\sin\delta_n - \cos\phi\sum_{n=0}^{N-1} I_n'\cos\delta_n}\right] \tag{8}$$
$$= \tan^{-1}\left[\frac{\cos\phi\sum_{n=0}^{N-1}\cos(\phi+\delta_n')\sin\delta_n + \sin\phi\sum_{n=0}^{N-1}\cos(\phi+\delta_n')\cos\delta_n}{\sin\phi\sum_{n=0}^{N-1}\cos(\phi+\delta_n')\sin\delta_n - \cos\phi\sum_{n=0}^{N-1}\cos(\phi+\delta_n')\cos\delta_n}\right] \tag{9}$$
$$= \tan^{-1}\left[\frac{\cos 2\phi\sum_{n=0}^{N-1}\sin\alpha_n + \sin 2\phi\sum_{n=0}^{N-1}\cos\alpha_n - \sum_{n=0}^{N-1}\sin\epsilon_n}{\sin 2\phi\sum_{n=0}^{N-1}\sin\alpha_n - \cos 2\phi\sum_{n=0}^{N-1}\cos\alpha_n - \sum_{n=0}^{N-1}\cos\epsilon_n}\right], \tag{10}$$
where α_n = 2δ_n + ε_n.

From Eq. (10), it can be seen that the motion-introduced phase error Δϕ is strongly correlated with the actual phase ϕ. Since the three-step and four-step phase-shifting algorithms are the most extensively used for high-speed applications, we provide a detailed analysis of the phase error for these two special cases.

For a three-step phase-shifting algorithm, the fringe patterns, wrapped phase and phase error can be respectively described as,

$$I_n(x,y) = A + B\cos[\phi(x,y) + (n-1)(2\pi/3 + \epsilon)],\quad n = 0, 1, 2, \tag{11}$$
$$\phi(x,y) = \tan^{-1}\left[\frac{\sqrt{3}(I_0 - I_2)}{2I_1 - I_0 - I_2}\right], \tag{12}$$
$$\Delta\phi(x,y) = \tan^{-1}\left\{\frac{\sin 2\phi\,[\cos(\epsilon+\pi/3) - 1/2]}{(\cos\epsilon + 1/2) - \cos 2\phi\,[\cos(\epsilon+\pi/3) - 1/2]}\right\}. \tag{13}$$
Given that ε is very small, we will have
$$\cos(\epsilon+\pi/3) - 1/2 = \cos\epsilon\cos(\pi/3) - \sin\epsilon\sin(\pi/3) - 1/2 \approx -\frac{\sqrt{3}}{2}\,\epsilon, \tag{14}$$
and
$$\cos\epsilon + 1/2 \approx 3/2. \tag{15}$$
Therefore, Eq. (13) can be further approximated as,
$$\Delta\phi(x,y) \approx \tan^{-1}\left[\frac{-\sqrt{3}\,\epsilon\sin 2\phi}{3 + \sqrt{3}\,\epsilon\cos 2\phi}\right] \tag{16}$$
$$\approx \tan^{-1}\left[\frac{-\sqrt{3}\,\epsilon\sin 2\phi}{3}\right] \tag{17}$$
$$\approx -\frac{\sqrt{3}}{3}\,\epsilon\sin 2\phi. \tag{18}$$
This equation indicates that the dominant phase error has approximately twice the frequency of the projected fringe pattern.
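The approximation in Eq. (18) is easy to verify numerically; the following short NumPy sketch (our own check, not from the paper) compares the exact error obtained from Eqs. (11)-(12) against −(√3/3)ε sin 2ϕ for a small ε.

```python
import numpy as np

eps = 0.05                                    # small per-step phase-shift error (rad)
phi = np.linspace(-np.pi, np.pi, 2000)        # true phase over one 2*pi period

# Three-step patterns with erroneous shifts (n - 1)*(2*pi/3 + eps), Eq. (11)
I = [np.cos(phi + (n - 1) * (2 * np.pi / 3 + eps)) for n in range(3)]
phi_meas = np.arctan2(np.sqrt(3) * (I[0] - I[2]), 2 * I[1] - I[0] - I[2])  # Eq. (12)
dphi = np.angle(np.exp(1j * (phi_meas - phi)))        # exact (wrapped) phase error
approx = -(np.sqrt(3) / 3) * eps * np.sin(2 * phi)    # Eq. (18)
print(np.max(np.abs(dphi - approx)), np.max(np.abs(dphi)))  # O(eps^2) vs O(eps)
```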

Similarly, we deduced the phase error model for the four-step phase-shifting algorithm as follows. The fringe patterns, wrapped phase and phase error can be respectively described as,

$$I_n(x,y) = A + B\cos[\phi(x,y) + (2n-3)(\pi/4 + \epsilon)],\quad n = 0, 1, 2, 3, \tag{19}$$
$$\phi(x,y) = \tan^{-1}\left[\frac{I_3 - I_1}{I_0 - I_2}\right] + 3\pi/4, \tag{20}$$
$$\Delta\phi(x,y) = \tan^{-1}\left[\frac{(I_3 - I_1)\cos(\phi - 3\pi/4) - (I_0 - I_2)\sin(\phi - 3\pi/4)}{(I_3 - I_1)\sin(\phi - 3\pi/4) + (I_0 - I_2)\cos(\phi - 3\pi/4)}\right] \tag{21}$$
$$= \tan^{-1}\left[\frac{\gamma_0\cos 2(\phi - 3\pi/4)}{\gamma_0\sin 2(\phi - 3\pi/4) + \gamma_1}\right], \tag{22}$$
where γ_0 = 2[sin(3ε) − sin ε] and γ_1 = 2[cos(3ε) + cos ε].

Once again, because ε is very small, we will have,

$$\gamma_0 = 2[\sin(3\epsilon) - \sin\epsilon] \approx 2[3\epsilon - \epsilon] = 4\epsilon, \tag{23}$$
$$\gamma_1 = 2[\cos(3\epsilon) + \cos\epsilon] \approx 4. \tag{24}$$

Therefore, Equation 22 becomes

$$\Delta\phi(x,y) \approx \tan^{-1}\left[\epsilon\cos 2(\phi - 3\pi/4)\right] \approx -\epsilon\sin 2\phi. \tag{25}$$
The above analysis reveals that for the four-step phase-shifting algorithm, the dominant motion-induced phase error also has approximately twice the frequency of the projected fringes.
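An analogous numerical check (again ours, under the same small-ε assumption) confirms Eq. (25) for the four-step case:

```python
import numpy as np

eps = 0.05
phi = np.linspace(-np.pi, np.pi, 2000)

# Four-step patterns with erroneous shifts (2n - 3)*(pi/4 + eps), Eq. (19)
I = [np.cos(phi + (2 * n - 3) * (np.pi / 4 + eps)) for n in range(4)]
phi_meas = np.arctan2(I[3] - I[1], I[0] - I[2]) + 3 * np.pi / 4            # Eq. (20)
dphi = np.angle(np.exp(1j * (phi_meas - phi)))        # exact (wrapped) phase error
print(np.max(np.abs(dphi + eps * np.sin(2 * phi))))   # O(eps^2), consistent with Eq. (25)
```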

To better understand the characteristics of the motion-introduced phase error, simulations were carried out to test the phase-shift errors caused by both uniform and non-uniform motion. First, we tested the three-step phase-shifting algorithm under uniform motion with phase-shift errors ε ∈ [−0.1, 0.1] rad. The resultant phase error plots for one period of the pattern are shown in Fig. 2(a). The simulation results indicate that the dominant phase error introduced by uniform motion has approximately twice the frequency of the projected fringe patterns. Similarly, we simulated non-uniform motion by setting the phase-shift errors to ε_0 = 0, ε_1 = ε, ε_2 = 3ε; that is, the additional phase-shift error caused by acceleration is of the same magnitude as that caused by the speed itself. Figure 2(b) shows the results. The phase error shares the same characteristic of primarily doubling the frequency of the fringe pattern, albeit with a relatively larger amplitude. Similar simulations were also conducted for the four-step phase-shifting algorithm, with the results shown in Fig. 3, from which we can see that the phase error also doubles the fringe frequency.
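The following NumPy/Matplotlib sketch reproduces Fig. 2-style error curves. The phase-shift error settings (0, ε, 2ε) for uniform motion and (0, ε, 3ε) for non-uniform motion follow the text; the function name and plotting details are ours.

```python
import numpy as np
import matplotlib.pyplot as plt

def phase_error_3step(phi, eps_list):
    """Phase error of the three-step algorithm when the actual shifts are
    delta_n + eps_n with delta_n = (n - 1)*2*pi/3 (cf. Eqs. (4)-(6))."""
    deltas = (np.arange(3) - 1) * 2 * np.pi / 3
    I = [np.cos(phi + d + e) for d, e in zip(deltas, eps_list)]
    num = sum(In * np.sin(d) for In, d in zip(I, deltas))
    den = sum(In * np.cos(d) for In, d in zip(I, deltas))
    return np.angle(np.exp(1j * (-np.arctan2(num, den) - phi)))  # wrapped difference

phi = np.linspace(-np.pi, np.pi, 1000)   # one fringe period
e = 0.1                                  # rad
plt.plot(phi, phase_error_3step(phi, [0, e, 2 * e]), label='uniform motion')
plt.plot(phi, phase_error_3step(phi, [0, e, 3 * e]), label='non-uniform motion')
plt.xlabel('phase (rad)'); plt.ylabel('phase error (rad)'); plt.legend(); plt.show()
```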

Fig. 2 Simulation results of the motion-introduced phase error for three-step phase-shifting algorithm. (a) Uniform motion; (b) non-uniform motion.

Fig. 3 Simulation results of the motion-introduced phase error for four-step phase-shifting algorithm. (a) Uniform motion; (b) non-uniform motion.

Based on this finding, we came up with the idea of shifting the phase map by a quarter of a period (i.e., π/2): because the dominant error is proportional to sin 2ϕ, the shifted phase map carries an error of opposite sign, and averaging it with the original phase map compensates for the motion-induced error. This approach can simultaneously handle errors caused by uniform and non-uniform motion. Meanwhile, it requires neither additional pattern acquisition nor motion estimation, which is especially important for high-speed applications.
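Using the three-step result of Eq. (18) as an example, the cancellation can be written explicitly:

$$\Delta\phi\!\left(\phi+\tfrac{\pi}{2}\right) \approx -\tfrac{\sqrt{3}}{3}\,\epsilon\,\sin(2\phi+\pi) = +\tfrac{\sqrt{3}}{3}\,\epsilon\,\sin 2\phi = -\Delta\phi(\phi), \qquad\text{so}\qquad \tfrac{1}{2}\left[\Delta\phi(\phi)+\Delta\phi\!\left(\phi+\tfrac{\pi}{2}\right)\right]\approx 0.$$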

2.3. Phase error compensation using Hilbert transform

The Hilbert transform is widely used in signal processing. Mathematically, it is a linear operator that takes a function μ(t) of a real variable and produces another function of a real variable, H(μ)(t), by the convolution

$$H(\mu)(t) = \frac{1}{\pi}\int_{-\infty}^{\infty}\frac{\mu(\tau)}{t-\tau}\,d\tau. \tag{26}$$

The Hilbert transform is actually a multiplier operator in the frequency domain,
$$\mathcal{F}(H(\mu))(\omega) = \delta_H(\omega)\times\mathcal{F}(\mu)(\omega), \tag{27}$$
where $\mathcal{F}$ is the Fourier transform operator, and
$$\delta_H(\omega) = \begin{cases} i = e^{i\pi/2}, & \text{if } \omega < 0,\\ 0, & \text{if } \omega = 0,\\ -i = e^{-i\pi/2}, & \text{if } \omega > 0.\end{cases} \tag{28}$$

This indicates that the Hilbert transform shifts the phase of negative frequency components by π/2 and the phase of positive frequency components by −π/2. For a real signal cos(ωt), the Hilbert transform results in cos(ωt − π/2) = sin(ωt). Therefore, the Hilbert transform provides a means to shift the phase by π/2, which can be used to address the motion-introduced phase error.
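This property is easy to verify numerically; the sketch below (ours) uses SciPy's `hilbert`, which returns the analytic signal μ + iH(μ):

```python
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0, 1, 1024, endpoint=False)
x = np.cos(2 * np.pi * 8 * t)                 # real fringe-like signal, 8 periods
Hx = np.imag(hilbert(x))                      # Hilbert transform of x
print(np.max(np.abs(Hx - np.sin(2 * np.pi * 8 * t))))   # ~1e-15: cos -> sin
```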

Applying the Hilbert transform to the fringe patterns in Eq. (1) leads to

$$I_n^H(x,y) = H[I_n(x,y)] = A'(x,y) + B(x,y)\sin[\phi(x,y) + \delta_n], \tag{29}$$
where A′(x, y) is the average intensity that might be different from the original one. Then another phase map can be calculated using the Hilbert-transformed fringe patterns as,
$$\phi^H(x,y) = \tan^{-1}\left[\frac{\sum_{n=0}^{N-1} I_n^H(x,y)\cos\delta_n}{\sum_{n=0}^{N-1} I_n^H(x,y)\sin\delta_n}\right]. \tag{30}$$
Since the phase error of the original phase ϕ(x, y) and that of the Hilbert phase ϕ^H(x, y) have opposite tendencies, we can generate another phase map by
$$\phi_f(x,y) = [\phi(x,y) + \phi^H(x,y)]/2. \tag{31}$$
The averaged phase map ϕ_f(x, y) can significantly reduce the periodic motion-introduced phase error.
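Below is a minimal sketch of steps 1)-3) for equally spaced phase shifts, assuming the fringes vary along the image columns. The function name, the DC removal before the transform, and the complex-exponential averaging (used to avoid artifacts at the 2π wrapping boundaries) are our own implementation choices, not prescriptions from the paper.

```python
import numpy as np
from scipy.signal import hilbert

def motion_compensated_phase(patterns):
    """Average the original wrapped phase (Eq. (2)) with the phase from the
    Hilbert-transformed patterns (Eqs. (29)-(30)), cf. Eq. (31)."""
    N = len(patterns)
    deltas = 2 * np.pi * np.arange(N) / N
    I = np.stack(patterns).astype(float)                  # (N, H, W)
    # Remove the temporal mean (the A term) so it does not bias the transform,
    # then take the Hilbert transform along the fringe (column) direction.
    Ih = np.imag(hilbert(I - I.mean(axis=0), axis=2))

    s = np.sin(deltas)[:, None, None]
    c = np.cos(deltas)[:, None, None]
    phi = -np.arctan2((I * s).sum(axis=0), (I * c).sum(axis=0))     # Eq. (2)
    phi_h = np.arctan2((Ih * c).sum(axis=0), (Ih * s).sum(axis=0))  # Eq. (30)

    # Wrapped average of the two maps, equivalent to Eq. (31) wherever the two
    # maps are close (they differ only by the small motion-induced error).
    return np.angle(np.exp(1j * phi) + np.exp(1j * phi_h))
```

The returned map plays the role of ϕ_f(x, y) and can be fed to the same phase-unwrapping and calibration pipeline used for the original phase.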

To test the performance of the proposed method, we first evaluated the phase error for both uniform and non-uniform motion for the three-step phase-shifting algorithm. In the simulations, we set the uniform-motion-induced phase-shift errors as ε_0 = 0 rad, ε_1 = 0.1 rad, ε_2 = 0.2 rad. The phase error plots for one period are shown in Fig. 4(a). It can be seen that the phase error ϕ_e obtained from the original phase-shifted patterns and the phase error ϕ_e^H obtained from the Hilbert-transformed fringe patterns indeed have different tendencies. The averaged phase error ϕ_e^f is significantly reduced: from a root mean square (rms) of 0.042 rad to 0.0012 rad. Figure 4(c) shows the phase rms errors when the phase-shift error ε_1 varies from −0.1 rad to 0.1 rad and ε_2 = 2ε_1. It can be seen that the phase rms error increases approximately linearly with the phase-shift error, and the proposed method can effectively reduce the phase error caused by uniform motion. Figure 4(b) shows simulation results for non-uniform motion with the phase-shift errors set as ε_0 = 0 rad, ε_1 = 0.1 rad, ε_2 = 3ε_1. The proposed method effectively reduces the phase rms error from 0.066 rad to 0.003 rad. Figure 4(d) plots the results when ε_1 varies from −0.1 rad to 0.1 rad and ε_2 = 3ε_1. Once again, the proposed method can drastically reduce the phase error caused by non-uniform motion.

Fig. 4 Simulation results of phase error compensation for three-step phase-shifting algorithm. (a) Phase error plots when ε_1 = 0.1 rad and ε_2 = 0.2 rad; (b) phase error plots when ε_1 = 0.1 rad and ε_2 = 0.3 rad; (c) phase rms error with uniform motion; (d) phase rms error with nonuniform motion.

Similar simulations were also conducted to test the proposed method for the four-step phase-shifting algorithm, with the results shown in Fig. 5. As shown in Fig. 5(a), the proposed method reduces the phase rms error from 0.035 rad to 0.0009 rad for uniform motion. For non-uniform motion, as shown in Fig. 5(b), the phase rms error decreases from 0.072 rad to 0.0036 rad. Figures 5(c) and 5(d) show the compensation results with different phase-shift errors for uniform and non-uniform motion, which clearly demonstrate the effectiveness of the proposed compensation method.

Fig. 5 Phase error compensation for four-step phase-shifting algorithm. (a) Phase error plots when ε_1 = 0.1 rad, ε_2 = 0.2 rad, and ε_3 = 0.3 rad; (b) phase error plots when ε_1 = 0.1 rad, ε_2 = 0.3 rad, and ε_3 = 0.6 rad; (c) phase rms error with uniform motion; (d) phase rms error with nonuniform motion.

3. Experiments

We further evaluated the performance of our proposed method through experiments. The system includes a CMOS camera (model: Point Grey Grasshopper3 GS3-U3-23S6M) with an 8 mm focal-length lens (model: Computar M0814-MP2), and a DLP projection development kit (model: DLP LightCrafter 4500). The camera resolution was set to 640 × 480 pixels, while the projector resolution was 912 × 1140 pixels. The system was calibrated using the method described by Li et al. [16]. The projector and the camera were synchronized at 120 Hz, and a three-frequency temporal phase-unwrapping algorithm was adopted for absolute phase retrieval.

First, a sphere with a diameter of 79.2 mm moving at approximately 80 mm/s was used to quantitatively evaluate the performance of the proposed method for both the three-step and four-step phase-shifting algorithms. Figure 6 shows the results for the three-step phase-shifting algorithm. Figure 6(a) shows the 3D result obtained from the original fringe patterns, where the motion-introduced measurement error (i.e., vertical stripes) is obvious. Figure 6(b) shows the 3D result obtained from the Hilbert-transformed fringe patterns, where the motion-introduced measurement error is also obvious. Figure 6(c) shows the 3D result obtained with our proposed method, clearly demonstrating that the measurement error is greatly reduced (i.e., a smoother surface). To quantitatively evaluate the improvement, we compared the measurement results with an ideal sphere; the corresponding error maps are shown in Figs. 6(d)–6(f). For the result obtained from the original fringe patterns, the mean measurement error is 0.172 mm and the standard deviation is 0.124 mm. The Hilbert-transformed fringe patterns give nearly the same error: a mean error of 0.160 mm and a standard deviation of 0.116 mm. In contrast, our proposed method reduces the mean error to 0.038 mm and the standard deviation to 0.032 mm.

Fig. 6 Measurement result of a moving sphere using three-step phase-shifting algorithm (associated Visualization 1). (a) 3D result from original phase-shifted fringe patterns; (b) 3D result from Hilbert transformed fringe patterns; (c) 3D result using our proposed method; (d) error map of the result in (a) (mean: 0.172 mm, standard deviation: 0.124 mm); (e) error map of (b) (mean: 0.160 mm, standard deviation: 0.116 mm); (f) error map of (c) (mean: 0.038 mm, standard deviation: 0.032 mm).

We then repeated the same experiment using the four-step phase-shifting algorithm, with the results shown in Fig. 7. Figures 7(a)–7(c) respectively show the 3D reconstructed results for the original fringe patterns, the Hilbert-transformed patterns, and the proposed method, while Figs. 7(d)–7(f) show the corresponding error maps. For the result obtained from the original fringe patterns, the mean measurement error is 0.118 mm and the standard deviation is 0.099 mm. The Hilbert-transformed fringe patterns give a similar error: a mean error of 0.104 mm and a standard deviation of 0.090 mm. In contrast, our proposed method reduces the mean error to 0.031 mm and the standard deviation to 0.027 mm.

Fig. 7 Measurement result of a moving sphere for four-step phase-shifting algorithm. (a) 3D result from original phase-shifted fringe patterns; (b) 3D result from Hilbert transformed fringe patterns; (c) 3D result using our proposed method; (d) error map of the result in (a) (mean: 0.118 mm, standard deviation: 0.099 mm); (e) error map of (b) (mean: 0.104 mm, standard deviation: 0.090 mm); (f) error map of (c) (mean: 0.031 mm, standard deviation: 0.027 mm).

A moving vase with more complex structure was also measured using the three-step phase-shifting algorithm; its photograph is shown in Fig. 8(a). Figure 8(b) shows the 3D result obtained from the original phase-shifted fringe patterns, depicting clear motion-introduced error. Figure 8(c) shows the 3D result obtained with our proposed method, in which most of the vertical stripes caused by motion are no longer obvious. Even though the Hilbert transform could theoretically smooth out complex structure because of its filtering effect, our experiments demonstrate that this filtering effect is not obvious even for a complex surface geometry like the one shown here.

Fig. 8 Measurement result of a moving vase with complex surface structures. (a) Photograph; (b) raw 3D result; (c) 3D result using our proposed method.

Lastly, we evaluated our proposed method by measuring dynamically deformable objects such as human facial expressions. Figure 9(a) shows the 3D result from the original fringe patterns, while Fig. 9(b) shows the 3D result obtained from the Hilbert-transformed fringe patterns. Again, both 3D results show severe motion-introduced error. Figure 9(c) shows the result obtained with our proposed method, demonstrating that our proposed method can effectively alleviate the motion-introduced error and greatly improve measurement quality.

Fig. 9 Measurement results of dynamically deformable facial expressions (associated Visualization 2). (a) 3D result from original fringe patterns; (b) 3D result from Hilbert transformed fringe patterns; (c) 3D result using our proposed method.

In addition, we noticed that our proposed method can also significantly improve the quality of the texture (i.e., A(x, y) in Eq. (1)). Figure 10(a) shows the texture recovered from the original fringe patterns: due to motion, obvious vertical stripes are present in the image. After correcting the phase with the proposed method, the texture is also greatly improved, as shown in Fig. 10(b). These experimental data clearly demonstrate the effectiveness of our proposed method even for dynamically deformable objects with complex surface geometry.

Fig. 10 Experimental results of texture image generation. (a) Texture image from original fringe patterns; (b) texture image using corrected phase.

Our method assumes that the camera speed is not significantly slower than the object motion, so that the mismatch between adjacent frames is not severe. Our proposed method can also work for high-speed motion if the binary defocusing technique is adopted on a high-speed structured light system.

4. Summary

This paper has presented a motion-induced error compensation algorithm based on the Hilbert transform that requires neither the acquisition of additional images nor the estimation of unknown phase shifts. We successfully demonstrated that the proposed method can drastically reduce measurement error introduced by both homogeneous and non-homogeneous motion. Since this method requires no extra fringe patterns, it is suitable for high-speed applications.

Funding

National Institute of Justice (NIJ) (2016-DN-BX-0189); National Science Foundation (NSF) (IIS-1637961); National Natural Science Foundation of China (NSFC) (61603360).

Acknowledgements

We thank the National Institute of Justice (NIJ), the National Science Foundation (NSF), and the National Natural Science Foundation of China (NSFC). Views expressed in this paper are those of the authors and not necessarily those of the NIJ, NSF, or NSFC.

References

1. S. Gorthi and P. Rastogi, “Fringe projection techniques: Whither we are?” Opt. Lasers Eng. 48, 133–140 (2010).

2. S. Zhang, “Recent progresses on real-time 3-d shape measurement using digital fringe projection techniques,” Opt. Lasers Eng. 48, 149–158 (2010).

3. X. Su and Q. Zhang, “Dynamic 3-d shape measurement method: A review,” Opt. Lasers Eng. 48, 191–204 (2010).

4. S. Lei and S. Zhang, “Flexible 3-d shape measurement using projector defocusing,” Opt. Lett. 34, 3080–3082 (2009).

5. C. Zuo, Q. Chen, S. Feng, F. Feng, G. Gu, and X. Sui, “Optimized pulse width modulation pattern strategy for three-dimensional profilometry with projector defocusing,” Appl. Opt. 51, 4477–4490 (2012).

6. B. Li, Y. Wang, J. Dai, W. Lohry, and S. Zhang, “Some recent advances on superfast 3d shape measurement with digital binary defocusing techniques,” Opt. Lasers Eng. 54, 236–246 (2014).

7. Y. Wang and S. Zhang, “Superfast multifrequency phase-shifting technique with optimal pulse width modulation,” Opt. Express 19, 5143–5148 (2011).

8. J. Zhu, P. Zhou, X. Su, and Z. You, “Accurate and fast 3d surface measurement with temporal-spatial binary encoding structured illumination,” Opt. Express 24, 28549–28560 (2016).

9. L. Lu, J. Xi, Y. Yu, and Q. Guo, “New approach to improve the accuracy of 3-d shape measurement of moving object using phase shifting profilometry,” Opt. Express 21, 30610–30622 (2013).

10. L. Lu, J. Xi, Y. Yu, and Q. Guo, “Improving the accuracy performance of phase-shifting profilometry for the measurement of objects in motion,” Opt. Lett. 39, 6715–6718 (2014).

11. Z. Wang and B. Han, “Advanced iterative algorithm for phase extraction of randomly phase-shifted interferograms,” Opt. Lett. 29, 1671–1673 (2004).

12. S. Feng, C. Zuo, T. Tao, Y. Hu, M. Zhang, Q. Chen, and G. Gu, “Robust dynamic 3-d measurements with motion-compensated phase-shifting profilometry,” Opt. Lasers Eng. 103, 127–138 (2018).

13. P. Cong, Z. Xiong, Y. Zhang, S. Zhao, and F. Wu, “Accurate dynamic 3d sensing with fourier-assisted phase shifting,” IEEE J. Sel. Top. Signal Process. 9, 396–408 (2015).

14. Z. Liu, P. C. Zibley, and S. Zhang, “Motion-induced error compensation for phase shifting profilometry,” Opt. Express 26, 12632–12637 (2018).

15. D. Malacara, ed., Optical Shop Testing, 3rd ed. (John Wiley and Sons, New York, 2007).

16. B. Li, N. Karpinsky, and S. Zhang, “Novel calibration method for structured light system with an out-of-focus projector,” Appl. Opt. 53, 3415–3426 (2014).
