Abstract

The recently proposed binary defocusing technique has brought speed breakthroughs to three-dimensional (3D) shape measurement with digital fringe projection systems. Despite this, motion-induced phase error remains inevitable due to the multi-shot nature of the phase-shifting algorithm. To alleviate this problem, this paper proposes a motion-induced error reduction method that takes advantage of additional temporal sampling. Specifically, each illuminated fringe pattern is captured twice in one projection cycle, yielding two sets of phase-shifted fringe images. Owing to the mechanism of binary defocusing projection, the motion-induced phase error can be effectively separated from the fixed phase shift by evaluating the difference between the two phase maps. Based on this, an iterative strategy is further applied to compensate for the phase error until high-quality phase maps are generated. Meanwhile, different synchronization schemes are proposed and tested to evaluate their error compensation performance. Both simulations and experiments demonstrate that the proposed methods can substantially reduce motion-induced measurement errors. Since defocused 1-bit binary patterns are utilized to bypass rigid camera-projector synchronization, the method has potential for high-speed applications.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

3D shape measurement of highly dynamic scenes plays an important role in a variety of applications such as manufacturing, biology and entertainment. The digital fringe projection (DFP) technique [1–3] has gained increasing popularity owing to its fast and accurate 3D shape measurement, and is therefore well suited to measuring scenes containing dynamic motion.

The conventional DFP technique typically projects 8-bit sinusoidal fringe patterns, so its measurement speed is limited by the maximum refresh rate of the projector, typically 120 Hz. Therefore, it is challenging for the conventional DFP technique to measure rapidly moving objects if the object moves too much while the required phase-shifted fringe patterns are acquired for phase retrieval. To overcome this limitation, binary defocusing techniques have been developed to generate quasi-sinusoidal fringes from 1-bit binary patterns through projector lens defocusing [4,5], and advanced digital-light-processing (DLP) projection platforms have allowed researchers to achieve speed breakthroughs [6–8]. However, the binary defocusing method still suffers measurement error if the object moves too rapidly. Phase-shifting profilometry works well under the assumption that the object stays quasi-static while the required phase-shifted fringe patterns are captured. Therefore, any motion of the object can introduce phase error and thus measurement error, which we refer to as motion-induced phase error.

Over the years, many different techniques have been developed to address motion-induced phase errors. Lu et al. [9] proposed a method to reduce the error caused by planar motion parallel to the imaging plane. By placing a few markers on the object and analyzing their movement, the rigid-body motion of the object can be estimated. Later, they improved their method to handle the error induced by translation in the direction of object height [10]. The motion is estimated using the arbitrary phase shift estimation method developed by Wang and Han [11]. However, this method is limited to homogeneous background intensity and modulation amplitude. Feng et al. [12] proposed to solve the nonhomogeneous motion artifact problem by segmenting objects with different rigid shifts. However, this method still assumes that the phase-shift error within a single segmented object is homogeneous, so it may not work well for dynamically deformable objects whose phase-shift error varies from one pixel to another. Cong et al. [13] proposed a Fourier-assisted approach to correct the phase-shift error by differentiating the phase maps of two successive fringe images. However, the accuracy is limited by the use of Fourier transform profilometry (FTP) for phase shift estimation. Liu et al. [14] developed a method to find the phase shift more precisely with an iterative process based on the assumption that the motion is uniform between adjacent 3D frames. However, such a method requires a sequence of phase-shifting sets for motion estimation and may therefore be limited for short-duration motion measurement.

This research proposes an alternative method for motion-induced error reduction via additional temporal hardware sampling. First, each illuminated fringe pattern on the moving object is captured twice within one illumination period, so that two sets of phase-shifted fringes are acquired in one projection cycle. Owing to the mechanism of binary defocusing projection, the motion-induced phase error can be effectively separated from the fixed phase shift for these two phase-shifting sets. Specifically, after extracting the phase maps of both sets, a quantitative description of the phase changes due to motion is obtained by analyzing the difference between the two phase maps. An iterative strategy is then applied to compensate for the phase error until it converges. Finally, with the system calibration parameters, the 3D frames of the moving object can be reconstructed with high accuracy. Furthermore, since synchronization accuracy is crucial for the proposed method, different synchronization schemes are also tested and compared. Simulation and experimental results demonstrate that the proposed methods can substantially reduce motion-induced measurement errors, and that different synchronization strategies lead to different levels of error compensation.

Section 2 explains the principle of the proposed method; Section 3 presents experimental results that verify its performance; Section 4 discusses its advantages and limitations; and Section 5 summarizes the paper.

2. Principle

2.1. Multi-step phase-shifting algorithm

For a generic N-step phase-shifting algorithm, the fringe images can be mathematically modeled as,

I_k(x, y) = I'(x, y) + I''(x, y) \cos[\Phi(x, y) - \delta_k],    (1)
where I′(x, y), I″(x, y) and Φ(x, y) respectively represent the average intensity, the intensity modulation and the phase to be computed, and δ_k is the phase shift of the k-th frame. To obtain the phase map, the following terms are first defined:
\begin{bmatrix} a_0(x, y) \\ a_1(x, y) \\ a_2(x, y) \end{bmatrix} =
\begin{bmatrix}
N & \sum \cos\delta_k & \sum \sin\delta_k \\
\sum \cos\delta_k & \sum \cos^2\delta_k & \sum \cos\delta_k \sin\delta_k \\
\sum \sin\delta_k & \sum \cos\delta_k \sin\delta_k & \sum \sin^2\delta_k
\end{bmatrix}^{-1}
\begin{bmatrix} \sum I_k \\ \sum I_k \cos\delta_k \\ \sum I_k \sin\delta_k \end{bmatrix}.    (2)

Simultaneously solving the above equations yields the wrapped phase φ(x, y) as

\varphi(x, y) = \tan^{-1}\!\left[\frac{a_2(x, y)}{a_1(x, y)}\right].    (3)
This process is often referred to as “phase wrapping” because the computed phase map exhibits 2π discontinuities. Following this step, a phase unwrapping algorithm is needed to obtain a continuous phase map Φ(x, y) as
\Phi(x, y) = \varphi(x, y) + k(x, y) \times 2\pi,    (4)
where k(x, y) denotes the integer fringe order. In this work, we adopted the temporal phase unwrapping method introduced by Hyun and Zhang [15] for absolute phase retrieval, and a calibration-based triangulation method for 3D reconstruction [16]. It should be noted that throughout this paper we adopt the three-step phase-shifting algorithm, since it requires the fewest phase-shifting steps and is thus most suitable for dynamic measurements.
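A minimal NumPy sketch of this least-squares phase retrieval is given below; the function name wrap_phase and its interface are our illustrative choices rather than the paper's implementation. Allowing arbitrary (non-uniform) shifts is what later lets motion-corrected shift estimates be fed back into the retrieval.

```python
import numpy as np

def wrap_phase(images, deltas):
    """Least-squares wrapped phase from N phase-shifted fringe images.

    images : list of 2-D arrays I_k(x, y)
    deltas : phase shifts delta_k of Eq. (1) in radians, one per image;
             they need not be evenly spaced.
    Returns the wrapped phase in (-pi, pi].
    """
    deltas = np.asarray(deltas, dtype=float)
    # Regression model of Eq. (2): I_k = a0 + a1*cos(delta_k) + a2*sin(delta_k)
    A = np.stack([np.ones_like(deltas), np.cos(deltas), np.sin(deltas)], axis=1)
    B = np.stack([np.asarray(im, dtype=float).ravel() for im in images], axis=0)
    a0, a1, a2 = np.linalg.lstsq(A, B, rcond=None)[0]
    # Eq. (3); arctan2 resolves the quadrant automatically
    return np.arctan2(a2, a1).reshape(np.shape(images[0]))
```

For the ideal three-step case the shifts are simply [2π/3, 0, −2π/3], and the expression reduces to the familiar three-step arctangent formula.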

2.2. Binary defocusing technique

We use the binary defocusing technique to generate quasi-sinusoidal fringe patterns. The principle of this technique is shown in Fig. 1. Essentially, a computer-generated binary structured pattern retains its binary structure at the projector’s focal plane, yet away from the focal plane the pattern becomes blurred such that a quasi-sinusoidal structure is created (a small numerical illustration of this effect is given after the list below). The advantages of using binary structures to generate sinusoidal fringe patterns lie in the following three major aspects:

  • Speed breakthrough. Commercially available projectors typically have a refresh limit of 120 Hz for 8-bit sinusoidal patterns, yet the speed can be boosted to the kilohertz level with the advanced digital-light-processing (DLP) technique once 1-bit patterns are used.
  • No gamma correction required. For the conventional fringe projection method using 8-bit sinusoidal patterns, the projector’s nonlinear gamma needs to be corrected beforehand. The binary defocusing technique uses 1-bit patterns with only two intensity levels (0 and 255), so gamma correction is no longer required.
  • No need to capture the whole projection cycle. To correctly capture 8-bit grayscale intensities, the entire projection cycle typically has to be captured, since each grayscale value is generated by time integration of the digital micromirrors. When binary patterns are projected, however, this requirement becomes unnecessary because one only needs the black/white contrast, not exact grayscale values. This makes it possible to capture more than one image within a projection cycle, which leads to the “double-shot-in-single-illumination” technique introduced in the next section.
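As a quick numerical illustration of the blurring effect mentioned above (the fringe period and blur width below are arbitrary illustrative values, not those of our system), low-pass filtering a 1-bit square wave strongly suppresses its higher harmonics and leaves a quasi-sinusoidal profile:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

period = 36                                  # fringe period in projector pixels (arbitrary)
x = np.arange(10 * period)
binary = (np.sin(2 * np.pi * x / period) >= 0).astype(float)  # 1-bit square wave
blurred = gaussian_filter1d(binary, sigma=period / 4)         # stand-in for lens defocus

spectrum = np.abs(np.fft.rfft(blurred - blurred.mean()))
k1 = len(x) // period                        # bin of the fundamental frequency
# Roughly 1/3 for the raw square wave, orders of magnitude smaller after blurring
print("3rd harmonic / fundamental:", spectrum[3 * k1] / spectrum[k1])
```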

Fig. 1 A schematic demonstration of the binary defocusing technique, which generates a sinusoidal profile via projector defocusing.


2.3. Proposed motion-induced error reduction method

For a phase-shifting algorithm, the motion-induced phase error is coupled with the fixed phase shifts, which makes its estimation very complicated. To solve this problem, we propose to separate these terms using the double-shot-in-single-illumination (DSSI) technique. Specifically, the DSSI technique lets the camera take two repeated captures during one projector illumination cycle; the timing control of this idea is illustrated in Fig. 2. Both the projector and camera triggers can be generated and controlled by a programmable microcontroller, such as an Arduino Uno. In the projector’s illumination period I1, two image captures I11 and I12 are made with the same exposure time. In this way, only the motion-induced phase error exists between images I11 and I12, given that they come from the same illumination cycle. Similarly, we can obtain four other phase-shifted fringes, and finally two sets of phase-shifting patterns are captured as [I11, I21, I31] and [I12, I22, I32].

Fig. 2 The timing control of the proposed double-shot-in-single-illumination technique.


Assuming that the motion is uniform and the camera trigger signals are well controlled with equal intervals, there will be a motion-induced phase drift of δ between consecutive fringes within one set, such as I11, I21, and I31, and half of that drift, δ/2, between the two sets [I11, I21, I31] and [I12, I22, I32]. Therefore, the two sets of phase-shifting fringes can be rewritten as

I_{11}(x, y) = I'(x, y) + I''(x, y) \cos[\Phi(x, y) - 2\pi/3],    (5)
I_{21}(x, y) = I'(x, y) + I''(x, y) \cos[\Phi(x, y) + \delta],    (6)
I_{31}(x, y) = I'(x, y) + I''(x, y) \cos[\Phi(x, y) + 2\pi/3 + 2\delta],    (7)
I_{12}(x, y) = I'(x, y) + I''(x, y) \cos[\Phi(x, y) - 2\pi/3 + \delta/2],    (8)
I_{22}(x, y) = I'(x, y) + I''(x, y) \cos[\Phi(x, y) + \delta + \delta/2],    (9)
I_{32}(x, y) = I'(x, y) + I''(x, y) \cos[\Phi(x, y) + 2\pi/3 + 2\delta + \delta/2].    (10)
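For use in the simulations that follow, the sketch below synthesizes the two fringe sets exactly as written in Eqs. (5)–(10) for a known test phase map; the function name simulate_dssi_sets, the chosen intensities and the input phase map are simulation assumptions, not quantities from the real system.

```python
import numpy as np

def simulate_dssi_sets(true_phase, delta, i_avg=0.5, i_mod=0.5):
    """Synthesize the two DSSI fringe sets of Eqs. (5)-(10).

    true_phase : known test phase map Phi(x, y)
    delta      : motion-induced phase drift per camera frame (rad)
    i_avg, i_mod : assumed average intensity and modulation
    """
    base = np.array([-2 * np.pi / 3, 0.0, 2 * np.pi / 3])  # nominal fringe offsets
    drift1 = delta * np.array([0.0, 1.0, 2.0])             # first shot of each cycle
    drift2 = drift1 + delta / 2                            # second shot, half a frame later
    set1 = [i_avg + i_mod * np.cos(true_phase + b + d) for b, d in zip(base, drift1)]
    set2 = [i_avg + i_mod * np.cos(true_phase + b + d) for b, d in zip(base, drift2)]
    return set1, set2
```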

Given this, it becomes possible to estimate the phase error term by comparing the two phase maps Φ1(x, y) and Φ2(x, y). To realize this, we propose the overall computational framework for phase error estimation and reduction shown in Fig. 3. The major steps are summarized as follows.

  • Step 1: Apply the DSSI technique to obtain two sets of three-step phase-shifting fringe images [I11, I21, I31] and [I12, I22, I32];
  • Step 2: For the initial iteration, retrieve the phase maps φ1(x, y) and φ2(x, y) according to Eq. (3) under the assumption δ = 0; in subsequent iterations, the updated estimate of the motion error δ is used for phase retrieval;
  • Step 3: Unwrap the phase maps to obtain Φ1(x, y) and Φ2(x, y), and evaluate the motion error term by subtracting the two phase maps: δ/2 = Φ2(x, y) − Φ1(x, y);
  • Step 4: Update the phase error term δ using the value computed in the previous step. Repeat Steps 2 and 3 and evaluate the phase-map difference ΔΦi between consecutive iterations. If ΔΦi is not small enough, return to Step 2 and repeat the iteration; otherwise, the error compensation process terminates and yields high-quality phase maps for 3D reconstruction. A sketch of this loop is given after the list.
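The loop below is a minimal sketch of one plausible way to implement Steps 1–4, not the authors' exact code. It reuses the wrap_phase helper sketched in Sec. 2.1, retrieves both sets with the same assumed shifts, takes the spatially averaged wrapped difference of the two phase maps in place of the unwrapped-map subtraction of Step 3, and stops once the estimate of δ settles.

```python
import numpy as np

def compensate_motion(set1, set2, n_iter=5, tol=1e-4):
    """Iteratively estimate the motion-induced shift delta and return a
    compensated wrapped phase map (a sketch of Steps 1-4)."""
    base = np.array([2 * np.pi / 3, 0.0, -2 * np.pi / 3])  # nominal delta_k of Eq. (1)
    steps = np.array([0.0, 1.0, 2.0])                      # drift accumulated per frame
    delta = 0.0                                            # Step 2: start from delta = 0
    for _ in range(n_iter):
        deltas = base - delta * steps
        phi1 = wrap_phase(set1, deltas)                    # helper sketched in Sec. 2.1
        phi2 = wrap_phase(set2, deltas)
        # Step 3: the wrapped phase-map difference approximates delta/2 at each pixel
        diff = np.angle(np.exp(1j * (phi2 - phi1)))
        new_delta = 2.0 * np.median(diff)
        if abs(new_delta - delta) < tol:                   # Step 4: estimate has settled
            delta = new_delta
            break
        delta = new_delta                                  # otherwise update and repeat
    return wrap_phase(set1, base - delta * steps), delta
```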

Fig. 3 Proposed computational framework for motion-induced phase error reduction.


For the proposed motion-induced phase error reduction strategy, a crucial requirement is that the second set of phase-shifting images is captured exactly at the midpoint between two consecutive fringes of the first set. Any delay or misalignment introduced by the camera trigger control would affect the final compensation accuracy. Considering this, we also introduce a simpler trigger synchronization scheme, shown in Fig. 4. The difference is that the same fringe pattern is projected twice and the camera is exactly synchronized with the projector. In this way, the camera trigger can be controlled more accurately with less chance of introducing delays. However, it requires the projector to switch patterns at a higher rate, roughly twice that of the DSSI technique. Both synchronization schemes therefore have their own advantages and disadvantages, and their performance is shown in the next section.

Fig. 4 The modified timing control scheme.


To test the effect of the proposed motion-induced phase error reduction method, simulations were carried out for the three-step phase-shifting algorithm under uniform motion. First, we tested the phase error reduction for different numbers of iterations with a phase-shift error of δ = 0.1 rad, with the results shown in Fig. 5(a). These simulation results indicate that after 3 or 4 iterations, the phase error is reduced significantly, nearly to 0. Then, we tested the proposed method for different phase-shift errors δ ∈ [−0.1, 0.1] rad. Figure 5(b) shows the phase error plots after applying the proposed method. Furthermore, since the proposed method uses twice as many fringe images as the conventional three-step phase shifting, for a fair comparison we added one more simulation that doubles the capturing speed of the conventional three-step method. Figure 5(b) compares the conventional three-step phase shifting, the double-speed three-step phase shifting and the proposed method. The result shows that the double-speed three-step phase shifting can reduce the phase error by half, yet it still performs worse than the proposed method. This simulation demonstrates the effectiveness of the proposed method.
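For reference, a uniform-motion simulation in the spirit of the one described above can be pieced together from the earlier sketches; the phase ramp, image size and δ = 0.1 rad below are illustrative choices, so the printed numbers will not exactly reproduce Fig. 5.

```python
import numpy as np

# Hypothetical driver using the sketches from Sec. 2 (wrap_phase, simulate_dssi_sets,
# compensate_motion); all parameter values here are illustrative.
x = np.linspace(0, 6 * np.pi, 512)
true_phase = np.tile(x, (64, 1))                       # smooth synthetic phase ramp
set1, set2 = simulate_dssi_sets(true_phase, delta=0.1)
wrap = lambda p: np.angle(np.exp(1j * p))              # wrap differences to (-pi, pi]

naive = wrap_phase(set1, np.array([2 * np.pi / 3, 0.0, -2 * np.pi / 3]))
corrected, est = compensate_motion(set1, set2)

print("estimated delta     :", est)
print("RMS error, naive    :", np.sqrt(np.mean(wrap(naive - true_phase) ** 2)), "rad")
print("RMS error, corrected:", np.sqrt(np.mean(wrap(corrected - true_phase) ** 2)), "rad")
```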

Fig. 5 Simulation results for three-step phase-shifting algorithm. (a) Phase error reduction effect for different iteration times when δ = 0.1 rad; (b) the phase error plots for different phase-shift errors δ ∈ [−0.1, 0.1] rad after applying the conventional three-step phase shifting, the double-speed three-step phase shifting and the proposed method.


To further test the robustness of the proposed method, we also emulated cases in which the object undergoes non-uniform motion. The acceleration of the non-uniform motion is set to one third of the moving speed, meaning that the phase-shift errors between adjacent frames of the three phase-shifting images are δ and δ + δ/3, respectively. Figure 6(a) first shows the results obtained for different numbers of iterations with a fixed δ = 0.1 rad. Once again, the phase error converges after 3 or 4 iterations, although it does not come as close to 0 as in the uniform-motion case. Figure 6(b) shows the phase errors as δ varies over [−0.1, 0.1] rad. Even with non-uniform motion, the proposed method can still greatly reduce the phase error, by up to 2/3 of its original value. This simulation indicates that the proposed method remains effective when non-uniform motion is present.

Fig. 6 Simulation results for three-step phase-shifting algorithm with non-uniform motion. (a) Phase error reduction for different iterations; (b) The phase error plots before and after applying the proposed compensation method.


3. Experiments

We further evaluated the performance of our proposed method through experiments. A complementary metal-oxide-semiconductor (CMOS) camera (model: FLIR Grasshopper3 GS3-U3-41C6C-C) was used for image acquisition, and a high-speed DLP development kit (model: Texas Instruments DLP LightCrafter 4500) was used for fringe pattern illumination. The camera was fitted with a 12 mm focal-length lens. The camera resolution was set to 1280 × 960 pixels, and the projector has a native resolution of 912 × 1140. For the timing control illustrated in Fig. 2, the camera was set to its maximum image acquisition rate of 166 Hz and the projector was configured to refresh images at 166 ÷ 2 = 83 frames per second; for the timing control illustrated in Fig. 4, both the image acquisition rate and the pattern illumination speed were configured to 166 Hz. Both timing controls were realized with a programmable microcontroller (Arduino Uno) and its associated software platform (Arduino IDE).

We measured a dynamically moving spherical object to evaluate the effectiveness of the proposed motion-induced error reduction method. The results are shown in Figs. 7 and 8, where both timing controls illustrated in Figs. 2 and 4 are tested. Figures 7(a)–7(b) show a sample 3D frame obtained with the standard three-step phase-shifting method and with our proposed motion-induced error reduction method based on the timing chart in Fig. 2. Figures 8(a)–8(b) show the corresponding results based on the timing chart in Fig. 4. Visual inspection clearly shows that both proposed schemes perform noticeably better than the conventional three-step phase shifting.

Fig. 7 Experimental results of measuring a moving spherical object using the timing control illustrated in Fig. 2 (associated Visualization 1). (a)–(b) A sample reconstructed 3D frame using the standard three-step phase-shifting method and the proposed error compensation method; (c) cross-section plots of Fig. 7(a), Fig. 7(b) and an ideal sphere; (d) cross-section plots of errors of Figs. 7(a)–7(b). The mean and root-mean-square (RMS) errors are 0.5356 mm and 0.8482 mm for the standard three-step phase-shifting method; and 0.056 mm and 0.4130 mm for the proposed error compensation method.


Fig. 8 Experimental results of measuring a moving spherical object using the timing control illustrated in Fig. 4 (associated Visualization 2). (a)–(b) A sample reconstructed 3D frame using the standard three-step phase-shifting method and the proposed error compensation method; (c) cross-section plots of Fig. 8(a), Fig. 8(b) and an ideal sphere; (d) cross-section plots of errors of Figs. 8(a)–8(b). The mean and RMS errors are 0.4116 mm and 0.6154 mm for the standard three-step phase-shifting method; and 0.0643 mm and 0.3731 mm for the proposed error compensation method.


To quantitatively evaluate the accuracy, we produced a ground-truth geometry of an ideal sphere by performing sphere fitting with the diameter (d = 147.726 mm) measured by a precision caliper. Figures 7(c) and 8(c) show an overlay of cross sections of the ideal sphere, the 3D result from the conventional three-step phase shifting, and the 3D result from the proposed method. The corresponding error cross sections, obtained by subtracting the ideal spherical geometry from the two measured ones, are plotted in Figs. 7(d) and 8(d). The error plots clearly show that the proposed method has a smaller overall error than the conventional three-step phase shifting, and that the motion-induced periodic errors are significantly reduced after applying the proposed compensation framework. For the result obtained using the timing control of Fig. 2, the mean errors are 0.5356 mm and 0.056 mm for the conventional three-step phase shifting and the proposed method, with corresponding root-mean-square (RMS) errors of 0.8482 mm and 0.4130 mm, respectively; for the result obtained using the timing control of Fig. 4, the mean errors are 0.4116 mm and 0.0643 mm, with corresponding RMS errors of 0.6154 mm and 0.3731 mm, respectively. This dynamic measurement clearly demonstrates the effectiveness of our proposed method in reducing motion-induced measurement errors. It can also be seen that different synchronization schemes lead to different levels of motion-induced phase error compensation, which means that synchronization accuracy is crucial for the proposed method. Our current experimental setup is not optimal; we will therefore explore new strategies to guarantee synchronization accuracy and better suppress the motion-induced phase error.
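As a sketch of how such an evaluation can be carried out (the function name, the use of SciPy's least-squares solver, the initial-center guess, and the assumption that the measured points form an (N, 3) array in millimeters are ours, not a description of the exact evaluation code used here), a fixed-radius sphere fit and its error statistics might look like the following.

```python
import numpy as np
from scipy.optimize import least_squares

def sphere_fit_errors(points, diameter=147.726):
    """Fit a sphere of the caliper-measured diameter to measured 3-D points
    (only the center is optimized) and return the mean and RMS of the
    signed radial errors in the same units as the input (assumed mm)."""
    radius = diameter / 2.0
    residual = lambda c: np.linalg.norm(points - c, axis=1) - radius
    # Rough initial center: behind the visible surface (assumes +z roughly toward camera)
    c0 = points.mean(axis=0) + np.array([0.0, 0.0, -radius])
    center = least_squares(residual, c0).x
    err = residual(center)
    return err.mean(), np.sqrt(np.mean(err ** 2))
```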

To further validate the robustness of the proposed method, we conducted additional experiments measuring different types of motion (i.e., planar and rotational) and a moving complex object. The results are shown in Fig. 9, where the first column shows photographs of the objects used, the second column shows the results obtained with the conventional three-step phase-shifting method, and the third column shows the results obtained with the proposed method. The first row shows the results of measuring a spherical object moving mainly in the planar x-y direction; the second row shows the results of measuring a bottle-like object undergoing rotational movement; and the third row shows the results of measuring a complex moving sculpture head. All results show that our proposed method can noticeably reduce the motion-induced phase errors for different types of motion and surface geometries, which further validates its robustness.

Fig. 9 Additional experimental results to test the robustness of the algorithm. (a)–(c) Results of a sphere with mainly planar x-y movement (associated Visualization 3); (d)–(f) corresponding results of a bottle with rotational movement (associated Visualization 4); (g)–(i) corresponding results of a complex object (associated Visualization 5).


4. Discussion

Compared to existing technologies for motion-induced error reduction, the proposed technique has the following advantages:

  • Only assumes uniform motion within a short period of time. Compared to another iterative motion-induced error reduction technique [14], the proposed method only assumes that the object moves at a constant speed during the projection of a single set of three-step phase-shifted patterns, which is a more viable assumption than assuming uniform motion between fringes generated in two measurement cycles.
  • Suitable for high-speed applications. The proposed method achieves additional temporal sampling by taking advantage of the relaxed camera-projector synchronization requirement of binary patterns. Since binary patterns can be switched at least 10 times faster than conventional sinusoidal patterns, the proposed method can be extended to high-speed applications if a high-speed camera is available.

However, given that 1) we still assume that the object moves uniformly during the acquisition of the three-step phase-shifted patterns, and 2) inevitable trigger delays may affect the accuracy of the additional temporal sampling, residual errors can still be found in both the simulation and experimental results. Future work is needed to combine the proposed technique with more sophisticated modeling of object motion and trigger control. Moreover, the proposed method also assumes that the camera speed is not significantly slower than the motion, so that the mismatch between adjacent frames in one measurement cycle is not severe. This assumption can generally be satisfied given the high-speed advantage of binary patterns. If it is violated, marker-based approaches [9,10] can be incorporated for precise motion tracking.

5. Summary

This paper has presented a motion-induced error compensation method based on additional temporal sampling. The proposed method takes advantage of the relaxed camera-projector synchronization requirement of defocused binary pattern projection. Within each fringe pattern illumination cycle, two camera shots are taken to produce two sets of three-step phase-shifted fringe patterns. For each set, an unwrapped absolute phase map is extracted in preparation for error compensation. The difference between the two phase maps is evaluated to extract the phase error term induced by motion, and an iterative algorithm is applied to further reduce the phase errors and thus the 3D shape measurement errors. Dynamic experiments successfully demonstrated the effectiveness of the proposed algorithm.

Funding

National Natural Science Foundation of China (NSFC) (61603360). Iowa State University (Startup).

References

1. S. Gorthi and P. Rastogi, “Fringe projection techniques: Whither we are?” Opt. Lasers Eng. 48, 133–140 (2010).

2. S. Zhang, “Recent progresses on real-time 3-d shape measurement using digital fringe projection techniques,” Opt. Lasers Eng. 48, 149–158 (2010).

3. X. Su and Q. Zhang, “Dynamic 3-d shape measurement method: A review,” Opt. Lasers Eng. 48, 191–204 (2010).

4. S. Lei and S. Zhang, “Flexible 3-d shape measurement using projector defocusing,” Opt. Lett. 34, 3080–3082 (2009).

5. C. Zuo, Q. Chen, S. Feng, F. Feng, G. Gu, and X. Sui, “Optimized pulse width modulation pattern strategy for three-dimensional profilometry with projector defocusing,” Appl. Opt. 51, 4477–4490 (2012).

6. B. Li, Y. Wang, J. Dai, W. Lohry, and S. Zhang, “Some recent advances on superfast 3d shape measurement with digital binary defocusing techniques,” Opt. Lasers Eng. 54, 236–246 (2014).

7. Y. Wang and S. Zhang, “Superfast multifrequency phase-shifting technique with optimal pulse width modulation,” Opt. Express 19, 5143–5148 (2011).

8. J. Zhu, P. Zhou, X. Su, and Z. You, “Accurate and fast 3d surface measurement with temporal-spatial binary encoding structured illumination,” Opt. Express 24, 28549–28560 (2016).

9. L. Lu, J. Xi, Y. Yu, and Q. Guo, “New approach to improve the accuracy of 3-d shape measurement of moving object using phase shifting profilometry,” Opt. Express 21, 30610–30622 (2013).

10. L. Lu, J. Xi, Y. Yu, and Q. Guo, “Improving the accuracy performance of phase-shifting profilometry for the measurement of objects in motion,” Opt. Lett. 39, 6715–6718 (2014).

11. Z. Wang and B. Han, “Advanced iterative algorithm for phase extraction of randomly phase-shifted interferograms,” Opt. Lett. 29, 1671–1673 (2004).

12. S. Feng, C. Zuo, T. Tao, Y. Hu, M. Zhang, Q. Chen, and G. Gu, “Robust dynamic 3-d measurements with motion-compensated phase-shifting profilometry,” Opt. Lasers Eng. 103, 127–138 (2018).

13. P. Cong, Z. Xiong, Y. Zhang, S. Zhao, and F. Wu, “Accurate dynamic 3d sensing with fourier-assisted phase shifting,” IEEE J. Sel. Top. Signal Process. 9, 396–408 (2015).

14. Z. Liu, P. C. Zibley, and S. Zhang, “Motion-induced error compensation for phase shifting profilometry,” Opt. Express 26, 12632–12637 (2018).

15. J.-S. Hyun and S. Zhang, “Enhanced two-frequency phase-shifting method,” Appl. Opt. 55, 4395–4401 (2016).

16. B. Li, N. Karpinsky, and S. Zhang, “Novel calibration method for structured light system with an out-of-focus projector,” Appl. Opt. 53, 3415–3426 (2014).

Supplementary Material

Visualization 1: Experimental results of measuring a moving spherical object using double-shot-in-single-illumination.
Visualization 2: Experimental results of measuring a moving spherical object using repeated fringe projection.
Visualization 3: Experimental data for measuring a moving object with mainly planar x-y motions.
Visualization 4: Experimental data for measuring a moving object with mainly rotational motions.
Visualization 5: Experimental data for measuring a complex moving object.
