## Abstract

Object motion can introduce unknown phase shift and thus measurement error in multi-image phase-shifting methods of fringe projection profilometry. This paper presents a new method to estimate the unknown phase shifts and reduce the motion-induced error by using three phase maps computed over a multiple measurement sequence and calculating the difference between phase maps. The pixel-wise estimation of the motion-induced phase shifts permits phase-error compensation for non-homogeneous surface motion. Experiments demonstrated the ability of the method to reduce motion-induced error in real time for shape measurement of surfaces with high depth variation, and of moving and deforming surfaces.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction

Optical non-contact three-dimensional (3D) surface-shape measurement has diverse applications in manufacturing [1], biomedical engineering [2], computer vision, and heritage digitization [3,4]. Fringe projection profilometry (FPP) is a common technique that permits full-field 3D surface-shape measurement with high accuracy [5]. One or more spatially-continuous fringe patterns are projected onto the object surface by a projector, and images of deformed patterns are captured by a camera. A phase map can be obtained from a single sinusoidal fringe pattern using Fourier transform profilometry (FTP) [6], or multiple phase-shifted fringe patterns using phase-shifting profilometry (PSP) [7]. The computed phase map is generally wrapped in a range from −*π* to *π* with 2*π* discontinuities. Phase-unwrapping [8] or system-geometric-constraint [9,10] methods can solve the phase ambiguity of the wrapped phase map and permit determination of camera-projector correspondences. The object surface shape can then be reconstructed by stereovision techniques using geometric and optical calibration parameters of the pre-calibrated FPP system. For measurement of static object surfaces, PSP, which uses multiple fringe patterns, may be a preferred technique over FTP methods, because of the higher accuracy of multiple-image PSP [11]. However, for applications requiring dynamic 3D measurement of moving or deforming surfaces, the object surface motion between successive camera-captured images (when multiple patterns are used) will cause phase errors and consequently reduce measurement accuracy.

There have been several approaches to improve measurement accuracy for dynamic measurement [11]. Increasing image acquisition speed to reduce the effect of object motion during measurement has been achieved using high-speed projectors and cameras [12]. For example, Goes Before Optics (GOBO) projection can achieve high pattern-switching speeds with a high-speed spinning wheel [13]. However, high speed pattern projection and image capture is achieved at a higher system cost. High projection speed has been achieved using defocused binary patterns to generate sinusoidal fringe patterns [14]. While this can reduce the effect of object motion between successive images on measurement accuracy, further reduction of motion-induced errors would be desirable.

Reduction of the number of projected patterns can also reduce the effect of object motion during measurement on measurement accuracy. The single-image FTP [6] method is ideal for dynamic measurement of fast-moving objects. However, FTP is limited to a finite maximum slope of object depth variation, beyond which the fundamental frequency component overlaps other spectral components and cannot be retrieved unambiguously. This would lead to phase errors in regions of high depth variation. Windowed Fourier transform [15] and wavelet transform [16] methods can achieve higher phase accuracy than FTP [17]; however, they have higher computational cost, which is not favorable for real-time measurement. Other single-image methods use a composite fringe pattern [18] or a color fringe pattern [19], where several sinusoidal fringe patterns with different frequencies or phase shifts are embedded into one image. However, complex image demodulation and high sensitivity to object surface color and ambient light lead to decreased signal-to-noise ratio (SNR) of the extracted patterns.

Because of the high accuracy obtained using multiple phase-shifted-pattern methods, there has been a growing interest in developing methods to compensate for errors caused by object motion between the successive images captured during measurement. Marker based [20,21] and Scale Invariant Feature Transform (SIFT) [22] methods can compensate for measurement error due to planar object motion without handling motion in the depth direction. In PSP, fringe patterns are projected with known phase shift, and PSP phase-analysis algorithms can solve for the phase shift due to the object surface geometry. However, object surface motion will cause an additional unknown phase-shift in the captured images, resulting in motion-induced phase error and thus depth measurement error. Several error compensation methods can handle errors due to motion in the depth direction. One phase error compensation method for dynamic 3D measurement is based on the assumption that the motion-induced phase-shift error is homogeneous within a single object [23]. However, the estimation of phase-shift error may not be accurate if the object is deforming and the motion-induced phase-shift error is non-homogeneous. One pixel-independent phase-shift error estimation method used FTP to compute phase-map differences of successive captured fringe images [24]. Another method fused FTP and PSP surface reconstruction, guided by phase-based pixel-wise motion detection [25]. Although the latter two methods work well to handle object motion, the measurement accuracy would be limited where there is high depth variation due to the use of FTP. Another method used the Hilbert transform to generate an additional set of fringe images and an additional phase map, which can substantially compensate for motion-induced error by averaging the original and additional phase maps [26]. However, use of the Hilbert transform requires additional processing to suppress errors at fringe edges [27].

Iterative methods [28,29] can estimate non-homogeneous phase-shift error and then compute phase using the generic phase-shifting algorithm while considering the phase-shift error. These methods can achieve very high accuracy with little motion-induced error after several iterations. While these motion-induced-error compensation methods work well for fast moving or deforming surfaces, they are computationally expensive and not suitable for real-time applications. A method that can perform pixel-wise motion-induced-error compensation in real-time measurement and also handle objects with large depth variation is still needed.

This paper presents a new motion-induced-error compensation method that is non-iterative and thus suitable for real-time 3D surface-shape measurement of dynamic surfaces. The method performs pixel-wise estimation of the phase-shift due to surface motion without the assumption of homogeneous motion across the surface. To permit real-time measurement, system geometry-constraints are used to solve the phase ambiguity of the wrapped phase map without requiring additional patterns as in temporal phase-unwrapping methods.

## 2. Principle and method

#### 2.1. Phase-shifting method

The intensity of each pixel of camera-captured *N*-step phase-shifted fringe images can be modeled by:

$$I_{n}(x,y) = A(x,y)+B(x,y)\cos \left[ \phi (x,y)+\theta_{n} \right], \qquad (1)$$

where *A*(*x*, *y*) and *B*(*x*, *y*) represent the unknown background intensity and amplitude of modulation, respectively, and *ϕ*(*x*, *y*) is the unknown phase map. *θ _{n}* (*n* = 1, 2, …, *N*) is the known phase shift for the *n*-th projected fringe pattern. Equation (1) can be rewritten as:

$$I_{n}(x,y) = A(x,y)+B_{1}(x,y)\cos \theta_{n}-B_{2}(x,y)\sin \theta_{n}, \qquad (2)$$

where *B*_{1}(*x*, *y*) = *B*(*x*, *y*)cos[*ϕ*(*x*, *y*)] and *B*_{2}(*x*, *y*) = *B*(*x*, *y*)sin[*ϕ*(*x*, *y*)]. The following least-squares solution can be obtained [30]:

$$\mathbf{X}(x,y) = {{\left( {{\mathbf{M}}^{T}}\mathbf{M} \right)}^{-1}}{{\mathbf{M}}^{T}}\mathbf{I}(x,y), \qquad (3)$$

where **X**(*x*, *y*) = [*A*(*x*, *y*), *B*_{1}(*x*, *y*), *B*_{2}(*x*, *y*)]^{T}, **I**(*x*, *y*) = [*I*_{1}(*x*, *y*), *I*_{2}(*x*, *y*), …, *I _{N}*(*x*, *y*)]^{T}, and

$$\mathbf{M} = \begin{bmatrix} 1 & \cos \theta_{1} & -\sin \theta_{1} \\ \vdots & \vdots & \vdots \\ 1 & \cos \theta_{N} & -\sin \theta_{N} \end{bmatrix}. \qquad (4)$$

The wrapped phase can then be computed from the solved coefficients:

$$\phi (x,y) = {{\tan }^{-1}}\frac{B_{2}(x,y)}{B_{1}(x,y)}. \qquad (5)$$

For the standard 4-step phase-shifting method, the phase shift is *θ _{n}* = 2*π*(*n* − 1)/*N* with *N* = 4, and the wrapped phase reduces to:

$$\phi (x,y) = {{\tan }^{-1}}\frac{I_{4}(x,y)-I_{2}(x,y)}{I_{1}(x,y)-I_{3}(x,y)}. \qquad (6)$$
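As an illustrative numpy sketch (not the authors' implementation), the standard 4-step phase-shifting computation above can be written in a few lines: generate fringe images from the model of Eq. (1) with the standard shifts, then recover the wrapped phase from the four images. The values of `A` and `B` are arbitrary illustrative choices.

```python
import numpy as np

def simulate_fringe_images(phi, A=128.0, B=100.0, N=4):
    """Generate N-step phase-shifted fringe images per the model of Eq. (1),
    with the standard shifts theta_n = 2*pi*(n-1)/N."""
    thetas = 2 * np.pi * np.arange(N) / N
    return [A + B * np.cos(phi + t) for t in thetas]

def wrapped_phase_4step(I1, I2, I3, I4):
    """Standard 4-step wrapped phase: atan2(I4 - I2, I1 - I3),
    wrapped to (-pi, pi]."""
    return np.arctan2(I4 - I2, I1 - I3)

# Example: recover a known phase ramp from noiseless images.
phi_true = np.linspace(-np.pi + 0.01, np.pi - 0.01, 500)
I1, I2, I3, I4 = simulate_fringe_images(phi_true)
phi_rec = wrapped_phase_4step(I1, I2, I3, I4)
assert np.allclose(phi_rec, phi_true, atol=1e-9)
```

Using `arctan2` rather than a plain arctangent resolves the quadrant automatically, since the background intensity *A* cancels in both differences.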

#### 2.2. Motion-induced phase error

If the object is moving or deforming between successive captured images, the phase shift at each pixel in the captured images will have an additional unknown phase-shift due to the object surface motion. This phase-shift error can be determined pixel by pixel using the system geometric and optical parameters and object motion, if the motion is known [28]. The object motion induced phase-shift error for the standard 4-step phase-shifting method, where the object surface is moving toward the camera at varying speed, is shown in Fig. 1. When capturing fringe images with phase-shift error due to motion, ${{I}^{\prime}}_{1}(x,y)$, ${{I}^{\prime}}_{2}(x,y)$, ${{I}^{\prime}}_{3}(x,y)$ and ${{I}^{\prime}}_{4}(x,y)$, the position of the object surface for any camera pixel is at *p*_{1}(*x*, *y*), *p*_{2}(*x*, *y*), *p*_{3}(*x*, *y*) and *p*_{4}(*x*, *y*), respectively. Three unknown phase-shift errors *ε*_{1}(*x*, *y*), *ε*_{2}(*x*, *y*) and *ε*_{3}(*x*, *y*) are caused by the object motion from *p*_{1} to *p*_{2}, *p*_{2} to *p*_{3}, and *p*_{3} to *p*_{4}, respectively. Similar phase-shift errors can be observed for any direction of object motion [23]. Since the position of the object is unknown and changing during the measurement, an object position *P*(*x*, *y*) (red dashed line) that has zero phase-shift error (i.e. no motion-induced error), can be defined as a reference to aid in determining the phase-shift error in each image ${{I}^{\prime}}_{n}(x,y)$ (*n* = 1, 2, 3, 4).

It is assumed that the object moves in one direction during the image acquisition of four phase-shifted images, and that the position *P*(*x*, *y*) is between the middle two positions *p*_{2} and *p*_{3}, where the phase-shift error between positions *p*_{2} and *P* is half the phase-shift error between *p*_{2} and *p*_{3}, *ε*_{2}(*x*, *y*)/2. Note that *P* is not necessarily at mid-depth between *p*_{2} and *p*_{3}.

The intensity of 4-step phase-shifted fringe images considering object motion can be described by the following equations:

$$\begin{aligned} {{I}^{\prime}}_{1}(x,y) &= A(x,y)+B(x,y)\cos \left[ \phi (x,y)-\varepsilon_{1}-\frac{\varepsilon_{2}}{2} \right],\\ {{I}^{\prime}}_{2}(x,y) &= A(x,y)+B(x,y)\cos \left[ \phi (x,y)-\frac{\varepsilon_{2}}{2}+\frac{\pi }{2} \right],\\ {{I}^{\prime}}_{3}(x,y) &= A(x,y)+B(x,y)\cos \left[ \phi (x,y)+\frac{\varepsilon_{2}}{2}+\pi \right],\\ {{I}^{\prime}}_{4}(x,y) &= A(x,y)+B(x,y)\cos \left[ \phi (x,y)+\frac{\varepsilon_{2}}{2}+\varepsilon_{3}+\frac{3\pi }{2} \right], \end{aligned} \qquad (7)$$

where *ϕ*(*x*, *y*), defined at the reference position *P*, is the unknown phase related to the object surface geometry to be solved for. Since the motion-induced phase-shift errors are unknown, the wrapped phase *ϕ*′(*x*, *y*) given by the standard 4-step phase-shifting method:

$$\phi^{\prime}(x,y) = {{\tan }^{-1}}\frac{{{I}^{\prime}}_{4}(x,y)-{{I}^{\prime}}_{2}(x,y)}{{{I}^{\prime}}_{1}(x,y)-{{I}^{\prime}}_{3}(x,y)} \qquad (8)$$

contains a motion-induced phase error Δ*ϕ*(*x*, *y*) relative to the true phase *ϕ*(*x*, *y*):

$$\Delta \phi (x,y) = \phi^{\prime}(x,y)-\phi (x,y). \qquad (9)$$

Camera pixel coordinates (*x*, *y*) may be omitted hereinafter for brevity. Substituting Eq. (7) and applying sum-to-product identities, Eq. (8) can be rewritten as:

$$\tan \phi^{\prime} = \frac{\sin \left( \phi +\frac{\varepsilon_{3}}{2} \right)\cos \left( \frac{\varepsilon_{2}+\varepsilon_{3}}{2} \right)}{\cos \left( \phi -\frac{\varepsilon_{1}}{2} \right)\cos \left( \frac{\varepsilon_{1}+\varepsilon_{2}}{2} \right)}. \qquad (10)$$

For a small phase-shift error *ε*, sin(*ε*) ≈ *ε* and cos(*ε*) ≈ 1. Then, Eq. (10) can be approximated as:

$$\tan \phi^{\prime} \approx \frac{\sin \phi +\frac{\varepsilon_{3}}{2}\cos \phi }{\cos \phi +\frac{\varepsilon_{1}}{2}\sin \phi }. \qquad (11)$$

Using the identity tan Δ*ϕ* = (tan *ϕ*′ − tan *ϕ*)/(1 + tan *ϕ*′ tan *ϕ*) and keeping only first-order terms in the phase-shift errors gives:

$$\tan \Delta \phi \approx \frac{\varepsilon_{3}}{2}{{\cos }^{2}}\phi -\frac{\varepsilon_{1}}{2}{{\sin }^{2}}\phi , \qquad (12)$$

so that, with tan Δ*ϕ* ≈ Δ*ϕ*, the approximate motion-induced phase error Δ*ϕ*(*x*, *y*) (Eq. (9)) can be derived as:

$$\Delta \phi \approx \frac{\varepsilon_{3}-\varepsilon_{1}}{4}+\frac{\varepsilon_{1}+\varepsilon_{3}}{4}\cos (2\phi ). \qquad (13)$$

#### 2.3. Simulation of motion-induced phase error

Equation (13) shows that the motion-induced phase error approximately correlates to 2*ϕ*. The mean phase error (DC component) is close to zero if the object motion is at constant speed, where *ε*_{1}(*x*, *y*) ≈*ε*_{3}(*x*, *y*). The phase error amplitude increases as phase-shift errors *ε*_{1} and *ε*_{3} increase. A phase measurement simulation was performed using a given phase-shift error between successive positions: *ε*_{1} = *ε*_{2} = *ε*_{3} = 0.2 rad, and assuming constant speed motion. Note that the phase maps in this simulation are unwrapped. The phase map *ϕ*′(*x*, *y*) with motion-induced phase error, simulated using the standard 4-step phase-shifting method (Eq. (8)) (black curve), is shown with the simulated phase at the four positions *p*_{1}, *p*_{2}, *p*_{3}, *p*_{4} (four colored lines) in Fig. 2(a).

A phase map *ϕ*(*x*, *y*) without error is simulated by generating images using Eq. (7) and assuming that the object has no motion, thus using *ε*_{1} = *ε*_{2} = *ε*_{3} = 0 rad. The computed motion-induced phase error using *ϕ*′−*ϕ*, and the simulated phase error using Eq. (13), are shown in Fig. 2(b).

A simulation of phase measurement with varying speed object motion was performed using a given phase-shift error between successive positions: *ε*_{1} = 0.15 rad, *ε*_{2} = 0.2 rad, and *ε*_{3} = 0.25 rad. The phase map *ϕ*′(*x*, *y*) with motion-induced phase error, simulated using the standard 4-step phase-shifting method (Eq. (8)) (black curve), is shown with the simulated phase at the four positions *p*_{1}, *p*_{2}, *p*_{3}, *p*_{4} (four colored lines) in Fig. 3(a). The computed motion-induced phase error (blue) using *ϕ*′−*ϕ* and the simulated phase error (red) using Eq. (13) are shown in Fig. 3(b). The small difference between the computed and simulated phase curves in both Figs. 2(b) and 3(b) shows that the motion-induced phase error can be approximated by Eq. (13).
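The simulations above can be reproduced with a short numpy sketch. It generates the four fringe images from the motion model of Eq. (7) with the varying-speed errors *ε*_{1} = 0.15, *ε*_{2} = 0.2, *ε*_{3} = 0.25 rad, computes the phase by the standard 4-step formula, and compares the resulting error *ϕ*′ − *ϕ* with the approximation of Eq. (13); the two agree to within a few hundredths of a radian.

```python
import numpy as np

def motion_fringe_images(phi, e1, e2, e3, A=128.0, B=100.0):
    """Four phase-shifted images with motion-induced shift errors (Eq. (7));
    phi is defined at the reference position P between p2 and p3."""
    I1 = A + B * np.cos(phi - e1 - e2 / 2)
    I2 = A + B * np.cos(phi - e2 / 2 + np.pi / 2)
    I3 = A + B * np.cos(phi + e2 / 2 + np.pi)
    I4 = A + B * np.cos(phi + e2 / 2 + e3 + 3 * np.pi / 2)
    return I1, I2, I3, I4

phi = np.linspace(-np.pi, np.pi, 2000)
e1, e2, e3 = 0.15, 0.20, 0.25               # varying-speed case from the text

I1, I2, I3, I4 = motion_fringe_images(phi, e1, e2, e3)
phi_meas = np.arctan2(I4 - I2, I1 - I3)                  # standard 4-step phase
err_computed = np.angle(np.exp(1j * (phi_meas - phi)))   # phi' - phi, rewrapped

# First-order approximation of the motion-induced error (Eq. (13)).
err_approx = (e3 - e1) / 4 + (e1 + e3) / 4 * np.cos(2 * phi)

# The approximation tracks the computed error closely for small errors,
# and the DC component is close to (e3 - e1)/4.
assert np.max(np.abs(err_computed - err_approx)) < 0.05
```

Increasing `e1` and `e3` widens the gap between the computed and approximated curves, which is the residual-error behavior discussed in Section 3.4.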

#### 2.4. Motion-induced phase-shift error estimation

Although the wrapped phase can be obtained once the phase shift is known using the phase-shifting method (Section 2.1), the computed phase map will have unknown phase error (Section 2.2) from the additional unknown phase shift due to object surface motion. Determination of the unknown phase-shift error caused by object motion is key to retrieving the phase with no motion artifact. The approach in this paper is to compute three phase maps at intervals over a multiple measurement sequence and estimate the phase-shift error by calculating the difference between the computed phase maps. To generate the three phase maps over the sequence, the standard 4-step phase-shifted images (frames) at *p*_{1} to *p*_{4} (Fig. 1) are used together with the previous two frames at *p*_{-1} and *p*_{0}, and the subsequent two frames at *p*_{5} and *p*_{6}, where the object is moving from position *p*_{-1} to position *p*_{6} (Fig. 4).

Consider a phase-shift error *ε _{i}* caused by object motion from any position *p _{i}* to the subsequent position *p _{i+1}* (Fig. 4). While it is possible to compute five different phase maps ${{\varphi}^{\prime}}_{k}$ (*k* = 0, 1, 2, 3, 4), only ${{\varphi}^{\prime}}_{0}$, ${{\varphi}^{\prime}}_{2}$ and ${{\varphi}^{\prime}}_{4}$ are used, as discussed below. The phase maps ${{\varphi}^{\prime}}_{0}$, ${{\varphi}^{\prime}}_{2}$ and ${{\varphi}^{\prime}}_{4}$ are each computed using the four images captured at *p*_{-1}, *p*_{0}, *p*_{1}, *p*_{2}; at *p*_{1}, *p*_{2}, *p*_{3}, *p*_{4}; and at *p*_{3}, *p*_{4}, *p*_{5}, *p*_{6}, respectively, using the standard 4-step phase-shifting method for all three phase maps, where ${{\varphi}^{\prime}}_{2}$ is the phase map that is being corrected for the current measurement. Referring to Fig. 4, according to Eq. (13), the motion-induced phase error of phase map ${{\varphi}^{\prime}}_{2}$ is:

$$\Delta \phi_{2} \approx \frac{\varepsilon_{3}-\varepsilon_{1}}{4}+\frac{\varepsilon_{1}+\varepsilon_{3}}{4}\cos (2\phi_{2}), \qquad (14)$$

and, similarly, the motion-induced phase error of phase map ${{\varphi}^{\prime}}_{4}$ is:

$$\Delta \phi_{4} \approx \frac{\varepsilon_{5}-\varepsilon_{3}}{4}+\frac{\varepsilon_{3}+\varepsilon_{5}}{4}\cos (2\phi_{4}). \qquad (15)$$

The phase shift between *ϕ*_{2} and *ϕ*_{4} consists of a default phase shift of −*π* and a phase shift due to object motion:

$$\phi_{4} = \phi_{2}-\pi +\frac{\varepsilon_{2}}{2}+\varepsilon_{3}+\frac{\varepsilon_{4}}{2}. \qquad (16)$$

The phase errors Δ*ϕ*_{2}(*x*, *y*) and Δ*ϕ*_{4}(*x*, *y*) will be similar according to Eqs. (14) and (15), if the phase-shift errors are small. Calculating the difference of the two phase maps with similar phase error can partially cancel the effect of the phase error. Considering that there is a default phase shift of −*π* between ${{\varphi}^{\prime}}_{2}$ and ${{\varphi}^{\prime}}_{4}$, computing the *π* offset phase difference ${{\varphi}^{\prime}}_{4}-{{\varphi}^{\prime}}_{2}+\pi $ gives:

$${{\varphi}^{\prime}}_{4}-{{\varphi}^{\prime}}_{2}+\pi = \frac{\varepsilon_{2}}{2}+\varepsilon_{3}+\frac{\varepsilon_{4}}{2}+\Delta \phi_{4}-\Delta \phi_{2}. \qquad (17)$$

If the speed of the surface motion is constant, or changing at an approximately constant rate over the short capture interval, then *ε*_{2}/2 + *ε*_{4}/2 ≈ *ε*_{3}, and Eq. (17) becomes:

$${{\varphi}^{\prime}}_{4}-{{\varphi}^{\prime}}_{2}+\pi \approx 2\varepsilon_{3}+\Delta \phi_{4}-\Delta \phi_{2}. \qquad (18)$$

The *π* offset phase difference ${{\varphi}^{\prime}}_{4}-{{\varphi}^{\prime}}_{2}+\pi $ (Eq. (18)) contains a DC component, which is approximately twice the phase-shift error *ε*_{3}, and a small periodic AC component correlated to 2*ϕ*_{2}. A simulation of ${{\varphi}^{\prime}}_{4}-{{\varphi}^{\prime}}_{2}+\pi $ computed for different values of phase-shift error *ε*_{3} = 0.1, 0.15 and 0.2 rad results in DC components of 0.2, 0.3, and 0.4 rad, respectively (twice the phase-shift errors) (Fig. 5), for assumed constant-speed object motion. As the phase-shift error *ε*_{3} increases, the sinusoidal error increases, since the phase errors of phase maps ${{\varphi}^{\prime}}_{2}$ and ${{\varphi}^{\prime}}_{4}$ become more different.

The phase-shift error *ε*_{3} can be estimated by $({{\varphi}^{\prime}}_{4}-{{\varphi}^{\prime}}_{2}+\pi )/2$; however, there is a residual AC component that contributes to inaccuracy in *ε*_{3}. To improve the accuracy of the *ε*_{3} estimation, in this paper, an averaging operation is further performed over a small region around each pixel after computing $({{\varphi}^{\prime}}_{4}-{{\varphi}^{\prime}}_{2}+\pi )/2$, to eliminate the additional sinusoidal error (AC component). The phase-shift error *ε*_{3}(*x*, *y*) can thus be estimated by:

$$\varepsilon_{3}(x,y) = \frac{1}{N_{V}}\sum_{(i,j)\in V}\frac{{{\varphi}^{\prime}}_{4}(i,j)-{{\varphi}^{\prime}}_{2}(i,j)+\pi }{2}, \qquad (19)$$

where *V* is a small region around pixel (*x*, *y*) and *N _{V}* is the number of pixels in *V*. The window size of the small region is the wavelength of the projected fringe. It is computationally expensive if the above averaging operation is performed directly at each pixel. An integral image [31] is used to reduce the computational cost, making it suitable for parallel computing. Similarly, by computing the offset phase difference of phase maps ${{\varphi}^{\prime}}_{0}$ and ${{\varphi}^{\prime}}_{2}$, the phase-shift error *ε*_{1}(*x*, *y*) can be estimated by:

$$\varepsilon_{1}(x,y) = \frac{1}{N_{V}}\sum_{(i,j)\in V}\frac{{{\varphi}^{\prime}}_{2}(i,j)-{{\varphi}^{\prime}}_{0}(i,j)+\pi }{2}. \qquad (20)$$
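The windowed averaging in Eqs. (19)–(20) can be sketched with a summed-area table; `box_mean` below is an illustrative helper (not the authors' GPU code), the 24-pixel window mirrors the fringe wavelength used in the experiments, and the phase difference is rewrapped before averaging, which the equations leave implicit.

```python
import numpy as np

def box_mean(img, win):
    """Windowed mean via an integral image (summed-area table): after the
    O(H*W) table is built, each window sum costs four lookups, independent
    of window size. Windows are cropped at the image borders."""
    H, W = img.shape
    S = np.zeros((H + 1, W + 1))
    S[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    r = win // 2
    y0 = np.clip(np.arange(H) - r, 0, H)
    y1 = np.clip(np.arange(H) + r + 1, 0, H)
    x0 = np.clip(np.arange(W) - r, 0, W)
    x1 = np.clip(np.arange(W) + r + 1, 0, W)
    total = (S[y1[:, None], x1[None, :]] - S[y0[:, None], x1[None, :]]
             - S[y1[:, None], x0[None, :]] + S[y0[:, None], x0[None, :]])
    area = (y1 - y0)[:, None] * (x1 - x0)[None, :]
    return total / area

# Estimate eps3 from two hypothetical phase maps per Eq. (19): average
# (phi4 - phi2 + pi)/2 over a fringe-wavelength-sized window.
rng = np.random.default_rng(0)
phi2 = rng.uniform(-np.pi, np.pi, (120, 160))
eps3_true = 0.1
phi4 = phi2 - np.pi + 2 * eps3_true                  # idealized Eq. (18), no AC term
diff = np.angle(np.exp(1j * (phi4 - phi2 + np.pi)))  # rewrap before averaging
eps3_est = box_mean(diff / 2, win=24)
assert np.allclose(eps3_est, eps3_true, atol=1e-9)
```

Building the table once and reusing it for every pixel is what makes the per-pixel averaging cheap enough for the real-time GPU implementation described in Section 3.3.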

The phase-shift error *ε*_{2}(*x*, *y*) is estimated by *ε*_{2}(*x*, *y*) = [*ε*_{1}(*x*, *y*) + *ε*_{3}(*x*, *y*)]/2. After the estimates of *ε*_{1}, *ε*_{2} and *ε*_{3} are computed for each pixel, the wrapped phase *ϕ*(*x*, *y*), which has reduced motion-induced phase error, can be obtained by the phase-shifting method Eqs. (1)–(5) (Section 2.1), where the actual phase shifts, including the estimated motion-induced phase-shift errors, are:

$$\theta_{1} = -\varepsilon_{1}-\frac{\varepsilon_{2}}{2},\quad \theta_{2} = \frac{\pi }{2}-\frac{\varepsilon_{2}}{2},\quad \theta_{3} = \pi +\frac{\varepsilon_{2}}{2},\quad \theta_{4} = \frac{3\pi }{2}+\frac{\varepsilon_{2}}{2}+\varepsilon_{3}. \qquad (21)$$

It should be noted that during an object measurement, the 3D surface reconstruction would be based on a phase map computed using only four images (at positions *p*_{1}, *p*_{2}, *p*_{3}, *p*_{4} in Fig. 4). The eight images (at positions *p*_{-1} to *p*_{6} in Fig. 4) and three phase maps, ${{\varphi}^{\prime}}_{0}$, ${{\varphi}^{\prime}}_{2}$ and ${{\varphi}^{\prime}}_{4}$, are only used to estimate the phase-shift error. A quality map [32] is used to determine a valid measurement region following computation of all phase maps above.
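To make the final correction step concrete, here is a minimal numpy sketch of the generic least-squares phase-shifting solution (Eqs. (1)–(5)) evaluated with the actual motion-including phase shifts of Eq. (21). The shift errors are taken as scalars for illustration; in the method they are per-pixel maps, so the matrix **M** varies per pixel and the solve is done pixel-wise on the GPU.

```python
import numpy as np

def phase_lsq(images, thetas):
    """Generic least-squares phase-shifting: solve per pixel for
    [A, B1, B2] given arbitrary known shifts theta_n, then
    phi = atan2(B2, B1)."""
    I = np.stack([im.ravel() for im in images])              # N x P intensities
    M = np.stack([np.ones(len(thetas)),
                  np.cos(thetas), -np.sin(thetas)], axis=1)  # N x 3 design matrix
    X, *_ = np.linalg.lstsq(M, I, rcond=None)                # 3 x P coefficients
    return np.arctan2(X[2], X[1]).reshape(images[0].shape)

# Hypothetical estimated shift errors; the actual shifts follow Eq. (21).
e1, e2, e3 = 0.15, 0.20, 0.25
thetas = np.array([-e1 - e2 / 2,
                   np.pi / 2 - e2 / 2,
                   np.pi + e2 / 2,
                   3 * np.pi / 2 + e2 / 2 + e3])

# Images with exactly these shifts are corrected back to the true phase.
phi_true = np.linspace(-1.2, 1.2, 400).reshape(20, 20)
images = [128 + 100 * np.cos(phi_true + t) for t in thetas]
phi_corr = phase_lsq(images, thetas)
assert np.allclose(phi_corr, phi_true, atol=1e-8)
```

Running the same images through the standard 4-step formula instead (i.e., ignoring the *ε* terms) would leave the ripple-shaped error of Eq. (13) in the phase map.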

#### 2.5. Unwrapped phase using geometry-constraints

To solve the phase ambiguity of the wrapped phase map, system geometry-constraint based methods have the advantage over temporal phase unwrapping methods in not requiring additional patterns. In this paper, wrapped phase maps with reduced motion-induced phase error are used to determine the correspondences between the projector and the left and right cameras, and geometry constraints are used to minimize the number of candidate points for correspondence [33–35]. The number of candidate positions in the measurement volume is only one due to the use of a very short baseline between the left camera and projector. There is thus no need to embed additional information in the fringe patterns to achieve correspondence reliability. The object surface shape can finally be reconstructed by stereovision techniques.

#### 2.6. Summary of method

The new motion-induced-error compensation method is summarized as follows:

- 1. Continuously project standard π/2-shifted fringe patterns onto the surface. Capture fringe pattern images and perform lens distortion correction.
- 2. Compute three wrapped phase maps ${{\varphi}^{\prime}}_{0}$, ${{\varphi}^{\prime}}_{2}$ and ${{\varphi}^{\prime}}_{4}$ over eight successive captured images using the standard 4-step phase-shifting method (Section 2.4). A quality map is used to determine the valid measurement region.
- 3. Estimate the motion-induced phase-shift errors *ε*_{1} and *ε*_{3} using Eqs. (19) and (20). Estimate the motion-induced phase-shift error *ε*_{2} using *ε*_{2}(*x*, *y*) = [*ε*_{1}(*x*, *y*) + *ε*_{3}(*x*, *y*)]/2.
- 4. For each camera, compute a single wrapped phase map with reduced motion-induced phase error using Eq. (21) and the phase-shifting method Eqs. (1)–(5) (Section 2.1).
- 5. Compute 3D coordinates at all camera pixels employing system geometry-constraint based methods (Section 2.5) using the single wrapped phase map for each camera.

## 3. Experiments and results

To verify the performance of the new motion-induced-error compensation method, an experimental system was developed consisting of two monochrome cameras (Basler acA1300-200um) using 800 × 600 images and a DLP projector (Wintech PRO4500) with 912 × 1140 resolution. The left camera was placed beneath the projector to achieve a very short left-camera projector baseline (26.5 mm), while the camera-camera baseline was long (116.9 mm). The system was calibrated using the method described in [36]. The working distance of the system from the object was approximately 700 mm. The image capture from the two cameras was synchronized with the pattern projection, which ran at 120 Hz. The range of depth of the measurement volume used for geometry-constraint based phase unwrapping was set to 200 mm. The wavelength of the projected fringe pattern was set to 24 pixels.

#### 3.1. Qualitative evaluation

The performance of the motion-induced-error compensation method was first evaluated by measurement of a moving multi-step object (Fig. 6(a)). The object was moved by hand during the measurement. The object motion consisted of both translation (in depth direction) and rotation. The first captured fringe image of the 4-step PSP method is shown in Fig. 6(b). The 3D measurement using the standard 4-step PSP method had large motion artifact in the form of ripples on the reconstructed surfaces on both the multi-step object and hand, as shown in Fig. 6(c). There was less motion artifact when using the new motion-induced-error compensation method (Fig. 6(d)). The depth of points located on the red line segment (300th row) in Fig. 6(b) without error compensation shows motion artifact in the form of ripples in Fig. 7(a). Using the new error compensation method, the motion artifact is again seen to be largely eliminated in the depth plot (Fig. 7(b)). A comparison of 3D surface shape measurements of the moving object over the entire measurement sequence with and without error compensation is shown in Visualization 1, and a comparison of the single-row depth plots computed over the measurement sequence with and without error compensation is shown in Visualization 2.

#### 3.2. Quantitative evaluation

To quantitatively evaluate the performance of the new motion-induced-error compensation method, a double hemisphere object (with true radii 50.800 ± 0.015 mm and distance between hemisphere centers 120.000 ± 0.005 mm, based on manufacturing specification and precision) was measured while it was moving with an approximate speed of 17 cm/s in the depth direction. The surface was reconstructed using the standard 4-step PSP method and the new error compensation method, respectively (Fig. 8). The reconstructed surface had motion artifacts in the form of ripples for the standard measurement (Fig. 8(a)), and less artifact when using the new error compensation method (Fig. 8(b)).

For comparison to the new error compensation method, in addition to measurement by the standard 4-step PSP method, a surface measurement was performed by a single-image real-time FTP method using only one of the four captured images. The measurement error for the three methods was determined by least-square fitting of a sphere to each 3D reconstructed hemisphere point cloud. The sphere-fitting residual errors (indicated by color) for the new error compensation method were mostly under 0.1 mm (Fig. 9(c)). These errors, which may be partly due to non-constant surface speed and acceleration, were lower than the errors for the standard 4-step PSP (Fig. 9(a)), seen as ripples of multiple colors with errors as high as 0.5 mm, and lower than the errors for single-image FTP (Fig. 9(e)), which were as high as 0.5 mm, mainly at the edges of the hemisphere. The measurement error distributions for the three methods in Figs. 9(b), 9(d), and 9(f), respectively, show lower errors for the motion-induced-error compensation method (Fig. 9(d)), compared to the other two methods (Figs. 9(b) and 9(f)). Even with the use of multiple images in the new method, the error reduction is highly effective to bring the errors (Figs. 9(c) and 9(d)) to a lower level than the errors for the single-image FTP (Figs. 9(e) and 9(f)). As discussed earlier, measurement accuracy of the FTP method is limited where depth variation is large (Fig. 9(e)), as seen at the outer edges, where errors are approximately 0.5 mm.
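The sphere fitting used in this evaluation can be sketched as an algebraic least-squares problem (a standard formulation, not necessarily the authors' exact procedure): expanding |**p** − **c**|² = *R*² gives 2**p**·**c** + (*R*² − |**c**|²) = |**p**|², which is linear in the center **c** and the auxiliary scalar *d* = *R*² − |**c**|². The residuals of this fit are the per-point errors visualized by color in Fig. 9; the test values below are synthetic.

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit.
    |p - c|^2 = R^2  =>  2 p.c + d = |p|^2  with  d = R^2 - |c|^2,
    solved as a linear system in (c, d)."""
    A = np.hstack([2 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    c, d = sol[:3], sol[3]
    R = np.sqrt(d + c @ c)
    return c, R

# Synthetic check: noiseless points on a 50.8 mm radius sphere at an
# illustrative center near the system's 700 mm working distance.
rng = np.random.default_rng(1)
u = rng.standard_normal((500, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
pts = np.array([10.0, -5.0, 700.0]) + 50.8 * u
c, R = fit_sphere(pts)
residuals = np.linalg.norm(pts - c, axis=1) - R   # signed fitting residual per point
assert np.isclose(R, 50.8, atol=1e-6)
```

With noiseless data the residuals vanish; for measured point clouds their spread gives the sphere-fitting standard deviation reported in Table 1.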

The calculated radii of the two hemispheres, distance between two sphere centers, root mean square (RMS) error based on differences between measured points on the hemisphere and the true radius, and sphere-fitting standard deviation (SD), are shown in Table 1 for the three measurement methods. The RMS errors and the sphere fitting SD are much lower using the new error compensation method compared to the standard PSP and single-image FTP methods. The uncorrected motion artifacts contribute to the higher RMS and SD errors of the standard PSP. The limited measurement accuracy at regions of large depth variation contributes to the higher error of the FTP method. The new error compensation method can reduce the motion-induced error and also handle the large depth variation of an object surface.

#### 3.3. Real-time measurement

To verify the performance of the new motion-induced-error compensation method for real-time applications, real-time measurements were performed on a desktop computer with an NVIDIA GeForce GTX1080ti graphics card and an Intel i7-3820 processor. Four standard 4-step PSP fringe patterns were pre-stored in the projector and then projected sequentially. System calibration and geometric parameters were pre-calculated and stored on the GPU before measurement. All computations were performed on the GPU. Pixel-wise computation permitted parallel computing to achieve real-time motion-induced-error compensation during surface-shape measurement.

A moving manikin head was measured by the system. The measurement, including 3D reconstruction of a point cloud using motion-induced-error compensation and display, was performed in real-time with image capture. The reconstructed point cloud was displayed using OpenGL, with colors blue to red representing near to far (Fig. 10 and Visualization 4). The mean GPU runtime of a single 3D measurement with motion-induced-error compensation was approximately 25 ms. The display rate (including 3D construction, data transfer, and rendering) achieved approximately 30 fps (the number shown in Visualization 4, recorded using Bandicam software, is the current frame rate in fps).

A further measurement using real-time motion-induced-error compensation was performed on a deforming surface, a deflating balloon. The reconstructed 3D point cloud is displayed in Fig. 11, with grayscale texture (Fig. 11(a)) and with colors blue to red representing near to far (Fig. 11(b)), and in Visualization 5. The display rate slightly dropped to 25 fps, due to the additional rendering of the grayscale texture (Fig. 11(a)). The video demonstrates that the error compensation method was effective even for real-time measurement with non-rigid body motion (deforming surfaces).

#### 3.4. Discussion

The new motion-induced-error compensation method in this paper can estimate the motion-induced phase-shift error and reduce the motion artifact by using three phase maps computed over a multiple measurement sequence and calculating the difference between phase maps. A phase map with reduced motion-induced error is then computed from four images using the estimated phase shift error. The pixel-wise estimation of the motion-induced phase shifts permits phase-error compensation for non-homogeneous surface motion. The new method has low computational cost, which is suitable for real-time 3D measurement. The experimental results demonstrated the effectiveness of the new real-time motion-induced-error compensation method in 3D surface-shape measurement for objects with large depth variations and deforming surfaces.

The motion-induced phase-shift error estimation was based on the assumption that the phase-shift error *ε* is small, such that sin(*ε*)≈*ε* and cos(*ε*)≈1. If the phase-shift error is not small, possibly due to fast surface motion or low camera speed, the relationship between approximated phase error Δ*ϕ*(*x*, *y*) and phase-shift errors *ε*_{1} and *ε*_{3} in Eq. (13) will be more inaccurate. The inaccuracy in approximated phase error Δ*ϕ*(*x*, *y*) can be seen in the mismatch of curves in Fig. 2(b) and also in Fig. 3(b). A greater *ε* would result in a greater inaccuracy of Eq. (13) and a greater mismatch of curves (Eq. (9) and Eq. (13)). The inaccuracy in approximated phase error Δ*ϕ*(*x*, *y*) would lead to inaccurate estimation of *ε*_{1} and *ε*_{3} in Eqs. (19)–(20), and ultimately greater residual errors in the phase map computed by Eqs. (1)–(5). The residual phase errors would appear as residual measurement errors (ripples) in the reconstructed surface. Faster surface motion and lower camera speed would contribute to higher residual phase and measurement errors. The motion-induced phase-shift error for non-homogeneous surface motion was also estimated under the assumption that the object motion at each analyzed pixel has constant speed or constant acceleration. Non-constant speed and acceleration would contribute to higher residual phase and measurement errors. Future research may focus on real-time measurement of surfaces with faster motion and non-constant acceleration.

The method focused on motion-induced phase shift errors due to motion in the depth direction. Further research will also investigate methods to handle greater planar motion (perpendicular to depth) combined with motion in the depth direction.

## 4. Conclusion

A new real-time motion-induced-error compensation method was developed for dynamic 3D surface-shape measurement. Three phase maps are computed over a multiple measurement sequence and the unknown phase shifts due to surface motion are estimated by calculating the differences between the computed phase maps. A phase map with reduced motion-induced error is then computed from four images using the estimated phase shift error. The method achieved higher measurement accuracy than the standard PSP and single-image FTP, reducing the motion artifact due to surface motion, while also handling measurement of surfaces with high depth variation. The motion-induced phase shift estimation and error compensation are performed pixel-wise, which enables parallel computing using a GPU to reduce the processing time for real-time measurement. Experiments demonstrated the ability of the method to reduce motion-induced error in real time, for shape measurement of surfaces with high depth variation, and moving and deforming surfaces.

## Funding

Natural Sciences and Engineering Research Council of Canada; University of Waterloo; China Scholarship Council.

## References

**1. **K. Zhong, Z. Li, X. Zhou, Y. Li, Y. Shi, and C. Wang, “Enhanced phase measurement profilometry for industrial 3D inspection automation,” Int. J. Adv. Manuf. Technol. **76**(9–12), 1563–1574 (2014).

**2. **A. J. Das, T. A. Valdez, J. A. Vargas, P. Saksupapchon, P. Rachapudi, Z. Ge, J. C. Estrada, and R. Raskar, “Volume estimation of tonsil phantoms using an oral camera with 3D imaging,” Biomed. Opt. Express **7**(4), 1445–1457 (2016).

**3. **W. H. Su and W. T. Co, “A real-time, full-field, and low-cost velocity sensing approach for linear motion using fringe projection techniques,” Opt. Lasers Eng. **81**, 11–20 (2016).

**4. **R. Ramm, C. Bräuer-Burchardt, P. Kühmstedt, and G. Notni, “High-resolution mobile optical 3D scanner with color mapping,” Proc. SPIE **10331**, 103310D (2017).

**5. **C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Lasers Eng. **109**, 23–59 (2018).

**6. **M. Takeda and K. Mutoh, “Fourier transform profilometry for the automatic measurement of 3-D object shapes,” Appl. Opt. **22**(24), 3977–3982 (1983).

**7. **V. Srinivasan, H. C. Liu, and M. Halioua, “Automated phase-measuring profilometry of 3-D diffuse objects,” Appl. Opt. **23**(18), 3105 (1984).

**8. **S. Zhang, “Absolute phase retrieval methods for digital fringe projection profilometry: A review,” Opt. Lasers Eng. **107**, 28–37 (2018).

**9. **C. Jiang, B. Li, and S. Zhang, “Pixel-by-pixel absolute phase retrieval using three phase-shifted fringe patterns without markers,” Opt. Lasers Eng. **91**, 232–241 (2017).

**10. **K. Zhong, Z. Li, Y. Shi, C. Wang, and Y. Lei, “Fast phase measurement profilometry for arbitrary shape objects without phase unwrapping,” Opt. Lasers Eng. **51**(11), 1213–1222 (2013).

**11. **S. Zhang, “High-speed 3D shape measurement with structured light methods: A review,” Opt. Lasers Eng. **106**, 119–131 (2018).

**12. **C. Zuo, T. Tao, S. Feng, L. Huang, A. Asundi, and Q. Chen, “Micro Fourier transform profilometry (μFTP): 3D shape measurement at 10,000 frames per second,” Opt. Lasers Eng. **102**, 70–91 (2018).

**13. **S. Heist, P. Lutzke, I. Schmidt, P. Dietrich, P. Kühmstedt, A. Tünnermann, and G. Notni, “High-speed three-dimensional shape measurement using GOBO projection,” Opt. Lasers Eng. **87**, 90–96 (2016).

**14. **J.-S. Hyun, B. Li, and S. Zhang, “High-speed high-accuracy three-dimensional shape measurement using digital binary defocusing method versus sinusoidal method,” Opt. Eng. **56**(7), 074102 (2017).

**15. **Q. Kemao, “Applications of windowed Fourier fringe analysis in optical measurement: A review,” Opt. Lasers Eng. **66**, 67–73 (2015).

**16. **L. R. Watkins, “Review of fringe pattern phase recovery using the 1-D and 2-D continuous wavelet transforms,” Opt. Lasers Eng. **50**(8), 1015–1022 (2012).

**17. **L. Huang, Q. Kemao, B. Pan, and A. K. Asundi, “Comparison of Fourier transform, windowed Fourier transform, and wavelet transform methods for phase extraction from a single fringe pattern in fringe projection profilometry,” Opt. Lasers Eng. **48**(2), 141–148 (2010).

**18. **C. Guan, L. Hassebrook, and D. Lau, “Composite structured light pattern for three-dimensional video,” Opt. Express **11**(5), 406–417 (2003).

**19. **Z. Zhang, D. P. Towers, and C. E. Towers, “Snapshot color fringe projection for absolute three-dimensional metrology of video sequences,” Appl. Opt. **49**(31), 5947 (2010).

**20. **L. Lu, J. Xi, Y. Yu, and Q. Guo, “New approach to improve the accuracy of 3-D shape measurement of moving object using phase shifting profilometry,” Opt. Express **21**(25), 30610–30622 (2013).

**21. **L. Lu, J. Xi, Y. Yu, and Q. Guo, “New approach to improve the performance of fringe pattern profilometry using multiple triangular patterns for the measurement of objects in motion,” Opt. Eng. **53**(11), 112211 (2014).

**22. **L. Lu, Y. Ding, Y. Luan, Y. Yin, Q. Liu, and J. Xi, “Automated approach for the surface profile measurement of moving objects based on PSP,” Opt. Express **25**(25), 32120–32131 (2017).

**23. **S. Feng, C. Zuo, T. Tao, Y. Hu, M. Zhang, Q. Chen, and G. Gu, “Robust dynamic 3-D measurements with motion-compensated phase-shifting profilometry,” Opt. Lasers Eng. **103**, 127–138 (2018).

**24. **P. Cong, Z. Xiong, Y. Zhang, S. Zhao, and F. Wu, “Accurate dynamic 3D sensing with Fourier-assisted phase shifting,” IEEE J. Sel. Top. Signal Process. **9**(3), 396–408 (2015).

**25. **J. Qian, T. Tao, S. Feng, Q. Chen, and C. Zuo, “Motion-artifact-free dynamic 3D shape measurement with hybrid Fourier-transform phase-shifting profilometry,” Opt. Express **27**(3), 2713–2731 (2019).

**26. **Y. Wang, Z. Liu, C. Jiang, and S. Zhang, “Motion induced phase error reduction using a Hilbert transform,” Opt. Express **26**(26), 34224–34235 (2018).

**27. **H. Chen, Y. Yin, Z. Cai, W. Xu, X. Liu, X. Meng, and X. Peng, “Suppression of the nonlinear phase error in phase shifting profilometry: considering non-smooth reflectivity and fractional period,” Opt. Express **26**(10), 13489–13505 (2018).

**28. **Z. Liu, P. C. Zibley, and S. Zhang, “Motion-induced error compensation for phase shifting profilometry,” Opt. Express **26**(10), 12632–12637 (2018).

**29. **L. Lu, Y. Yin, Z. Su, X. Ren, Y. Luan, and J. Xi, “General model for phase shifting profilometry with an object in motion,” Appl. Opt. **57**(36), 10364–10369 (2018).

**30. **N. Pears, Y. Liu, and P. Bunting, *3D Imaging, Analysis and Applications* (Springer, 2012).

**31. **P. Viola and M. J. Jones, “Robust real-time object detection,” in *Proceedings of IEEE Workshop on Statistical and Computational Theories of Vision* (IEEE, 2001), pp. 137–154.

**32. **S. Zhang, X. Li, and S. T. Yau, “Multilevel quality-guided phase unwrapping algorithm for real-time three-dimensional shape reconstruction,” Appl. Opt. **46**(1), 50–57 (2007).

**33. **T. Tao, Q. Chen, S. Feng, J. Qian, Y. Hu, L. Huang, and C. Zuo, “High-speed real-time 3D shape measurement based on adaptive depth constraint,” Opt. Express **26**(17), 22440–22456 (2018).

**34. **T. Tao, Q. Chen, J. Da, S. Feng, Y. Hu, and C. Zuo, “Real-time 3-D shape measurement with composite phase-shifting fringes and multi-view system,” Opt. Express **24**(18), 20253–20269 (2016).

**35. **X. Liu and J. Kofman, “High-frequency background modulation fringe patterns based on a fringe-wavelength geometry-constraint model for 3D surface-shape measurement,” Opt. Express **25**(14), 16618–16628 (2017).

**36. **X. Liu and J. Kofman, “Real-time 3D surface-shape measurement using background-modulated modified Fourier transform profilometry with geometry-constraint,” Opt. Lasers Eng. **115**, 217–224 (2019).