Optica Publishing Group

Digital image correlation assisted absolute phase unwrapping

Open Access

Abstract

This paper presents an absolute phase unwrapping method for high-speed three-dimensional (3D) shape measurement. The method uses three phase-shifted patterns and one binary random pattern on a single-camera, single-projector structured light system. We calculate the wrapped phase from the phase-shifted images and determine the coarse correspondence through digital image correlation (DIC) between the captured binary random pattern of the object and the pre-captured binary random pattern of a flat surface. We then develop a computational framework to determine the fringe order pixel by pixel using the coarse correspondence information. Since only one additional pattern is used, the proposed method is suitable for high-speed 3D shape measurement. Experimental results demonstrate that the proposed method can achieve high-speed and high-quality measurement of complex scenes.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Three-dimensional (3D) shape measurement has numerous applications, such as evidence capture in forensic science and perception in robotics. Commonly used 3D measurement techniques include stereo vision, time of flight, and phase-shifting profilometry. Among these, phase-based methods have an advantage over intensity-based methods in robustness and accuracy: they achieve high spatial resolution and provide denser 3D point clouds. Extensively adopted phase-based 3D measurement methods include the Fourier method [1], the windowed Fourier method [2], and phase-shifting methods [3]. Typically, these phase-based methods only provide phase information ranging from $-\pi$ to $\pi$ with $2\pi$ discontinuities between periods. Thus, phase unwrapping algorithms that eliminate the $2\pi$ discontinuities have to be applied.

Phase unwrapping algorithms can be roughly classified into two categories: spatial phase unwrapping algorithms and temporal phase unwrapping algorithms. Spatial phase unwrapping algorithms detect the $2\pi$ discontinuities on the phase map itself and remove them by adding or subtracting an integer number of $2\pi$ to each pixel. These integers are usually called the fringe order, $K$. A spatial phase unwrapping algorithm determines the fringe order $K$ by analyzing the phase values of neighboring pixels on the wrapped phase map to identify the $2\pi$ discontinuities. Spatial phase unwrapping methods include scan-line unwrapping methods, quality-guided methods [4–6], and the multi-anchor unwrapping method [7]. These algorithms usually yield relative phase maps because the phase is unwrapped relative to a starting point on the wrapped phase map; the reconstructed 3D points therefore have relative instead of absolute depth. Temporal phase unwrapping algorithms, on the other hand, fundamentally eliminate the $2\pi$ discontinuities by acquiring more information: they determine the fringe order $K$ by capturing additional images, such as additional fringe images. Since temporal phase unwrapping does not depend on a starting point on the wrapped phase map, it generates absolute phase maps. Over the years, researchers have developed several temporal phase unwrapping methods, including the binary-coding method [8], the gray-coding method [9], multi-wavelength phase unwrapping [10–13], and the phase-encoding method [14]. All of these methods are capable of retrieving absolute phase maps, yet they require capturing additional images, which means the projector needs to project more designed patterns during measurement. Given this, temporal phase unwrapping methods slow down the measuring process, which is undesirable for high-speed 3D measurement applications such as capturing dynamic objects.

In view of this, An et al. [15] developed a phase unwrapping method that utilizes the geometric constraints of the structured light system without requiring additional cameras or fringe images, so the 3D measurement speed is not compromised. In brief, an artificial absolute phase map $\Phi _{min}$ is generated using the geometric constraint between the camera and the projector based on an assumed minimum depth $Z_{min}$ of the measurement volume. Then $\Phi _{min}$ is used to unwrap the wrapped phase pixel by pixel. Since $\Phi _{min}$ is generated in the projector space, it is absolute, and so is the unwrapped phase. This method has the advantage of high measurement speed since no additional images need to be projected. However, it has some limitations. First, creating $\Phi _{min}$ requires knowing the approximate depth of the object: anything located closer than $Z_{min}$ will not be correctly reconstructed. Second, there is a depth range limitation for the wrapped phase to be correctly unwrapped. This range depends on the spatial span of one projected fringe period and the angle between the projector projection direction and the camera capture direction. Therefore, if the geometry variation of the object exceeds the depth range limitation, the parts out of range cannot be correctly reconstructed.

The above limitations can be alleviated by adding a second camera to a standard single-camera, single-projector structured light system. Stereo-assisted phase-shifting profilometry (PSP) methods [16–21] have been proposed in the past few years. Because the cameras capture the scene from different perspectives simultaneously, stereo and epipolar geometric constraints can be combined with the phase information to determine pixel correspondences without employing conventional spatial or temporal phase unwrapping algorithms. However, using multiple cameras increases the hardware cost and algorithm complexity [22]. In addition, only the region simultaneously observed by the stereo cameras and the projector can be reconstructed.

An and Zhang [23] combined binary statistical pattern matching with the phase-shifting method for a single-camera, single-projector system. This method requires projecting only one additional pattern and does not depend on prior knowledge of the object’s depth or geometry variation. Nevertheless, the projector and the camera usually have different sensor sizes and lenses. Therefore, before disparity generation, either the projector or the camera images need to be cropped and downsampled (or upsampled) to match the field of view and resolution. This process is computationally expensive and thus very slow. In addition, the binary projector image that the camera images are matched against is an ideal computer-generated image: it is not affected by the lens, the object geometry, or the environment, and does not undergo the distortion and blurring that the camera-captured images do. Thus, the binary matching result requires extensive interpolation, hole filling, and refinement to correct the phase values. Zhang et al. [24] proposed a method that generates a wrapped phase-to-height lookup table (LUT) by capturing speckle-embedded fringe patterns of reference planes at different distances. The LUT and speckle correlation are then used to eliminate the height ambiguity. This method needs only four speckle-embedded fringe patterns, and the LUT improves the computational efficiency. However, it requires capturing reference planes at many different heights. Furthermore, the speckle patterns embedded in the fringe patterns could affect the phase quality for the final 3D shape measurement.

In this research, we propose an absolute phase unwrapping method for a single-camera, single-projector structured light system that combines digital image correlation (DIC) with the phase-shifting method to perform high-speed 3D measurement while maintaining high accuracy. Our method uses only four patterns: three phase-shifting patterns plus one random binary pattern. The three phase-shifting patterns are used to calculate the wrapped phase, and the random binary pattern is used by the DIC algorithm to establish the coarse correspondence between the camera coordinate and the projector coordinate. We then utilize this coarse correspondence to unwrap the wrapped phase. To address correspondence errors in the DIC result, the spatial phase unwrapping algorithm [7] is deployed in the local error regions to generate the final absolute phase map. Because the DIC algorithm only assists the phase unwrapping process, the difficulty of measuring out-of-plane deformation with single-camera DIC does not compromise the accuracy of the phase-shifting algorithm. Our method needs only a single camera and a single projector, and only four patterns have to be projected for one frame of 3D reconstruction. Prior knowledge of the depth of the measured object is not required, and the measuring range is not limited by the geometric configuration of the camera and the projector. The experimental results verified the success of our method: the system reconstructs the absolute depths of multiple isolated surfaces with arbitrary geometry and measures a dynamic human face at 75 Hz with a pattern projection rate of 300 Hz, demonstrating the potential for high-speed absolute 3D shape measurement.

2. Principle

2.1 Three-step phase-shifting algorithm

Over the years, numerous fringe-analysis methods have been developed, including phase-shifting fringe analysis and Fourier-based fringe analysis methods. Among all these methods, phase-shifting fringe analysis methods have been extensively used because of their robustness and accuracy. Therefore, the phase-shifting fringe analysis is adopted in this research. The minimum number of fringe patterns for successful pixel-wise phase information retrieval in the phase-shifting fringe analysis is three. Three phase-shifted fringe images with equal phase shifts can be mathematically written as

$$ I_1(x,y) =I^{\prime}(x,y)+I^{\prime\prime}(x,y)\cos[\phi(x,y)-2\pi/3], $$
$$ I_2(x,y) =I^{\prime}(x,y)+I^{\prime\prime}(x,y)\cos[\phi(x,y)], $$
$$ I_3(x,y) =I^{\prime}(x,y)+I^{\prime\prime}(x,y)\cos[\phi(x,y)+2\pi/3], $$
where $(x, y)$ is the pixel coordinate, $I^{\prime }(x,y)$ is the average intensity, $I^{\prime \prime }(x,y)$ is the intensity modulation, and $\phi (x,y)$ is the phase to be solved for. Solving these equations simultaneously leads to
$$\phi(x,y)=\tan^{{-}1}\left[\frac{\sqrt{3}(I_1-I_3)}{2I_2-I_1-I_3}\right],$$
and
$$\gamma=\frac{I^{\prime\prime}(x,y)}{I^{\prime}(x,y)},$$
where $\gamma$ is the data modulation indicating the fringe quality, with 1 being the best. Due to the arctangent operation, the phase $\phi$ obtained from Eq. (4) ranges from $-\pi$ to $\pi$ with $2\pi$ discontinuities between periods; this is called the wrapped phase. To eliminate the $2\pi$ discontinuities and obtain the continuous phase, called the unwrapped phase, spatial or temporal phase unwrapping methods can be used. The mathematical relationship between the wrapped phase $\phi$ and the unwrapped phase $\Phi$ can be expressed as
$$\Phi(x,y)=\phi(x,y)+2\pi\times K,$$
where $K$ is often referred to as fringe order. A phase unwrapping algorithm is essentially finding the integer fringe order $K$ of each pixel such that the unwrapped phase is continuous without $2\pi$ discontinuities. In this research, we develop a phase unwrapping method similar to An et al.’s approach [15]. In their approach, they specify a minimum depth of interest called $z_{min}$, then use a virtual plane at depth $z_{min}$ to generate an artificial phase map called $\Phi _{min}$ using the geometry constraints of the calibrated structured light system. The fringe order $K$ of each pixel is then determined as
$$K(x,y) = ceil\left[ \frac{\Phi_{min}-\phi}{2\pi}\right],$$
where $ceil()$ is the ceiling operator that obtains the closest integer that is larger than or equal to the value. Thus, the absolute unwrapped phase is retrieved pixel by pixel. This unwrapping method has the advantage of enabling high-speed measurements. However, it does have some limitations. First, we need to know the minimum depth of the object we measure. Second, there is a depth range limitation for the wrapped phase to be correctly unwrapped. Therefore, we propose a method that automatically finds the low accuracy absolute phase of each pixel in the measuring scene, and uses the low accuracy phase map to find the fringe order $K$ for each pixel to create a high accuracy absolute phase map.
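The three-step phase retrieval above can be sketched in a few lines of code. The following Python snippet is an illustrative sketch of our own (NumPy-based; the function name is ours, not from the paper): it computes the wrapped phase of Eq. (4) and the data modulation $\gamma$, and verifies them on synthetic fringes generated from a known phase.

```python
import numpy as np

def three_step_phase(I1, I2, I3):
    """Wrapped phase (Eq. 4) and data modulation gamma from three
    fringe images with phase shifts of -2*pi/3, 0, and +2*pi/3."""
    # arctan2 resolves the quadrant, giving phi in (-pi, pi]
    phi = np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)
    Ip = (I1 + I2 + I3) / 3.0                                           # average intensity I'
    Ipp = np.sqrt(3.0 * (I1 - I3)**2 + (2.0 * I2 - I1 - I3)**2) / 3.0   # modulation I''
    return phi, Ipp / np.maximum(Ip, 1e-12)                             # gamma = I''/I'

# Synthetic check: build fringes from a known phase, then recover it.
true_phi = np.linspace(-np.pi + 0.01, np.pi - 0.01, 256)
I1 = 128.0 + 100.0 * np.cos(true_phi - 2.0 * np.pi / 3.0)
I2 = 128.0 + 100.0 * np.cos(true_phi)
I3 = 128.0 + 100.0 * np.cos(true_phi + 2.0 * np.pi / 3.0)
phi, gamma = three_step_phase(I1, I2, I3)
```

For these noise-free synthetic fringes the recovered phase matches the ground truth, and $\gamma = I''/I' = 100/128$ everywhere.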

2.2 Low accuracy absolute phase extraction using DIC

Prior to any 3D measurement, we project a random binary pattern onto a white flat surface and capture the image, $I_r$. We save this pre-captured image for future use. We also project two sets of 15 phase-shifted fringe patterns to extract the absolute phase maps in the horizontal and vertical directions, $\Phi _{r}^{h}$ and $\Phi _{r}^{v}$, using the multi-wavelength phase-shifting algorithm. Since the same flat surface is used, there exists a unique mapping among $I_r$, $\Phi _{r}^{h}$, and $\Phi _{r}^{v}$. It is important to note that these images do not need to be re-captured during any 3D measurement. For a 3D measurement, we project the same random binary pattern and capture its image reflected by the object, $I_o$. We also project three phase-shifting fringe patterns to obtain the wrapped phase $\phi$ of the scene using Eq. (4).

The basic principle of the DIC algorithm is as follows. In order to track the same point in $I_o$ and $I_r$, a reference subset with the center at the point of interest is taken from $I_o$. Then, the target subset taken from $I_r$ with the center at the initial guess is transformed during each iteration and compared with the reference subset. Once the transformed target subset that has the best match with the reference subset is found, the corresponding points between $I_o$ and $I_r$ are established.

The criterion we use to determine the best match between the reference and target subset is the modified Zero-mean Normalized Sum of Squared Differences (ZNSSD) criterion [25], which is insensitive to the potential scale and offset changes of the subset intensity. The ZNSSD coefficient can be expressed as,

$$C_{ZNSSD}(\Delta P) = \sum_{\xi}{\Bigg\{ \frac{f(\boldsymbol{x}+\boldsymbol{W(\xi;\Delta p)}) - \bar{f}}{\Delta f} - \frac{g(\boldsymbol{x}+\boldsymbol{W(\xi;p)}) - \bar{g}}{\Delta g} \Bigg\}^2},$$
where $f(\boldsymbol {x})$ and $g(\boldsymbol {x})$ denote the grayscale values at $\boldsymbol {x}=(x, y, 1)^{T}$ of $I_r$ and $I_o$, respectively. To deal with pixels at the boundary of the region of interest, we incorporate the mask generated by $\gamma$ and $I^{\prime \prime }$, which indicates the valid object pixels and invalid background pixels. $N$ is the total number of valid object pixels within the subset. $\boldsymbol {W(\xi ; p)}$ is the first-order warp function, and $\boldsymbol {W(\xi ; \Delta p)}$ is the incremental warp function. $\boldsymbol {\xi }=(\Delta x, \Delta y, 1)^{T}$ is the local coordinate of a valid object pixel in each subset. $\boldsymbol {P} = (u, \frac {\partial u}{\partial x}, \frac {\partial u}{\partial y}, v, \frac {\partial v}{\partial x}, \frac {\partial v}{\partial y})^T$ is the deformation vector to be solved for, and $\boldsymbol {\Delta P}$ is the incremental deformation vector. $u$ and $v$ denote the displacement components. $\bar {f} = \frac {1}{N}\sum _{\xi }{f(\boldsymbol {x}+\boldsymbol {W(\xi ;\Delta p)})}$ and $\bar {g} = \frac {1}{N}\sum _{\xi }{g(\boldsymbol {x}+\boldsymbol {W(\xi ; p)})}$. $\Delta {f} = \sqrt {\sum _{\xi }{[f(\boldsymbol {x}+\boldsymbol {W(\xi ;\Delta p)})-\bar {f}]^2}}$ and $\Delta {g} = \sqrt {\sum _{\xi }{[g(\boldsymbol {x}+\boldsymbol {W(\xi ; p)})-\bar {g}]^2}}$. To solve for the deformation vector $\boldsymbol {P}$, we use the inverse compositional Gauss-Newton (IC-GN) algorithm [26], which minimizes the $C_{ZNSSD}$ criterion in Eq. (8). Among DIC sub-pixel registration algorithms, the IC-GN algorithm has been proved to have accuracy similar to the forward additive Newton-Raphson (FA-NR) algorithm [27] but much higher computational efficiency, because the Hessian matrix is constant throughout the iterations and thus can be pre-computed [28–31].
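As an illustration of the matching criterion, the following Python sketch (our own simplified version of Eq. (8), with the warp fixed to the identity and no boundary mask) evaluates the ZNSSD coefficient between two equal-sized subsets and confirms its insensitivity to linear intensity changes.

```python
import numpy as np

def znssd(f_sub, g_sub):
    """ZNSSD coefficient between equal-sized reference and target subsets;
    insensitive to linear (scale + offset) intensity changes.
    Simplified from Eq. (8): identity warp, no boundary mask."""
    f = f_sub.astype(float).ravel()
    g = g_sub.astype(float).ravel()
    fz, gz = f - f.mean(), g - g.mean()      # zero-mean subsets
    df = np.sqrt(np.sum(fz**2))              # Delta f
    dg = np.sqrt(np.sum(gz**2))              # Delta g
    return np.sum((fz / df - gz / dg)**2)

def zncc_from_znssd(c_znssd):
    """Relation of Eq. (10): C_ZNCC = 1 - 0.5 * C_ZNSSD."""
    return 1.0 - 0.5 * c_znssd

rng = np.random.default_rng(0)
ref = rng.random((31, 31))
target = 2.0 * ref + 10.0    # linearly rescaled copy: a perfect match
c = znssd(ref, target)
```

A perfectly matching subset (even under a linear intensity change) yields $C_{ZNSSD}=0$, i.e., $C_{ZNCC}=1$.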

The convergence characteristic of the IC-GN algorithm relies heavily on the initial guess of the deformation vector $\boldsymbol {P}$. In other words, an accurate enough initial guess must be provided to the IC-GN algorithm in order for it to converge correctly. Conventionally, the initial guess is provided by the full image integer-pixel search [32]. However, here, we take advantage of the epipolar constraint and the wrapped phase information to increase the accuracy and speed up the process.

Unlike multi-camera systems, which have epipolar constraints between images of the current scene from different perspectives, we need to find the epipolar points of $I_o$ on $I_r$. Therefore, we first find the epipolar line of each pixel of $I_o$ in the projector pixel coordinate using the camera and projector calibration data. Because each point in the projector pixel coordinate has a unique phase coordinate (horizontal and vertical phases), we then map the epipolar line in the projector pixel coordinate to pixels on $I_{r}$ by locating, on each row of $I_r$ (the direction with small phase variation), the pixel whose phase coordinate $(\Phi _{r}^{h}, \Phi _{r}^{v})$ is closest to the epipolar line in the projector phase coordinate. Additionally, according to the phase-shifting algorithm, corresponding pixels on $I_o$ and $I_r$ should have the same wrapped phase value. The mathematical relationship between the unwrapped phase map $\Phi _r$ and the wrapped phase map $\phi _r$ of $I_r$ is

$$\phi_r = \Phi_r \bmod(2\pi).$$
Thus, by comparing $\phi _r$ and $\phi$, we can narrow down the epipolar points to those that have the same wrapped phase value as the pixel of interest (POI) on $I_o$.
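This candidate-narrowing step can be sketched as follows. In this illustrative Python snippet (the data layout, function names, and 0.15 rad tolerance are our assumptions, not values from the paper), epipolar candidate pixels on $I_r$ are kept only if their wrapped phase matches that of the POI on $I_o$.

```python
import numpy as np

def wrap(Phi):
    """Wrapped phase from an absolute phase (Eq. 9), remapped into
    [-pi, pi) to match the arctangent output of Eq. (4)."""
    return np.mod(Phi + np.pi, 2.0 * np.pi) - np.pi

def filter_epipolar_points(phi_poi, phi_r, candidates, tol=0.1):
    """Keep the epipolar candidate pixels (row, col) of I_r whose wrapped
    phase matches the POI's wrapped phase within a tolerance."""
    kept = []
    for (r, c) in candidates:
        # circular difference, robust across the +/- pi boundary
        d = np.angle(np.exp(1j * (phi_r[r, c] - phi_poi)))
        if abs(d) < tol:
            kept.append((r, c))
    return kept

# Toy reference: absolute phase ramp over 3 fringe periods (0 to 6*pi).
Phi_r = np.tile(np.linspace(0.0, 6.0 * np.pi, 60), (4, 1))
phi_r = wrap(Phi_r)
candidates = [(1, c) for c in range(60)]   # one full epipolar row
kept = filter_epipolar_points(phi_poi=0.0, phi_r=phi_r,
                              candidates=candidates, tol=0.15)
```

Only one candidate per $2\pi$ period survives the wrapped-phase constraint, which is exactly why the subsequent $C_{ZNCC}$ comparison has so few points left to test.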

Mathematically, the ZNSSD coefficient is related to the zero-mean normalized cross-correlation (ZNCC) coefficient by the following equation,

$$C_{ZNCC}(\boldsymbol{P}) = 1-0.5\times C_{ZNSSD}(\boldsymbol{P}).$$
Because the ZNCC coefficient ranges from -1 to 1, where a larger value indicates a higher degree of similarity between the target subset and the reference subset, it is a more intuitive measure of similarity. We calculate $C_{ZNCC}$ for each remaining epipolar point and use the point with the highest $C_{ZNCC}$ as the initial guess for the IC-GN algorithm. Figure 1 shows the schematic process of generating the initial guess for the IC-GN algorithm.


Fig. 1. The schematic process of providing initial guesses to the IC-GN algorithm. (a) $I_o$ with the POI (red plus sign). (b) $I_r$ with the epipolar points (yellow line). (c) $I_r$ with the epipolar points (yellow and green plus signs) after applying the wrapped phase constraint. The epipolar point that has the highest $C_{ZNCC}$ is selected as the initial guess (green plus sign).


We perform DIC on each grid point of $I_o$ to establish the sub-pixel correspondences between $I_o$ and $I_r$. The grid points are separated by a grid step, and the correspondences are established one grid block (a $grid~step \times grid~step$-pixel subset centered at the grid point) at a time. Since we already know the horizontal absolute phase value of each pixel of $I_r$ from $\Phi _{r}^{h}$, as well as the fringe periods of the multi-wavelength and three-step phase-shifting algorithms, we can extract the low accuracy absolute phase map $\tilde {\Phi }_1$ of $I_o$ using the correspondences.
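A minimal sketch of this extraction step is given below. The Python snippet is illustrative only (the dictionary-based match layout and helper names are our assumptions): it samples the reference absolute phase $\Phi_r^h$ at each DIC-matched sub-pixel location and fills the corresponding grid block of the coarse phase map.

```python
import numpy as np

def bilinear(img, x, y):
    """Bilinear interpolation of img at sub-pixel (x, y) = (col, row)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x0 + 1]
            + (1 - dx) * dy * img[y0 + 1, x0] + dx * dy * img[y0 + 1, x0 + 1])

def coarse_phase_from_matches(Phi_r_h, matches, shape, grid_step=7):
    """Fill each grid block of the coarse absolute phase map with the
    reference phase sampled at the DIC-matched sub-pixel location.
    `matches` maps a grid point (row, col) on I_o to its sub-pixel
    correspondence (x, y) on I_r; this layout is illustrative."""
    Phi1 = np.full(shape, np.nan)    # NaN marks pixels with no match
    h = grid_step // 2
    for (r, c), (x, y) in matches.items():
        Phi1[r - h : r + h + 1, c - h : c + h + 1] = bilinear(Phi_r_h, x, y)
    return Phi1

# Toy reference phase: a horizontal ramp, 0.1 rad per pixel.
Phi_r_h = np.tile(np.arange(10) * 0.1, (10, 1))
matches = {(7, 7): (2.5, 3.0)}       # one grid point matched at x=2.5, y=3.0
Phi_tilde_1 = coarse_phase_from_matches(Phi_r_h, matches, (20, 20))
```

Each matched grid point stamps one constant-phase block; unmatched pixels stay NaN and are handled later by the masking and local spatial unwrapping steps.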

2.3 High accuracy absolute phase unwrapping

Our algorithm aims to unwrap the wrapped phase $\phi$ to obtain the final high accuracy absolute unwrapped phase. First, we mask out the bad quality pixels in $\tilde {\Phi }_1$, which include edge pixels and pixels with low $C_{ZNCC}$, and apply a $3\times 3$ grid block-based median filter to remove spikes, obtaining $\tilde {\Phi }_2$. Then, we determine the fringe order $\tilde {K}$ using the following equation,

$$\tilde K(x,y) = ceil\left[ \frac{(\tilde{\Phi}_2-\pi)-\phi}{2\pi}\right].$$
$\tilde {K}$ is then used to generate $\Phi _1$. The purpose of subtracting $\pi$ from $\tilde {\Phi }_2$ in Eq. (11) is to make the phase fall within the $2\pi$ range below the high accuracy absolute phase so that the wrapped phase can be correctly unwrapped [15]. Although most pixels in $\Phi _1$ have the correct absolute phase, some errors occur in areas with a low signal-to-noise ratio or large deformation. We compensate for these errors by removing the discontinuous regions in $\Phi _1$: we check the area of each connected region and remove those that are too small, obtaining a cleaner phase map, $\Phi _2$. Finally, we perform local spatial phase unwrapping [7] with respect to $\Phi _2$ to determine the absolute phase of the pixels within the DIC mask that have not yet been unwrapped, obtaining the final high accuracy absolute phase $\Phi$. Figure 2 shows examples of the mask used in the DIC matching, $\Phi _2$, and the remaining pixels in the mask on which we perform spatial phase unwrapping. Because our algorithm unwraps the phase locally, it can acquire the absolute phase of isolated surfaces, and even if unwrapping errors occur, they will not propagate throughout an entire row or column as in the conventional scan-line spatial phase unwrapping algorithm.
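The core pixel-wise unwrapping step of Eq. (11) can be sketched as follows (an illustrative Python snippet of our own; the synthetic coarse phase simulates a DIC result whose errors stay below $\pi$):

```python
import numpy as np

def unwrap_with_coarse(phi, coarse):
    """Fringe order from Eq. (11), then the unwrapped phase via Eq. (6):
    Phi_1 = phi + 2*pi*K. Correct whenever |coarse - true phase| < pi."""
    K = np.ceil(((coarse - np.pi) - phi) / (2.0 * np.pi))
    return phi + 2.0 * np.pi * K

# Synthetic check: a coarse phase within +/-pi of the truth suffices.
true_Phi = np.linspace(0.0, 8.0 * np.pi, 200)                  # absolute phase
phi = np.mod(true_Phi + np.pi, 2.0 * np.pi) - np.pi            # wrapped phase
rng = np.random.default_rng(1)
coarse = true_Phi + rng.uniform(-2.0, 2.0, true_Phi.shape)     # noisy coarse phase
Phi1 = unwrap_with_coarse(phi, coarse)
```

Subtracting $\pi$ shifts the coarse phase into the $2\pi$ interval just below the true phase, so the ceiling operation recovers the exact integer fringe order despite the coarse-phase noise.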


Fig. 2. Example of determining the pixels that will be spatial unwrapped. (a) Mask used in the DIC matching. (b) $\Phi _2$. (c) Pixels (white) that will be unwrapped by local spatial phase unwrapping.


The overall framework of our proposed method is summarized in Fig. 3. In total, there are four patterns projected. We use the phase-shifted fringe patterns to generate the wrapped phase map $\phi$. Then, we perform DIC between $I_o$ and $I_r$. Because the corresponding absolute phase map $\Phi _{r}^{h}$ of $I_r$ has been pre-computed, if the correspondence between $I_r$ and $I_o$ is established, $\tilde {\Phi }_1$ of $I_o$ can be extracted. Next, we mask out the bad quality pixels and remove the phase spikes to generate $\tilde {\Phi }_2$. Then, we use $\tilde {\Phi }_2$ to unwrap $\phi$ and generate $\Phi _1$. After that, we filter out the discontinuous region in $\Phi _1$ to obtain $\Phi _2$. Finally, we do local spatial phase unwrapping only on pixels remaining in the mask used in the DIC matching. The high accuracy absolute phase map $\Phi$ can be used to perform high accuracy 3D reconstruction [33].


Fig. 3. The overall framework of our proposed method.


3. Experiments

To verify the proposed method, we developed a structured light system, Fig. 4, which includes one camera (FLIR Grasshopper3 GS3-U3-23S6C) with a 16 mm focal-length lens (Computar M1614-MP2) and one projector (Texas Instruments LightCrafter 4500). The full resolution of the camera is $1920\times 1200$ pixels, and the resolution of the projector is $912\times 1140$ pixels. The fringe period of the three-step phase-shifting fringe patterns is 18 pixels. The system was calibrated using the method proposed by Zhang and Huang [33], and the camera coordinate system was chosen as the world coordinate system. For all the experiments, the subset size for DIC is $31\times 31$ pixels, the grid step is 7 pixels, and the $\gamma$ and $I^{\prime \prime }$ thresholds for the mask are 0.2 and 5, respectively. The $C_{ZNCC}$ threshold for the bad quality regions is 0.3, and the minimum area of a connected region is 10 grid blocks.


Fig. 4. Photograph of our structured light prototype system.


Prior to any 3D measurements, we obtain $I_r$ and its corresponding absolute phase maps $\Phi _{r}^{h}$ and $\Phi _{r}^{v}$ by sequentially projecting the random binary pattern and 15 multi-wavelength phase-shifting patterns in the horizontal and vertical directions onto a white flat surface and capturing the corresponding images. The horizontal fringe periods for the multi-wavelength algorithm are 36, 216, and 1140 pixels, and the vertical fringe periods are 18, 144, and 912 pixels. Figure 5 shows the ideal random binary pattern, the captured reference pattern image $I_r$, and the corresponding absolute phase maps $\Phi _{r}^{h}$ and $\Phi _{r}^{v}$.


Fig. 5. Pre-captured reference data. (a) Computer-designed ideal random binary pattern. (b) Captured binary image of a flat surface, $I_r$. (c) The retrieved horizontal reference absolute phase $\Phi _{r}^{h}$ of the same flat surface. (d) The retrieved vertical reference absolute phase $\Phi _{r}^{v}$ of the same flat surface.


We tested our proposed method by measuring two isolated 3D objects, a sphere and a dog sculpture, in one scene. Figure 6 shows the results. Figure 6(a) shows a photograph of the 3D objects. Figure 6(b) shows one of the three phase-shifted images. The wrapped phase retrieved from the phase-shifted fringe patterns is shown in Fig. 6(c). Figure 6(d) shows the captured random image $I_o$. We performed DIC between $I_o$ and the pre-captured image $I_r$ to extract the low accuracy absolute phase by referring to the pre-captured reference phase $\Phi _r^h$. Figure 6(e) shows the result, which contains obvious error points due to incorrect correspondences from DIC. We then applied our computational framework to filter these error points and partially unwrap Fig. 6(c); the resultant phase map is shown in Fig. 6(f). We then performed the spatial phase unwrapping algorithm discussed in Sec. 2.3 to unwrap the remaining pixels. Figure 6(g) shows the final high accuracy absolute phase. The unwrapped phase map was further used to reconstruct the 3D shapes of the objects with the calibration parameters; Fig. 6(h) shows the 3D reconstruction result. Clearly, our algorithm acquires the correct absolute depth of both objects and reconstructs the details of the sculpture, including edges where DIC usually fails due to low image contrast. This indicates that our method can reconstruct the absolute depth of isolated surfaces and does not have the depth range limitation of the geometric constraint-based phase unwrapping method.

We evaluated the measurement accuracy of our proposed method on the sphere in Fig. 6(h). The point cloud data were fitted to the ideal sphere function

$$(x-x_0)^2+(y-y_0)^2+(z-z_0)^2 = r^2,$$
where $(x_0, y_0, z_0)$ is the center of the sphere and $r$ is its radius. The error map was created by taking the difference between the ideal sphere and the measured sphere data. A Gaussian filter with a size of $5 \times 5$ pixels and a standard deviation of $5/3$ pixels was applied to our data to reduce the most significant random noise. Figure 7(a) shows the overlap of the fitted ideal sphere and the raw point cloud data, and Fig. 7(b) shows the corresponding error map. Our proposed method achieved a root mean square (RMS) error of 0.056 mm, which is small considering that the sphere radius is approximately 39.37 mm.
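The sphere fitting used for this evaluation can be reproduced with a standard algebraic least-squares fit. The following Python sketch is our own illustration (the synthetic spherical-cap data stand in for the measured point cloud): it fits the ideal sphere function above and computes the RMS of the radial residuals.

```python
import numpy as np

def fit_sphere(pts):
    """Algebraic least-squares fit of (x-x0)^2+(y-y0)^2+(z-z0)^2 = r^2,
    linearized as 2*x*x0 + 2*y*y0 + 2*z*z0 + d = x^2+y^2+z^2,
    where d = r^2 - x0^2 - y0^2 - z0^2."""
    A = np.column_stack([2.0 * pts, np.ones(len(pts))])
    b = np.sum(pts**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    r = np.sqrt(sol[3] + np.sum(center**2))
    return center, r

def rms_error(pts, center, r):
    """RMS of radial residuals between the points and the fitted sphere."""
    d = np.linalg.norm(pts - center, axis=1) - r
    return np.sqrt(np.mean(d**2))

# Synthetic partial sphere (a cap, as seen from one camera viewpoint).
rng = np.random.default_rng(2)
theta = rng.uniform(0.0, 0.6, 500)          # polar angle within the cap
psi = rng.uniform(0.0, 2.0 * np.pi, 500)    # azimuth
c0, r0 = np.array([5.0, -3.0, 400.0]), 39.37
pts = c0 + r0 * np.column_stack([np.sin(theta) * np.cos(psi),
                                 np.sin(theta) * np.sin(psi),
                                 -np.cos(theta)])
center, r = fit_sphere(pts)
```

The linearized fit needs no initial guess and recovers the center and radius exactly for noise-free data; with measured data the radial residuals give the error map and RMS value reported above.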


Fig. 6. Phase unwrapping process of the 3D objects. (a) Photograph of the 3D objects. (b) One of the fringe images. (c) $\phi$. (d) $I_o$. (e) $\tilde {\Phi }_1$. (f) $\Phi _2$. (g) $\Phi$. (h) 3D reconstruction result using (g).



Fig. 7. Accuracy evaluation of the measuring result of a sphere. (units:mm) (a) Overlap 3D data of the sphere from Fig. 6(h) and the ideal sphere. (b) Error map of (a) (RMS: 0.056 mm).


We also compared the results generated by our proposed method with those of the multi-frequency unwrapping method, as shown in Fig. 8. Figures 8(b) and 8(d) show no difference between our method and the multi-wavelength method in smooth areas. However, our proposed method produced incorrectly unwrapped phases near abrupt surface changes (e.g., around the ear of the dog sculpture), where the DIC result is masked out and the local spatial phase unwrapping applied there can fail. It is important to note that only a small number of pixels (1599 out of 351605) are incorrectly unwrapped even for such a complex scene, demonstrating the success of our proposed algorithm.


Fig. 8. Comparison between our method and the multi-frequency unwrapping method. (a) Unwrapped phase using the multi-frequency unwrapping method. (b) Unwrapped phase difference map between our method and the multi-frequency unwrapping method. (c) 3D reconstruction result using the multi-frequency unwrapping method. (d) One cross-section of the reconstructed 3D shape using our method, shown in Fig. 6(h), and that using the multi-frequency unwrapping method.


Finally, we changed the settings of the same structured light system to demonstrate its high-speed capability. In this experiment, we set the camera’s resolution to $800 \times 600$ pixels, its frame rate to 300 Hz, and the projector’s refresh rate to 300 Hz. Since our proposed method needs only 4 patterns to reconstruct one 3D frame, it achieved a 3D measurement speed of 75 Hz. Figure 9 and the associated Visualization 1 show the results of a dynamic human face measurement. This experiment demonstrated that our proposed method works well on a dynamic scene with complex surface texture and geometry.


Fig. 9. Typical 3D frames of capturing a moving face at 75 Hz (Visualization 1). (a)-(d) Texture images of the face. (e)-(h) 3D geometry of (a)-(d).


4. Summary

This paper has presented an absolute phase unwrapping method for high-speed 3D shape measurement on a single-camera, single-projector structured light system. Three phase-shifted fringe patterns are projected for the phase-shifting method, and one designed random binary pattern is projected for the DIC matching. The low accuracy absolute phase of the object is first extracted using the DIC result. It is then used, along with the local spatial phase unwrapping, to obtain the final high accuracy absolute phase through our computational framework. Experimental results indicated that our proposed method can measure the absolute depth of multiple isolated objects with arbitrary geometry without prior knowledge of the objects’ depths, achieving an RMS measurement error of 0.056 mm for a sphere with a radius of approximately 39.37 mm. It also showed the potential for high-speed measurement by measuring a dynamic human face at 75 Hz with a camera frame rate and projector refresh rate of 300 Hz.

Funding

Directorate for Computer and Information Science and Engineering (IIS-1763689).

Acknowledgments

When the work was done, Manzhu Xu was a visiting student at Purdue University. This work was sponsored by National Science Foundation (NSF) under the grant No. IIS-1763689. Views expressed here are those of the authors and not necessarily those of the NSF.

Disclosures

YL: The author declares no conflicts of interest. MX: The author declares no conflicts of interest. SZ: ORI LLC (C), Orbbec 3D (C), Vision Express Optics Inc (I).

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. M. Takeda and K. Mutoh, “Fourier transform profilometry for the automatic measurement of 3-D object shapes,” Appl. Opt. 22(24), 3977–3982 (1983). [CrossRef]  

2. Q. Kemao, “Windowed Fourier transform for fringe pattern analysis,” Appl. Opt. 43(13), 2695–2702 (2004). [CrossRef]  

3. D. Malacara, Optical Shop Testing (John Wiley & Sons, 2007), Chap. 14.

4. S. Zhang, X. Li, and S.-T. Yau, “Multilevel quality-guided phase unwrapping algorithm for real-time three-dimensional shape reconstruction,” Appl. Opt. 46(1), 50–57 (2007). [CrossRef]

5. M. Zhao, L. Huang, Q. Zhang, X. Su, A. Asundi, and Q. Kemao, “Quality-guided phase unwrapping technique: comparison of quality maps and guiding strategies,” Appl. Opt. 50(33), 6214–6224 (2011). [CrossRef]  

6. K. Chen, J. Xi, and Y. Yu, “Quality-guided spatial phase unwrapping algorithm for fast three-dimensional measurement,” Opt. Commun. 294, 139–147 (2013). [CrossRef]  

7. S. Xiang, Y. Yang, H. Deng, J. Wu, and L. Yu, “Multi-anchor spatial phase unwrapping for fringe projection profilometry,” Opt. Express 27(23), 33488–33503 (2019). [CrossRef]  

8. S. Zhang, High-Speed 3D Imaging with Digital Fringe Projection Techniques (CRC, 2018).

9. G. Sansoni, M. Carocci, and R. Rodella, “Three-dimensional vision based on a combination of gray-code and phase-shift light projection: analysis and compensation of the systematic errors,” Appl. Opt. 38(31), 6565–6573 (1999). [CrossRef]  

10. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Lasers Eng. 85, 84–103 (2016). [CrossRef]  

11. Y.-Y. Cheng and J. C. Wyant, “Two-wavelength phase shifting interferometry,” Appl. Opt. 23(24), 4539–4543 (1984). [CrossRef]  

12. Y.-Y. Cheng and J. C. Wyant, “Multiple-wavelength phase-shifting interferometry,” Appl. Opt. 24(6), 804–807 (1985). [CrossRef]  

13. C. E. Towers, D. P. Towers, and J. D. C. Jones, “Optimum frequency selection in multifrequency interferometry,” Opt. Lett. 28(11), 887–889 (2003). [CrossRef]  

14. Y. Wang and S. Zhang, “Novel phase-coding method for absolute phase retrieval,” Opt. Lett. 37(11), 2067–2069 (2012). [CrossRef]  

15. Y. An, J.-S. Hyun, and S. Zhang, “Pixel-wise absolute phase unwrapping using geometric constraints of structured light system,” Opt. Express 24(16), 18445–18459 (2016). [CrossRef]  

16. P. Hu, S. Yang, G. Zhang, and H. Deng, “High-speed and accurate 3D shape measurement using DIC-assisted phase matching and triple-scanning,” Opt. Lasers Eng. 147, 106725 (2021). [CrossRef]  

17. W. Yin, S. Feng, T. Tao, L. Huang, M. Trusiak, Q. Chen, and C. Zuo, “High-speed 3D shape measurement using the optimized composite fringe patterns and stereo-assisted structured light system,” Opt. Express 27(3), 2411–2431 (2019). [CrossRef]  

18. S. Gai, F. Da, and X. Dai, “Novel 3D measurement system based on speckle and fringe pattern projection,” Opt. Express 24(16), 17686–17697 (2016). [CrossRef]  

19. W. Lohry, V. Chen, and S. Zhang, “Absolute three-dimensional shape measurement using coded fringe patterns without phase unwrapping or projector calibration,” Opt. Express 22(2), 1287–1301 (2014). [CrossRef]  

20. K. Zhong, Z. Li, Y. Shi, C. Wang, and Y. Lei, “Fast phase measurement profilometry for arbitrary shape objects without phase unwrapping,” Opt. Lasers Eng. 51(11), 1213–1222 (2013). [CrossRef]  

21. R. R. Garcia and A. Zakhor, “Consistent Stereo-Assisted Absolute Phase Unwrapping Methods for Structured Light Systems,” IEEE J. Sel. Top. Signal Process. 6(5), 411–424 (2012). [CrossRef]  

22. M. Pankow, B. Justusson, and A. M. Waas, “Three-dimensional digital image correlation technique using single high-speed camera for measuring large out-of-plane displacements at high framing rates,” Appl. Opt. 49(17), 3418–3427 (2010). [CrossRef]

23. Y. An and S. Zhang, “Three-dimensional absolute shape measurement by combining binary statistical pattern matching with phase-shifting methods,” Appl. Opt. 56(19), 5418–5426 (2017). [CrossRef]  

24. J. Zhang, W. Guo, Z. Wu, and Q. Zhang, “Three-dimensional shape measurement based on speckle-embedded fringe patterns and wrapped phase-to-height lookup table,” Opt. Rev. 28(2), 227–238 (2021). [CrossRef]  

25. B. Pan, Z. Wang, and Z. Lu, “Genuine full-field deformation measurement of an object with complex shape using reliability-guided digital image correlation,” Opt. Express 18(2), 1011–1023 (2010). [CrossRef]  

26. S. Baker and I. Matthews, “Lucas-Kanade 20 Years On: A Unifying Framework,” Int. J. Comput. Vis. 56(3), 221–255 (2004). [CrossRef]  

27. H. A. Bruck, S. R. McNeill, M. A. Sutton, and W. H. Peters, “Digital image correlation using Newton-Raphson method of partial differential correction,” Exp. Mech. 29(3), 261–267 (1989). [CrossRef]  

28. B. Pan, K. Li, and W. Tong, “Fast, Robust and Accurate Digital Image Correlation Calculation Without Redundant Computations,” Exp. Mech. 53(7), 1277–1289 (2013). [CrossRef]  

29. X. Shao, X. Dai, and X. He, “Noise robustness and parallel computation of the inverse compositional Gauss–Newton algorithm in digital image correlation,” Opt. Lasers Eng. 71, 9–19 (2015). [CrossRef]  

30. X. Shao, X. Dai, Z. Chen, and X. He, “Real-time 3D digital image correlation method and its application in human pulse monitoring,” Appl. Opt. 55(4), 696–704 (2016). [CrossRef]  

31. Y. Gao, T. Cheng, Y. Su, X. Xu, Y. Zhang, and Q. Zhang, “High-efficiency and high-accuracy digital image correlation for three-dimensional measurement,” Opt. Lasers Eng. 65, 73–80 (2015). [CrossRef]  

32. J. Blaber, B. Adair, and A. Antoniou, “Ncorr: Open-Source 2D Digital Image Correlation Matlab Software,” Exp. Mech. 55(6), 1105–1122 (2015). [CrossRef]  

33. S. Zhang and P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. 45(8), 083601 (2006). [CrossRef]  

Supplementary Material (1)

Visualization 1: Figure 8


Figures (9)

Fig. 1. The schematic process of providing initial guesses to the IC-GN algorithm. (a) $I_o$ with the POI (red plus sign). (b) $I_r$ with the epipolar points (yellow line). (c) $I_r$ with the epipolar points (yellow and green plus signs) after applying the wrapped phase constraint. The epipolar point with the highest $C_{ZNCC}$ is selected as the initial guess (green plus sign).

Fig. 2. Example of determining the pixels that will be spatially unwrapped. (a) Mask used in the DIC matching. (b) $\Phi_2$. (c) Pixels (white) that will be unwrapped by local spatial phase unwrapping.

Fig. 3. The overall framework of our proposed method.

Fig. 4. Photograph of our structured light prototype system.

Fig. 5. Pre-captured reference data. (a) Computer-designed ideal random binary pattern. (b) Captured binary image of a flat surface, $I_r$. (c) The retrieved horizontal reference absolute phase $\Phi_r^h$ of the same flat surface. (d) The retrieved vertical reference absolute phase $\Phi_r^v$ of the same flat surface.

Fig. 6. Phase unwrapping process of the 3D objects. (a) Photograph of the 3D objects. (b) One of the fringe images. (c) $\phi$. (d) $I_o$. (e) $\tilde{\Phi}_1$. (f) $\Phi_2$. (g) $\Phi$. (h) 3D reconstruction result using (g).

Fig. 7. Accuracy evaluation of the measurement result of a sphere (units: mm). (a) Overlaid 3D data of the sphere from Fig. 6(h) and the ideal sphere. (b) Error map of (a) (RMS: 0.056 mm).

Fig. 8. Comparison between our method and the multi-frequency unwrapping method. (a) Unwrapped phase using the multi-frequency unwrapping method. (b) Unwrapped phase difference map between our method and the multi-frequency unwrapping method. (c) 3D reconstruction result using the multi-frequency unwrapping method. (d) One cross-section of the reconstructed 3D shape using our method, shown in Fig. 6(h), and that using the multi-frequency unwrapping method.

Fig. 9. Typical 3D frames of capturing a moving face at 75 Hz (Visualization 1). (a)-(d) Texture images of the face. (e)-(h) 3D geometry of (a)-(d).

Equations (12)

$$I_1(x,y) = I'(x,y) + I''(x,y)\cos[\phi(x,y) - 2\pi/3],$$
$$I_2(x,y) = I'(x,y) + I''(x,y)\cos[\phi(x,y)],$$
$$I_3(x,y) = I'(x,y) + I''(x,y)\cos[\phi(x,y) + 2\pi/3],$$
$$\phi(x,y) = \tan^{-1}\left[\frac{\sqrt{3}\,(I_1 - I_3)}{2I_2 - I_1 - I_3}\right],$$
$$\gamma = \frac{I''(x,y)}{I'(x,y)},$$
$$\Phi(x,y) = \phi(x,y) + 2\pi \times K,$$
$$K(x,y) = \operatorname{ceil}\left[\frac{\Phi_{min} - \phi}{2\pi}\right],$$
$$C_{ZNSSD}(\Delta \mathbf{p}) = \sum_{\xi}\left\{\frac{f(\mathbf{x} + \mathbf{W}(\xi;\Delta\mathbf{p})) - \bar{f}}{\Delta f} - \frac{g(\mathbf{x} + \mathbf{W}(\xi;\mathbf{p})) - \bar{g}}{\Delta g}\right\}^2,$$
$$\phi_r = \Phi_r \bmod (2\pi).$$
$$C_{ZNCC}(\mathbf{p}) = 1 - 0.5 \times C_{ZNSSD}(\mathbf{p}).$$
$$\tilde{K}(x,y) = \operatorname{ceil}\left[\frac{(\tilde{\Phi} - 2\pi) - \phi}{2\pi}\right].$$
$$(x - x_0)^2 + (y - y_0)^2 + (z - z_0)^2 = r^2,$$
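The ZNSSD and ZNCC correlation criteria used for the DIC matching can be illustrated with a minimal NumPy sketch. This is an illustrative sketch, not the authors' code; the subset size and random test data below are arbitrary choices for demonstration.

```python
import numpy as np

def c_znssd(f, g):
    # Zero-mean normalized sum of squared differences between a reference
    # subset f and a target subset g, where Delta f = sqrt(sum((f - fbar)^2)).
    df = f - f.mean()
    dg = g - g.mean()
    return np.sum((df / np.linalg.norm(df) - dg / np.linalg.norm(dg)) ** 2)

def c_zncc(f, g):
    # C_ZNCC = 1 - 0.5 * C_ZNSSD; a value of 1 indicates a perfect match.
    return 1.0 - 0.5 * c_znssd(f, g)

rng = np.random.default_rng(0)
subset = rng.random((21, 21))   # arbitrary 21x21 reference subset
```

Because the subsets are zero-meaned and normalized, $C_{ZNCC}$ is invariant to linear intensity changes (e.g., `c_zncc(subset, 2*subset + 3)` still evaluates to 1), which is what makes it a robust similarity score for selecting the initial guess among the epipolar candidates in Fig. 1.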