Abstract

As a fundamental step in fringe projection profilometry, absolute phase unwrapping from single-frequency fringe patterns remains a challenging ill-posed problem that attracts much interest in the research community. To solve it, additional constraints have been constructed, such as the spatial smoothness constraint (SSC) in spatial phase unwrapping algorithms and the viewpoint consistency constraint (VCC) in multi-view systems (e.g., stereo and light-field cameras). However, phase ambiguity still exists in SSC-based unwrapping results. Moreover, VCC-based methods rely on additional cameras or light-field cameras, which makes the system complicated and expensive. In this paper, we propose to construct a novel constraint directly from the photometric information in the captured image intensity, which has never been fully exploited in phase unwrapping. The proposed constraint, named the photometric constraint (PC), offers a promising route to absolute phase unwrapping from single-frequency fringe patterns without any additional cameras. Extensive experiments validate the proposed method, which achieved performance comparable to the state-of-the-art method with a traditional camera-projector setup and single high-frequency fringe patterns.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Fringe projection profilometry (FPP) has been widely used in many applications [1] for its capability of accurate, dense 3D reconstruction. In a typical FPP 3D reconstruction process, fringe patterns are projected onto an object’s surface by a projector and then captured by a camera. Before being transformed into 3D points, the absolute phase of these captured patterns needs to be retrieved. However, the phase directly retrieved using phase-shifting [2] or Fourier analysis algorithms [3] is wrapped into (-π, π], denoted the wrapped phase, and a phase ambiguity of 2kπ exists between the absolute phase and the wrapped one, as in Eq. (1):

$$\Phi (x,y) = \varphi (x,y) + 2\pi k(x,y),$$
where Φ and φ are the absolute and wrapped phase respectively, and k is the fringe order. Therefore, unwrapping the wrapped phase plays a core role in FPP.
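As a minimal numerical illustration of Eq. (1) (the setup and values below are our own, not from the paper), wrapping keeps only the principal value of the phase while the integer fringe order is lost:

```python
import numpy as np

phi_abs = np.linspace(0.0, 40.0 * np.pi, 1000)         # hypothetical absolute phase
phi_wrapped = np.angle(np.exp(1j * phi_abs))           # wrapped into (-pi, pi]
k = np.round((phi_abs - phi_wrapped) / (2.0 * np.pi))  # integer fringe order of Eq. (1)
assert np.allclose(phi_abs, phi_wrapped + 2.0 * np.pi * k)
```

Recovering k(x,y) from φ(x,y) alone is exactly the unwrapping problem discussed below.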

Multi-frequency phase unwrapping can accurately and robustly retrieve the absolute phase when sufficient frequencies are applied, but projecting multi-frequency patterns is time-consuming. Phase unwrapping from only single-frequency fringe patterns is more desirable; however, it is an ill-posed inverse problem and seems impossible unless an additional constraint is imposed. The spatial smoothness constraint (SSC) is the most widely used. It is derived from the assumption that the phase difference between any neighboring pixels is less than π. Under the SSC, phase unwrapping can be resolved by minimizing a target function [4–9]. Since no additional frequency patterns are needed, SSC is the easiest constraint to use, and it generalizes well to complex scenes with discontinuous surfaces and noise. However, the result unwrapped using SSC is still a “relative” phase, because all other phases are unwrapped by referring to one particular pixel’s phase (for instance, the pixel with the highest quality [5]). Consequently, SSC-based methods are not capable of absolute phase unwrapping.

To this end, additional constraints have been introduced by providing more observations from different views. Assuming phase consistency across different views in stereo [10,11] or light-field camera setups [12], the viewpoint consistency constraint (VCC) has been employed and has achieved impressive performance. To further improve performance, methods combining several constraints have been developed [13]. Nevertheless, the additional cameras in VCC-based methods make the system complicated and expensive. Another kind of constraint is the limited-depth constraint (LDC), where a depth range is estimated [14] or implicitly reduced [15] in advance. In practice, estimating the LDC may require two adjacent depth maps in a consecutive depth-map sequence, or sacrifice the depth range instead.

In this paper, we propose to construct a novel constraint, termed the photometric constraint (PC), for absolute phase unwrapping. The proposed constraint provides a promising way to unwrap the absolute phase from only single-frequency fringe patterns, without any additional cameras. To the best of our knowledge, this is the first attempt to fully exploit photometric information for absolute phase unwrapping.

The remainder of this paper is arranged as follows. Section 2 provides the basic principle of FPP. In Section 3 the proposed PC constraint is described in detail. Subsequently, the experimental results and some discussion are presented in Section 4. In the last section the conclusion is drawn.

2. Fundamental principles

In a basic FPP pipeline with a camera-projector pair, phase-shifted vertical-fringe patterns are projected onto the object’s surface. These patterns are expressed as

$${I_{\textrm{Pi}}}({x_\textrm{p}},{y_\textrm{p}}) = {I_0}[{1 + \cos (2\pi f{x_\textrm{p}} + \delta i)} ],$$
where I0 and f are the average intensity and frequency of the fringe respectively, (xp,yp) is the projector image coordinate, δ is the phase-shift step, i=1,2,…,N indexes the pattern with the i-th phase shift, and N is the number of phase shifts. Note that the fringe patterns above share the same frequency f.
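For concreteness, a short sketch of generating the patterns of Eq. (2) follows (the resolution, intensity scale and uniform shift step δ = 2π/N are illustrative assumptions of ours):

```python
import numpy as np

W, H, N = 1280, 800, 12        # projector resolution and shift count (assumed)
f, I0 = 1.0 / 10.0, 127.5      # fringe frequency (1/pixel) and average intensity
delta = 2.0 * np.pi / N        # uniform phase-shift step (a common choice)
xp = np.tile(np.arange(W), (H, 1))
patterns = [I0 * (1.0 + np.cos(2.0 * np.pi * f * xp + delta * i))
            for i in range(1, N + 1)]
```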

After being modulated by the geometry and reflectance of the surface, the captured fringe pattern is expressed as

$$\begin{aligned}{I_\textrm{i}}(x,y) &= r(x,y){I_{\textrm{Pi}}}(x,y) + {I_\textrm{e}}\\ &= r(x,y){I_0} + r(x,y){I_0}\cos [{\varphi (x,y) + \delta i} ]+ {I_\textrm{e}}\\ &= a(x,y) + b(x,y)\cos [{\varphi (x,y) + \delta i} ],\end{aligned}$$
where a(x,y)=r(x,y)I0+Ie, b(x,y)=r(x,y)I0 and φ(x,y) are the average intensity, modulation and phase of the captured fringe pattern respectively, r(x,y) is the reflectance map of the surface, and Ie is the ambient light. Without loss of generality, Ie is set to zero in the following discussion.
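The following sketch shows how Eq. (3) can be inverted by the standard N-step phase-shifting algorithm, assuming uniform shifts δ·i with δ = 2π/N (the helper name is ours):

```python
import numpy as np

def phase_shift_decode(images):
    """N-step phase-shifting sketch inverting Eq. (3), assuming uniform
    shifts delta*i with delta = 2*pi/N."""
    N = len(images)
    deltas = 2.0 * np.pi * np.arange(1, N + 1) / N
    S = sum(img * np.sin(d) for img, d in zip(images, deltas))
    C = sum(img * np.cos(d) for img, d in zip(images, deltas))
    phi = np.arctan2(-S, C)          # wrapped phase in (-pi, pi]
    a = sum(images) / N              # average intensity a(x, y)
    b = 2.0 * np.hypot(S, C) / N     # modulation b(x, y)
    return phi, a, b
```

Note that the average intensity a(x,y) falls out of the same decoding, a fact used later in Sec. 3.2.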

With the captured patterns, it is easy to retrieve the wrapped phase φ(x,y) with the phase-shifting algorithm [2], which is then unwrapped into the absolute phase Φ(x,y). We defer the phase unwrapping problem to the following discussion and for now suppose the absolute phase has already been obtained. Given a calibrated FPP system, where the fundamental matrix Fe and the camera and projector parameter matrices Mc and Mp are known, the correspondence between projector and camera image points can be established [16] as follows.

$$\left\{ \begin{array}{l} {x_\textrm{p}} = \frac{{\Phi (x,y)}}{{2\pi f}} \\ \left[{{x_\textrm{p}},{y_\textrm{p}},1} \right]{F_\textrm{e}}{\left[{x,y,1} \right]^{\textrm{T}}} = 0 \end{array} \right..$$

Subsequently, the 3D point cloud of the surface is retrieved [16] by solving

$$\left\{ \begin{array}{l} {[{x,y,1} ]^{\textrm{T}}} = {M_\textrm{c}}{[{X,Y,Z,1} ]^{\textrm{T}}}\\ {[{{x_\textrm{p}},{y_\textrm{p}},1} ]^\textrm{T}} = {M_\textrm{p}}{[{X,Y,Z,1} ]^{\textrm{T}}} \end{array} \right..$$

Since the non-linear distortion can be corrected in advance, we ignore the lens distortion in Eq. (5).
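To make Eqs. (4)-(5) concrete, here is a per-pixel sketch (the helper and its signature are our own, not the authors’ code): the projector column xp follows from the absolute phase, the row yp from the epipolar line, and [X, Y, Z] from a standard direct-linear-transform solve. This is exactly the mapping from absolute phase to 3D coordinates used in the next section.

```python
import numpy as np

def triangulate_pixel(x, y, Phi, f, F_e, M_c, M_p):
    """Per-pixel sketch of Eqs. (4)-(5); M_c, M_p are the 3x4 projection
    matrices and F_e the 3x3 fundamental matrix of the calibrated system."""
    # Eq. (4): the projector column follows from the absolute phase, and the
    # row from the epipolar line l = F_e [x, y, 1]^T (assumed non-degenerate).
    x_p = Phi / (2.0 * np.pi * f)
    l = F_e @ np.array([x, y, 1.0])
    y_p = -(l[0] * x_p + l[2]) / l[1]
    # Eq. (5) as a homogeneous linear system (direct linear transform):
    # each image point contributes two rows in the unknowns [X, Y, Z, 1].
    A = np.vstack([x * M_c[2] - M_c[0],
                   y * M_c[2] - M_c[1],
                   x_p * M_p[2] - M_p[0],
                   y_p * M_p[2] - M_p[1]])
    Xh = np.linalg.svd(A)[2][-1]      # null-space direction of A
    return Xh[:3] / Xh[3]             # inhomogeneous [X, Y, Z]
```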

3. Photometric constraint for absolute phase unwrapping

According to the illustration in Sec. 2, the 3D point cloud can be determined from the absolute phase, or equivalently from the fringe order k(x,y), provided the other variables such as x, y and φ(x,y) are known. To obtain the absolute phase by phase unwrapping, additional constraints are needed. In this section, we show how the proposed PC constraint is derived and how it can be used for phase unwrapping.

3.1 Derivation of photometric constraint

Intuitively, the appearance of an object recorded in an image depends heavily on the geometry of the object. More rigorously, ignoring other influences such as inter-reflection and ambient light, the appearance in the image is determined by only three factors: the light source, the observer (camera) and the reflectance property of the object, as shown in Fig. 1(a). Based on this observation, we may derive the relationship between the photometric information embedded in the appearance and the geometry of the object. Moreover, as presented in Sec. 2, the geometry of the object is related to the absolute phase. Combining these two relationships, the photometric information in the image can be related directly to the absolute phase, based on which the PC constraint can be constructed.

Fig. 1. Geometry of the FPP system (a) and process of image intensity transfer from projector to camera (b).

We first derive the relationship between the geometry of the object and the absolute phase. According to Eqs. (1), (4) and (5), the 3D coordinates can be expressed as a function of the absolute phase, or equivalently of the fringe order k(x,y), where the other variables such as x, y and φ(x,y) are already known. The function is expressed as

$${[{X,Y,Z} ]^\textrm{T}} = g({\Phi (x,y)} )= g({k(x,y)} ).$$

We continue with the relationship between the geometry of the object and the corresponding photometric information. Following Ref. [17], the process by which photometric information is recorded in the captured image is illustrated in Figs. 1(a) and 1(b), where ds is the distance between a point P on the surface and the projector’s lens center S, and i, v and n are unit vectors indicating the directions of the incident light, the reflected light and the normal at the surface point P, respectively. h is the bisector of i and v, which specifies the specular direction.

Under uniform illumination of radiance I0, the irradiance Es illuminating point P from the light source (i.e. the projector) in direction i can be expressed as

$${E_\textrm{s}} = \frac{{{I_0}}}{{{d_\textrm{s}}^2}}\max (0,\boldsymbol {i} \bullet \boldsymbol {n}),$$
where max(.) is used to model occlusion, and $\boldsymbol {i} \bullet \boldsymbol {n}$ indicates the inner product of the two vectors i and n.

The reflected radiance L from point P is related to the bidirectional reflection distribution function (BRDF) fBRDF and Es in the manner

$$L = {E_\textrm{s}}{f_{BRDF}} = {E_\textrm{s}}(\alpha {f_\textrm{d}} + \beta {f_\textrm{s}}),$$
where fd and fs are diffuse and specular reflectance respectively, and α and β are corresponding weighting factors.

Additionally, we use the same models as in Ref. [17], i.e. the Wolff model [18] and the Torrance–Sparrow model [19], to model the diffuse and specular reflectance; these have been shown to generalize well to many real-world surfaces. The expressions for fd and fs are

$$\left\{ \begin{array}{l} {f_\textrm{d}}\textrm{ = 1/}\pi \\ {f_\textrm{s}}\textrm{ = }\frac{\textrm{1}}{{(\boldsymbol {i} \bullet \boldsymbol {n})(\boldsymbol {v} \bullet \boldsymbol {n})}}{e^{ - {{\left[ {\frac{{{{\cos }^{ - 1}}(\boldsymbol {h} \bullet \boldsymbol {n})}}{\sigma }} \right]}^2}}}, \end{array} \right.$$
where σ indicates the roughness of the surface.

Considering the lens collection effect, sensor’s response function [17] and gamma effect [20], the final image intensity Ic recorded by the camera is then derived as

$${I_\textrm{c}} = {[{{k_\textrm{c}}{E_\textrm{c}} + b} ]^\gamma }\textrm{ = }{\left[ {{k_\textrm{c}}\frac{\pi }{\textrm{4}}{{\left( {\frac{D}{{{f_\textrm{c}}}}} \right)}^2}{{\cos }^4}(\theta )L + b} \right]^\gamma },$$
where kc and b are sensor’s gain and bias. Ec is the irradiance collected by the camera’s lens from L. D and fc are the lens diameter and focal length respectively. θ is the angle between v and the optical axis of the lens. γ is the nonlinear gamma factor of the system.

Now, combining Eqs. (7)-(10), the image intensity under uniform illumination can be expressed as

$${I_\textrm{c}} = {\left[ {{k_\textrm{c}}\frac{\pi }{4}{{\left( {\frac{D}{{{f_\textrm{c}}}}} \right)}^2}{{\cos }^4}(\theta )\frac{{{I_0}}}{{d_\textrm{s}^2}}\max (0,\boldsymbol {i} \bullet \boldsymbol {n})\left( {\frac{\alpha }{\pi } + \frac{\beta }{{(\boldsymbol {i} \bullet \boldsymbol {n})(\boldsymbol {v} \bullet \boldsymbol {n})}}{e^{ - {{\left[ {\frac{{{{\cos }^{ - 1}}(\boldsymbol {h} \bullet \boldsymbol {n})}}{\sigma }} \right]}^2}}}} \right) + b} \right]^\gamma },$$
or in a more abstracted form
$${I_\textrm{c}} = F(\boldsymbol {i},\boldsymbol {v},\boldsymbol {n},\theta ,\alpha ,\beta ,\sigma ,{k_\textrm{c}},b,\gamma ,D,{f_\textrm{c}}),$$
where F(.) is the abstracted version of Eq. (11).
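Gathering Eqs. (7)-(11), a per-point rendering sketch of F(·) follows (our own arrangement of the paper’s model; the camera optical-axis direction used for θ is an extra assumption of ours):

```python
import numpy as np

def render_intensity(P, S, C, n, alpha, beta, sigma,
                     I0, k_c, b, gamma, D, f_c,
                     axis_c=np.array([0.0, 0.0, 1.0])):
    """Per-point sketch of Eqs. (7)-(11). P, S, C: 3D positions of the surface
    point, projector center and camera center; n: unit surface normal.
    `axis_c` (camera optical-axis direction) is our extra assumption."""
    i = S - P; d_s = np.linalg.norm(i); i = i / d_s   # unit incident direction
    v = C - P; v = v / np.linalg.norm(v)              # unit viewing direction
    h = i + v; h = h / np.linalg.norm(h)              # specular half-vector
    E_s = I0 / d_s ** 2 * max(0.0, i @ n)             # irradiance, Eq. (7)
    f_d = 1.0 / np.pi                                 # Wolff diffuse term, Eq. (9)
    f_s = np.exp(-(np.arccos(np.clip(h @ n, -1.0, 1.0)) / sigma) ** 2) \
        / ((i @ n) * (v @ n))                         # Torrance-Sparrow term, Eq. (9)
    L = E_s * (alpha * f_d + beta * f_s)              # reflected radiance, Eq. (8)
    cos_theta = abs(v @ axis_c)                       # angle between v and axis
    E_c = (np.pi / 4.0) * (D / f_c) ** 2 * cos_theta ** 4 * L
    return (k_c * E_c + b) ** gamma                   # sensor and gamma, Eqs. (10)-(11)
```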

Among the parameters above, kc, b, γ, D and fc are system parameters independent of the scene, which can be calibrated in advance. Furthermore, with a calibrated FPP system, the lens centers of the projector and camera, i.e. S and C, are known, and hence i and v can be determined from the differences between S, C and the 3D coordinate of P, i.e. X = [X, Y, Z]T, after normalization. Similarly, n can be estimated from the partial derivatives of X with respect to the image coordinates, e.g. as the normalized cross product of the tangent vectors ∂X/∂x and ∂X/∂y.
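For instance, a minimal normal-estimation sketch under this convention (our implementation choice, assuming a dense per-pixel point map):

```python
import numpy as np

def estimate_normals(X):
    """Given a dense per-pixel point map X of shape (H, W, 3), estimate unit
    normals as the cross product of the image-coordinate tangent vectors."""
    dX_dx = np.gradient(X, axis=1)    # tangent along image x
    dX_dy = np.gradient(X, axis=0)    # tangent along image y
    n = np.cross(dX_dx, dX_dy)
    return n / np.linalg.norm(n, axis=2, keepdims=True)
```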

Thus, the photometric information (recorded and quantified as the image intensity Ic) is determined only by the geometry of the object (the 3D point cloud X) and the reflectance parameters α, β and σ of the object’s surface. By combining Eqs. (6) and (12), we obtain the relationship between the absolute phase and the image intensity, expressed in abstract form as

$${I_\textrm{c}} = G(k(x,y),\alpha ,\beta ,\sigma ).$$

The relationship in Eq. (13) specifies a constraint for absolute phase unwrapping. Since the constraint is derived from the photometric information recorded in the image intensity, we denote it the photometric constraint (PC).

3.2 Phase unwrapping algorithm using photometric constraint

With the derived PC constraint, we may develop a phase unwrapping algorithm. Specifically, the fringe order can be obtained by solving Eq. (13). However, for each pixel there are four unknowns, k(x,y), α, β and σ, but only one equation, i.e. an underdetermined, ill-posed problem. To relax the problem, some assumptions on the surface reflectance are made. Without loss of generality, we assume a surface of uniform material, where α, β and σ are constant across the surface. Thus, for M pixels on the same surface, the number of unknowns is reduced from 4M (M different k’s plus M different α’s, β’s and σ’s) to M+3 (M different k’s plus the three shared unknowns α, β and σ). These unknowns can then be estimated jointly under an optimization framework, provided M is sufficiently larger than 3 (true in most cases). The optimization target, termed the energy function Ef, is given by

$$({k^\ast }(x,y),{\alpha ^\ast },{\beta ^\ast },{\sigma ^\ast }) = \arg \min {E_f} = \arg \min \sum\limits_{x,y}^M {{{\|{{I_\textrm{c}} - G({k(x,y),\alpha ,\beta ,\sigma } )} \|}^2}}.$$

Note that Ic can be easily obtained, since it is equivalent to the average intensity in Eq. (3); thus we do not need to project and capture an additional uniform white image.

To further simplify the optimization, we employ a two-stage strategy. In the first stage, a traditional spatial phase unwrapping (SPU) algorithm [5] is applied to retrieve a continuous phase map ψ(x,y), in which the relative phase ambiguity is eliminated and all pixels share the same “absolute” phase ambiguity. More specifically, ψ(x,y) is related to the original wrapped phase map by

$$\Phi (x,y) = \varphi (x,y) + 2\pi k(x,y) = \psi (x,y) + 2\pi K,$$
where 2πK is the absolute phase ambiguity and K is a constant fringe order for all pixels on the same surface.

With Eq. (15), the minimization of the energy function in Eq. (14) reduces to a simpler one, where only four unknowns, i.e. K, α, β and σ, are to be estimated.

In the second stage, we further simplify the energy function to a linear form. Since the absolute phase is bounded by the projector image width W via 1 ≤ Φ/(2πf) ≤ W, the corresponding K is also a bounded integer with certain upper and lower bounds KU and KL. Accordingly, we minimize the energy function for each fixed K within KL ≤ K ≤ KU. In addition, since specular reflection usually results in image saturation and inaccurate observed intensities, we treat specular pixels as outliers and ignore the corresponding specular reflection component. Consequently, the energy function in Eq. (14) reduces to a linear form for each candidate K, which can be solved in the linear least-squares sense. After the energy function has been minimized over all candidate K’s, the K* corresponding to the minimal energy is chosen as the final optimal estimate, and the absolute phase map is obtained by adding 2πK* to ψ(x,y).
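A minimal sketch of this second stage follows, under our own simplifications: with the specular term dropped, the diffuse model is linear in a single gain and a bias once the gamma of Eq. (10) is undone (the paper’s exact linearization is detailed in Supplement 1). The `geometry` callable, which maps a candidate absolute phase map to the per-pixel diffuse shading via Eqs. (4)-(7), is an assumed helper of ours:

```python
import numpy as np

def estimate_fringe_order(psi, Ic, K_min, K_max, geometry, gamma):
    """Second-stage sketch: brute-force search over the global fringe order K.
    `geometry(Phi)` (assumed, user-supplied) maps a candidate absolute phase
    map to the per-pixel diffuse shading s = cos^4(theta) * max(0, i.n) /
    (pi * d_s^2) via Eqs. (4)-(7); specular pixels are assumed masked out."""
    y = Ic.ravel() ** (1.0 / gamma)              # undo the gamma of Eq. (10)
    best_K, best_E = None, np.inf
    for K in range(K_min, K_max + 1):
        Phi = psi + 2.0 * np.pi * K              # Eq. (15)
        s = geometry(Phi).ravel()
        A = np.column_stack([s, np.ones_like(s)])  # linear in gain and bias
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        E_f = np.sum((A @ coef - y) ** 2)        # energy of Eq. (14)
        if E_f < best_E:
            best_K, best_E = K, E_f
    return best_K
```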

4. Experiments

To verify the effectiveness of the PC constraint, we implemented an FPP system as shown in Fig. 2(a). The system consists of a traditional CCD camera with 3384 × 2704 pixels and a projector with 1280 × 800 pixels. The system was calibrated in advance via the method in Ref. [16]. In the experiment, 12-step phase-shifting vertical fringe patterns with a single frequency f=1/10 (1/pixel) were projected; f was sufficiently large, and accordingly the phase ambiguity was severe. The gamma of the system was estimated in advance using the method in Ref. [20]. The gain and bias of the camera response were estimated jointly while minimizing the energy function.

Fig. 2. FPP system setup (a) and three typical scenarios (b)-(d).

Three typical scenarios were measured in the experiment, as shown in Figs. 2(b)–2(d). The first scenario contained only a bottle. In the second scenario, three isolated ceramic spheres were placed in front of the system, representing surfaces that are discontinuous across spheres. In the last scenario, there were two isolated walnuts with freeform surfaces, to further validate the generalization of the proposed constraint to complex surfaces. Results of the proposed method and the traditional multi-frequency phase unwrapping algorithm (MFPU) [21] were compared, where three frequencies, i.e. 1/10, 1/80 and 1/1280 (in units of 1/pixel), were used in MFPU. Results and detailed analysis follow.

4.1 Experiment results

4.1.1 Results of the first scenario

Results of the proposed method are shown in Fig. 3. As shown in Fig. 3(a), there are obvious discontinuities in the wrapped phase, caused by the “wrapping” operation in the phase-shifting algorithm. The first-stage result is shown in Fig. 3(b): the phase discontinuities in Fig. 3(a) were eliminated by SPU, which produced a continuous phase map ψ(x,y). The second-stage result is shown in Fig. 3(c), where the curve of the optimization energy Ef versus the fringe order K is presented. As shown in Fig. 3(c), the fringe order corresponding to the minimal Ef is K=5, a relatively small deviation from the result of MFPU, i.e. KG=0. Note that, for better visualization, the horizontal axis of the curve (i.e. the K axis) is shifted to align the origin with KG, such that KG=0. The axes in the following figures, such as Figs. 7 and 11, are treated in the same way.

Fig. 3. Results of the proposed method: (a) wrapped phase and (b) the first stage result ψ(x,y); (c) is the curve of Ef v.s. K in the second stage of the proposed method, where K and KG are fringe orders determined by the proposed method and MFPU respectively.

As Ef reaches its minimum, the best estimate of the image intensity is obtained, which provides additional evidence of the effectiveness of the proposed method. A comparison of the captured and estimated image intensities is therefore presented in Fig. 4. As shown in Figs. 4(a) and 4(b), the estimated image intensity with K=5 is visually identical to the captured one. Taking a closer look, the cross-section of the estimated image intensity is also highly consistent with that of the captured image intensity, as shown by the blue and red curves in Fig. 4(c) respectively. This comparison provides further validation of the proposed method.

Fig. 4. Comparison of image intensity: (a) and (b) are captured image intensity and the estimated one with K=5, and (c) presents the cross-sections marked in yellow dashed lines in Figs. 4(a) and 4(b).

To further evaluate the performance of the proposed method, the unwrapped phases of the proposed method and MFPU are compared in Fig. 5. As shown in Figs. 5(a) and 5(b), the difference between the two results is visually negligible, which suggests the accurate unwrapping capability of the proposed method. For quantitative analysis, a comparison of cross-sections is presented in Fig. 5(c). There is an obvious phase difference between the result of MFPU and that of the proposed method, almost 10π due to the 5-fringe-order difference. However, compared with the unwrapping result of SPU shown in Fig. 3(b), the phase difference has been reduced significantly, as shown in Figs. 5(a) and 5(b), which demonstrates a significant improvement by the proposed method.

Fig. 5. Comparison of unwrapped phases: (a) and (b) are results of MFPU and proposed method, respectively, and (c) shows cross-sections marked in red dashed lines in Figs. 5(a) and 5(b).

Additional analysis of the unwrapped phase was carried out, with results shown in Fig. 6. Fig. 6(a) illustrates the positions in the projector image plane determined from the unwrapped phase. It is evident that the position determined using the proposed method (yellow areas in Fig. 6(a)) is much closer to the one using MFPU (blue areas) than the one using SPU is (red areas). This result demonstrates the promising phase unwrapping performance of the proposed method. The final reconstructed 3D point cloud is shown in Fig. 6(b).

Fig. 6. Positions in projector image plane determined by unwrapped phase and the final 3D point cloud: (a) presents positions determined by unwrapped phases of MFPU, the first and second stage results of the proposed method, respectively; (b) is the final point cloud.

4.1.2 Results of the second scenario

Results for the three spheres in the second scenario are shown in Fig. 7, where the spheres are labeled S1, S2 and S3 from left to right. As shown in Fig. 7(a), there are phase discontinuities in the wrapped phase of each sphere. Furthermore, there are obvious gaps between the spheres, which makes phase unwrapping using SPU more challenging. The first-stage phase unwrapping result is shown in Fig. 7(b). Since there is no continuous path from sphere S1 to S2, or from S2 to S3, SPU-based phase unwrapping of each sphere was performed independently. As a result, the unwrapped phase distributions of the spheres are almost identical to one another, as shown in Fig. 7(b), which is obviously wrong; the correct fringe order K needs to be estimated and added to them. The fringe order K was estimated for each sphere in the second stage. Curves of Ef versus K for the three spheres are shown in Figs. 7(c)–7(e), with estimated fringe orders K=-1, 5 and -4 for spheres S1 to S3 respectively. As shown in Figs. 7(c)–7(e), the fringe-order deviations from those of MFPU are small, with a minimal deviation of 1 and a maximum of less than 5, which demonstrates performance of the proposed method similar to that of MFPU.

Fig. 7. Results of the proposed method: (a) wrapped phase and (b) the first stage result ψ(x,y); (c)-(e) are curves of Ef v.s. K of spheres S1, S2, S3 in the second stage respectively, where K and KG are fringe orders determined by the proposed method and MFPU respectively.

Similarly, a comparison of captured and estimated image intensities is presented in Fig. 8. As shown in Figs. 8(a) and 8(b), the estimated image intensities of spheres S1, S2 and S3 with K=-1, 5 and -4 are visually identical to the captured ones, except in some highlighted areas. Furthermore, similar results can be found in the cross-sections of these spheres, as shown in Figs. 8(c)–8(e), which provides additional evidence of the proposed method’s effectiveness.

Fig. 8. Comparison of image intensity: (a) and (b) are captured and estimated image intensities respectively; (c)-(e) are cross-sections of spheres S1, S2 and S3 respectively, whose positions are marked in red dashed lines in Figs. 8(a) and 8(b).

The unwrapped phase is compared with that of MFPU in Fig. 9. As shown in Fig. 9(b), the unwrapped phase distribution of each sphere differs from those of the other two spheres, which is more plausible than the SPU result in Fig. 7(b). Furthermore, the unwrapped phase of the proposed method visually approximates that of MFPU, as shown in Figs. 9(a) and 9(b), which again verifies the comparable performance of the proposed method. As shown in Figs. 9(c)–9(e), there are obvious phase differences between the cross-sections of each sphere, about -2π, 10π and -8π for spheres S1, S2 and S3, respectively. Nevertheless, the result is still encouraging and promising compared with that of SPU shown in Fig. 7(b).

Fig. 9. Comparison of unwrapped phase: (a) and (b) are results of MFPU and proposed method, respectively, and (c)-(e) show cross-sections of spheres S1, S2 and S3 respectively, whose positions are marked in red dashed lines in Figs. 9(a) and 9(b).

For a comprehensive comparison, the positions determined from the unwrapped phase are shown in Fig. 10(a). The result of the proposed method is close to that of MFPU and deviates from the first-stage SPU result. It is evident that the proposed method performed comparably well with MFPU, and that a significant improvement was achieved compared with SPU.

Fig. 10. Positions in projector image plane and the final 3D point cloud: (a) presents positions of each sphere determined by unwrapped phases of MFPU, SPU and the proposed method respectively; (b) is the final point cloud.

Final 3D point-clouds of these spheres are shown in Fig. 10(b).

4.1.3 Results of the third scenario

Results for the two walnuts in the third scenario are shown in Fig. 11. Compared with the second scenario, the third one is more complicated: there is not only a gap between the walnuts but also varying curvature across the surface of each walnut. Thus, the third scenario provides another convincing validation of the proposed method. As expected, phase discontinuities exist on each walnut, as shown in Fig. 11(a); these were eliminated by SPU in the first stage, as shown in Fig. 11(b). Similarly, incorrect unwrapped results were obtained for both walnuts due to wrong fringe orders. The correct fringe orders determined in the second stage are shown in Figs. 11(c) and 11(d), where K=3 and 5 for the left and right walnuts, respectively. Again, the fringe-order deviation from that of MFPU is relatively small, which indicates the good fringe-order estimation performance of the proposed method.

Fig. 11. Results of the proposed method: (a) wrapped phase and (b) the first stage result ψ(x,y); (c)-(d) are curves of Ef v.s. K of the left and right walnuts in the second stage respectively, where K and KG are fringe orders determined by the proposed method and MFPU.

Similarly, the estimated image intensities of the two walnuts were compared with the captured ones, with results shown in Fig. 12. As shown in Figs. 12(a) and 12(c), the estimated image intensity when Ef reaches its minimum is visually identical to the captured one, except in some highlighted areas. Similar results can be found in Figs. 12(b) and 12(d), where the cross-sections are compared.

Fig. 12. Comparison of image intensity: (a) and (c) are captured and estimated image intensities respectively; (b) and (d) are cross-sections of the left and right walnuts respectively, whose positions are marked in red dashed lines in Figs. 12(a) and 12(c).

For further performance comparison, the unwrapped phases of the proposed method and MFPU are shown in Fig. 13. As shown in Figs. 13(a) and 13(c), each walnut’s unwrapped phase from the proposed method visually approximates that of MFPU. As expected, phase differences also exist between the results of the proposed method and MFPU, as shown in Figs. 13(b) and 13(d), caused by the 3-fringe-order and 5-fringe-order deviations respectively: the phase difference is 6π for the left walnut and 10π for the right one. Nevertheless, the result of the proposed method is still encouraging, and an obvious improvement has been achieved compared with SPU, as shown in Figs. 11(b), 13(a) and 13(c), where the SPU result deviates from that of MFPU significantly and the difference is easily visible to the naked eye.

Fig. 13. Comparison of unwrapped phase: (a) and (c) are results of MFPU and proposed method, respectively, and (b) and (d) show the cross-sections of the left and right walnuts respectively, of which the positions are marked in red dashed lines in Figs. 13(a) and 13(c).

For a comprehensive comparison, the positions determined using the unwrapped phases of the different methods are shown in Fig. 14(a). The result of the proposed method is much closer to that of MFPU than the SPU position is, which provides evidence that the proposed method performed comparably well with MFPU and achieved a significant improvement over SPU.

Fig. 14. Positions in projector image plane determined by unwrapped phase and the final 3D point cloud: (a) presents positions determined by unwrapped phases of MFPU, SPU and the proposed method; (b) is the final point cloud.

In addition, final 3D point-clouds of the two walnuts are shown in Fig. 14(b).

4.2 Discussion

4.2.1 Discussion about the runtime of the proposed algorithm

To assess computational efficiency, we selected the second scenario and compared the runtimes of the proposed algorithm and MFPU, which are shown in Table 1.

Table 1. Comparison of runtimes

As shown in Table 1, the runtime of the proposed algorithm is much larger than that of MFPU, which is an obvious issue to address. However, the emphasis of the proposed algorithm is on achieving absolute phase unwrapping with only single-frequency fringe patterns, a significant advantage over MFPU, which needs fringe patterns of more than two frequencies. Additionally, more sophisticated implementations can reduce the runtime, for example by parallelizing the unwrapping process (see the sketch below) or using more computing power, which will be studied in our future work.
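As one illustration of the parallelization mentioned above (a sketch, not the authors’ implementation), the energy evaluations for different candidate fringe orders K are independent and can be distributed across processes:

```python
from concurrent.futures import ProcessPoolExecutor

def parallel_fringe_order_search(evaluate_energy, K_candidates, workers=8):
    """Evaluate the energy of each candidate K in a separate process;
    `evaluate_energy` must be a picklable top-level function."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        energies = list(pool.map(evaluate_energy, K_candidates))
    return min(zip(energies, K_candidates))[1]   # K with the minimal energy
```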

4.2.2 Discussion about the robustness to more complex scenarios

To further investigate the robustness of the proposed algorithm, two complex scenarios of statues with freeform surfaces and obvious phase jumps were reconstructed. The two statues and their corresponding wrapped phases are shown in Fig. 15. As shown in Figs. 15(a) and 15(c), the hair and clothes of the statues consist of freeform surfaces with discontinuities and height jumps, which makes phase unwrapping more challenging. Results of the proposed method are shown in Figs. 16–18.

Fig. 15. (a) and (b) are the first statue and its corresponding wrapped phase, and similarly (c) and (d) are the second statue and its wrapped phase.

Fig. 16. Comparison result of the first statue: (a)-(c) are the first stage result ψ(x,y), final result of the proposed algorithm, and the one of MFPU, respectively; (d) shows cross-sections of (a)-(c), plotted as yellow, blue and red curves respectively.

Fig. 17. Result of the proposed method: (a) is the segmentation mask, where the head region is marked in red and the body region is in blue; (b) and (c) are “Ef v.s. K” curves of the head and body regions, respectively.

Fig. 18. Comparison result of the second statue: (a)-(c) are the first stage result ψ(x,y), final result of the proposed method, and the one of MFPU, respectively.

The result for the first statue is shown in Fig. 16. As shown in Figs. 16(a)–16(c), there is an obvious difference between the first-stage result ψ(x,y) and the MFPU result, whereas the final result of the proposed method is comparably good, as shown in Figs. 16(b) and 16(c), which indicates the robustness of the proposed method to scenarios with complex surfaces. Similar results can be found in the cross-sections in Fig. 16(d).

For the second statue, discontinuities and phase jumps exist between the head and body regions, as shown in Fig. 15(c). Therefore, in the experiment, the surface of the second statue was segmented into two separate regions, i.e. the head region and the body region, as shown in Fig. 17(a). Fringe orders for the two regions were estimated separately, with results shown in Figs. 17(b) and 17(c). For each region, the estimated fringe order is very close to the ground truth, with K=-3 for the body region and K=3 for the head region, as shown in Figs. 17(b) and 17(c).

The unwrapped phases are shown in Fig. 18. Similar to the results for the first statue, the result of the proposed algorithm for the second statue is visually identical to that of MFPU, even in regions with obvious discontinuities and large phase jumps.

For further investigation, we selected two typical regions with discontinuities and large phase jumps, and compared the cross-sections of the phases unwrapped by the different methods, as shown in Fig. 19. For the region with an obvious phase discontinuity (where there are no data due to shadow) shown in Fig. 19(a), where the phase jump is larger than π as shown in the right panel of Fig. 19(a), the result of the proposed algorithm is still much closer to that of MFPU than the first-stage result is, which demonstrates promising unwrapping performance for discontinuous surfaces with phase jumps larger than π. For the region with a large phase jump shown in Fig. 19(b), the jump was unwrapped correctly by the proposed algorithm: the phase jump is 31.4, or about 10π, as shown in the right panel of Fig. 19(b), and the phase deviation from the MFPU result is relatively small. These two results in Fig. 19 imply that the proposed algorithm performs robustly in complex regions with discontinuities and phase jumps.

Fig. 19. Cross-sections of phases: the left panel of Fig. 19(a) shows the cross-sections marked in red dashed lines in Fig. 17, and the right panel is the zoomed-in view of the region boxed in black dotted lines in the left panel; similarly, the left and right panels of Fig. 19(b) show the cross-sections marked in blue dashed lines in Fig. 17 and the corresponding zoomed-in view of the boxed region.

However, since the performance of the proposed algorithm partially relies on the SPU algorithm in the first stage, the two share similar limitations. For instance, the SPU algorithm assumes that there is only one continuous surface in the scene; more specifically, if we can find a continuous path on the surface along which the phase difference between any two adjacent points is less than π, the surface is continuous and the SPU algorithm performs well. Under this assumption, the SPU-unwrapped phases of the whole surface share a common global fringe order K, whose estimation is achieved in the second stage of the proposed algorithm. This is exactly the case shown in Fig. 20(a): even though there is a large phase jump (around 10π) along the path marked with the blue dotted line in Fig. 20(a) and Fig. 19(b), the SPU algorithm can find a continuous path from point PA to point PB, marked with the yellow dashed line in Fig. 20(a), and thus accurate unwrapping can be achieved. That is why the proposed algorithm performed well and obtained the satisfying result in Fig. 19(b).

Fig. 20. Unwrapping paths in SPU algorithm in the first stage: (a) continuous path from point PA to point PB both on the body; (b) discontinuous path from PA on the body to PB on the head.

On the contrary, in the case of Fig. 20(b), there is a discontinuity between the head and body regions, so there is no continuous path from PA to PB. In other words, the surface of the statue consists of two separate, discontinuous sub-surfaces, i.e. the head region and the body region. In this case, the SPU algorithm will still treat the surface of the statue as a single continuous surface and obtain an incorrect unwrapping result; this is a typical failure case of the SPU algorithm. Accordingly, without additional information, it would be assumed that there is only one global fringe order K for the whole surface (in fact, there are two different K’s, one for the head region and one for the body region). Under this single-fringe-order assumption, the proposed algorithm would reach an incorrect fringe-order estimate. That is why the surface of the statue was segmented into head and body regions, with the fringe order of each region estimated separately, as shown in Fig. 17. For such cases, image segmentation is needed as a pre-processing step before running the proposed algorithm. Unwrapping the phases of discontinuous surfaces is inherently challenging for the SPU algorithm; because the first stage employs SPU, the proposed algorithm shares this limitation and requires additional pre-processing such as image segmentation. However, in contrast to the SPU algorithm, the proposed algorithm can achieve absolute phase unwrapping without projecting any additional multi-frequency patterns, whereas SPU is incapable of absolute phase unwrapping; that is the main contribution and improvement brought by the proposed algorithm. In future research, we will work on more advanced algorithms combining image segmentation with the proposed algorithm, to achieve more robust unwrapping of complex discontinuous surfaces with large phase jumps.

According to the discussion above, the proposed algorithm can achieve robust unwrapping of complex surfaces with large phase jumps; for the special case of surfaces with discontinuities, some additional pre-processing may be needed, owing to the limitation of the SPU algorithm in the first stage. Nevertheless, according to the experimental results above, the proposed algorithm has demonstrated promising performance and can be applied to a wide range of complex surfaces.

5. Conclusion

In conclusion, a novel photometric constraint for phase unwrapping is proposed in this paper. We demonstrated its impressive performance on phase unwrapping of diverse scenarios, using only single-frequency fringe patterns and no additional cameras. Significant improvements have been achieved compared with the spatial-constraint-based method. Furthermore, the proposed method performed comparably well with multi-frequency unwrapping, with a difference of only several fringe orders. It shows promising potential for single-frequency fringe phase unwrapping, and can easily be integrated with other methods, such as multi-view systems, to achieve more accurate and reliable phase unwrapping. In future research, we will continue to investigate the causes of the fringe-order errors and further extend the method to more general and complex scenes with various materials.

Funding

National Natural Science Foundation of China (62005217); Fundamental Research Funds for the Central Universities (3102020QD1002); Overseas Expertise Introduction Project for Discipline Innovation (B16039).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content (Appendix A: details about the simplification of the energy function in the second stage of the proposed algorithm).

References

1. J. Geng, “Structured-light 3D surface imaging: a tutorial,” Adv. Opt. Photonics 3(2), 128–160 (2011).

2. V. Srinivasan, H. C. Liu, and M. Halioua, “Automated phase measuring profilometry of 3-D diffuse object,” Appl. Opt. 23(18), 3105 (1984).

3. M. Takeda, “Fourier transform profilometry for the automatic measurement of 3-D object shapes,” Appl. Opt. 22(24), 3977–3982 (1983).

4. R. M. Goldstein, H. A. Zebker, and C. L. Werner, “Satellite radar interferometry: two-dimensional phase unwrapping,” Radio Sci. 23(4), 713–720 (1988).

5. X. Y. Su and W. J. Chen, “Reliability-guided phase unwrapping algorithm: a review,” Opt. Lasers Eng. 42(3), 245–261 (2004).

6. D. C. Ghiglia and L. A. Romero, “Minimum Lp-norm two-dimensional phase unwrapping,” J. Opt. Soc. Am. A 13(10), 1999–2013 (1996).

7. T. Flynn, “Two-dimensional phase unwrapping with minimum weighted discontinuity,” J. Opt. Soc. Am. A 14(10), 2692–2701 (1997).

8. C. Chen, “Statistical-cost network-flow approaches to two-dimensional phase unwrapping for radar interferometry,” Ph.D. dissertation (Stanford University, 2001).

9. J. M. B. Dias and G. Valadao, “Phase unwrapping via graph cuts,” IEEE Trans. Image Process. 16(3), 698–709 (2007).

10. T. Weise, B. Leibe, and L. Van Gool, “Fast 3D scanning with automatic motion compensation,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2007).

11. R. R. Garcia and A. Zakhor, “Consistent stereo-assisted absolute phase unwrapping methods for structured light systems,” IEEE J. Sel. Top. Signal Process. 6(5), 411–424 (2012).

12. Z. Cai, X. Liu, Z. Chen, Q. Tang, B. Z. Gao, G. Pedrini, W. Osten, and X. Peng, “Light-field-based absolute phase unwrapping,” Opt. Lett. 43(23), 5717–5720 (2018).

13. J. Qian, S. Feng, T. Tao, Y. Hu, Y. Li, Q. Chen, and C. Zuo, “Deep-learning-enabled geometric constraints and phase unwrapping for single-shot absolute 3D shape measurement,” http://arxiv.org/abs/2001.01439.

14. T. Tao, Q. Chen, S. Feng, J. Qian, Y. Hu, L. Huang, and C. Zuo, “High-speed real-time 3D shape measurement based on adaptive depth constraint,” Opt. Express 26(17), 22440–22456 (2018).

15. C. Jiang, B. Li, and S. Zhang, “Pixel-by-pixel absolute phase retrieval using three phase-shifted fringe patterns without markers,” Opt. Lasers Eng. 91, 232–241 (2017).

16. Z. Qi, Z. Wang, J. Huang, C. Xing, and J. Gao, “Invalid-point removal based on epipolar constraint in the structured-light method,” Opt. Lasers Eng. 105, 173–181 (2018).

17. K. Lee and C. Kuo, “Shape from shading with a generalized reflectance map model,” Comput. Vis. Image Underst. 67(2), 143–160 (1997).

18. L. B. Wolff, “Diffuse-reflectance model for smooth dielectric surfaces,” J. Opt. Soc. Am. A 11(11), 2956–2968 (1994).

19. K. E. Torrance and E. M. Sparrow, “Theory for off-specular reflection from roughened surfaces,” J. Opt. Soc. Am. 57(9), 1105–1114 (1967).

20. K. Liu, Y. Wang, D. L. Lau, Q. Hao, and L. G. Hassebrook, “Gamma model and its analysis for phase measuring profilometry,” J. Opt. Soc. Am. A 27(3), 553–562 (2010).

21. J. Salvi, S. Fernandez, T. Pribanic, and B. Llado, “A state of the art in structured light patterns for surface profilometry,” Pattern Recognit. 43(8), 2666–2680 (2010).
