Abstract

Fringe projection profilometry has become one of the most popular 3D information acquisition techniques developed over the past three decades. However, the general and practical issues of valid point detection, including object segmentation, error correction and noisy point removal, have not been studied thoroughly. Furthermore, existing valid point detection techniques require multiple case-dependent thresholds, which makes them inconvenient to use. In this paper, we propose a new valid point detection framework, which includes k-means clustering for automatic background segmentation, unwrapping error correction based on theoretical analysis, and noisy point detection in both the temporal and spatial directions with automatic threshold setting. Experimental results are given to validate the proposed framework.

© 2015 Optical Society of America

1. Introduction

Fringe projection profilometry (FPP) [1, 2] has become one of the most popular 3D information acquisition techniques. An FPP system typically uses a projector to project fringe patterns onto object surfaces, a camera to capture the distorted fringe patterns, and a computer to analyze the fringe patterns and reconstruct the surface geometry. The main processing steps for geometric reconstruction include wrapped phase estimation [3–5], phase unwrapping [6–9] and phase-to-3D point cloud transformation through system calibration [10–14]. Four-step phase shifting [5], temporal phase unwrapping with multi-frequency projection [6] and point-cloud reconstruction [14] are typical methods used in FPP systems.

Efforts have been made to improve measurement accuracy through better system calibration [10–14] and gamma correction [15, 16] methods. Valid point detection, however, has been discussed less comprehensively. As a profiling technique, FPP aims at reconstructing reliable object points. To achieve this target, we classify invalid points into three types and manage them individually. First, background points that do not belong to the objects but are captured by the camera need to be segmented. Fringe modulation has often been used to recognize and remove such background points. Second, some object points, due to inevitable noise, are wrongly unwrapped. These points can be rectified and turned into valid points. Third, some object points are simply too noisy and thus should be detected and excluded.

Previous works on valid point detection include the complete frameworks proposed by Zhang [17] and Huang et al. [18], and the error detection methods proposed by Chen et al. [19] and Song et al. [20]. To perform background segmentation, modulation values are computed and thresholded [17, 18]. Different thresholds must be set for different FPP systems and measurement environments. Changes in lighting conditions, projector or camera parameters, and so on require manual resetting of the thresholds, which is inconvenient. Simple thresholding can also misclassify boundary points between the objects and the background. To remove unwrapping errors, Gaussian filtering has been used to suppress the noise influence, followed by hole filling with a traditional phase unwrapping method to correct the unwrapping error [17]. However, Gaussian filtering may smooth the high-frequency phase, and the hole filling requires two user-defined thresholds, which is inconvenient. For noisy point detection, monotonicity, the root mean square error (RMSE) and second-order derivatives have been proposed [17–20]. A threshold is required for each of these metrics, making them inconvenient to use. Monotonicity and second-order derivatives are also sensitive to noise and boundaries.

In this paper, we propose a framework with three novel ways to successfully tackle these three types of invalid points. The overall process is shown in Fig. 1 with the proposed framework highlighted in red. The details are as follows:

Fig. 1 Proposed valid point detection flowchart.

  • (1) For background removal, k-means clustering is used to automatically recognize the background, objects, and boundary points between them;
  • (2) For points with unwrapping error due to noise, monotonicity checking is applied to recognize and correct them;
  • (3) For other noisy object points, we propose to use RMSE checking with automatic threshold setting in the temporal direction, and 3D point-cloud smoothness checking in the spatial direction to perform the detection effectively.

This paper is organized as follows. Continuous phase estimation and the 3D point-cloud transformation in FPP are briefly introduced in Sect. 2. Our proposed background removal method, unwrapping error correction method and invalid object point detection method are discussed in Sects. 3–5, respectively. The overall performance is presented in Sect. 6. The paper is concluded in Sect. 7.

2. The principle of FPP

Stereo vision is a classic method to estimate the depth information of an object using two cameras. In FPP, one camera is replaced by a projector so that fringe patterns can be projected onto the object to code its shape information. Thus an object point p_o = [X, Y, Z]^T is illuminated by a point p_p = [u_p, v_p]^T in the projector image and maps to a point p_c = [u_c, v_c]^T in the camera image. The two points, p_p and p_c, are called a pair if they correspond to the same point p_o. In FPP, paired points have the same phase value. This feature is utilized to greatly simplify point correspondence. Once p_p and p_c have been corresponded and paired, their object point p_o can be reconstructed by straightforward triangulation.

2.1 Continuous phase estimation and point correspondence

Since point correspondence is established by interrogating phase values, obtaining continuous phase values from both the projector and camera images is essential. A well-recognized approach uses the phase-shifting technique to estimate the wrapped phase and multi-frequency projection to temporally unwrap it. The fringe pattern sequence projected from the projector is designed as

\[
\tilde{f}_{l,t}(u_p, v_p) = 127 + 127\cos\!\left[\tilde{\Phi}_l(u_p, v_p) + \frac{2t\pi}{T}\right],\quad l = 0, 1, \dots, L-1,\; t = 0, 1, \dots, T-1, \tag{1}
\]
where l indicates the l-th frequency and the total number of frequencies is L; t indicates the t-th phase shift and the total number of shifts is T; f̃_{l,t} is the fringe pattern at the l-th frequency and t-th phase shift; Φ̃_l is the continuous phase at the l-th frequency, with a size of m × n, pre-designed to be monotonically increasing in the x or y direction. Without loss of generality, the x direction is chosen, so the phase is
\[
\tilde{\Phi}_l(u_p, v_p) = h\,\tilde{\Phi}_{l-1}(u_p, v_p),\qquad \tilde{\Phi}_0(u_p, v_p) = \frac{2\pi u_p}{m} \in [0, 2\pi),\qquad l = 1, 2, \dots, L-1,\; u_p = 0, 1, \dots, m-1, \tag{2}
\]
where h is an integer ranging from 2 to 5. A smaller h normally gives higher unwrapping accuracy, but it requires a larger L to ensure sufficient measurement detail and is thus more time-consuming.

After projection, the sequence of the captured fringe patterns can be represented as [12, 13]

\[
f_{l,t}(u_c, v_c) = a_l(u_c, v_c) + b_l(u_c, v_c)\cos\!\left[\Phi_l(u_c, v_c) + \frac{2t\pi}{T}\right],\quad l = 0, 1, \dots, L-1,\; t = 0, 1, \dots, T-1, \tag{3}
\]
where a_l and b_l are the background intensity and the modulation intensity for the l-th frequency, respectively, and are assumed to remain unchanged across phase shifts. For convenience, b_l is called the modulation in the rest of this paper. The captured phase maps retain the relationship between consecutive frequencies in Eq. (2), so that
\[
\Phi_l(u_c, v_c) \approx h\,\Phi_{l-1}(u_c, v_c),\qquad \Phi_0 \in [0, 2\pi),\qquad l = 1, 2, \dots, L-1. \tag{4}
\]
The phases Φ_l(u_c, v_c) in the camera images are measurable. First, the phase-shifting technique is used (with four shifts for simplicity), which gives
\[
\varphi_l = \arctan2\left(f_{l,3} - f_{l,1},\; f_{l,0} - f_{l,2}\right) \in [0, 2\pi). \tag{5}
\]
The obtained phase φ_l is wrapped and related to the unwrapped phase Φ_l as follows,
\[
\Phi_l = \varphi_l + 2\lambda_l\pi,\qquad \Phi_0 = \varphi_0, \tag{6}
\]
where λ_l are integers to be determined. Next, the multi-frequency method is used for phase unwrapping; thus Φ_l can be determined based on Eqs. (4) and (6):
\[
\lambda_l = \operatorname{round}\!\left(\frac{h\,\Phi_{l-1} - \varphi_l}{2\pi}\right), \tag{7}
\]
where the round operation is necessary because hΦ_{l−1} is only a rough estimate of Φ_l [6]. Starting from Φ_0, we can obtain Φ_l for all l.
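
To make Eqs. (5)–(7) concrete, the following is a minimal NumPy sketch of four-step wrapped phase estimation and multi-frequency temporal unwrapping. It is an illustration under assumed array shapes and naming, not the authors' implementation.

```python
import numpy as np

def wrapped_phase(f):
    """Four-step phase shifting, Eq. (5); f has shape (4, H, W)."""
    phi = np.arctan2(f[3] - f[1], f[0] - f[2])
    return np.mod(phi, 2 * np.pi)  # map into [0, 2*pi)

def temporal_unwrap(phi_list, h):
    """Multi-frequency unwrapping, Eqs. (6) and (7).

    phi_list: wrapped phases [phi_0, ..., phi_{L-1}]; phi_0 is already
    continuous because the lowest frequency spans a single period.
    """
    Phi = phi_list[0]
    for phi_l in phi_list[1:]:
        lam = np.round((h * Phi - phi_l) / (2 * np.pi))  # Eq. (7)
        Phi = phi_l + 2 * np.pi * lam                    # Eq. (6)
    return Phi
```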

If two points p_p and p_c are paired, then Φ_l(u_c, v_c) = Φ̃_l(u_p, v_p), where Φ_l(u_c, v_c) has been measured. From this relationship, p_p = (u_p, v_p) can be found. Note that the highest frequency L−1 is selected for correspondence because it provides the highest accuracy. The phase relationship can then be elaborated as follows,

\[
\Phi_{L-1}(u_c, v_c) = \tilde{\Phi}_{L-1}(u_p, v_p) = h^{L-1}\,\tilde{\Phi}_0(u_p, v_p) = h^{L-1}\,\frac{2\pi u_p}{m}, \tag{8}
\]
from which u_p is determined. If the phase in the projector frame is designed to increase linearly in the y direction, the coordinate v_p can be estimated in the same way. However, obtaining u_p alone is sufficient for object point reconstruction.
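
Inverting Eq. (8) for u_p is a one-line computation; the sketch below states it explicitly (a hypothetical helper following the notation above).

```python
import numpy as np

def projector_column(Phi_L1, h, L, m):
    """Invert Eq. (8): recover u_p from the highest-frequency phase."""
    return m * Phi_L1 / (h ** (L - 1) * 2 * np.pi)
```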

2.2 Point-cloud reconstruction

Given a point pair p_p = [u_p, v_p]^T and p_c = [u_c, v_c]^T (in fact, v_p is unknown), according to a pinhole camera model, their relationship to the object point [X, Y, Z]^T can be written as follows,

\[
s_c\,[u_c, v_c, 1]^T = M_c\,[X, Y, Z, 1]^T,\qquad s_p\,[u_p, v_p, 1]^T = M_p\,[X, Y, Z, 1]^T, \tag{9}
\]
where M_c and M_p are the 3 × 4 transformation matrices of the camera and the projector, respectively, which are assumed to be known from camera and projector calibration [10–14]. Equation (9) can be expanded and re-arranged as
\[
M\,[X\ Y\ Z]^T = R, \tag{10}
\]
where
\[
M = \begin{bmatrix}
M_c(0,0) - M_c(2,0)u_c & M_c(0,1) - M_c(2,1)u_c & M_c(0,2) - M_c(2,2)u_c \\
M_c(1,0) - M_c(2,0)v_c & M_c(1,1) - M_c(2,1)v_c & M_c(1,2) - M_c(2,2)v_c \\
M_p(0,0) - M_p(2,0)u_p & M_p(0,1) - M_p(2,1)u_p & M_p(0,2) - M_p(2,2)u_p
\end{bmatrix}, \tag{11}
\]
and

\[
R = \begin{bmatrix}
M_c(2,3)u_c - M_c(0,3) \\
M_c(2,3)v_c - M_c(1,3) \\
M_p(2,3)u_p - M_p(0,3)
\end{bmatrix}. \tag{12}
\]
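
As an illustration of Eqs. (10)–(12), the sketch below solves the 3 × 3 system for one pixel. The matrix indexing follows the M_c(row, col) convention above; the function name and interface are our assumptions.

```python
import numpy as np

def triangulate(Mc, Mp, uc, vc, up):
    """Solve Eq. (10), M [X Y Z]^T = R, for one camera pixel.

    Mc, Mp: 3x4 camera and projector matrices from calibration;
    (uc, vc): camera pixel; up: projector column from Eq. (8).
    """
    M = np.array([
        [Mc[0, 0] - Mc[2, 0] * uc, Mc[0, 1] - Mc[2, 1] * uc, Mc[0, 2] - Mc[2, 2] * uc],
        [Mc[1, 0] - Mc[2, 0] * vc, Mc[1, 1] - Mc[2, 1] * vc, Mc[1, 2] - Mc[2, 2] * vc],
        [Mp[0, 0] - Mp[2, 0] * up, Mp[0, 1] - Mp[2, 1] * up, Mp[0, 2] - Mp[2, 2] * up],
    ])
    R = np.array([
        Mc[2, 3] * uc - Mc[0, 3],
        Mc[2, 3] * vc - Mc[1, 3],
        Mp[2, 3] * up - Mp[0, 3],
    ])
    # lstsq tolerates a near-singular M; for a well-posed system,
    # np.linalg.solve(M, R) returns the same [X, Y, Z].
    return np.linalg.lstsq(M, R, rcond=None)[0]
```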

3. Background removal

Object points and background points are analyzed and identified in this section. K-means clustering is proposed to remove the useless background points.

3.1 Problem statement and analysis

FPP is often performed in a dark environment where the only light source is the projector. Points on the objects being measured are well illuminated by the projector and seen by the camera; they are thus naturally called object points. All other points seen by the camera are referred to as background points. The object points are characterized by their high modulation, except for (a) those with a dark color and (b) those on the boundary of the object where the angle between the surface normal and the light direction is large. Although the modulation of the object points decreases in these two exceptional cases, it is usually still higher than that of the background points.

Background points fall into the following four cases: (a) background, such as a wall, that is far behind the target object but within the field of view of the camera and the projector; such points are out of focus and thus have low modulation, and though a plausible phase can still be computed, it is inaccurate; (b) the environment is so dark that the captured background points appear completely black, i.e., the intensity is zero; (c) some background points have low and random intensity values due to weakly scattered light; their modulation is low and the computed phase is random; (d) background points near the objects have higher modulation values than normal background points, but their phase is still random.

Ideally, all background points should be removed while all object points are retained. Modulation is the most natural choice for this purpose and has indeed been applied in practice [17, 18]. However, with modulation alone, it is difficult to differentiate between a dark-colored object, the boundary of an object and the boundary of the background, as all of them have modulation intermediate between that of normal object points and that of background points.

3.2 Current solution

The averaged modulation b̄ is calculated from the phase-shifted fringe patterns as follows [17],

\[
\bar{b} = \frac{1}{L}\sum_{l=0}^{L-1}\left[\frac{2}{T}\sqrt{\left(\sum_{t=0}^{T-1} f_{l,t}\sin\frac{2t\pi}{T}\right)^{2} + \left(\sum_{t=0}^{T-1} f_{l,t}\cos\frac{2t\pi}{T}\right)^{2}}\right]. \tag{13}
\]
A threshold t_b is needed to separate the object from the background as
\[
p_c \in \begin{cases}
\text{object}, & \bar{b} \ge t_b \\
\text{background}, & \bar{b} < t_b.
\end{cases} \tag{14}
\]
Zhang et al. suggested using b/a for thresholding. Since a could be very small in a dark environment, b̄ is more reliable and convenient to use. Because b̄ varies with the projector settings and environmental conditions, t_b is system-dependent and thus inconvenient to set.
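
The following sketch computes Eq. (13) in NumPy; stacking the captured fringe patterns as an (L, T, H, W) array is our assumption.

```python
import numpy as np

def mean_modulation(f):
    """Average modulation over all frequencies, Eq. (13).

    f: captured fringe patterns of shape (L, T, H, W)."""
    T = f.shape[1]
    t = np.arange(T).reshape(T, 1, 1)
    s = np.sum(f * np.sin(2 * np.pi * t / T), axis=1)  # sine projection
    c = np.sum(f * np.cos(2 * np.pi * t / T), axis=1)  # cosine projection
    return np.mean((2.0 / T) * np.sqrt(s**2 + c**2), axis=0)
```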

3.3 Proposed solution

Noting that the modulation of the objects is higher than that of the background, k-means clustering is proposed for automatic segmentation. K-means clustering is one of the simplest and most popular unsupervised learning algorithms for cluster analysis in signal processing [21]. It aims at partitioning the n modulation values (b̄_1, b̄_2, ..., b̄_n) into k sets S = (S_1, S_2, ..., S_k) with c = (c_1, c_2, ..., c_k) as their centroids, so that

\[
\sum_{i=1}^{k}\sum_{\bar{b} \in S_i} \left|\bar{b} - c_i\right|^{2} \rightarrow \text{minimum}. \tag{15}
\]

There are two key parameters in k-means clustering: the number of clusters, k, and the cluster centroids, c. It is natural to set two clusters (k = 2), one for the object and the other for the background. However, as discussed earlier, it is hard to decide which cluster boundary points with intermediate modulation values belong to. We thus create one more cluster for these points and set k = 3. As for the setting of c, random initialization may fail the clustering. Although the modulation ranges of both the object and the background are system-dependent, the mean modulation of the whole image falls in between these two ranges, i.e., it falls into the modulation range of the boundary points. This observation enables us to set c automatically and effectively. The mean modulation of the whole image (denoted as c_2) is set as the centroid of the boundary cluster; the mean of the modulation values larger than c_2 (denoted as c_1) is set as the centroid of the object cluster; and the mean of the modulation values smaller than c_2 (denoted as c_3) is set as the centroid of the background cluster. This setting can be formulated as

\[
\begin{cases}
c_1 = \operatorname{mean}(\bar{b} \mid \bar{b} > c_2) \\
c_2 = \operatorname{mean}(\bar{b}) \\
c_3 = \operatorname{mean}(\bar{b} \mid \bar{b} < c_2).
\end{cases} \tag{16}
\]
Since the differences between the clusters are quite obvious, the above settings of k and c lead to successful and unique clustering.

The k-means clustering algorithm for background removal is summarized below; a code sketch follows the steps:

  • Step 1: Set k = 3 and initialize c according to Eq. (16);
  • Step 2: For each pixel j, assign its modulation b̄_j to the cluster S_i that has the closest centroid c_i;
  • Step 3: Update each centroid as c_i = mean(b̄ ∈ S_i);
  • Step 4: Repeat Step 2 and Step 3 until the centroids do not move;
  • Step 5: Treat S_1 as the objects and S_3 as the background. A point in S_2 is included into the object cluster if it satisfies the following two conditions: (a) it is connected to the object, and (b) it is smooth (its derivative Φ_x is smaller than twice the largest derivative of the object points in S_1). To preserve as many points as possible, eight-connectivity is used.
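
The sketch below implements Steps 1–4 in NumPy; Step 5 (merging connected, smooth S_2 points into the object cluster) is omitted for brevity, and the code assumes all three clusters stay non-empty. It is a minimal illustration with assumed names, not the authors' implementation.

```python
import numpy as np

def segment_modulation(b):
    """K-means with k = 3 on modulation values, initialized per Eq. (16).

    b: modulation map of shape (H, W). Returns labels: 0 = object (S1),
    1 = boundary (S2), 2 = background (S3).
    """
    c2 = b.mean()
    c = np.array([b[b > c2].mean(), c2, b[b < c2].mean()])  # Eq. (16)
    flat = b.ravel()
    while True:
        # Step 2: assign each pixel to the nearest centroid.
        labels = np.argmin(np.abs(flat[:, None] - c[None, :]), axis=1)
        # Step 3: recompute the centroids.
        new_c = np.array([flat[labels == i].mean() for i in range(3)])
        # Step 4: stop when the centroids no longer move.
        if np.allclose(new_c, c):
            return labels.reshape(b.shape)
        c = new_c
```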

3.4 Results

An angel figure is measured by our system with a Grasshopper GRAS-20S4C camera (camera image size of 1200 × 1600) and a ViewSonic PLED-W500 projector (projector image size of 768 × 1024). The phase maps for projection are simulated according to Eq. (2) with h = 3 and L = 5.

Figure 2(a) shows the modulation calculated from the captured images. As discussed earlier, the modulation difference between object and background is clear, except for object points with a dark color or on object boundaries. Figures 2(b) and 2(c) show the wrapped phase φ_{L−1} and the corresponding unwrapped phase Φ_{L−1}. Although many background points have clear phase values, they are treated as background based on their low modulation values. Figure 2(d) shows the object phase obtained by manual segmentation, which is regarded as the ground truth.

Fig. 2 Measured modulation and phases. (a) The calculated modulation; (b) the calculated wrapped phase; (c) the unwrapped phase; (d) the manually separated object phase as the ground truth.

Figures 3(a) and 3(c) show the separated phases using a threshold of 8 and using the proposed k-means clustering, respectively. The threshold value of 8 is experimentally optimal in minimizing the difference between Fig. 2(d) and Fig. 3(a). The differences between Fig. 2(d) and Fig. 3(a), and between Fig. 2(d) and Fig. 3(c), are shown in Figs. 3(b) and 3(d), respectively, where the white points are object points misidentified as background, and the black points are background points misidentified as object points. The proposed k-means clustering outperforms the optimally thresholded result.

Fig. 3 Separated object phases and errors. (a) The separated object phase using a threshold of 8; (b) the point differences between Fig. 2(d) and Fig. 3(a); (c) the separated object phase using the proposed k-means clustering; (d) the point differences between Fig. 2(d) and Fig. 3(c).

4. Unwrapping error correction around phase boundaries

Random noise is a common problem in FPP. Phase errors caused by random noise around phase boundaries are amplified during unwrapping. This problem is analyzed and then corrected below.

4.1. Problem statement and analysis

Random noise appears across the entire image in FPP. For convenience, the random noise is assumed to be additive with a mean of zero and a standard deviation of σ. Let the noise in f_{l,t} be n_{l,t}; according to Eq. (5), the noisy wrapped phase can be represented as

\[
(\varphi_l)_n = \arctan2\!\left[\sin(\Phi_l) + \frac{n_{l,3} - n_{l,1}}{2b_l},\; \cos(\Phi_l) + \frac{n_{l,0} - n_{l,2}}{2b_l}\right] \in [0, 2\pi). \tag{17}
\]
The noisy phase can be rewritten as
\[
(\varphi_l)_n = \varphi_l + \Delta\varphi_l + \rho_l\,2\pi \tag{18}
\]
with
\[
\rho_l = \begin{cases}
0, & -\Delta\varphi_l \le \varphi_l < 2\pi - \Delta\varphi_l \\
1, & \varphi_l < -\Delta\varphi_l \\
-1, & \varphi_l \ge 2\pi - \Delta\varphi_l,
\end{cases} \tag{19}
\]
where Δφ_l is a small phase error caused by the random noise and ρ_l·2π is a possible additional 2π error that keeps (φ_l)_n within [0, 2π).

According to [22], it is not difficult to find that

\[
\Delta\varphi_l \approx \frac{\cos(\Phi_l)(n_{l,3} - n_{l,1}) - \sin(\Phi_l)(n_{l,0} - n_{l,2})}{2b_l}, \tag{20}
\]
which can be modeled as Gaussian noise with a mean of zero and a standard deviation of σ_Δφ_l = σ/(2b_l) [22, 23]. This error is very small in FPP and thus negligible; for example, σ_Δφ_l in our system is estimated to be only about 0.02 rad.

However, even for a small σ_Δφ_l, according to Eqs. (18) and (19), 2π is added to some pixels with phase near 0 (i.e., ρ_l = 1) and subtracted from some pixels with phase near 2π (i.e., ρ_l = −1). Although spatial phase unwrapping can correct such ±2π errors, it is not performed here due to its complexity. In multi-frequency phase unwrapping, according to Eq. (7), we have

\[
\begin{aligned}
(\lambda_1)_n &= \operatorname{round}\!\left[\frac{h(\varphi_0 + \Delta\varphi_0 + \rho_0\,2\pi) - (\varphi_1 + \Delta\varphi_1 + \rho_1\,2\pi)}{2\pi}\right] \\
&= \operatorname{round}\!\left[\frac{h\varphi_0 - \varphi_1 + (h\Delta\varphi_0 - \Delta\varphi_1)}{2\pi}\right] + h\rho_0 - \rho_1 \\
&= \lambda_1 + h\rho_0 - \rho_1.
\end{aligned} \tag{21}
\]
The last step in Eq. (21) holds because hΔφ_0 − Δφ_1 is very small. Subsequently, according to Eq. (6), we have

\[
(\Phi_1)_n = (\varphi_1)_n + (\lambda_1)_n\,2\pi = \varphi_1 + \Delta\varphi_1 + \lambda_1\,2\pi + h\rho_0\,2\pi. \tag{22}
\]

It is interesting to see that ρ_1 does not affect the unwrapping result; unfortunately, the ρ_0 error is amplified by h. Proceeding with the unwrapping, the ρ_0 error is amplified by h^{L−1} in Φ_{L−1}, which is a huge error. Since Φ_0 is around 0 or 2π only in the boundary regions of the projected fringe pattern (regarded as phase boundaries), one way to deal with the unwrapping error is to avoid capturing the phase boundaries, which is not always convenient.
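
As a concrete illustration with the parameters used later in our experiments (h = 3 and L = 5), a single ρ_0 = 1 error propagates through the unwrapping to

\[
h^{L-1}\rho_0 \cdot 2\pi = 3^{4} \times 2\pi \approx 508.9\ \text{rad}
\]

in Φ_4, i.e., an offset of 81 fringe orders.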

4.2. Current solutions

To deal with random noise and avoid error amplification, one current solution applies Gaussian smoothing during the estimation of Φ_l (l = 1, 2, ..., L−1) for unwrapping [17]. However, when the frequency of a projected fringe pattern is high, or the measured objects have sharp edges, such filtering is likely to over-smooth the fringe pattern and result in unwrapping errors.

Another solution uses a hole detection and filling strategy [17], which requires two user-defined thresholds to identify good points and ambiguous points. The determination of the threshold values is case-dependent and user-dependent, and thus inconvenient. Furthermore, the performance of this strategy depends on the particular spatial phase unwrapping method used, which usually does not outperform temporal phase unwrapping.

4.3. Proposed solution

As shown in Eq. (2), the phase is designed to increase monotonically in one direction. This monotonicity has been used as a criterion for noisy point detection in the final continuous phase map Φ_{L−1} [17–20], but it is sensitive to noise. For example, the designed phase satisfies Φ(x−1, y) < Φ(x, y) < Φ(x+1, y). When a sufficiently large positive noise ΔΦ is added to Φ(x, y), we have Φ(x−1, y) < Φ(x, y) + ΔΦ and Φ(x, y) + ΔΦ > Φ(x+1, y), i.e., the noisy point at (x, y) passes the monotonicity check but the good point at (x+1, y) does not. To cover this situation, a large phase difference range is used instead in [18], which however becomes too tolerant and can hardly detect error points.

Though monotonicity is not suitable for detecting noisy points in Φ_{L−1}, we find, interestingly, that it works very well for detecting the 2π error (ρ_0 error) in Φ_0. As described in Eq. (19), a point with ρ_0 ≠ 0 has an error near 2π. Such a large error allows us to perform the monotonicity check with a large phase tolerance (3π/2) to detect the amplified random noise,

\[
\Phi_{L-1}(x, y+1) = \begin{cases}
\Phi_{L-1}(x, y+1), & \left|\Phi_0(x, y+1) - \Phi_0(x, y)\right| < 3\pi/2 \\
\Phi_{L-1}(x, y+1) - 2\pi h^{L-1}, & \Phi_0(x, y+1) - \Phi_0(x, y) > 3\pi/2 \\
\Phi_{L-1}(x, y+1) + 2\pi h^{L-1}, & \Phi_0(x, y+1) - \Phi_0(x, y) < -3\pi/2.
\end{cases} \tag{23}
\]
The monotonicity is checked only for the object points identified in the previous section.
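
A NumPy sketch of Eq. (23) is given below. Propagating the ∓2π correction into Φ_0 itself, so that subsequent neighbor comparisons chain correctly, is an implementation detail we assume here; it is not spelled out in Eq. (23).

```python
import numpy as np

def correct_unwrap_errors(Phi0, PhiL, h, L):
    """2*pi-error correction following Eq. (23), scanned along one axis.

    Phi0: lowest-frequency phase; PhiL: highest-frequency unwrapped
    phase (both H x W). In practice, apply only on the object mask.
    """
    Phi0, PhiL = Phi0.copy(), PhiL.copy()
    step = 2 * np.pi * h ** (L - 1)
    for j in range(Phi0.shape[1] - 1):
        d = Phi0[:, j + 1] - Phi0[:, j]
        hi = d > 3 * np.pi / 2    # rho_0 = +1 at column j+1
        lo = d < -3 * np.pi / 2   # rho_0 = -1 at column j+1
        PhiL[:, j + 1] -= step * hi
        PhiL[:, j + 1] += step * lo
        # Assumed detail: also repair Phi0 so later comparisons chain.
        Phi0[:, j + 1] -= 2 * np.pi * hi
        Phi0[:, j + 1] += 2 * np.pi * lo
    return PhiL
```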

4.4. Results

In this example, we capture only the first three-fourths of the projected fringe pattern, which contains enough information about the angel object. The unsatisfactory result along the left phase boundary, which coincides with the left image boundary in Fig. 2(c), is clearly observed. Although this part corresponds to a wall belonging to the background, we use it as an example to show the effectiveness of the proposed method. Applying the monotonicity check gives the result in Fig. 4(a). For comparison, Fig. 4(b) shows sub-images of the wrapped phase and of the continuous phase before and after correction in the region highlighted by the red rectangle. The unwrapping errors of most pixels along the left image boundary are rectified. Note that some error points remain uncorrected because their phase values are too noisy.

Fig. 4 Phase correction. (a) The corrected phase; (b) the wrapped phase and the continuous phase before and after correction in the region highlighted by the red rectangle in Fig. 4(a).

5. Invalid object point detection

By now, the object has been segmented from the background, and the object points around phase boundaries have been rectified for possible unwrapping errors. In this section, invalid points within the segmented objects are detected.

5.1 Problem statement

So far all the object points considered have high modulation values, which, however, does not guarantee that they are valid object points. As the object points are actually surface points, they should have continuous phase values. The phase continuity or surface continuity thus needs to be checked, either temporally or spatially, to find and remove the invalid points.

5.2 Current solutions

The RMSE [18, 19] is an effective metric for temporally detecting noisy points. As discussed in Sect. 2.1, temporally consecutive phases are identical except for an amplification factor h. Noise may break this property, and the degree of violation can in turn be used to measure the noise level. The phases measured at different frequencies are first averaged as follows,

\[
\Phi_f = \sum_{l=0}^{L-1}\Phi_l h^{l} \Big/ \sum_{l=0}^{L-1}h^{2l}, \tag{24}
\]
based on which the RMSE is computed as
\[
\mathrm{RMSE} = \sqrt{\frac{\sum_{l=0}^{L-1}\left(\Phi_l - h^{l}\Phi_f\right)^{2}}{L}}. \tag{25}
\]
A larger RMSE reflects a higher discrepancy of phases across frequencies, and consequently the "badness" of the point. However, a threshold t_RMSE is needed, which is set experimentally and is thus inconvenient.

Spatially, monotonicity [17, 18] and the second-order derivative (Φ_xx) [20] are two metrics for noisy point detection. As described in Sect. 4.3, the monotonicity criterion is not practical for detecting noisy points in Φ_{L−1}. The second-order derivative has also been used for noisy point detection in FPP [20], expressed as

\[
\left|\Phi_{xx}\right| < t_{2nd}. \tag{26}
\]
However, object points on boundaries or sharp surfaces will be mistaken for noise and discarded, which is not desirable.

5.3 Proposed solutions

Methods to detect invalid points temporally and spatially are proposed. Temporally, to avoid the manual setting of t_RMSE in the RMSE method, Otsu's method [24] is used to perform clustering-based automatic thresholding by minimizing the intra-class variance of the two groups. The threshold is then set as follows,

\[
t_{\mathrm{RMSE}} = \alpha_1\,t_{\mathrm{Otsu}}, \tag{27}
\]
where t_Otsu is the threshold obtained from Otsu's method and α_1 = 1.2 is used to preserve more details. Note that k-means clustering could be used here again, but it is slower than Otsu's method.
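
The sketch below computes the temporal RMSE of Eqs. (24) and (25) and a histogram-based Otsu threshold [24]; the binning granularity and array shapes are our assumptions.

```python
import numpy as np

def rmse_map(Phi, h):
    """Temporal RMSE across frequencies, Eqs. (24) and (25).

    Phi: unwrapped phases of shape (L, H, W)."""
    L = Phi.shape[0]
    w = h ** np.arange(L, dtype=float).reshape(L, 1, 1)
    Phi_f = np.sum(Phi * w, axis=0) / np.sum(w ** 2)         # Eq. (24)
    return np.sqrt(np.mean((Phi - w * Phi_f) ** 2, axis=0))  # Eq. (25)

def otsu_threshold(x, bins=256):
    """Otsu's method: the threshold maximizing between-class variance."""
    hist, edges = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)             # class-0 probability
    mu = np.cumsum(p * centers)   # class-0 cumulative first moment
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu[-1] * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    return centers[np.nanargmax(sigma_b)]

# Eq. (27), with alpha_1 = 1.2 and an assumed object mask:
# t_RMSE = 1.2 * otsu_threshold(rmse_map(Phi, h)[object_mask])
```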

Spatially, a point-cloud smoothness measure with automatic threshold setting is proposed. To deal with surface discontinuity, eight-connectivity is used. In other words, for each object point (u_c, v_c), its smoothness with respect to all eight connected neighbors, (u_c+ε, v_c+η) with |ε| ≤ 1 and |η| ≤ 1, is considered. Normally, smoothness is measured on phase values in the camera image, as in the monotonicity [17, 18] and second-order derivative [20] criteria. Because the phase is designed to increase in one direction, multiple thresholds would be needed to cater to the different phase changes across the eight neighbors. We instead measure smoothness directly on the object surface, i.e., on the reconstructed point cloud, which is more natural and requires a single threshold. The distance between an object point and its neighbor is computed in the world space as

\[
d_{\varepsilon,\eta} = \sqrt{\left(X_{u_c+\varepsilon, v_c+\eta} - X_{u_c, v_c}\right)^{2} + \left(Y_{u_c+\varepsilon, v_c+\eta} - Y_{u_c, v_c}\right)^{2} + \left(Z_{u_c+\varepsilon, v_c+\eta} - Z_{u_c, v_c}\right)^{2}}. \tag{28}
\]
An object point can be claimed valid if (a) the distances between this point and all of its eight neighbors are less than a threshold t_d, or (b) the distance between this point and a neighbor already confirmed to be valid is less than t_d. The threshold can be set as an amplified mean distance,
\[
t_d = \alpha_2 \operatorname*{mean}_{(\varepsilon,\eta) \in S_1}\left(d_{\varepsilon,\eta}\right), \tag{29}
\]
with α_2 = 10.
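
A simplified one-pass NumPy sketch of the check is shown below: it keeps a point if any of its eight neighbors lies within t_d, which approximates conditions (a) and (b) without the iterative "already confirmed valid" bookkeeping; border handling by wrap-around is a further simplification.

```python
import numpy as np

def spatial_valid_mask(P, td):
    """Eight-neighbor point-cloud smoothness check, Eq. (28).

    P: reconstructed point cloud of shape (H, W, 3), with NaN marking
    points already removed. Returns a boolean validity mask.
    """
    valid = np.zeros(P.shape[:2], dtype=bool)
    for eps in (-1, 0, 1):
        for eta in (-1, 0, 1):
            if eps == 0 and eta == 0:
                continue
            Q = np.roll(P, shift=(eps, eta), axis=(0, 1))  # neighbor
            d = np.linalg.norm(P - Q, axis=2)              # Eq. (28)
            valid |= d < td                                # NaN -> False
    return valid
```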

5.4 Results

Figure 5(a) shows the top view of the reconstructed model before invalid point detection. Temporal and spatial noisy point detection are then performed, resulting in the cleaner result shown in Fig. 5(b). The point differences between the phase maps corresponding to Figs. 5(a) and 5(b) are computed and shown in Fig. 5(c), where zero-difference boundary regions are excluded to better display the detected noisy points.

Fig. 5 Reconstructed model. (a) The reconstructed model before noisy point detection; (b) the model after noisy point detection; (c) the detected noisy points.

6. Overall performance

To assess the overall performance of the proposed valid point detection framework, it is compared with the frameworks proposed for the same purpose by Zhang [17] and Huang et al. [18]. The RMSE [19] and the second-order derivative Φ_xx [20] are also included in the comparison. Two modifications are thus implemented: Zhang's framework with the RMSE and Φ_xx, and Huang et al.'s framework with Φ_xx. The same angel figure is used to demonstrate the performance. Zhang's and Huang et al.'s frameworks are implemented by following their papers [17, 18]. The modulation thresholds are set as t_bL = 8 and t_bH = 50 for Zhang's framework and t_b = 8 for Huang et al.'s framework. The RMSE threshold is set to the same value as in our proposed method, while t_2nd = 0.5 is set for Φ_xx after careful observation.

Figure 6 shows the front-view results while Fig. 7 shows the top-view results. As shown in Fig. 6(a) and Fig. 7(a), invalid points remain after applying Zhang's framework. The invalid points indicated by the green arrow are unwrapping errors caused by the Gaussian smoothing of high-frequency fringe patterns. The invalid points indicated by the red arrow are misclassified background points, which also appear in Figs. 6(b)–6(d) and Figs. 7(b)–7(d), as the corresponding arrows indicate. Noise detection with the RMSE and Φ_xx reduces the error, but not thoroughly, as shown in Fig. 6(b) and Fig. 7(b). The result from Huang et al.'s framework, shown in Fig. 6(c) and Fig. 7(c), is cleaner than that from Zhang's framework, but it still contains some invalid points, as the arrows indicate. Further processing with Φ_xx, shown in Fig. 6(d) and Fig. 7(d), cannot fully remove these invalid points. As shown in Fig. 6(e) and Fig. 7(e), the proposed framework produces the most satisfactory result: the invalid points are removed while details such as the boundary points are preserved. The computation time using MATLAB on a Dell OptiPlex with an Intel Core i5-4590 CPU @ 3.30 GHz and 8.0 GB RAM is about 8 s for the proposed method, 2 s for Zhang's framework and its modification, and 1 s for Huang et al.'s framework and its modification.

Fig. 6 Front-view results (a) from Zhang's framework, (b) from the modified Zhang's framework, (c) from Huang et al.'s framework, (d) from the modified Huang et al.'s framework, and (e) from the proposed framework.

Fig. 7 Top-view results (a) from Zhang's framework, (b) from the modified Zhang's framework, (c) from Huang et al.'s framework, (d) from the modified Huang et al.'s framework, and (e) from the proposed framework.

7. Conclusion

A new valid point detection framework is proposed after an evaluation of existing techniques. K-means clustering is adopted for object segmentation; it is automatic and classifies object boundary points properly. Unwrapping errors caused by random noise around phase boundaries are corrected based on theoretical analysis. Noisy point detection methods in the temporal and spatial directions are proposed with automatic threshold setting. Experimental results demonstrate the good performance of the proposed framework, which produces clean point clouds and preserves object details.

Acknowledgments

This work was partially supported by the Multi-plAtform Game Innovation Centre (MAGIC), funded by the Singapore National Research Foundation under its IDM Futures Funding Initiative and administered by the Interactive & Digital Media Programme Office, Media Development Authority, and by the Zhejiang Provincial Natural Science Foundation (LY14F020014).

References and links

1. S. S. Gorthi and P. Rastogi, "Fringe projection techniques: whither we are?" Opt. Lasers Eng. 48(2), 133–140 (2010).

2. Z. Wang, D. A. Nguyen, and J. C. Bames, "Some practical considerations for fringe projection profilometry," Opt. Lasers Eng. 48(2), 218–225 (2010).

3. V. Srinivasan, H. C. Liu, and M. Halioua, "Automated phase-measuring profilometry of 3-D diffuse objects," Appl. Opt. 23(18), 3105–3108 (1984).

4. X. F. Meng, X. Peng, L. Z. Cai, A. M. Li, J. P. Guo, and Y. R. Wang, "Wavefront reconstruction and three-dimensional shape measurement by two-step dc-term-suppressed phase-shifted intensities," Opt. Lett. 34(8), 1210–1212 (2009).

5. E. H. Kim, J. Hahn, H. Kim, and B. Lee, "Profilometry without phase unwrapping using multi-frequency and four-step phase-shift sinusoidal fringe projection," Opt. Express 17(10), 7818–7830 (2009).

6. H. O. Saldner and J. M. Huntley, "Temporal phase unwrapping: application to surface profiling of discontinuous objects," Appl. Opt. 36(13), 2770–2775 (1997).

7. H. O. Saldner and J. M. Huntley, "Profilometry using temporal phase unwrapping and a spatial light modulator-based fringe projector," Opt. Eng. 36(2), 610–615 (1997).

8. S. Su and X. Lian, "Phase unwrapping algorithm based on fringe frequency analysis in Fourier-transform profilometry," Opt. Eng. 40(4), 637–643 (2001).

9. A. Baldi, "Phase unwrapping by region growing," Appl. Opt. 42(14), 2498–2505 (2003).

10. Y. Hung, L. Lin, H. Shang, and B. Park, "Practical three-dimensional computer vision techniques for full-field surface measurement," Opt. Eng. 39(1), 143–149 (2000).

11. H. Liu, W. Su, K. Reichard, and S. Yin, "Calibration-based phase-shifting projected fringe profilometry for accurate absolute 3D surface profile measurement," Opt. Commun. 216(1), 65–80 (2003).

12. S. Zhang and S. T. Yau, "High-resolution, real-time 3D absolute coordinate measurement based on a phase-shifting method," Opt. Express 14(7), 2644–2649 (2006).

13. L. Huang, P. S. Chua, and A. Asundi, "Least-squares calibration method for fringe projection profilometry considering camera lens distortion," Appl. Opt. 49(9), 1539–1548 (2010).

14. D. Moreno and G. Taubin, "Simple, accurate, and robust projector-camera calibration," in Second International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission (3DIMPVT) (2012), pp. 464–471.

15. H. Guo, H. He, and M. Chen, "Gamma correction for digital fringe projection profilometry," Appl. Opt. 43(14), 2906–2914 (2004).

16. B. Pan, Q. Kemao, L. Huang, and A. Asundi, "Phase error analysis and compensation for nonsinusoidal waveforms in phase-shifting digital fringe projection profilometry," Opt. Lett. 34(4), 416–418 (2009).

17. S. Zhang, "Phase unwrapping error reduction framework for a multiple-wavelength phase-shifting algorithm," Opt. Eng. 48(10), 105601 (2009).

18. L. Huang and A. K. Asundi, "Phase invalidity identification framework with the temporal phase unwrapping methods," Meas. Sci. Technol. 22(3), 035304 (2011).

19. F. Chen, X. Su, and L. Xiang, "Analysis and identification of phase error in phase measuring profilometry," Opt. Express 18(11), 11300–11307 (2010).

20. L. Song, Y. Chang, Z. Li, P. Wang, G. Xing, and J. Xi, "Application of global phase filtering method in multi frequency measurement," Opt. Express 22(11), 13641–13647 (2014).

21. B. S. Everitt, S. Landau, M. Leese, and D. Stahl, Cluster Analysis, 5th ed. (Wiley, 2011).

22. Q. Kemao, Windowed Fringe Pattern Analysis (SPIE, 2013).

23. R. E. Walpole, R. H. Myers, S. L. Myers, and K. Ye, Probability and Statistics for Engineers and Scientists, 8th ed. (Pearson Prentice Hall, 2007).

24. N. Otsu, "A threshold selection method from gray-level histograms," IEEE Trans. Syst. Man Cybern. 9(1), 62–66 (1979).
