## Abstract

Fringe projection profilometry has become one of the most popular 3D information acquisition techniques developed over the past three decades. However, the general and practical issues of valid point detection, including object segmentation, error correction and noisy point removal, have not been studied thoroughly. Furthermore, existing valid point detection techniques require multiple case-dependent thresholds, which makes them inconvenient to use. In this paper, we propose a new valid point detection framework, which includes *k*-means clustering for automatic background segmentation, unwrapping error correction based on theoretical analysis, and noisy point detection in both the temporal and spatial directions with automatic threshold setting. Experimental results are given to validate the proposed framework.

© 2015 Optical Society of America

## 1. Introduction

Fringe projection profilometry (FPP) [1, 2] has become one of the most popular 3D information acquisition techniques. An FPP system typically uses a projector to project fringe patterns onto object surfaces, a camera to capture the distorted fringe patterns, and a computer to analyze the fringe patterns and reconstruct the surface geometry. The main processing steps for geometric reconstruction include wrapped phase estimation [3–5], phase unwrapping [6–9] and phase-to-3D point cloud transformation through system calibration [10–14]. Four-step phase shifting [5], temporal phase unwrapping with multi-frequency projection [6] and point-cloud reconstruction [14] are typical methods used in FPP systems.

Efforts have been made to improve measurement accuracy through better system calibration [10–14] and gamma correction [15, 16] methods. Valid point detection, however, has been discussed less comprehensively. As a profiling technique, FPP aims at reconstructing reliable object points. To achieve this target, we classify invalid points into three types and handle each type individually. First, background points that do not belong to the objects but are captured by the camera need to be segmented out. Fringe modulation has often been used to recognize and remove such background points. Second, some object points are, due to inevitable noise, unwrapped wrongly. These points can be rectified and turned into valid points. Third, some object points are simply too noisy and should therefore be detected and excluded.

Previous works on valid point detection include the complete frameworks proposed by Zhang [17] and Huang et al. [18], and the error detection methods proposed by Chen et al. [19] and Song et al. [20]. To perform background segmentation, modulation values are computed and thresholded [17, 18]. Different thresholds must be set for different FPP systems and measurement environments. Changes in lighting conditions, projector or camera parameters, and so on require manual resetting of the thresholds, which is inconvenient. Simple thresholding can also misclassify boundary points between the objects and the background. To remove unwrapping errors, Gaussian filtering has been used to suppress the influence of noise, followed by hole filling with a traditional spatial phase unwrapping method to correct the unwrapping error [17]. However, Gaussian filtering may smooth out high-frequency phase, and the hole filling requires two user-defined thresholds, which is inconvenient. For noisy point detection, monotonicity, the root mean square error (RMSE) and second order derivatives have been proposed [17–20]. A threshold is required for each of these metrics, making these methods inconvenient to use. Monotonicity and second order derivatives are also sensitive to noise and boundaries.

In this paper, we propose a framework with three novel ways to successfully tackle these three types of invalid points. The overall process is shown in Fig. 1 with the proposed framework highlighted in red. The details are as follows:

- (1) For background removal, *k*-means clustering is used to automatically recognize the background, objects, and boundary points between them;
- (2) For points with unwrapping errors due to noise, monotonicity checking is applied to recognize and correct them;
- (3) For other noisy object points, we propose to use RMSE checking with automatic threshold setting in the temporal direction, and 3D point-cloud smoothness checking in the spatial direction to perform the detection effectively.

This paper is organized as follows. The continuous phase estimation and the 3D point-cloud transformation in FPP are briefly introduced in Sect. 2. Our proposed background removal method, unwrapping error correction method and invalid object point detection method are discussed in Sects. 3–5, respectively. The overall performance is presented in Sect. 6. The paper is concluded in Sect. 7.

## 2. The principle of FPP

Stereo vision is a classic method to estimate depth information of an object using two cameras. In FPP, one camera is replaced by a projector so that fringe patterns can be projected onto the object to code its shape information. Thus an object point ${p}_{o}={\left[X,Y,Z\right]}^{T}$ is illuminated by a point ${p}_{p}={\left[{u}_{p},{v}_{p}\right]}^{T}$ from a projector image and is imaged at a point ${p}_{c}={\left[{u}_{c},{v}_{c}\right]}^{T}$ in the camera image. The two points, ${p}_{p}$ and ${p}_{c}$, are called a pair if they correspond to the same object point ${p}_{o}$. In FPP, paired points have the same phase value. This feature is utilized to greatly simplify point correspondence. Once ${p}_{p}$ and ${p}_{c}$ have been corresponded and paired, their object point ${p}_{o}$ can be reconstructed by a straightforward triangulation.

#### 2.1 Continuous phase estimation and point correspondence

Since point correspondence is established by interrogating phase values, obtaining continuous phase values from both the projector and camera images is essential. A well-recognized approach uses the phase-shifting technique to estimate the wrapped phase and multi-frequency projection to temporally unwrap it. The fringe pattern sequence projected from a projector is designed as

$${\tilde{f}}_{l,t}\left({u}_{p},{v}_{p}\right)=a+b\cos\left[{\tilde{\Phi}}_{l}\left({u}_{p},{v}_{p}\right)-\frac{2\pi t}{T}\right],$$

where *a* and *b* are the designed background intensity and fringe amplitude; *l* indicates the *l*th frequency and the total number of frequencies is *L*; *t* indicates the *t*th phase shift and the total number of shifts is *T*; ${\tilde{f}}_{l,t}$ is the fringe pattern at the *l*th frequency and the *t*th phase shift; ${\tilde{\Phi}}_{l}$ is the continuous phase value at the *l*th frequency with a size of m × n, pre-designed to be monotonically increasing in the *x* or *y* direction. Without losing generality, the *x* direction is chosen so the phase is

$${\tilde{\Phi}}_{l}\left(x,y\right)=\frac{2\pi {h}^{l}x}{n},$$

where *h* is an integer ranging from 2 to 5. A smaller *h* normally gives higher unwrapping accuracy, but it requires a larger *L* to ensure sufficient measurement detail and is thus more time-consuming.
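
As a minimal numerical sketch of this multi-frequency fringe design, the patterns can be generated as follows; the values of a, b, m, n, L, T and h here are arbitrary choices for illustration, not those used in the experiments of this paper:

```python
import numpy as np

# Illustrative sketch of the multi-frequency fringe design described above.
m, n = 4, 256          # pattern size (rows x cols), arbitrary for this demo
L, T, h = 3, 4, 3      # number of frequencies, phase shifts, scaling factor
a, b = 0.5, 0.5        # designed background intensity and fringe amplitude

x = np.arange(n)

# Designed continuous phase, monotonically increasing in the x direction,
# with consecutive frequencies related by Phi_{l+1} = h * Phi_l.
Phi = [2 * np.pi * (h ** l) * x / n for l in range(L)]

# T-step phase-shifted fringe patterns at each frequency.
f = np.array([[a + b * np.cos(np.tile(Phi[l], (m, 1)) - 2 * np.pi * t / T)
               for t in range(T)] for l in range(L)])
```

The lowest-frequency phase spans a single 2π period, so it needs no unwrapping; the higher frequencies are unwrapped from it temporally.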

After projection, the sequence of the captured fringe patterns can be represented as [12, 13]

$${f}_{l,t}\left({u}_{c},{v}_{c}\right)={a}_{l}\left({u}_{c},{v}_{c}\right)+{b}_{l}\left({u}_{c},{v}_{c}\right)\cos\left[{\Phi}_{l}\left({u}_{c},{v}_{c}\right)-\frac{2\pi t}{T}\right],$$

where ${a}_{l}$ and ${b}_{l}$ are the captured background intensity and fringe amplitude at the *l*th frequency, respectively, and are assumed to remain unchanged for different phase shifts. For convenience, ${b}_{l}$ is called modulation in the rest of this paper. The captured phase maps retain the relationship between consecutive frequencies in Eq. (2) so that

$${\Phi}_{l+1}\left({u}_{c},{v}_{c}\right)=h{\Phi}_{l}\left({u}_{c},{v}_{c}\right)$$

for all *l*.

If two points ${p}_{p}$ and ${p}_{c}$ are paired, then ${\Phi}_{l}\left({u}_{c},{v}_{c}\right)={\tilde{\Phi}}_{l}\left({u}_{p},{v}_{p}\right)$, where ${\Phi}_{l}\left({u}_{c},{v}_{c}\right)$ has been measured. From this relationship, ${p}_{p}=\left({u}_{p},{v}_{p}\right)$ can be found. Note that the highest frequency *L*−1 is selected for correspondence because it provides the highest accuracy. The phase relationship can then be elaborated as follows,

$${u}_{p}=\frac{n\,{\Phi}_{L-1}\left({u}_{c},{v}_{c}\right)}{2\pi {h}^{L-1}}.$$

If the phase is instead designed to increase in the *y* direction, the coordinate ${v}_{p}$ in the *y* direction can be estimated in the same way. However, obtaining ${u}_{p}$ only is sufficient for object point reconstruction.

#### 2.2 Point-cloud reconstruction

Given a point pair ${p}_{p}={\left[{u}_{p},{v}_{p}\right]}^{T}$ and ${p}_{c}={\left[{u}_{c},{v}_{c}\right]}^{T}$ (in fact, ${v}_{p}$ is unknown), according to a pinhole camera model, their relationship to the object point ${\left[X,Y,Z\right]}^{T}$ can be written as follows,

$${s}_{c}{\left[{u}_{c},{v}_{c},1\right]}^{T}={P}_{c}{\left[X,Y,Z,1\right]}^{T},\qquad {s}_{p}{\left[{u}_{p},{v}_{p},1\right]}^{T}={P}_{p}{\left[X,Y,Z,1\right]}^{T},$$

where ${s}_{c}$ and ${s}_{p}$ are scale factors, and ${P}_{c}$ and ${P}_{p}$ are the 3 × 4 projection matrices of the camera and the projector obtained from system calibration. Eliminating the scale factors and discarding the equation involving the unknown ${v}_{p}$ leaves three linear equations, from which the object point ${\left[X,Y,Z\right]}^{T}$ can be solved by triangulation.
## 3. Background removal

Object points and background points are analyzed and identified in this section. A *k*-means clustering is proposed to remove useless background points.

#### 3.1 Problem statement and analysis

FPP is often performed in a dark environment where the only light source is the projector. Points from objects being measured are well illuminated by the projector and seen by the camera. They are thus naturally called object points. All other points also seen by the camera are referred to as background points. The object points are characterized by their high modulation, except for (a) those with a dark color and (b) those on the boundary of the object with a large angle between the surface normal and the illumination direction. Although the modulation of the object points in these two exceptional cases decreases, it is usually still higher than that of background points.

Background points include the following four cases: (a) the background such as a wall that is far behind the target object but is in the field of view of the camera and the projector. They are out of focus and thus have low modulation. Though plausible phase can still be computed, it is inaccurate; (b) the environment is so dark that the captured background points appear to be completely black, i.e., the intensity is zero; (c) some background points have low and random intensity values due to weakly scattered light. Their modulation is low and the computed phase is random; (d) when the background points are near the objects, their modulation values are higher than normal background points, but their phase is still random.

Ideally, all background points should be removed while all object points are retained, for which using modulation is seen to be the most natural choice, and indeed has been applied in practice [17, 18]. However, with modulation alone, it is difficult to differentiate between a dark color object, the boundary of an object and the boundary of background, as all of them have intermediate modulation between normal objects and background points.

#### 3.2 Current solution

The averaged modulation $\overline{b}$ is calculated from the phase-shifted fringe patterns as follows [17],

$$\overline{b}=\frac{1}{L}\sum_{l=0}^{L-1}{b}_{l},\qquad {b}_{l}=\frac{2}{T}\sqrt{{\left[\sum_{t=0}^{T-1}{f}_{l,t}\sin\frac{2\pi t}{T}\right]}^{2}+{\left[\sum_{t=0}^{T-1}{f}_{l,t}\cos\frac{2\pi t}{T}\right]}^{2}}.$$

A point is classified as an object point if its averaged modulation is larger than a manually set threshold, and as a background point otherwise [17, 18].
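
As a minimal sketch of this modulation computation, the standard least-squares expressions for T-step phase shifting can be used (the normalization may differ slightly from the exact formula in [17]):

```python
import numpy as np

# Sketch: estimating the modulation from T phase-shifted patterns with the
# standard least-squares (arctangent-method) sums.
def modulation(f):
    """f: array of shape (T, H, W) holding the T phase-shifted patterns."""
    T = f.shape[0]
    t = np.arange(T).reshape(T, 1, 1)
    s = np.sum(f * np.sin(2 * np.pi * t / T), axis=0)
    c = np.sum(f * np.cos(2 * np.pi * t / T), axis=0)
    return 2.0 / T * np.sqrt(s ** 2 + c ** 2)

# Synthetic check: patterns a + b*cos(phi - 2*pi*t/T) should give back b.
Tn, H, W = 4, 8, 8
phi = np.linspace(0, 2 * np.pi, H * W).reshape(H, W)
f = np.array([0.5 + 0.3 * np.cos(phi - 2 * np.pi * t / Tn) for t in range(Tn)])
b_est = modulation(f)
```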

#### 3.3 Proposed solution

Noticing that the modulation of the objects is higher than that of the background, *k*-means clustering is proposed for automatic segmentation. The *k*-means clustering is one of the simplest and most popular unsupervised learning algorithms for cluster analysis in signal processing [21]. It aims at partitioning the *n* modulation values $\left({\overline{b}}_{1},{\overline{b}}_{2},\mathrm{...},{\overline{b}}_{n}\right)$ into *k* sets $S=\left({S}_{1},{S}_{2},\mathrm{...},{S}_{k}\right)$ with $c=\left({c}_{1},{c}_{2},\mathrm{...},{c}_{k}\right)$ as their centroids, so that

$$\underset{S}{\arg \min}\sum_{i=1}^{k}\sum_{{\overline{b}}_{j}\in {S}_{i}}{\left({\overline{b}}_{j}-{c}_{i}\right)}^{2}.$$

There are two key parameters in the *k*-means clustering: the number of clusters, *k*, and the cluster centroids, **c**. It is natural to set two clusters (*k* = 2), one for the object and the other for the background. However, as discussed earlier, it is hard to determine which of these two clusters the boundary points with intermediate modulation values belong to. We thus create one more cluster for these points and set *k* = 3. As for the setting of **c**, a random initialization may fail the clustering. Although the modulation ranges of both the object and the background are system-dependent, the mean modulation of the whole image falls in between these two ranges, i.e., it falls into the modulation range of the boundary points. This finding enables us to set **c** automatically and effectively. The mean value of the whole image (denoted as ${c}_{2}$) is set as the centroid for the boundary cluster; the mean of the modulation values larger than ${c}_{2}$ (denoted as ${c}_{1}$) is set as the centroid for the object cluster; and the mean of the modulation values smaller than ${c}_{2}$ (denoted as ${c}_{3}$) is set as the centroid for the background cluster. This setting can be formulated as

$${c}_{2}=\mathrm{mean}\left(\overline{b}\right),\qquad {c}_{1}=\mathrm{mean}\left(\overline{b}>{c}_{2}\right),\qquad {c}_{3}=\mathrm{mean}\left(\overline{b}<{c}_{2}\right).$$

This setting of *k* and **c** leads to successful and unique clustering.

The *k*-means clustering algorithm for background removal is summarized below:

- Step 1: Initialize the three centroids ${c}_{1}$, ${c}_{2}$ and ${c}_{3}$ as described above;
- Step 2: For each pixel *j*, assign its modulation ${\overline{b}}_{j}$ to the cluster ${S}_{i}$ that has the closest centroid ${c}_{i}$;
- Step 3: Update each centroid value as ${c}_{i}=\mathrm{mean}\left(\overline{b}\in {S}_{i}\right)$;
- Step 4: Repeat Step 2 and Step 3 until the centroids do not move;
- Step 5: Treat ${S}_{1}$ as the objects and ${S}_{3}$ as the background. A point in ${S}_{2}$ is included into the object cluster if it satisfies the following two conditions: (a) it is connected to the object, and (b) it is smooth (its derivative ${\Phi}_{x}$ is smaller than twice the largest derivative of the object points in ${S}_{1}$). To preserve as many points as possible, eight-connectivity is used.
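
The clustering steps above (through Step 4; the connectivity-based refinement of Step 5 is omitted) can be sketched on synthetic modulation values; the cluster means and sample sizes below are arbitrary choices for this illustration:

```python
import numpy as np

def kmeans_modulation(b):
    """Cluster modulation values into object / boundary / background.

    b: 1-D array of per-pixel averaged modulation values.
    Returns labels (0: object, 1: boundary, 2: background) and centroids.
    """
    # Step 1: automatic centroid initialization from the image mean.
    c2 = b.mean()
    c = np.array([b[b > c2].mean(), c2, b[b < c2].mean()])
    while True:
        # Step 2: assign each pixel to the nearest centroid.
        labels = np.argmin(np.abs(b[:, None] - c[None, :]), axis=1)
        # Step 3: recompute centroids as cluster means.
        c_new = np.array([b[labels == i].mean() if np.any(labels == i) else c[i]
                          for i in range(3)])
        # Step 4: stop when the centroids no longer move.
        if np.allclose(c_new, c):
            return labels, c_new
        c = c_new

# Synthetic modulation: bright objects, dim background, a few boundary pixels.
rng = np.random.default_rng(0)
b = np.concatenate([rng.normal(50, 2, 500),   # object-like
                    rng.normal(5, 1, 500),    # background-like
                    rng.normal(27, 2, 20)])   # boundary-like
labels, c = kmeans_modulation(b)
```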

#### 3.4 Results

An angel figure is measured by our system with a Grasshopper GRAS-20S4C camera (camera image size 1200 × 1600) and a ViewSonic PLED-W500 projector (projector image size 768 × 1024). The phase maps for projection are generated according to Eq. (2) with $h=3$ and $L=5$.

Figure 2(a) shows the calculated modulation from captured images. As discussed earlier, the modulation differences of object and background are clear except for the object points with a dark color or on object boundaries. Figures 2(b) and 2(c) show the wrapped phase ${\phi}_{L-1}$ and the corresponding unwrapped phase ${\Phi}_{L-1}$. Although many background points have clear phase values, they are considered as background points based on their low modulation values. Figure 2(d) shows the object phase by manual segmentation, which is regarded as the ground truth.

Figures 3(a) and 3(c) show the separated phases using a threshold of 8 and the proposed *k*-means clustering, respectively. The threshold value of 8 is experimentally optimal in minimizing the difference between Fig. 2(d) and Fig. 3(a). The differences between Fig. 2(d) and Fig. 3(a), and between Fig. 2(d) and Fig. 3(c), are shown in Figs. 3(b) and 3(d), respectively, where the white points are actual object points misidentified as background, while the black points are actual background points misidentified as object points. The proposed *k*-means clustering outperforms the optimally thresholded result.

## 4. Unwrapping error correction around phase boundaries

Random noise is a common problem in FPP. Phase errors caused by random noise around phase boundaries are amplified during unwrapping. This problem is analyzed and consequently corrected.

#### 4.1. Problem statement and analysis

Random noise commonly appears in the entire image in FPP. For convenience, the random noise is assumed to be additive with a mean of zero and a standard deviation of σ. Let the noise in ${f}_{l,t}$ be ${n}_{l,t}$. According to Eq. (5), the noisy wrapped phase can be represented as

$${\phi}_{l}^{n}\left({u}_{c},{v}_{c}\right)={\phi}_{l}\left({u}_{c},{v}_{c}\right)+\Delta {\phi}_{l}\left({u}_{c},{v}_{c}\right),$$

where $\Delta {\phi}_{l}$ is the wrapped phase error caused by the noise.

According to [22], it is not difficult to find that $\Delta {\phi}_{l}$ approximately follows a Gaussian distribution with a standard deviation of

$${\sigma}_{\Delta {\phi}_{l}}=\sqrt{\frac{2}{T}}\frac{\sigma}{{b}_{l}}.$$

However, even for a small ${\sigma}_{\Delta {\phi}_{l}}$, according to Eqs. (18) and (19), $2\pi $ is added to some pixels with phase near 0 (i.e., ${\rho}_{l}=1$), and subtracted from some pixels with phase near $2\pi $ (i.e., ${\rho}_{l}=-1$). Although spatial phase unwrapping can correct the error of $\pm 2\pi $, it is not performed due to its complexity. In multi-frequency phase unwrapping, according to Eq. (7), we have

$${\Phi}_{1}^{n}={\Phi}_{1}+\Delta {\phi}_{1}+2\pi h{\rho}_{0}.$$

It is interesting to see that ${\rho}_{1}$ does not affect the unwrapping result but, unfortunately, the ${\rho}_{0}$ error is amplified by *h*. Proceeding with the unwrapping, the ${\rho}_{0}$ error is amplified by ${h}^{L-1}$ in ${\Phi}_{L-1}$, which is a huge error. Since ${\Phi}_{0}$ is around 0 or 2π only in the boundary regions of the projected fringe pattern (regarded as phase boundaries), one way to deal with the unwrapping error is to avoid capturing the phase boundaries, which is not always convenient.

#### 4.2. Current solutions

To deal with random noise and to avoid error amplification, one of the current solutions uses Gaussian smoothing during the estimation of ${\Phi}_{l}\left(l=1,2,\mathrm{...},L-1\right)$ for unwrapping [17]. However, when the frequency of a projected fringe pattern is high, or the measured objects have sharp edges, such filtering will likely over-smooth the fringe pattern and result in unwrapping errors.

Another solution uses a hole detection and filling strategy [17], which requires two user-defined thresholds to identify good points and ambiguous points. The determination of the threshold values is case sensitive and user dependent. It is thus not convenient. Furthermore, the performance of this strategy depends on the particular spatial phase unwrapping method, which usually does not outperform the temporal phase unwrapping method.

#### 4.3. Proposed solution

As shown in Eq. (2), the phase is designed to increase monotonically in one direction. This monotonicity feature has been used as a criterion for noisy point detection in the final continuous phase map ${\Phi}_{L-1}$ [17–20], which is, however, sensitive to noise. For example, the designed phase has the relationship $\Phi \left(x-1,y\right)<\Phi \left(x,y\right)<\Phi \left(x+1,y\right)$. When a sufficiently large positive noise $\Delta \Phi $ is added to $\Phi \left(x,y\right)$, the above inequality becomes $\Phi \left(x-1,y\right)<\Phi \left(x,y\right)+\Delta \Phi $ and $\Phi \left(x,y\right)+\Delta \Phi >\Phi \left(x+1,y\right)$, i.e., the noisy point at (*x*, *y*) passes the monotonicity check but the good point at (*x* + 1, *y*) does not. To cover this situation, a large phase difference range is used instead in [18], which however becomes too tolerant and can hardly detect error points.

Though monotonicity is not suitable for detecting noisy points in ${\Phi}_{L-1}$, we interestingly find that it works very well for detecting the 2π error ($\rho \ne 0$ error) in ${\Phi}_{0}$. As described in Eq. (19), a point with $\rho \ne 0$ has an error near 2π. Such a big error allows us to perform the monotonicity checking with a large phase tolerance ($3\pi /2$) to detect and correct the amplified random noise,

$${\Phi}_{0}\left(x,y\right)\leftarrow \begin{cases}{\Phi}_{0}\left(x,y\right)-2\pi , & {\Phi}_{0}\left(x,y\right)-{\Phi}_{0}\left(x-1,y\right)>3\pi /2\\ {\Phi}_{0}\left(x,y\right)+2\pi , & {\Phi}_{0}\left(x,y\right)-{\Phi}_{0}\left(x-1,y\right)<-3\pi /2\end{cases}.$$
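
A minimal sketch of this monotonicity-based correction on a single row of ${\Phi}_{0}$, assuming a simple left-to-right scan (the exact traversal order in an implementation may differ):

```python
import numpy as np

# Phi_0 is designed to increase gently along x, so a jump beyond 3*pi/2
# between neighbors signals a 2*pi unwrapping error that can be corrected.
def correct_2pi_errors(row, tol=1.5 * np.pi):
    """Correct +/- 2*pi outliers in a row of Phi_0, scanning left to right."""
    out = row.copy()
    for x in range(1, len(out)):
        d = out[x] - out[x - 1]
        if d > tol:
            out[x] -= 2 * np.pi      # spurious +2*pi added near phase 0
        elif d < -tol:
            out[x] += 2 * np.pi      # spurious -2*pi occurred near phase 2*pi
    return out

# Clean monotonic phase with two injected 2*pi errors near the boundaries.
x = np.linspace(0.1, 2 * np.pi - 0.1, 100)
noisy = x.copy()
noisy[3] += 2 * np.pi    # error near the left phase boundary
noisy[96] -= 2 * np.pi   # error near the right phase boundary
fixed = correct_2pi_errors(noisy)
```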

#### 4.4. Results

In this example, we only capture the first three quarters of the projected fringe pattern, which contains enough information about the angel object. The unsatisfactory result along the left phase boundary, which is also located at the left image boundary in Fig. 2(c), is clearly observed. Although this part corresponds to a wall belonging to the background, we use it as an example to show the effectiveness of the proposed method. Applying the monotonicity checking gives the result in Fig. 4(a). For comparison, Fig. 4(b) shows the sub-images of the wrapped phase and the continuous phases before and after correction at the region highlighted by the red rectangle. The unwrapping errors of most pixels along the left image boundary are rectified. Note that some error points remain uncorrected because their phase values are too noisy.

## 5. Invalid object point detection

By now, the object has been segmented from the background, and the object points around phase boundaries have been rectified for possible unwrapping errors. In this section, invalid points within the segmented objects are detected.

#### 5.1 Problem statement

So far all the object points considered have high modulation values, which however do not guarantee them to be valid object points. As the object points are actually surface points, they should have continuous phase values. The phase continuity or surface continuity thus needs to be checked, either temporally or spatially, to find the invalid points and remove them.

#### 5.2 Current solutions

The RMSE [18, 19] is an effective metric to temporally detect noisy points. As discussed in Sect. 2.1, temporally consecutive phases are the same except for an amplification factor *h*. This property may not hold under noise, which, on the other hand, can be used to measure the noise level. The phases measured at different frequencies are first scaled to the highest frequency and averaged as follows,

$$\overline{\Phi}=\frac{1}{L}\sum_{l=0}^{L-1}{h}^{L-1-l}{\Phi}_{l},$$

and the temporal RMSE is then computed as

$$RMSE=\sqrt{\frac{1}{L}\sum_{l=0}^{L-1}{\left({h}^{L-1-l}{\Phi}_{l}-\overline{\Phi}\right)}^{2}}.$$

A point whose RMSE is larger than a threshold ${t}_{RMSE}$ is identified as a noisy point.
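
A sketch of this temporal RMSE check, assuming each ${\Phi}_{l}$ is first scaled to the highest frequency through the design relationship ${\Phi}_{l+1}=h{\Phi}_{l}$ (the exact normalization in [18, 19] may differ):

```python
import numpy as np

def temporal_rmse(Phi, h):
    """Phi: array of shape (L, H, W) of unwrapped phases; returns per-pixel RMSE."""
    L = Phi.shape[0]
    scale = (h ** np.arange(L - 1, -1, -1)).reshape(L, 1, 1)
    scaled = scale * Phi                 # all maps now at the highest frequency
    mean = scaled.mean(axis=0)
    return np.sqrt(((scaled - mean) ** 2).mean(axis=0))

# Noise-free phases give (near-)zero RMSE; a corrupted pixel stands out.
h, Lf = 3, 3
base = np.linspace(0, 2 * np.pi, 16).reshape(4, 4)
Phi = np.array([(h ** l) * base for l in range(Lf)])
Phi[0, 2, 2] += 1.0                      # inject a temporal inconsistency
rmse = temporal_rmse(Phi, h)
```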

Spatially, monotonicity [17, 18] and the second order derivative (${\Phi}_{xx}$) [20] are two metrics for noisy point detection. As described in Sect. 4.3, the monotonicity criterion is not practical for detecting noisy points in ${\Phi}_{L-1}$. The second order derivative has also been used for noisy point detection in FPP [20], denoted as

$${\Phi}_{xx}\left(x,y\right)=\Phi \left(x+1,y\right)-2\Phi \left(x,y\right)+\Phi \left(x-1,y\right),$$

where a point with a large $\left|{\Phi}_{xx}\right|$ is treated as noisy. However, object points on boundaries or sharp surfaces will be mistaken as noise and discarded, which is not desirable.

#### 5.3 Proposed solutions

Methods to detect invalid points temporally and spatially are proposed. Temporally, in order to avoid manual setting of ${t}_{RMSE}$ for the RMSE method, Otsu's method [24] is adopted to perform clustering-based automatic thresholding by minimizing the intra-class variance of the two groups. The threshold is then set as

$${t}_{RMSE}={\alpha}_{1}{t}_{Otsu},$$

where ${t}_{Otsu}$ is the threshold obtained from Otsu's method and ${\alpha}_{1}=1.2$ is used to preserve more details. Note that it is possible to use *k*-means clustering again, but it is slower than Otsu's method.
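
A sketch of this automatic threshold setting with Otsu's method [24]; the 256-bin histogram and the synthetic RMSE values are implementation choices for this illustration, not from the paper:

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    total = hist.sum()
    w0 = np.cumsum(hist)                 # pixels at or below each bin
    w1 = total - w0
    cum = np.cumsum(hist * centers)
    best_t, best_var = centers[0], -1.0
    for i in range(bins - 1):
        if w0[i] == 0 or w1[i] == 0:
            continue
        mu0 = cum[i] / w0[i]
        mu1 = (cum[-1] - cum[i]) / w1[i]
        between = w0[i] * w1[i] * (mu0 - mu1) ** 2   # between-class variance
        if between > best_var:
            best_var, best_t = between, centers[i]
    return best_t

rng = np.random.default_rng(1)
rmse = np.concatenate([rng.normal(0.05, 0.01, 900),   # good points
                       rng.normal(0.60, 0.05, 100)])  # noisy points
t_rmse = 1.2 * otsu_threshold(rmse)    # alpha_1 = 1.2 relaxes the threshold
noisy_mask = rmse > t_rmse
```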

Spatially, a point-cloud smoothness measure with automatic threshold setting is proposed. To deal with surface discontinuity, eight-connectivity is used. In other words, for each object point $\left({u}_{c},{v}_{c}\right)$, its smoothness with respect to all its eight connected neighbors, $\left({u}_{c}+\epsilon ,{v}_{c}+\eta \right)$ with $\left|\epsilon \right|\le 1$ and $\left|\eta \right|\le 1$, is considered. Normally, the smoothness measure is performed on phase values in the camera image, as in the monotonicity [17, 18] and the second order derivative [20] criteria. Because the phase is designed to increase in one direction, multiple thresholds are needed to cater to the different phase changes toward its eight neighbors. We instead directly measure smoothness on the object surface, i.e., the reconstructed point-cloud, which is more natural and requires a single threshold. The distance between an object point and its neighbor is computed in the world space as

$$d\left(\epsilon ,\eta \right)={\left\Vert {p}_{o}\left({u}_{c},{v}_{c}\right)-{p}_{o}\left({u}_{c}+\epsilon ,{v}_{c}+\eta \right)\right\Vert}_{2},$$

and a point is detected as noisy if its distances to all eight neighbors exceed a threshold that is set automatically.
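
A sketch of this world-space smoothness check; the synthetic plane, spike height and threshold value below are arbitrary choices for this illustration:

```python
import numpy as np

def neighbor_distances(P):
    """P: (H, W, 3) point cloud; returns per-pixel min distance to its 8 neighbors."""
    H, W, _ = P.shape
    d_min = np.full((H, W), np.inf)
    for de in (-1, 0, 1):
        for dn in (-1, 0, 1):
            if de == 0 and dn == 0:
                continue
            cr = slice(max(-de, 0), H - max(de, 0))   # center rows
            cc = slice(max(-dn, 0), W - max(dn, 0))   # center cols
            nr = slice(max(de, 0), H + min(de, 0))    # neighbor rows
            nc = slice(max(dn, 0), W + min(dn, 0))    # neighbor cols
            d = np.linalg.norm(P[cr, cc] - P[nr, nc], axis=2)
            np.minimum(d_min[cr, cc], d, out=d_min[cr, cc])
    return d_min

# Smooth tilted plane with one spike: the spike's world-space distances stand out.
yy, xx = np.mgrid[0:16, 0:16].astype(float)
P = np.dstack([xx, yy, 0.1 * xx])
P[8, 8, 2] += 10.0                       # an isolated noisy point
d = neighbor_distances(P)
noisy = d > 3.0                          # hypothetical threshold for this demo
```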

#### 5.4 Results

Figure 5(a) shows the top-view of the reconstructed model before the invalid point detection. Temporal and spatial noisy point detections are then performed, resulting in a cleaner and thus better result shown in Fig. 5(b). The point differences of corresponding phase maps of Fig. 5(a) and 5(b) are computed and shown in Fig. 5(c), where boundary zero difference regions are excluded to have a better demonstration of the detected noisy points.

## 6. Overall performance

To see the overall performance of the proposed valid point detection framework, it is compared with the frameworks proposed for the same purpose by Zhang [17] and Huang et al. [18]. Furthermore, the RMSE [19] and the second order derivative ${\Phi}_{xx}$ [20] are also included in the comparison. Two modified frameworks are thus implemented: Zhang's framework with the RMSE and ${\Phi}_{xx}$, and Huang et al.'s framework with ${\Phi}_{xx}$. The same angel figure is used to demonstrate the performances. Zhang's and Huang et al.'s frameworks are implemented by following their papers [17, 18]. The thresholds of the modulation are set as ${t}_{bL}=8$ and ${t}_{bH}=50$ for Zhang's framework and ${t}_{b}=8$ for Huang et al.'s framework. The threshold of the RMSE is set to the same value as in our proposed method, while ${t}_{2nd}=0.5$ is set for ${\Phi}_{xx}$ after careful observation.

Figure 6 shows the front-view results while Fig. 7 shows the top-view results. As shown in Fig. 6(a) and Fig. 7(a), invalid points still exist after applying Zhang's framework. The invalid points indicated by the green arrow are the unwrapping errors caused by the Gaussian smoothing of fringe patterns with high frequency. The invalid points indicated by the red arrow are misclassified background points, which are also present in Figs. 6(b)–6(d) and Figs. 7(b)–7(d), as indicated by the corresponding arrows. Noise detection with the RMSE and ${\Phi}_{xx}$ can reduce the error, but not thoroughly, as shown in Fig. 6(b) and Fig. 7(b). The result from Huang et al.'s framework is shown in Fig. 6(c) and Fig. 7(c), which is cleaner than that from Zhang's framework, but it still contains some invalid points, as the arrows indicate. Further processing with ${\Phi}_{xx}$, as shown in Fig. 6(d) and Fig. 7(d), cannot fully remove these invalid points. As shown in Fig. 6(e) and Fig. 7(e), the proposed framework produces the most satisfactory result, where the invalid points have been cleaned and details such as the boundary points are preserved. The computation time using MATLAB on a Dell OptiPlex with an Intel® Core i5-4590 CPU @ 3.30 GHz and 8.0 GB RAM is about 8 s for the proposed method, 2 s for Zhang's framework and the modified Zhang's framework, and 1 s for Huang et al.'s framework and the modified Huang et al.'s framework.

## 7. Conclusion

A new valid point detection framework is proposed after an evaluation of some existing techniques. The *k*-means clustering is adopted for object segmentation. It is automatic and able to classify object boundary points properly. Unwrapping errors caused by random noise around phase boundaries are corrected based on theoretical analysis. Noisy point detection methods in the temporal and spatial directions are proposed with automatic threshold settings. Experimental results have demonstrated good performance of the proposed framework, which produces clean point-cloud results and preserves object details.

## Acknowledgments

This work was partially supported by Multi-plAtform Game Innovation Centre (MAGIC), funded by the Singapore National Research Foundation under its IDM Futures Funding Initiative and administered by the Interactive & Digital Media Programme Office, Media Development Authority, and Zhejiang Provincial Natural Science Foundation (LY14F020014).

## References and links

**1. **S. S. Gorthi and P. Rastogi, “Fringe projection techniques: whither we are?” Opt. Lasers Eng. **48**(2), 133–140 (2010). [CrossRef]

**2. **Z. Wang, D. A. Nguyen, and J. C. Barnes, “Some practical considerations for fringe projection profilometry,” Opt. Lasers Eng. **48**(2), 218–225 (2010). [CrossRef]

**3. **V. Srinivasan, H. C. Liu, and M. Halioua, “Automated phase-measuring profilometry of 3-D diffuse objects,” Appl. Opt. **23**(18), 3105–3108 (1984). [CrossRef] [PubMed]

**4. **X. F. Meng, X. Peng, L. Z. Cai, A. M. Li, J. P. Guo, and Y. R. Wang, “Wavefront reconstruction and three-dimensional shape measurement by two-step dc-term-suppressed phase-shifted intensities,” Opt. Lett. **34**(8), 1210–1212 (2009). [CrossRef] [PubMed]

**5. **E. H. Kim, J. Hahn, H. Kim, and B. Lee, “Profilometry without phase unwrapping using multi-frequency and four-step phase-shift sinusoidal fringe projection,” Opt. Express **17**(10), 7818–7830 (2009). [CrossRef] [PubMed]

**6. **H. O. Saldner and J. M. Huntley, “Temporal phase unwrapping: application to surface profiling of discontinuous objects,” Appl. Opt. **36**(13), 2770–2775 (1997). [CrossRef] [PubMed]

**7. **H. O. Saldner and J. M. Huntley, “Profilometry using temporal phase unwrapping and a spatial light modulator-based fringe projector,” Opt. Eng. **36**(2), 610–615 (1997). [CrossRef]

**8. **S. Su and X. Lian, “Phase unwrapping algorithm based on fringe frequency analysis in Fourier-transform profilometry,” Opt. Eng. **40**(4), 637–643 (2001). [CrossRef]

**9. **A. Baldi, “Phase unwrapping by region growing,” Appl. Opt. **42**(14), 2498–2505 (2003). [CrossRef] [PubMed]

**10. **Y. Hung, L. Lin, H. Shang, and B. Park, “Practical three-dimensional computer vision techniques for full-field surface measurement,” Opt. Eng. **39**(1), 143–149 (2000).

**11. **H. Liu, W. Su, K. Reichard, and S. Yin, “Calibration-based phase-shifting projected fringe profilometry for accurate absolute 3D surface profile measurement,” Opt. Commun. **216**(1), 65–80 (2003). [CrossRef]

**12. **S. Zhang and S. T. Yau, “High-resolution, real-time 3D absolute coordinate measurement based on a phase-shifting method,” Opt. Express **14**(7), 2644–2649 (2006). [CrossRef] [PubMed]

**13. **L. Huang, P. S. Chua, and A. Asundi, “Least-squares calibration method for fringe projection profilometry considering camera lens distortion,” Appl. Opt. **49**(9), 1539–1548 (2010). [CrossRef] [PubMed]

**14. **D. Moreno and G. Taubin, “Simple, accurate, and robust projector-camera calibration,” in *2012 Second International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission (3DIMPVT)* (IEEE, 2012), pp. 464–471. [CrossRef]

**15. **H. Guo, H. He, and M. Chen, “Gamma correction for digital fringe projection profilometry,” Appl. Opt. **43**(14), 2906–2914 (2004). [CrossRef] [PubMed]

**16. **B. Pan, Q. Kemao, L. Huang, and A. Asundi, “Phase error analysis and compensation for nonsinusoidal waveforms in phase-shifting digital fringe projection profilometry,” Opt. Lett. **34**(4), 416–418 (2009). [CrossRef] [PubMed]

**17. **S. Zhang, “Phase unwrapping error reduction framework for a multiple-wavelength phase-shifting algorithm,” Opt. Eng. **48**(10), 105601 (2009). [CrossRef]

**18. **L. Huang and A. K. Asundi, “Phase invalidity identification framework with the temporal phase unwrapping methods,” Meas. Sci. Technol. **22**(3), 035304 (2011). [CrossRef]

**19. **F. Chen, X. Su, and L. Xiang, “Analysis and identification of phase error in phase measuring profilometry,” Opt. Express **18**(11), 11300–11307 (2010). [CrossRef] [PubMed]

**20. **L. Song, Y. Chang, Z. Li, P. Wang, G. Xing, and J. Xi, “Application of global phase filtering method in multi frequency measurement,” Opt. Express **22**(11), 13641–13647 (2014). [CrossRef] [PubMed]

**21. **B. S. Everitt, S. Landau, M. Leese, and D. Stahl, *Cluster Analysis*, 5th edition (Wiley, 2011).

**22. **Q. Kemao, *Windowed Fringe Pattern Analysis* (SPIE, 2013).

**23. **R. E. Walpole, R. H. Myers, S. L. Myers, and K. Ye, *Probability and Statistics for Engineers and Scientists*, 8th edition (Pearson Prentice Hall, 2007).

**24. **N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Trans. Syst. Man Cybern. **9**(1), 62–66 (1979). [CrossRef]