## Abstract

This paper presents a non-iterative phase retrieval method for randomly phase-shifted fringe images. By combining the hyperaccurate least squares ellipse fitting method with the subspace method (usually called principal component analysis), a fast and accurate phase retrieval algorithm is realized. The proposed method is simple, flexible, and accurate. It can be easily coded without iteration, initial guesses, or tuning parameters. Its flexibility comes from the fact that totally random phase-shifting steps and any number of fringe images greater than two are acceptable without any special treatment. Finally, it is accurate because the hyperaccurate least squares method and the modified subspace method enable phase retrieval with small error, as shown by the simulations. A MATLAB code, which is used in the experimental section, is provided within the paper to demonstrate the method's simplicity and ease of use.

© 2016 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. INTRODUCTION

Phase-shifting interferometry (PSI) is a widely used measurement technique that acquires the objective phase from a set of phase-shifted interferograms [1–7]. The objective phase, which contains the desired information, is retrieved from $M$ fringe images using the structure imposed on them by the phase shifting. Although traditional phase retrieval algorithms require precise information about the shifting steps, such precision is hard to achieve in practice because of implementation errors in the measuring system. Therefore, many methods for estimating the unknown phase-shifting steps and/or the objective phase from randomly observed data have been investigated [8–23].

Among the estimation techniques, the ellipse fitting method is one of the earliest ideas for determining and correcting errors in the shifting steps [24–26]. If the measured data are represented by two quadrature components, the data are ideally distributed on a circle in the two-dimensional (2D) space. Then, the arctan operation can reconstruct the objective phase. However, those components are not in quadrature when the shifting steps contain errors. In such a case, the data distribution forms an ellipse instead of a circle, which results in patterned noise. The ellipse fitting method converts the ellipse back into the circle so that the phase extraction error is eliminated. This idea has recently attracted several authors to investigate it further because of its powerful ability to correct phase extraction errors [27–32].

Although the ellipse fitting method is simple, the preprocessing required to apply it is not trivial. The $M$ fringe images must first be represented by two images corresponding to the two (hopefully in quadrature) components. In general, however, there are infinitely many ways to project the $M$-dimensional space onto a 2D subspace. In other words, one has to choose how to construct the 2D representation of the data (frequently called the Lissajous figure). The performance of the ellipse fitting method depends on this construction, and thus it should be chosen carefully.

Another relatively new family of phase retrieval algorithms is the subspace-based method, usually called principal component analysis (PCA) or quadrature component analysis (QCA) [33–38]. We call them subspace methods because their principle is to find the subspace, within an $M$-dimensional vector space, in which the noiseless fringe images lie (see Section 2). They can extract the objective phase from fringe images without any explicit formula or knowledge of the phase shifts. The advantages of the subspace method include its low computational cost, a non-iterative nature that requires no initial guess, easy treatment of spatially varying background illumination and contrast, and ease of implementation. Therefore, the subspace method is a promising approach for generalized PSI.

Although such advantages are quite attractive, there are some situations in which the subspace method results in notable errors. Such a situation occurs when the spatial variation of fringes is small within the fringe images [34–36]. This accuracy degradation is called the number of fringes limitation, and there have been several attempts to overcome it [35–37]. However, existing methods either partially solve the problem or require considerably more computational time, which cancels out some of the advantages.

In this paper, a non-iterative algorithm to reconstruct the objective phase from randomly phase-shifted fringes is proposed. It realizes accurate reconstruction by combining the ellipse fitting method with the subspace method, and retaining the simplicity and flexibility of both methods. The main advantages of the proposed method are as follows:

- • Flexibility. Any number of fringe images greater than two is acceptable as input. In addition, the phase shifts of the fringes can be totally random. Therefore, the proposed method can be applied automatically to a broad range of situations.
- • Accuracy. For the ellipse fitting, the hyperaccurate least squares method is used so that statistical bias up to second order noise terms is eliminated. Moreover, a modification on the subspace method enables random noise reduction [39], which further improves the accuracy of the ellipse fitting. Therefore, this combination can achieve highly accurate phase retrieval.

This paper is organized in six sections. In Section 2, the subspace method and its modified version are briefly reviewed. In addition, the cause of the number of fringes limitation is discussed. The ellipse fitting method is then explained in Section 3. We provide an explicit formula for direct conversion from a point on an ellipse to its phase on the corresponding circle. Furthermore, the hyper least squares method is introduced in an implementation-friendly form. The proposed method is summarized in Section 4, where a MATLAB code is shown to demonstrate its simplicity. The proposed method is evaluated in Section 5, and the paper is concluded in Section 6.

## 2. SUBSPACE METHOD FOR PHASE RETRIEVAL

Let the $m$-th image of $M$ phase-shifted fringe images be expressed as

$${I}_{i,j}^{[m]}={B}_{i,j}+{A}_{i,j}\mathrm{cos}({\phi}_{i,j}+{\delta}^{[m]}),$$

where $(i,j)$ is the pixel index, ${B}_{i,j}$ is the background illumination, ${A}_{i,j}$ is the fringe contrast, ${\phi}_{i,j}$ is the objective phase, and ${\delta}^{[m]}$ is the $m$-th phase-shifting step.

#### A. Data Vector Constructed from Fringes and Its Subspace

The subspace-based phase retrieval method, such as PCA and QCA, is based on the relation among phase-shifted fringes. For each pixel of the images, an $M$-dimensional column vector can be constructed as

where $[{z}_{1},{z}_{2},\dots ,{z}_{M}]$ is the horizontal concatenation of $\{{z}_{m}\}$, and ${z}^{T}$ is the transpose of $z$.

Since Eq. (1) is equivalently written as

By subtracting the mean value ${\overline{d}}_{i,j}$ from ${d}_{i,j}$ as ${d}_{i,j}-{\overline{d}}_{i,j}$, the subspace spanned by $o$ is eliminated because subtraction of the mean is the orthogonal projection onto the orthogonal complement of that subspace:

#### B. Data Matrix and Its Singular Value Decomposition

For extracting the subspace in which data vectors lie, one can apply the singular value decomposition (SVD) to the corresponding data matrix.

A data matrix is constructed by concatenating ${d}_{i,j}$ for every pixel horizontally,

whose size is $M\times N$, where ${N}_{v}$ and ${N}_{h}$ are, respectively, the numbers of vertical and horizontal pixels, and $N(={N}_{v}{N}_{h})$ is the total number of pixels. Since the order of concatenation is not important, one may construct $D$ in a different manner.

The SVD of the matrix $D$ can be written as

where $U$ and $V$ are (column) orthonormal matrices whose sizes are, respectively, $M\times M$ and $N\times M$, and $\mathrm{\Sigma}$ is a diagonal matrix consisting of the singular values as $\mathrm{\Sigma}=\mathrm{diag}({\sigma}_{1},{\sigma}_{2},\dots ,{\sigma}_{M})$ with ${\sigma}_{1}\ge {\sigma}_{2}\ge \cdots \ge {\sigma}_{M}$. Then, a basis of the subspace can be obtained as column vectors of $U$. It can also be obtained by PCA, which is an eigendecomposition of $D{D}^{T}$, because $D{D}^{T}=U{\mathrm{\Sigma}}^{2}{U}^{T}$.

If the mean values of the data vectors are subtracted before constructing the data matrix, a basis of the subspace spanned by $c$ and $s$ is obtained as the two leading vectors of $U$ that correspond to ${\sigma}_{1}$ and ${\sigma}_{2}$. In addition, if $c$ and $s$ are orthogonal and if the background illumination is correctly eliminated by the mean subtraction, then the objective phase can be reconstructed from the two leading vectors of $V$ or $\mathrm{\Sigma}V$ as in Eq. (8); the former is called PCA and the latter QCA in the literature on subspace methods. Figure 1 illustrates the procedure of the subspace method. The subspace methods are quite fast and easy to implement because only the matrix construction and its (reduced) SVD are needed for the computation.
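The procedure above (data matrix construction, mean subtraction, SVD, and the arctan of the two leading components) can be sketched in a few lines. The paper's own implementation is the MATLAB code of Fig. 3; the following is an illustrative NumPy version, where the function name `subspace_phase` and the QCA-style scaling by the singular values are choices made here for the sketch:

```python
import numpy as np

def subspace_phase(frames):
    """Retrieve a wrapped phase map from M phase-shifted fringe images.

    frames: array of shape (M, Nv, Nh). Returns a phase map of shape (Nv, Nh),
    determined only up to a global sign and offset (see the sign ambiguity
    discussed in Section 4).
    """
    M, Nv, Nh = frames.shape
    D = frames.reshape(M, Nv * Nh).astype(float)   # M x N data matrix
    D = D - D.mean(axis=0, keepdims=True)          # per-pixel mean subtraction
    U, S, Vt = np.linalg.svd(D, full_matrices=False)
    # The two leading right singular vectors span the (c, s) subspace.
    # PCA uses the rows of Vt directly; QCA scales them by the singular values.
    c, s = S[0] * Vt[0], S[1] * Vt[1]              # QCA variant
    return np.arctan2(s, c).reshape(Nv, Nh)
```

For well-conditioned data (many fringes, equispaced shifts), the output agrees with the true phase up to the global sign and offset; the ellipse fitting of Section 3 is what corrects the remaining mismatch when the conditions of Section 2.D are violated.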

#### C. Modified Subspace Method for Reducing Random Error

Recently, it was shown that the subspace method with slight modification can reduce random error [39]. It will be briefly reviewed here because such a noise reduction property can effectively improve the accuracy of the ellipse fitting described in Section 3.

A data vector ${d}_{i,j}$ was defined as an $M$-dimensional vector in Eq. (2). It can be embedded into a higher dimensional vector space by considering adjacent pixels. For example, a $2M$-dimensional extended data vector can be constructed by concatenating two data vectors vertically,

where ${d}_{i+1,j}$ is the data vector corresponding to the pixel adjacent to that of ${d}_{i,j}$, and the superscript of ${d}_{i,j}^{(2)}$ indicates how many pixels are incorporated. In general, one can use $L$ adjacent pixels to construct an $LM$-dimensional extended data vector ${d}_{i,j}^{(L)}$. That is, ${d}_{i,j}^{(1)}={d}_{i,j}$ corresponds to the ordinary data vector in Eq. (2). In this paper, the following two cases, corresponding to $L=5$ and $L=9$, are considered in the experiments:

By concatenating the above extended data vectors ${d}_{i,j}^{(L)}$ horizontally as in Eq. (9), an extended data matrix, whose size is $LM\times N$, can be constructed. Then, applying a subspace method to the extended data matrix reconstructs the objective phase with less random noise, as shown in [39].
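The construction of the extended data matrix can be illustrated as follows. The paper uses neighborhoods of $L=5$ and $L=9$ pixels; for brevity, this NumPy sketch stacks a vertical run of $L$ pixels instead (the helper name `extended_data_matrix` is ours), which yields the same $LM\times N$ structure:

```python
import numpy as np

def extended_data_matrix(frames, L=2):
    """Build an LM x N extended data matrix by vertically stacking the data
    vectors of L adjacent pixels (here: L consecutive pixels in a column,
    one simple choice of neighborhood).

    frames: (M, Nv, Nh). Rows of pixels that would run off the image are
    dropped, so N here is (Nv - L + 1) * Nh.
    """
    M, Nv, Nh = frames.shape
    blocks = [frames[:, k:Nv - L + 1 + k, :].reshape(M, -1) for k in range(L)]
    return np.concatenate(blocks, axis=0)   # shape (L*M, (Nv-L+1)*Nh)
```

Each column of the result stacks $d_{i,j}$ on top of $d_{i+1,j}, \dots$, exactly as in the $L=2$ example above; applying the SVD-based procedure to this matrix instead of $D$ gives the noise-reduced reconstruction of [39].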

#### D. Conditions under Which the Subspace Method Successfully Works

As mentioned at the end of Section 2.B, there are two conditions that should be satisfied to successfully reconstruct the phase by the subspace method.

The first condition is on the background subtraction. Before performing SVD, the mean values of each data vector are subtracted, which can be regarded as the orthogonal projection in Eq. (7). However, such mean subtraction may not correctly eliminate only the background illumination $\{{B}_{i,j}\}$. Let us explain the situation by a simple example. When a set of data is $\{B+\mathrm{cos}(0),B+\mathrm{cos}(\pi /2),B+\mathrm{cos}(-\pi /2)\}=\{B+1,B+0,B+0\}$, its mean is $(B+1/3)$. Then, the mean subtraction yields $\{\mathrm{cos}(0)-1/3,\mathrm{cos}(\pi /2)-1/3,\mathrm{cos}(-\pi /2)-1/3\}$. Note that the data points are shifted by the mean of true fringes, $1/3$, even though the background illumination $B$ is perfectly removed. This shifting phenomenon causes a mismatch of the centroid of data and the origin of the subspace. Since the phase is defined relative to the origin, an error in the origin of the estimated coordinate axes causes an error in the reconstructed phase (see Fig. 2).
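The shift by the fringe mean in this example is easy to verify numerically; a short NumPy check of the numbers above:

```python
import numpy as np

B = 2.0  # any background value; it cancels exactly
data = B + np.cos(np.array([0.0, np.pi / 2, -np.pi / 2]))   # {B+1, B+0, B+0}
centered = data - data.mean()
# The mean is B + 1/3: B is removed by the subtraction, but every sample is
# also shifted by the mean of the true fringes, 1/3, displacing the centroid
# of the data from the origin of the subspace.
```

The residual offset of $1/3$ is exactly the centroid mismatch that the ellipse fitting of Section 3 later absorbs into the estimated ellipse center.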

The second condition is the orthogonality of $c$ and $s$. Since SVD and PCA are performed with the constraint that $U$ and $V$ must be orthonormal matrices, subspace methods estimate $c$ and $s$ as orthonormal vectors. Nevertheless, $c$ and $s$ are not orthogonal in general because their angle, or inner product, depends on the set of phase shifts $\{{\delta}^{[m]}\}$ as in Eqs. (5) and (6). This mismatch of the angle between basis elements of the subspace also causes errors in the obtained phase.

These conditions are more easily satisfied when the number of interferograms increases, which is likely why some of the literature claims that the subspace method requires many interferograms. Although the modified subspace method in Section 2.C can effectively reduce random noise, the error related to the above conditions cannot be eliminated. To avoid such an error, estimation of the true origin is necessary. In addition, an oblique SVD should be performed to relax the orthogonality constraint. However, these remedies usually require iterative procedures that may be time consuming and unreliable. Therefore, this paper does not try to improve the subspace method itself. Instead, the hyperaccurate ellipse fitting method is combined with the subspace method to eliminate the above-mentioned error without iteration.

## 3. ELLIPSE FITTING FOR PHASE RETRIEVAL AND HYPERACCURATE LEAST SQUARES METHOD

As discussed in the previous section, the phase obtained by the subspace method may not be accurate because of its dependency on the two conditions on the data. On the other hand, the ellipse fitting method, which is another branch of phase retrieval algorithms, can overcome the two mismatches of the assumptions above. However, its performance depends on how the so-called Lissajous figure, to which an ellipse fitting method is applied, is constructed. Moreover, the ordinary least squares method suffers from a large statistical bias of the estimation, which limits the accuracy.

In this paper, the subspace method and hyperaccurate least squares ellipse fitting method are combined. This combination is able to overcome the above-mentioned weaknesses and achieve fast, easy, and accurate phase retrieval.

#### A. Ordinary Least Squares Ellipse Fitting

A general ellipse in the 2D space denoted by the $(x,y)$-coordinate is represented by

Although ideal points on an ellipse obey Eq. (16), an observed set of points on an ellipse may not satisfy it because of unavoidable observation error. Therefore, some criterion is necessary to estimate the optimal parameters. Arguably the most popular criterion is the mean squared error

where the column vector,

#### B. Ellipse Fitting Based Phase Retrieval

The ellipse fitting methods have been used in several articles to estimate unknown parameters in PSI, such as the phase-shifting steps $\{{\delta}^{[m]}\}$. In this paper, ellipse fitting is applied to the result of the subspace method so that the estimation error caused by the mismatch of the two conditions in Section 2.D is eliminated.

As in Eqs. (4) and (8), the phase is reconstructed under the assumption that the two components $C$ and $S$ (often written as the numerator and denominator inside the arctan operation in the literature) are obtained from two sinusoidal functions with the same amplitude and a phase difference of exactly $\pi /2$. Reconstruction error occurs when this assumption is violated, and thus compensating the assumption mismatch is necessary to increase the accuracy. After applying the subspace method, every data vector is projected onto the 2D subspace, which is ideally spanned by $c$ and $s$. In that subspace, the distribution of the data vectors forms a circle when the above assumption is satisfied. On the other hand, it becomes an ellipse when the assumption is violated. If the ellipse can be converted back into the circle, the reconstruction error of the phase can be eliminated. Therefore, the ellipse fitting method fits an ellipse to the data and converts it to the circle for error correction, as illustrated in Fig. 2.

The most popular method to fit an ellipse to data is the ordinary least squares method described in the last subsection. After estimating the parameter $\alpha $ in Eq. (17), the conversion is calculated from it. Although one may convert the ellipse into the circle first and reconstruct the phase later, it requires additional computations. Therefore, a direct conversion of data into the phase is provided.

The corrected phase $\phi $ is obtained directly from a point $(x,y)$ on the ellipse as

From the above equations, it is obvious that the accuracy of the estimated parameter $\alpha $ determines the accuracy of the reconstructed phase. However, the ordinary least squares method gives a statistically biased solution that limits the accuracy. Therefore, an accurate ellipse fitting method is necessary for a better reconstruction.
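As an illustration of Sections 3.A and 3.B, the following NumPy sketch fits the conic $a{x}^{2}+bxy+c{y}^{2}+dx+ey+f=0$ by ordinary least squares under the unit-norm constraint (the smallest right singular vector of the design matrix) and then converts ellipse points to a phase by re-centering and whitening the quadratic form. This whitening route is one standard way to realize the ellipse-to-circle conversion; it is not the paper's direct formula of Eq. (26), and the function names are ours:

```python
import numpy as np

def fit_ellipse_ols(x, y):
    """Ordinary least squares conic fit a x^2 + b xy + c y^2 + d x + e y + f = 0
    under ||alpha|| = 1: the smallest right singular vector of the design matrix."""
    X = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[-1]                          # alpha = (a, b, c, d, e, f)

def ellipse_phase(x, y, alpha):
    """Map points on the fitted ellipse to a phase on the corresponding circle:
    translate to the ellipse center, then whiten the quadratic form."""
    a, b, c, d, e, f = alpha if (alpha[0] + alpha[2]) > 0 else -alpha
    Q = np.array([[a, b / 2.0], [b / 2.0, c]])
    center = np.linalg.solve(2.0 * Q, -np.array([d, e]))  # gradient = 0 there
    Lf = np.linalg.cholesky(Q)             # Q = Lf @ Lf.T (positive definite)
    u, v = Lf.T @ np.vstack([x - center[0], y - center[1]])
    return np.arctan2(v, u)
```

If the data were generated as an affine image of a circle, the whitened coordinates lie on a circle again, so the returned phase agrees with the true one up to the usual global sign and offset. The statistical bias of this ordinary fit under noise is exactly what the hyper least squares method of Section 3.C removes.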

#### C. Hyper Least Squares Ellipse Fitting

In the ordinary least squares ellipse fitting, ${\Vert \alpha \Vert}^{2}=1$ is chosen as the constraint to avoid the trivial solution. On the other hand, several other constraints have been proposed to achieve a better estimation. Variations of the constraint can be written in the form of the weighted norm constraint ${\Vert \alpha \Vert}_{W}^{2}=1$, where $W$ is a symmetric weighting matrix, and ${\Vert \alpha \Vert}_{W}^{2}={\alpha}^{T}W\alpha $. It is known that a proper choice of $W$ can dramatically improve the estimation accuracy. In this paper, the hyper least squares ellipse fitting method proposed in [40,41] is used.

The estimation error of the ordinary least squares ellipse fitting results from the second order bias [40–42]. The hyper least squares method is designed so that the statistical bias up to second order noise terms is completely eliminated. Such accurate estimation is realized by the choice of the weighting matrix $W$, which is based on an error analysis of the least squares estimators. Although the error analysis is important for understanding how the hyper least squares method achieves high accuracy, the details are omitted in this paper because they would require several pages of explanation. Here, only the information necessary for the implementation is briefly introduced.

To cancel the second order bias, $W$ is chosen according to the explicit form of the bias. Nevertheless, the original form of the hyper least squares method is exceedingly complicated, which may require much more computational time than the ordinary least squares method. Therefore, an approximated version, which is called the semi-hyper least squares method [40], is chosen as the weight in the present paper. The approximated weighting matrix is defined as

Then, the semi-hyper least squares method is to find

Note that, for numerical computation, it is better to calculate the eigenvector for the eigenvalue with the largest absolute value of the generalized eigenvalue problem of $(W,X)$,

because some linear algebra libraries may not accept $(X,W)$ owing to the structure of $W$, which is not positive definite.

## 4. PROPOSED METHOD: HEFS

Based on the above methods, we propose a simple, flexible, and accurate phase retrieval algorithm by combining the ellipse fitting method with the subspace method. We call it the hyper ellipse fitting in subspace (HEFS) method, which is defined as follows:

- 1. Import $M$ fringe images $\{{I}_{i,j}^{[1]},{I}_{i,j}^{[2]},\dots ,{I}_{i,j}^{[M]}\}$.
- 2. Concatenate the fringes for each pixel to construct the $M$-dimensional data vectors $\{{d}_{i,j}\}$ as in Eq. (2).
- 3. Construct the $LM$-dimensional extended data vectors $\{{d}_{i,j}^{(L)}\}$ as described in Section 2.C (for $L=1$, ${d}_{i,j}^{(1)}={d}_{i,j}$).
- 4. Construct an $LM\times N$ data matrix $D$ by horizontally concatenating the mean-subtracted data vectors $\{{d}_{i,j}^{(L)}-{\overline{d}}_{i,j}^{(L)}\}$ as in Eq. (9).
- 5. Perform SVD to decompose the data matrix as $D=U\mathrm{\Sigma}{V}^{T}$.
- 6. Fit an ellipse to the projected data obtained as the two leading vectors of $V$. The optimal set of parameters ${\alpha}^{\star}$ is obtained as the eigenvector for the eigenvalue with the largest absolute value of the generalized eigenvalue problem in Eq. (37).
- 7. Reconstruct the objective wrapped phase $\phi $ from $V$ and ${\alpha}^{\star}$ by using Eq. (26).
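Step 6 amounts to the generalized eigenvalue problem of $(W,X)$ noted at the end of Section 3.C. The explicit semi-hyper weight $W$ of [40] is not reproduced in this excerpt, so the following NumPy sketch shows only the numerical recipe, using $W=I$ as a stand-in (with which the fit reduces to the ordinary least squares solution); the function name is ours:

```python
import numpy as np

def solve_weighted_fit(X, W):
    """Eigenvector for the eigenvalue of largest absolute value of the
    generalized problem  W a = lambda X a.  This is the (W, X) ordering
    recommended in the text, since W need not be positive definite."""
    lam, vecs = np.linalg.eig(np.linalg.solve(X, W))   # eigenpairs of X^{-1} W
    a = np.real(vecs[:, np.argmax(np.abs(lam))])
    return a / np.linalg.norm(a)
```

With the true semi-hyper weight substituted for the identity, the same routine yields the bias-eliminated estimate ${\alpha}^{\star}$ used in step 7.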

For Step 5, one may calculate $V$ by multiplying $D$ by ${\mathrm{\Sigma}}^{-1}{U}^{T}$ from the left, where $U$ and $\mathrm{\Sigma}$ are obtained as in Eq. (11). This strategy is useful when the images contain an untrusted region where no fringe pattern is available. In that case, $U$ and $\mathrm{\Sigma}$ are obtained using only data in the trusted region, and then $V$ is calculated for the whole region [33,37]. Note that determining the global sign requires additional information, as is usual for asynchronous phase retrieval algorithms, as claimed in [37].

An example implementation of steps 5–7 in MATLAB is shown in Fig. 3, where the mean subtraction in step 4 is executed altogether at line 8, independently from the construction of the data matrix $D$. This code illustrates the simplicity and ease of the proposed method, even though the underlying theory may seem quite involved.

The proposed method can be regarded as simple, flexible, and accurate for several reasons. It is a simple non-iterative algorithm that can be coded in several tens of lines of MATLAB, as shown in Fig. 3. The algorithm can be performed with any number of fringe images greater than two, without any treatment specific to that number. Moreover, the phase shifts can be totally random since no assumption is made on them. Therefore, automatic execution is possible because there is no parameter that must be tuned by hand. Furthermore, the hyperaccurate ellipse fitting method eliminates the statistical bias in the parameter estimation up to second order noise terms. Its non-iterative and global nature ensures reliable estimation because no initial guess is required. The modification of the data matrix of the subspace method also enables reduction of the random-noise-related error.

## 5. NUMERICAL EXPERIMENTS

In this section, the effectiveness of the proposed method is examined numerically. Fringe images were simulated by

$${I}_{i,j}^{[m]}=B+A\mathrm{cos}({\phi}_{i,j}+{\delta}^{[m]})+\sigma {\epsilon}_{i,j}^{[m]},$$

where $\epsilon $ is Gaussian random noise with a standard deviation of one, and $\sigma \ge 0$ is the standard deviation of the added noise.

To show the importance of the hyper least squares method, the ordinary least squares method was also tested. The ordinary least squares version of the proposed method will be called EFS, omitting "hyper" from HEFS. In addition, the parameters were chosen so that the differences among the retrieval methods were more apparent, because the methods may perform similarly on well-conditioned data. The boundary of the simulated data was excluded from the retrieval process, as in [39], to avoid a complicated discussion that is beyond the scope of this paper.

#### A. Illustrative Example

First, an example is shown to demonstrate how the proposed HEFS method works in principle. For the parameters in Eq. (38), $B=2$, $A=1$, and $\phi ={x}_{1}^{2}+{x}_{2}^{2}$ were chosen, where $({x}_{1},{x}_{2})\in [-1,1]\times [-1,1]$, and the image size was $300\times 300$. Three fringe images were generated with randomly chosen shifting steps ${\delta}^{[m]}\in \{0.8817,2.2198,3.6285\}$. Here, a small amount of noise $(\sigma =0.01)$ was added to reveal the difference, because the two ellipse fitting methods provide the same result for noiseless data.

Figure 4 shows the data distributions in the 2D subspace. The root mean square error (RMSE) is also shown in the subcaptions. As a reference, the RMSE of the advanced iterative algorithm (AIA) [11] with the best initial guess was 0.0097 rad for this example. Since data were available for only part of the circle, the resulting distribution of PCA was highly deformed, which is the cause of the so-called number of fringes limitation. Although QCA obtained a better result in this case, it may worsen the accuracy in other cases, as will be shown in Section 5.C. The ellipse fitting methods were applied in the subspace obtained by PCA, as shown in the bottom row. As in Fig. 4(e), the ordinary least squares method could not obtain a reasonable fit in this example because of the statistical bias. For such an erroneous estimate, the correction from ellipse to circle can deteriorate the accuracy, as shown in Fig. 4(f). In contrast, the hyper ellipse fitting method provided a quite natural fit, which resulted in a roughly dozen times smaller RMSE (note that the RMSE cannot be zero because of the noise, which had a standard deviation of $\sigma =0.01$).

This example clearly illustrates that using the hyper least squares method instead of the ordinary one is quite important to accurately retrieve the phase. In this subsection, the data matrix was not extended (i.e., $L=1$). As shown in the subsequent simulations, the noise reduction property of the modified subspace method explained in Section 2.C can further improve the accuracy of the ellipse fitting and overall performance.

#### B. Equispaced Phase Shifts

To examine the performance of the proposed method, it was applied to equally phase-shifted fringe images, that is,

where $m=1,2,\dots ,M$.

The reason to consider equispaced shifting steps here is that the standard $M$-step phase-shifting algorithm [1],
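For reference, with equispaced steps ${\delta}^{[m]}=2\pi (m-1)/M$ (one common convention; the paper's exact formula and sign convention are not reproduced in this excerpt), the standard $M$-step algorithm reduces to an arctangent of sine- and cosine-weighted sums over the frames. A NumPy sketch:

```python
import numpy as np

def m_step_phase(frames, M):
    """Standard M-step phase-shifting formula for equispaced shifts
    delta_m = 2*pi*(m-1)/M, under one common sign convention."""
    delta = 2 * np.pi * np.arange(M) / M
    num = -np.tensordot(np.sin(delta), frames, axes=(0, 0))  # -sum_m I_m sin(d_m)
    den = np.tensordot(np.cos(delta), frames, axes=(0, 0))   #  sum_m I_m cos(d_m)
    return np.arctan2(num, den)
```

For noiseless data with constant background, the weighted sums cancel $B$ exactly and recover $\phi$ (wrapped) exactly, which is why this algorithm serves as the baseline in Fig. 5.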

Figure 5 shows the RMSE of each method for different noise levels and numbers of fringe images. Here, the phase was set to $\phi =2({x}_{1}^{2}+{x}_{2}^{2})$ because QCA could obtain an RMSE similar to that of the $M$-step algorithm in this case. Without modifying the subspace method $(L=1)$, QCA performed best, improving on PCA for small noise levels, say, $\sigma =0.1,0.2$. Although the proposed HEFS performed similarly to QCA for $\sigma =0.1,0.2$, it obtained quite inaccurate results for larger noise levels. This is because the ellipse fitting did not work properly under such noise, which is even more apparent for EFS. On the other hand, the accuracy was dramatically improved by the modification of the subspace method $(L=5,9)$. The accuracy improvement was limited for PCA and EFS, respectively, by the number of fringes limitation and the statistical bias of the ellipse fitting. HEFS, however, achieved accurate results without such limitations. Note that the RMSE of HEFS was less than half that of the $M$-step phase-shifting algorithm, which means one can benefit from using the proposed method (owing to its denoising and bias elimination properties) even when the phase shifts are exactly equispaced and known.

#### C. Random Phase Shifts

The proposed method was also applied to randomly phase-shifted fringes, where the phase shifts $\{{\delta}^{[m]}\}$ were drawn randomly from the uniform distribution on $[0,2\pi ]$. The other parameters were the same as in the previous subsection. Since the $M$-step algorithm cannot be applied because of the randomness of the shifts, AIA was used as a reference.

Figure 6 shows the mean values of the RMSE calculated from 100 trials, because the fringe images were generated in a totally random manner. For easier interpretation, outliers with RMSEs greater than a standard deviation from the mean RMSE were excluded. Although QCA was better than PCA in the previous subsection, this was not always true in this situation. These results indicate that QCA is not a direct solution to the accuracy issue of PCA. In contrast, HEFS with the modified subspace method was able to improve the accuracy.

Table 1 shows the computational times of HEFS and AIA, measured with an Intel Core i7 3.0 GHz processor using five fringe images ($M=5$) with an image size of $300\times 300$ (as in the other simulations in this section). The construction of each data matrix was also included. Since HEFS is non-iterative, its computational time is short compared to that of the iterative algorithm, AIA.

In these simulations, $B=2$ and $A=1$ were chosen so that the assumptions of AIA were fulfilled, which allows an ideal comparison. However, such a situation is highly unrealistic because $B$ and $A$ vary spatially in practice. The variation of the illumination should affect the accuracy of the ellipse fitting, and thus its effect must be investigated. Moreover, the modification of step 5 of the proposed method to handle untrusted regions (mentioned in Section 4) must be considered for practical use. These points will be addressed in future work, along with a verification using real data.

## 6. CONCLUSIONS

In this paper, a phase retrieval algorithm named HEFS is proposed. It combines the ellipse fitting method with the subspace method to benefit from the advantages of both. Since each method is non-iterative, HEFS can be executed faster than an iterative algorithm. Its non-iterative nature also allows an easy implementation, as shown in Fig. 3. Moreover, the modification of the subspace method and the hyper least squares method enable accurate and stable phase retrieval. The numerical experiments indicated that the proposed method can retrieve the phase with less error. Therefore, the proposed method should obtain a similar accuracy from fewer fringe images, or a more accurate result from the same number of fringe images.

It is known that some iterative ellipse fitting methods achieve more accurate results than the hyper least squares method [42–44]. Although they were not chosen in this paper because of their iterative nature, they should be chosen when accuracy is more important than simplicity and computational time. Moreover, combining the proposed method with other methods, as in [45,46], can be the next step toward improving the overall accuracy.

## Funding

Japan Society for the Promotion of Science (JSPS) (15J08043, 16J06772)

## REFERENCES

**1. **D. Malacara, M. Servín, and Z. Malacara, *Interferogram Analysis For Optical Testing*, 2nd ed. (CRC, 2005).

**2. **H. Schreiber and J. H. Bruning, “Phase shifting interferometry,” in *Optical Shop Testing* (Wiley, 2006), pp. 547–666.

**3. **Y.-Y. Cheng and J. C. Wyant, “Multiple-wavelength phase-shifting interferometry,” Appl. Opt. **24**, 804–807 (1985). [CrossRef]

**4. **H. Medecki, E. Tejnil, K. A. Goldberg, and J. Bokor, “Phase-shifting point diffraction interferometer,” Opt. Lett. **21**, 1526–1528 (1996). [CrossRef]

**5. **I. Yamaguchi and T. Zhang, “Phase-shifting digital holography,” Opt. Lett. **22**, 1268–1270 (1997). [CrossRef]

**6. **Y. Awatsuji, M. Sasada, and T. Kubota, “Parallel quasi-phase-shifting digital holography,” Appl. Phys. Lett. **85**, 1069–1071 (2004). [CrossRef]

**7. **K. Ishikawa, K. Yatabe, N. Chitanont, Y. Ikeda, Y. Oikawa, T. Onuma, H. Niwa, and M. Yoshii, “High-speed imaging of sound using parallel phase-shifting interferometry,” Opt. Express **24**, 12922–12932 (2016). [CrossRef]

**8. **G. Lai and T. Yatagai, “Generalized phase-shifting interferometry,” J. Opt. Soc. Am. A **8**, 822–827 (1991). [CrossRef]

**9. **K. Okada, A. Sato, and J. Tsujiuchi, “Simultaneous calculation of phase distribution and scanning phase shift in phase shifting interferometry,” Opt. Commun. **84**, 118–124 (1991). [CrossRef]

**10. **I.-B. Kong and S.-W. Kim, “General algorithm of phase-shifting interferometry by iterative least-squares fitting,” Opt. Eng. **34**, 183–188 (1995). [CrossRef]

**11. **Z. Wang and B. Han, “Advanced iterative algorithm for phase extraction of randomly phase-shifted interferograms,” Opt. Lett. **29**, 1671–1673 (2004). [CrossRef]

**12. **A. Patil and P. Rastogi, “Approaches in generalized phase shifting interferometry,” Opt. Lasers Eng. **43**, 475–490 (2005). [CrossRef]

**13. **H. Guo, Y. Yu, and M. Chen, “Blind phase shift estimation in phase-shifting interferometry,” J. Opt. Soc. Am. A **24**, 25–33 (2007). [CrossRef]

**14. **X. F. Xu, L. Z. Cai, Y. R. Wang, X. F. Meng, W. J. Sun, H. Zhang, X. C. Cheng, G. Y. Dong, and X. X. Shen, “Simple direct extraction of unknown phase shift and wavefront reconstruction in generalized phase-shifting interferometry: algorithm and experiments,” Opt. Lett. **33**, 776–778 (2008). [CrossRef]

**15. **P. Gao, B. Yao, N. Lindlein, K. Mantel, I. Harder, and E. Geist, “Phase-shift extraction for generalized phase-shifting interferometry,” Opt. Lett. **34**, 3553–3555 (2009). [CrossRef]

**16. **J. Deng, H. Wang, D. Zhang, L. Zhong, J. Fan, and X. Lu, “Phase shift extraction algorithm based on Euclidean matrix norm,” Opt. Lett. **38**, 1506–1508 (2013). [CrossRef]

**17. **H. Guo and Z. Zhang, “Phase shift estimation from variances of fringe pattern differences,” Appl. Opt. **52**, 6572–6578 (2013). [CrossRef]

**18. **R. Juarez-Salazar, C. Robledo-Sánchez, C. Meneses-Fabian, F. Guerrero-Sánchez, and L. A. Aguilar, “Generalized phase-shifting interferometry by parameter estimation with the least squares method,” Opt. Lasers Eng. **51**, 626–632 (2013). [CrossRef]

**19. **J. Deng, L. Zhong, H. Wang, H. Wang, W. Zhang, F. Zhang, S. Ma, and X. Lu, “1-Norm character of phase shifting interferograms and its application in phase shift extraction,” Opt. Commun. **316**, 156–160 (2014). [CrossRef]

**20. **Q. Liu, Y. Wang, J. He, and F. Ji, “Phase shift extraction and wavefront retrieval from interferograms with background and contrast fluctuations,” J. Opt. **17**, 025704 (2015). [CrossRef]

**21. **J. Deng, D. Wu, K. Wang, and J. Vargas, “Precise phase retrieval under harsh conditions by constructing new connected interferograms,” Sci. Rep. **6**, 24416 (2016). [CrossRef]

**22. **Y. Xu, Y. Wang, Y. Ji, H. Han, and W. Jin, “Three-frame generalized phase-shifting interferometry by a Euclidean matrix norm algorithm,” Opt. Lasers Eng. **84**, 89–95 (2016). [CrossRef]

**23. **X. Xu, X. Lu, J. Tian, J. Shou, D. Zheng, and L. Zhong, “Random phase-shifting interferometry based on independent component analysis,” Opt. Commun. **370**, 75–80 (2016). [CrossRef]

**24. **K. Kinnstaetter, A. W. Lohmann, J. Schwider, and N. Streibl, “Accuracy of phase shifting interferometry,” Appl. Opt. **27**, 5082–5089 (1988). [CrossRef]

**25. **C. T. Farrell and M. A. Player, “Phase step measurement and variable step algorithms in phase-shifting interferometry,” Meas. Sci. Technol. **3**, 953–958 (1992). [CrossRef]

**26. **C. T. Farrell and M. A. Player, “Phase-step insensitive algorithms for phase-shifting interferometry,” Meas. Sci. Technol. **5**, 648–654 (1994). [CrossRef]

**27. **A. Albertazzi Jr., A. V. Fantin, D. P. Willemann, and M. E. Benedet, “Phase maps retrieval from sequences of phase shifted images with unknown phase steps using generalized N-dimensional Lissajous figures—principles and applications,” Int. J. Optomechatron. **8**, 340–356 (2014). [CrossRef]

**28. **C. Meneses-Fabian and F. A. Lara-Cortes, “Phase retrieval by Euclidean distance in self-calibrating generalized phase-shifting interferometry of three steps,” Opt. Express **23**, 13589–13604 (2015). [CrossRef]

**29. **F. Liu, Y. Wu, and F. Wu, “Correction of phase extraction error in phase-shifting interferometry based on Lissajous figure and ellipse fitting technology,” Opt. Express **23**, 10794–10807 (2015). [CrossRef]

**30. **F. Liu, Y. Wu, F. Wu, and W. Song, “Generalized phase shifting interferometry based on Lissajous calibration technology,” Opt. Lasers Eng. **83**, 106–115 (2016). [CrossRef]

**31. **F. Liu, J. Wang, Y. Wu, F. Wu, M. Trusiak, K. Patorski, Y. Wan, Q. Chen, and X. Hou, “Simultaneous extraction of phase and phase shift from two interferograms using Lissajous figure and ellipse fitting technology with Hilbert–Huang prefiltering,” J. Opt. **18**, 105604 (2016). [CrossRef]

**32. **A. V. Fantin, D. P. Willemann, M. E. Benedet, and A. G. Albertazzi, “Robust method to improve the quality of shearographic phase maps obtained in harsh environments,” Appl. Opt. **55**, 1318–1323 (2016). [CrossRef]

**33. **J. Vargas, J. A. Quiroga, and T. Belenguer, “Phase-shifting interferometry based on principal component analysis,” Opt. Lett. **36**, 1326–1328 (2011). [CrossRef]

**34. **J. Vargas, J. A. Quiroga, and T. Belenguer, “Analysis of the principal component algorithm in phase-shifting interferometry,” Opt. Lett. **36**, 2215–2217 (2011). [CrossRef]

**35. **J. Xu, W. Jin, L. Chai, and Q. Xu, “Phase extraction from randomly phase-shifted interferograms by combining principal component analysis and least squares method,” Opt. Express **19**, 20483–20492 (2011). [CrossRef]

**36. **J. Vargas, C. Sorzano, J. Estrada, and J. Carazo, “Generalization of the principal component analysis algorithm for interferometry,” Opt. Commun. **286**, 130–134 (2013). [CrossRef]

**37. **J. Vargas and C. Sorzano, “Quadrature component analysis for interferometry,” Opt. Lasers Eng. **51**, 637–641 (2013). [CrossRef]

**38. **J. Deng, K. Wang, D. Wu, X. Lv, C. Li, J. Hao, J. Qin, and W. Chen, “Advanced principal component analysis method for phase reconstruction,” Opt. Express **23**, 12222–12231 (2015). [CrossRef]

**39. **K. Yatabe, K. Ishikawa, and Y. Oikawa, “Improving principal component analysis based phase extraction method for phase-shifting interferometry by integrating spatial information,” Opt. Express **24**, 22881–22891 (2016). [CrossRef]

**40. **K. Kanatani and P. Rangarajan, “Hyper least squares fitting of circles and ellipses,” Comput. Statist. Data Anal. **55**, 2197–2208 (2011). [CrossRef]

**41. **K. Kanatani, P. Rangarajan, Y. Sugaya, and H. Niitsuma, “HyperLS for parameter estimation in geometric fitting,” IPSJ Trans. Comput. Vis. Appl. **3**, 80–94 (2011). [CrossRef]

**42. **K. Kanatani, Y. Sugaya, and Y. Kanazawa, “Ellipse fitting for computer vision: implementation and applications,” in *Synthesis Lectures on Computer Vision* (Morgan & Claypool, 2016).

**43. **K. Kanatani, “Statistical optimization for geometric estimation: minimization vs. non-minimization,” in *22nd International Conference on Pattern Recognition (ICPR)* (2014), pp. 1–8.

**44. **K. Kanatani, A. Al-Sharadqah, N. Chernov, and Y. Sugaya, “Hyper-renormalization: non-minimization approach for geometric estimation,” Inf. Media Technol. **10**, 71–87 (2015). [CrossRef]

**45. **K. Yatabe and Y. Oikawa, “Convex optimization based windowed Fourier filtering with multiple windows for wrapped phase denoising,” Appl. Opt. **55**, 4632–4641 (2016). [CrossRef]

**46. **K. Yatabe, K. Ishikawa, and Y. Oikawa, “Compensation of fringe distortion for phase-shifting three-dimensional shape measurement by inverse map estimation,” Appl. Opt. **55**, 6017–6024 (2016). [CrossRef]