This paper presents a method of significantly improving the previously proposed simple, flexible, and accurate phase retrieval algorithm for random phase-shifting interferometry named HEFS [K. Yatabe, J. Opt. Soc. Am. A 34, 87 (2017)]. Although its effectiveness and performance were confirmed by numerical experiments in the original paper, it was found that the algorithm may not work properly if the observed fringe images contain untrusted (extremely noisy) pixels. In this paper, a method of avoiding such untrusted pixels within the estimation processes of HEFS is proposed for practical use of the algorithm. In addition, an experiment of measuring a sound field in air was conducted to show the performance for real data, a situation in which the proposed improvement is critical. MATLAB codes (which can be downloaded from http://goo.gl/upcsFe) are provided within the paper to aid understanding of the main concept of the proposed methods.
© 2017 Optical Society of America
1. Introduction
In phase-shifting interferometry (PSI), estimation of the phase from phase-shifted interferograms is one of the most important factors for the accuracy of the measurement. Traditionally, the phase is retrieved based on prior knowledge of the phase-shifting steps. However, it is not easy to implement the phase-shifting steps precisely in practice owing to the extreme accuracy required of the measuring system. Therefore, many methods have been proposed for estimating the unknown phase-shifting steps and/or the objective phase without information about the shifting steps [1–16].
Among the many phase retrieval techniques, the ellipse-fitting-based methods [17–25] and the subspace-based methods (PCA, QCA) [26–31] are promising methodologies that have recently been investigated by several authors. Although they have several advantages (such as simplicity and flexibility) compared to other methods, they also have some limitations leading to accuracy loss that might prevent their practical use. For example, the ellipse fitting methods may not give a reasonable estimation when the so-called Lissajous figure is not appropriately chosen, while the subspace methods may suffer from a limitation on the number of fringes [28–30].
To overcome the disadvantages of the ellipse fitting method and the subspace method, the authors have proposed a phase estimation method combining the above two methods, namely the hyper ellipse fitting in subspace (HEFS) method [32]. It inherits the advantages of both methods, which makes HEFS a simple, flexible, and accurate phase retrieval algorithm. Moreover, the embedding of observed data enables simultaneous phase retrieval and denoising with only slight effort [32, 33]. Although its main target is PSI with totally unknown phase-shifting steps, the numerical experiments in [32] showed that HEFS is also beneficial for PSI whose shifting steps are perfectly known, thanks to the denoising functionality.
However, the original version of HEFS may not perform appropriately when the observed fringes contain untrusted regions in which no fringe pattern is visible. This is because the original HEFS utilizes the singular value decomposition (SVD) as a global estimation method. That is, all pixels of the fringe images are forced to be included in the estimation process. However, in many situations, some pixels of the observed images are noisier than the others or even do not contain any fringe pattern. Such pixels, containing less or no information, should be avoided in the estimation process for reliable phase retrieval. Nevertheless, globally performing SVD does not allow the elimination of the low-quality pixels owing to the simultaneous execution of the estimation and projection (see Sections 2.1 and 3.1 for the explanation).
In this paper, an improved version of HEFS is proposed for avoiding untrusted pixels within the estimation processes. It can be regarded as a more flexible variant of HEFS, as it allows independent execution of the parameter estimation and phase retrieval processes. A set of MATLAB codes (which can be downloaded from http://goo.gl/upcsFe) is also provided for easier understanding of the main concept of the proposed improvement by comparing it to the original MATLAB implementation in [32]. In addition to the extension of HEFS, a method for automatically distinguishing the low-quality pixels from the well-measured pixels is proposed based on the energy ratio between two subspaces. Furthermore, an experiment of applying HEFS to real data is provided to show the real-world performance, and the results indicate how significant the proposed improvement of HEFS is in such situations.
2. Hyper ellipse fitting in subspace (HEFS)
In this section, HEFS is briefly reviewed (for a detailed explanation, see [32]). Since it consists of the subspace method and the ellipse fitting method, each method is explained independently before the procedure of HEFS is introduced.
2.1. Subspace method
The subspace method retrieves phase information from multiple fringe images by finding the subspace where ideal fringe images lie [26–30].
Let the m-th image of M phase-shifted fringe images be written as

I(m)i,j = ai,j + bi,j cos(ϕi,j + δm), (1)

where (i, j) is the pixel index, ai,j and bi,j are the background and the amplitude of the fringe, ϕi,j is the objective phase, and δm is the m-th phase-shifting step. By collecting the M observations of each pixel into a data vector di,j = [I(1)i,j, . . ., I(M)i,j]T, Eq. (1) is equivalently written as

di,j = ai,j o + Ci,j c + Si,j s, (2)

where o = [1, . . ., 1]T, c = [cos δ1, . . ., cos δM]T, s = [sin δ1, . . ., sin δM]T, Ci,j = bi,j cos ϕi,j, and Si,j = −bi,j sin ϕi,j.
By subtracting the mean value d̄i,j from di,j as di,j − d̄i,j, the subspace spanned by o is eliminated. Thus, a mean-subtracted data vector di,j − d̄i,j lies in the two-dimensional subspace spanned by c and s. After identifying the subspace of c and s, the objective phase ϕi,j is obtained from the coefficients Ci,j and Si,j as

ϕi,j = arctan(−Si,j / Ci,j).
For extracting the subspace, the singular value decomposition (SVD) is often applied. Firstly, a matrix is constructed by concatenating the mean-subtracted data vectors of every pixel horizontally,

D = [d1,1 − d̄1,1, d2,1 − d̄2,1, . . ., dI,J − d̄I,J], (8)

where the number of columns N is the number of pixels. Then, the SVD of this matrix, D = UΣVT, is computed; the two left singular vectors corresponding to the two largest singular values span the estimated subspace of c and s, and the corresponding rows of ΣVT give the coefficients (Ci,j, Si,j) of every pixel.
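The paper's reference implementation is in MATLAB; as a language-neutral illustration, the subspace method above can be sketched in Python/NumPy as follows (a minimal sketch under our own naming, not the authors' code; it assumes the fringe stack is given as an (M, H, W) array):

```python
import numpy as np

def subspace_phase(frames):
    """Subspace method: estimate the phase map from M phase-shifted fringes.

    frames: (M, H, W) stack of fringe images.
    Returns the retrieved phase (H, W), up to the sign/offset ambiguity
    inherent to the subspace method.
    """
    M, H, W = frames.shape
    D = frames.reshape(M, H * W).astype(float)
    D = D - D.mean(axis=0, keepdims=True)   # remove the offset component (span of o)
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    # The two leading left singular vectors span the subspace of c and s;
    # the corresponding rows of diag(s) @ Vt are the coefficients of every pixel.
    C, S = s[0] * Vt[0], s[1] * Vt[1]
    return np.arctan2(-S, C).reshape(H, W)
```

The sign and global offset of the retrieved phase are undetermined here because the SVD fixes the basis of the two-dimensional subspace only up to rotation and reflection.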
2.2. Modified subspace method
The noise reduction is realized by embedding the data vectors into a higher-dimensional vector space. It is easily introduced by considering an extended data vector that incorporates adjacent pixels into a single vector. For example, a 2M-dimensional extended data vector can be constructed by vertically concatenating two data vectors of Eq. (2), e.g., [dTi,j, dTi+1,j]T. In the previous paper [33], the following embedding (L = 9), which concatenates the data vectors of a 3 × 3 neighborhood,

d̃i,j = [dTi−1,j−1, dTi,j−1, dTi+1,j−1, dTi−1,j, dTi,j, dTi+1,j, dTi−1,j+1, dTi,j+1, dTi+1,j+1]T, (11)

was suggested as a good choice in terms of performance, computational cost, and stability.
By horizontally concatenating the above extended data vectors as in Eq. (8), an LM × N extended data matrix is constructed. Then, applying the subspace method to the extended data matrix yields the objective phase with less random noise.
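A possible construction of the extended (L = 9) data matrix, again as a hypothetical Python/NumPy sketch (boundary pixels are simply dropped here, which is our simplification, not necessarily the paper's treatment):

```python
import numpy as np

def extend_data(frames):
    """Construct the LM x N extended data matrix for L = 9 (3x3 neighborhood).

    frames: (M, H, W) stack of fringe images.  The data vector of each inner
    pixel is vertically concatenated with those of its 8 neighbors; boundary
    pixels are dropped for simplicity.
    """
    M, H, W = frames.shape
    blocks = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            # Shifted view of the inner (H-2) x (W-2) region of every frame.
            shifted = frames[:, 1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx]
            blocks.append(shifted.reshape(M, -1))
    return np.vstack(blocks)   # shape (9*M, (H-2)*(W-2))
```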
2.3. Ellipse fitting method
The ellipse fitting method is another method for retrieving phase information from multiple fringes; it finds an ellipse that approximates the set of fringes [17–23]. In HEFS, the phase is directly extracted from the parameters of the fitted ellipse [32].
A general ellipse on the (x, y)-coordinate can be expressed as

α1x² + α2xy + α3y² + α4x + α5y + α6 = 0, (12)

where the parameters α1, . . ., α6 satisfy the ellipse condition 4α1α3 − α2² > 0.
From those parameters, the angle on the ellipse, ϕ, is obtained by Eq. (13), which converts a point (x, y) on the fitted ellipse into the corresponding phase through the parameters α1, . . ., α6 and β (see [32] for the explicit formula).
For estimating the parameters of the ellipse, the least squares method is often applied. By writing Eq. (12) as ξTα = 0 with ξ = [x², xy, y², x, y, 1]T and α = [α1, . . ., α6]T, the parameters are estimated by minimizing the sum of the squared residuals Σn (ξnTα)² over all data points, subject to a normalization constraint [32].
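As a sketch of this least squares step (our own Python/NumPy rendering of the standard algebraic fit with the normalization ‖α‖ = 1; the hyper weighting of the next subsection is not included):

```python
import numpy as np

def fit_conic_ls(x, y):
    """Ordinary least-squares conic fit.

    Returns the unit-norm parameter vector (a1..a6) of
    a1 x^2 + a2 xy + a3 y^2 + a4 x + a5 y + a6 = 0, estimated as the right
    singular vector for the smallest singular value of the design matrix.
    """
    Xi = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(Xi)
    return Vt[-1]
```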
2.4. Hyper least squares method for ellipse fitting
Although the least squares method is the most popular method for estimating the parameters of an ellipse, it may not be accurate owing to its higher-order statistical bias. For removing the bias up to the second-order term, the hyper least squares method was proposed [34–36]. It utilizes a 6 × 6 (approximated) weighting matrix, Eq. (21), together with the symmetric matrix of Eq. (19). For numerical computation, the parameters are obtained by calculating the eigenvector corresponding to the eigenvalue with the largest absolute value of the resulting generalized eigenvalue problem.
2.5. Hyper ellipse fitting in subspace (HEFS) method
Combining the above two methods, HEFS is obtained as an accurate ellipse fitting method without concern about the ambiguity of the construction of the two-dimensional space for fitting the ellipse. The procedure of HEFS consists of the following three steps [32]:
- Apply (modified) subspace method to project the data onto a two-dimensional subspace,
- Estimate parameters of ellipse from the projected data by the hyper least squares method,
- Calculate the retrieved phase from the parameters of fitted ellipse by Eq. (13).
For applying the ellipse fitting method, HEFS utilizes the subspace method to project the data vectors onto the two-dimensional subspace where the ideal phase-shifted fringe images lie. Random noise can be suppressed within the projection process by the modified subspace method so that the ellipse fitting method works more robustly. Furthermore, the hyper-accurate version of the least squares method is adopted to retrieve the phase. In [32], numerical experiments showed that the HEFS method can retrieve the phase accurately without much computational effort.
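The three steps above can be sketched end-to-end as follows (a hypothetical Python/NumPy rendering: it uses the plain least squares fit instead of the hyper version, and, instead of the closed-form Eq. (13), reads the angle by normalizing the fitted ellipse to a circle, which is equivalent for an exact fit):

```python
import numpy as np

def hefs_sketch(frames):
    """Simplified HEFS pipeline: subspace projection, conic fit, phase."""
    # Step 1: project mean-subtracted data onto the 2-D subspace (Section 2.1).
    M, H, W = frames.shape
    D = frames.reshape(M, -1).astype(float)
    D = D - D.mean(axis=0, keepdims=True)
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    u, v = s[0] * Vt[0], s[1] * Vt[1]           # 2-D coordinates of every pixel

    # Step 2: fit a conic a1 u^2 + ... + a6 = 0 by plain least squares
    # (the hyper weighting of Section 2.4 is omitted in this sketch).
    Xi = np.column_stack([u * u, u * v, v * v, u, v, np.ones_like(u)])
    a = np.linalg.svd(Xi)[2][-1]

    # Step 3: read the angle on the fitted ellipse by normalizing it to a
    # circle (an equivalent route to the closed-form phase formula).
    A = np.array([[a[0], a[1] / 2], [a[1] / 2, a[2]]])
    if np.trace(A) < 0:                          # fix the overall sign
        a, A = -a, -A
    c = np.linalg.solve(A, -np.array([a[3] / 2, a[4] / 2]))  # ellipse center
    L = np.linalg.cholesky(A)
    w = L.T @ (np.stack([u, v]) - c[:, None])    # whitened coords: a circle
    return np.arctan2(w[1], w[0]).reshape(H, W)
```

Because the whitening maps the fitted ellipse onto a circle, the returned angle equals the objective phase up to the sign/offset ambiguity, even for unequal (unknown) phase-shifting steps.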
3. Improved HEFS for dealing with untrusted pixels
Although direct use of SVD is the easiest way of implementing the subspace method, it has to take all pixels into account simultaneously. This simultaneity may cause an inaccurate result. In many cases, some pixels are noisier than the others or even do not contain any fringe pattern. Such pixels, containing less or no information about the objective phase, are a source of estimation error. Therefore, they should be eliminated from the estimation process to obtain a better result. Nevertheless, the original HEFS does not allow such elimination because applying SVD directly to the data matrix estimates the subspace and projects the data onto it simultaneously. To perform the parameter estimation steps using only selected well-measured pixels, the parameter estimation process and the projection process must be executed separately. That is, the parameter estimation process is performed with only the selected pixels, while the projection process is performed for all pixels.
In this section, we propose a more flexible procedure of HEFS that allows independent execution of the parameter estimation and phase retrieval processes. MATLAB codes are provided here for easier understanding of the main concept of the proposed improvement by comparing them with the original MATLAB implementation in [32].
3.1. Estimating two-dimensional subspace through eigenvalue decomposition
If the subspace method is executed by using selected pixels only, the other pixels must be projected to the same estimated subspace afterward. To do so, one has to obtain the estimated result of the subspace method as a projection operator P which projects the data matrix D in Eq. (8) (or its extended version introduced in Section 2.2) onto the two-dimensional subspace. One way of obtaining the projection operator P is to apply SVD to a data matrix consisting of selected pixels only. Here, as a computationally efficient alternative, an estimation method using the eigenvalue decomposition is considered.
The projection operator P is represented as a 2 × LM matrix obtained by

P = ŨT, (24)

where Ũ is the LM × 2 matrix consisting of the two columns of U corresponding to the two largest singular values.
A computationally efficient way of calculating U and Σ is realized by using the eigenvalue decomposition. By considering the LM × LM square matrix DDT, whose eigenvalue decomposition is

DDT = UΣ²UT,

one obtains the same U and Σ as in the SVD of D; then Eq. (24) gives the corresponding projection operator.
Note that the projection operator P can be obtained from as few as two data vectors because the calculation of P only requires the two leading eigenvectors (of course, more data vectors should be utilized to obtain a better estimation). Therefore, a subset of D containing at least two pixels can be selected arbitrarily for the estimation of P.
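A minimal sketch of this estimation step (Python/NumPy, our naming): the two leading eigenvectors of the small matrix formed from the selected columns only give a projector applicable to all pixels:

```python
import numpy as np

def estimate_projector(D_sel):
    """Estimate the 2 x LM projection operator P from selected pixels only.

    D_sel: mean-subtracted data matrix restricted to the trusted pixels.
    Uses the eigenvalue decomposition of the LM x LM matrix D_sel @ D_sel.T,
    which is cheaper than an SVD of the full data matrix when N >> LM.
    """
    S = D_sel @ D_sel.T                # LM x LM, symmetric
    w, U = np.linalg.eigh(S)           # eigenvalues in ascending order
    return U[:, [-1, -2]].T            # two leading eigenvectors -> P (2 x LM)
```

Afterward, every pixel (selected or not) is projected by the same operator, e.g. `V = estimate_projector(D_sel) @ D_all`.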
3.2. Ellipse fitting method based on selected pixels
The ellipse fitting method can also be performed by using selected well-measured pixels only. After the estimation of projection operator P, projected data Ṽ is obtained by Eq. (25) as the input of the ellipse fitting method. Since P can independently project each data vector onto the two-dimensional subspace, any subset of D can be utilized for the estimation.
As in Eq. (13), the objective phase is calculated from the parameters α1, . . ., α6 and β. Therefore, once these parameters are estimated, any data on the two-dimensional subspace can be converted into the corresponding phase. The estimation of the parameters is based on the eigenvalue decomposition of the symmetric matrix in Eq. (19). This matrix can be constructed from arbitrarily selected data, and thus untrusted pixels can also be avoided in the ellipse fitting method.
Note that the selection of the pixels can differ from that of the subspace method in the previous subsection. That is, further selection of the pixels can be performed according to the position on the projected subspace. In such cases, it is important to use the hyper-accurate least squares method instead of the ordinary least squares method because the ordinary least squares method may perform badly when the number of pixels decreases [34–36].
3.3. Summary of the improved HEFS
In summary, HEFS can be improved by the following procedure:
- Select some good pixels for estimating the parameters in HEFS,
- Estimate the projection operator P from the selected pixels by the eigenvalue decomposition of DDT and Eq. (24),
- Estimate the parameters of the ellipse from the projected data of selected pixels,
- Retrieve the objective phase for every pixel by using the estimated projection operator and parameters of the ellipse.
These processes can be implemented as the separated functions shown in Fig. 1. The estimation and phase-retrieving processes are fully divided into two functions so that the estimation process can be executed with a selected subset of pixels. In addition to those two functions, a function for constructing the extended version of the data matrix is included so that the denoising functionality of HEFS can be activated easily.
4. Automatic selection of better pixels
As described in the previous section, the improved HEFS makes it possible to utilize only selected pixels in the parameter estimation steps. The selection of pixels can be performed manually if the untrusted regions are obvious. However, such manual selection might be erroneous or impossible in some situations. Therefore, a criterion for selecting “good” pixels is necessary. In this section, we propose a method of automatically selecting better pixels in terms of the energy ratio between two subspaces.
4.1. Estimating signal-to-noise ratio (SNR) as criterion of pixel selection
In the previous section, the projection operator P was defined as an operator projecting an LM-dimensional data vector onto the two-dimensional subspace. This subspace was introduced because every ideal data vector lies in it. In this sense, the two-dimensional subspace can be regarded as the subspace of the desired signal because ideal (noiseless) data lie in it. On the other hand, the orthogonal complement of that subspace can be regarded as the subspace consisting of noise (non-ideal components). Therefore, noisy pixels can be distinguished from the better ones by considering the energy on the orthogonal complement.
As in Eq. (24), the projection operator is calculated from the LM × 2 matrix Ũ because this matrix consists of an orthonormal basis of the two-dimensional subspace. Since U is an orthogonal matrix, the remaining LM × (LM − 2) submatrix, Ǔ, consists of a basis of the orthogonal complement of that subspace. Hence, the projection operator onto the orthogonal complement, P⊥, can be obtained in the same manner,

P⊥ = ǓT.
For any mean-subtracted data vector d, Pd provides its two-dimensional representation, while P⊥d provides its (LM − 2)-dimensional representation. If the data is ideal (noiseless), then P⊥d = 0 (see [33] for discussion). That is, all the energy of d is concentrated on Pd. Based on this observation, we define the estimated signal-to-noise ratio (SNR) of the data vector as the ratio of energy between the subspaces:

SNR = ‖Pd‖² / ‖P⊥d‖².
Note that the term noise in this definition is not only related to additive noise (e.g. Gaussian noise) but also related to any non-ideal property of the fringe images (such as nonuniform spatial distribution of phase-shifting error). Therefore, the proposed SNR can be low for regions where fringe pattern is clearly visible but some non-ideal phenomena occur.
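Since ‖P⊥d‖² = ‖d‖² − ‖Pd‖² for complementary orthogonal projections, the SNR can be computed without forming P⊥ explicitly; a hypothetical Python/NumPy sketch:

```python
import numpy as np

def estimate_snr(P, D):
    """Per-pixel SNR as the energy ratio between the signal subspace and
    its orthogonal complement.

    P: 2 x LM projection operator; D: mean-subtracted LM x N data matrix.
    """
    sig = np.sum((P @ D) ** 2, axis=0)     # energy on the 2-D signal subspace
    noise = np.sum(D ** 2, axis=0) - sig   # energy on the orthogonal complement
    return sig / np.maximum(noise, np.finfo(float).eps)  # guard divide-by-zero
```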
4.2. Automatic pixel selection based on estimated SNR
As described in the previous subsection, one can exclude the noisy untrusted pixels by discarding pixels with low SNR. Then, one must determine the discarding boundary in terms of SNR. In this paper, a threshold ϒ ∈ [0, 1) relative to the maximum of the SNR is considered. That is, the (i, j)-th pixels whose SNR satisfies the following inequality are selected as the pixels for estimating the parameters (see Fig. 2):

SNRi,j ≥ ϒ maxi,j SNRi,j. (30)
Note that the choice of the threshold ϒ automatically determines the number of selected pixels. On the other hand, one can instead fix the number of selected pixels if a particular number is desired. In that case, the automatic selection is achieved by sorting the SNR and picking pixels from the largest SNR downward until the desired number of pixels is selected.
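Both selection rules can be sketched as follows (Python/NumPy, hypothetical names; the default threshold value is a placeholder, not a recommendation from the paper):

```python
import numpy as np

def select_pixels(snr, threshold=0.5, count=None):
    """Boolean mask of pixels used for the parameter estimation.

    Either keep pixels whose SNR is at least `threshold` times the maximum
    (the rule of Eq. (30)) or, if `count` is given, keep exactly the
    `count` pixels with the largest SNR.
    """
    if count is None:
        return snr >= threshold * snr.max()
    mask = np.zeros(snr.size, dtype=bool)
    mask[np.argsort(snr, axis=None)[-count:]] = True   # largest-SNR indices
    return mask.reshape(snr.shape)
```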
5. Experiment
For demonstrating the effectiveness and significance of the proposed improvement, HEFS is applied to a measured sound field obtained by parallel phase-shifting interferometry (PPSI). In this section, PPSI and the experimental setup are briefly explained first, and then the results are shown.
5.1. Parallel phase-shifting interferometry (PPSI)
As in Eq. (1), phase-shifted fringe images are obtained by artificially shifting the phase. The most common way of realizing the phase-shifting step δ is the temporal method, which changes the shifting step over time to reduce the complexity of the interferometer. However, a temporal method requires multiple measurements over time for a single phase map, which restricts its use to time-invariant phenomena.
On the other hand, spatial methods enable us to obtain multiple phase-shifted fringe images simultaneously. PPSI is one such spatial method [37–39]. By using a polarized interferometer, each fringe image can be obtained from the polarization angle of the interfered light. Then, the phase-shifting step can be implemented by a linear polarizer. Therefore, simultaneous observation of multiple fringes can be realized by a polarization camera, which is a camera with pixel-wise linear polarizers attached to the image sensor. Its high-speed variant allows us to acquire the fringes at a high frame rate, and thus high-speed time-varying phenomena can be observed by combining this equipment.
5.2. Experimental setup
For illustrating the performance of the improved HEFS, sound in air was chosen as the test data in this paper. Since sound is an important phenomenon in our daily life, optical measurement of sound has attracted several researchers because of its potential. Recent development of PPSI has enabled measurement of such time-varying, high-speed phenomena [42–44]. Sound in air is not easy to measure optically because of not only its high-speed nature but also the smallness of its effect on light (roughly speaking, the typical phase modulation caused by sound in air is on the order of 0.1 to 0.001 rad). Optical sound measurement requires a highly accurate system, and therefore it is a good subject for testing a phase retrieval algorithm in terms of accuracy.
The experimental setup is illustrated in Fig. 3. A Fizeau-type polarized interferometer combined with a polarized high-speed camera was used for the four-step PPSI. A small loudspeaker shown in Fig. 4 was placed between the optical flats, and a stationary sound field of 8,000 Hz was generated from it. As it consists of two loudspeaker units (a tweeter and a woofer), the sound was emitted from both the top and bottom parts of the observed area. PPSI measured the modulated phase of the object light, which results from the density variation of the air caused by the sound [42–44].
The frame rate of the high-speed polarized camera was set to 7,000 fps so that full 512 × 512 pixel imaging was performed. Since a high-speed camera has a trade-off between the frame rate and the number of active pixels due to the data-transfer rate, one has to choose the frame rate according to the desired measurement area. In this paper, we chose the maximum frame rate that gives the full resolution of the sensor. The diameter of the object light beam, which appears as a circle in the following figures, is about 90 mm.
An example of the measured fringe images is shown in the left half of Fig. 5. By high-speed four-step PPSI, four fringe images were simultaneously observed for a single time instance (the shadow on the left part of the measured area is the loudspeaker; see Fig. 4). Then, the phase retrieval algorithms were applied to each time instance so that time-sequential phase maps were obtained. After phase retrieval, a time-directional high-pass filter was applied to eliminate time-invariant or slowly varying components unrelated to the sound [42, 43]. Since the frame rate was 7,000 fps due to the measurement-size limitation, the measured sound was aliased, as a sine wave of 8,000 Hz was input to the loudspeaker. As a result, the observed data appear as an 8,000 Hz sound field sampled at 56,000 fps (in the sense that one period of the sound was captured by seven frames). Therefore, the cutoff of the time-directional high-pass filter was set to 4,000 Hz. Two (out of several hundred) time instances of the data measured in this experiment, together with executable MATLAB codes, are available at http://goo.gl/upcsFe.
5.3. Results
Here, the improved HEFS is compared with the ordinary HEFS and the traditional M-step phase-shifting algorithm [46],

ϕi,j = arctan[ −Σm I(m)i,j sin δm / Σm I(m)i,j cos δm ],

with δm = 2π(m − 1)/M. The result of the traditional four-step algorithm is shown in Fig. 6, where eight consecutive instances of the results are shown (time evolves from left to right, from top left to bottom right). From the figure, it can be seen that both the tweeter and the woofer (see Fig. 4) emitted the sound, which resulted in the interference pattern of sound pressure.
There are two types of noise that degrade the clarity of the visualized sound field: random noise and patterned noise. The patterned noise is mainly due to errors in the phase-shifting steps and the nonlinearity of the imaging devices. Both random and patterned noise are conspicuous because the magnitude of the phase modulation caused by sound in air is small (note that the peak value of the observed sound was about ±0.1 rad). Since the four-step algorithm does not remove any noise, the noisy fringes resulted in correspondingly noisy phase maps.
The result obtained by using the ordinary HEFS [32] is shown in Fig. 7. Since the original version of HEFS utilizes every pixel of the measured images for the estimations, the result contains huge errors that totally distorted the phase maps. This is because there were many untrusted pixels in this data that do not contain any information about the observed phenomenon. Although the random noise was reduced by the denoising functionality of HEFS [32, 33] [the embedding (L = 9) shown in Eq. (11) was used], the result is meaningless, as interpretation of the data becomes quite difficult.
The selected pixels used for the improved HEFS are shown in Fig. 8. The red rectangle in the left figure corresponds to the manually chosen region, which seems to be a well-measured region. Although such manual selection is easy and simple in this case, the choice of pixels often relies on intuition whose correctness cannot be guaranteed. In contrast, the proposed SNR-based method automatically selects the pixels based on the estimated SNR, which is shown at the center of Fig. 8. Note that the brightness of the estimated SNR in this figure is normalized so that the value of the color bar corresponds to ϒ in Eq. (30). As shown on the right side of Fig. 8, the proposed method correctly eliminated the region that is not observable in this setting (the black and dark gray regions of the left figure). It is interesting to note that the proposed selection method also eliminated several spots corresponding to the circular patterns in the left figure, which would not be easy to discard manually. It also eliminated some horizontal belt-shaped patterns, which appeared along the fringe pattern shown in Fig. 5 and should be related to the patterned noise in Fig. 6.
The results of the proposed improved version of HEFS are shown in Figs. 9 and 10, where the selected pixels for the parameter estimations are those in Fig. 8. By comparing these two cases, the automatically selected pixels obtained by the proposed SNR-based selection method were confirmed to be a reasonable choice because they led to a result very similar to that of the manual selection. Since the shadow and the unobservable area were not included in the selected pixels in either case, HEFS was able to work properly as expected (in contrast to Fig. 7). As a result, the improved HEFS with the embedding in Eq. (11) obtained clear images of the sound field with less noise compared to the result of the traditional four-step algorithm shown in Fig. 6. To illustrate the differences among Figs. 6, 9, and 10 clearly, the vertical line at the center of each image is extracted and depicted in Fig. 11. From the figure, the appropriateness of the proposed automatic pixel selection method can be confirmed because the phase values of HEFS corresponding to the manual and automatic selection methods almost coincide. In addition, the effect of the denoising functionality of HEFS is also apparent in this figure, as the random noise was reduced by the embedding. The patterned noise caused by imperfections in the implementation of the shifting steps (realized by the quarter-wave plate and pixel-wise linear polarizers) was also reduced because HEFS can compensate for errors in the shifting steps. However, some patterned noise caused by the nonlinearity of the imaging devices, which is not taken into account in HEFS, remained in the results in Figs. 9 and 10. Reduction of such error within the framework of HEFS is future work.
6. Conclusions
In this paper, an extension of HEFS was proposed to improve its performance. The improvement is important for practical use because it is essential when the measured data contain untrusted pixels, which are common in real measurements. By the proposed improvement, HEFS gained more flexibility, which can be seen from the set of MATLAB codes in Fig. 1.
In addition to the improvement, a method for automatically eliminating the untrusted pixels was proposed. The estimated SNR of each pixel was introduced as the energy ratio between two subspaces so that noisy pixels can be distinguished from cleaner pixels based on its value. Since this SNR is a quantitative measure of noisiness, the proposed method has no ambiguity in selecting “better” pixels, in contrast to a manual selection based on intuition.
An experiment of measuring a sound field in front of a loudspeaker was performed to confirm the significance of the proposed improvement in a practical situation. It showed that the original version of HEFS totally failed to retrieve the objective phase because of the pixels which do not contain any information about the phase. In contrast, the improved HEFS was able to simultaneously retrieve the phase and denoise it. The automatic pixel selection method was also confirmed to be reasonable by comparing its result with that of the manual selection.
As mentioned at the end of the previous section, reduction of the remaining noise, which may be possible by combining other techniques [47, 48], should be considered in future work. Moreover, a combination of HEFS with the methods in [49–51] for realizing specialized methods for optical sound measurement would be interesting to investigate.
Funding
Japan Society for the Promotion of Science (JSPS) Grant-in-Aid for Research Activity Start-up (17H07191), and Grant-in-Aid for JSPS Research Fellow (16J06772).
References and links
1. G. Lai and T. Yatagai, “Generalized phase-shifting interferometry,” J. Opt. Soc. Am. A 8, 822–827 (1991). [CrossRef]
2. K. Okada, A. Sato, and J. Tsujiuchi, “Simultaneous calculation of phase distribution and scanning phase shift in phase shifting interferometry,” Opt. Commun. 84, 118–124 (1991). [CrossRef]
3. I.-B. Kong and S.-W. Kim, “General algorithm of phase-shifting interferometry by iterative least-squares fitting,” Opt. Eng. 34, 183–188 (1995). [CrossRef]
5. A. Patil and P. Rastogi, “Approaches in generalized phase shifting interferometry,” Opt. Lasers Eng. 43, 475–490 (2005). [CrossRef]
6. H. Guo, Y. Yu, and M. Chen, “Blind phase shift estimation in phase-shifting interferometry,” J. Opt. Soc. Am. A 24, 25–33 (2007). [CrossRef]
7. X. F. Xu, L. Z. Cai, Y. R. Wang, X. F. Meng, W. J. Sun, H. Zhang, X. C. Cheng, G. Y. Dong, and X. X. Shen, “Simple direct extraction of unknown phase shift and wavefront reconstruction in generalized phase-shifting interferometry: algorithm and experiments,” Opt. Lett. 33, 776–778 (2008). [CrossRef] [PubMed]
11. R. Juarez-Salazar, C. Robledo-Sánchez, C. Meneses-Fabian, F. Guerrero-Sánchez, and L. A. Aguilar, “Generalized phase-shifting interferometry by parameter estimation with the least squares method,” Opt. Lasers Eng. 51, 626–632 (2013). [CrossRef]
12. J. Deng, L. Zhong, H. Wang, H. Wang, W. Zhang, F. Zhang, S. Ma, and X. Lu, “1-norm character of phase shifting interferograms and its application in phase shift extraction,” Opt. Commun. 316, 156–160 (2014). [CrossRef]
13. Q. Liu, Y. Wang, J. He, and F. Ji, “Phase shift extraction and wavefront retrieval from interferograms with background and contrast fluctuations,” J. Opt. 17, 025704 (2015). [CrossRef]
15. Y. Xu, Y. Wang, Y. Ji, H. Han, and W. Jin, “Three-frame generalized phase-shifting interferometry by a Euclidean matrix norm algorithm,” Opt. Lasers Eng. 84, 89–95 (2016). [CrossRef]
16. X. Xu, X. Lu, J. Tian, J. Shou, D. Zheng, and L. Zhong, “Random phase-shifting interferometry based on independent component analysis,” Opt. Commun. 370, 75–80 (2016). [CrossRef]
18. C. T. Farrell and M. A. Player, “Phase step measurement and variable step algorithms in phase-shifting interferometry,” Meas. Sci. Technol. 3, 953–958 (1992). [CrossRef]
19. C. T. Farrell and M. A. Player, “Phase-step insensitive algorithms for phase-shifting interferometry,” Meas. Sci. Technol. 5, 648 (1994). [CrossRef]
20. A. Albertazzi Jr., A. V. Fantin, D. P. Willemann, and M. E. Benedet, “Phase maps retrieval from sequences of phase shifted images with unknown phase steps using generalized N-dimensional Lissajous figures — principles and applications,” Int. J. Optomechatronics 8, 340–356 (2014). [CrossRef]
21. C. Meneses-Fabian and F. A. Lara-Cortes, “Phase retrieval by Euclidean distance in self-calibrating generalized phase-shifting interferometry of three steps,” Opt. Express 23, 13589–13604 (2015). [CrossRef] [PubMed]
22. F. Liu, Y. Wu, and F. Wu, “Correction of phase extraction error in phase-shifting interferometry based on Lissajous figure and ellipse fitting technology,” Opt. Express 23, 10794–10807 (2015). [CrossRef] [PubMed]
23. F. Liu, Y. Wu, F. Wu, and W. Song, “Generalized phase shifting interferometry based on Lissajous calibration technology,” Opt. Lasers Eng. 83, 106–115 (2016). [CrossRef]
24. F. Liu, J. Wang, Y. Wu, F. Wu, M. Trusiak, K. Patorski, Y. Wan, Q. Chen, and X. Hou, “Simultaneous extraction of phase and phase shift from two interferograms using Lissajous figure and ellipse fitting technology with Hilbert–Huang prefiltering,” J. Opt. 18, 105604 (2016). [CrossRef]
25. A. V. Fantin, D. P. Willemann, M. E. Benedet, and A. G. Albertazzi, “Robust method to improve the quality of shearographic phase maps obtained in harsh environments,” Appl. Opt. 55, 1318–1323 (2016). [CrossRef] [PubMed]
28. J. Xu, W. Jin, L. Chai, and Q. Xu, “Phase extraction from randomly phase-shifted interferograms by combining principal component analysis and least squares method,” Opt. Express 19, 20483–20492 (2011). [CrossRef] [PubMed]
29. J. Vargas, C. Sorzano, J. Estrada, and J. Carazo, “Generalization of the principal component analysis algorithm for interferometry,” Opt. Commun. 286, 130–134 (2013). [CrossRef]
30. J. Vargas and C. Sorzano, “Quadrature component analysis for interferometry,” Opt. Lasers Eng. 51, 637–641 (2013). [CrossRef]
31. J. Deng, K. Wang, D. Wu, X. Lv, C. Li, J. Hao, J. Qin, and W. Chen, “Advanced principal component analysis method for phase reconstruction,” Opt. Express 23, 12222–12231 (2015). [CrossRef] [PubMed]
32. K. Yatabe, K. Ishikawa, and Y. Oikawa, “Simple, flexible, and accurate phase retrieval method for generalized phase-shifting interferometry,” J. Opt. Soc. Am. A 34, 87–96 (2017). [CrossRef]
33. K. Yatabe, K. Ishikawa, and Y. Oikawa, “Improving principal component analysis based phase extraction method for phase-shifting interferometry by integrating spatial information,” Opt. Express 24, 22881–22891 (2016). [CrossRef] [PubMed]
34. K. Kanatani and P. Rangarajan, “Hyper least squares fitting of circles and ellipses,” Comput. Stat. Data Anal. 55, 2197–2208 (2011). [CrossRef]
35. K. Kanatani, P. Rangarajan, Y. Sugaya, and H. Niitsuma, “HyperLS for parameter estimation in geometric fitting,” IPSJ Trans. Comput. Vis. Appl. 3, 80–94 (2011). [CrossRef]
36. K. Kanatani, Y. Sugaya, and Y. Kanazawa, Ellipse Fitting for Computer Vision: Implementation and Applications, Synthesis Lectures on Computer Vision (Morgan & Claypool Publishers, 2016).
37. Y. Awatsuji, M. Sasada, and T. Kubota, “Parallel quasi-phase-shifting digital holography,” Appl. Phys. Lett. 85, 1069–1071 (2004). [CrossRef]
38. J. E. Millerd, N. J. Brock, J. B. Hayes, M. B. North-Morris, M. Novak, and J. C. Wyant, “Pixelated phase-mask dynamic interferometer,” Proc. SPIE 5531, 304–314 (2004). [CrossRef]
39. M. Novak, J. Millerd, N. Brock, M. North-Morris, J. Hayes, and J. Wyant, “Analysis of a micropolarizer array-based simultaneous phase-shifting interferometer,” Appl. Opt. 44, 6861–6868 (2005). [CrossRef] [PubMed]
40. T. Onuma and Y. Otani, “A development of two-dimensional birefringence distribution measurement system with a sampling rate of 1.3 MHz,” Opt. Commun. 315, 69–73 (2014). [CrossRef]
41. T. Yatagai, B. J. Jackin, A. Ono, K. Kiyohara, M. Noguchi, M. Yoshii, M. Kiyohara, H. Niwa, K. Ikuo, and T. Onuma, “Instantaneous phase-shifting fizeau interferometry with high-speed pixelated phase-mask camera,” Proc. SPIE 9660, 966018 (2015). [CrossRef]
42. K. Ishikawa, K. Yatabe, N. Chitanont, Y. Ikeda, Y. Oikawa, T. Onuma, H. Niwa, and M. Yoshii, “High-speed imaging of sound using parallel phase-shifting interferometry,” Opt. Express 24, 12922–12932 (2016). [CrossRef] [PubMed]
43. K. Ishikawa, K. Yatabe, Y. Ikeda, Y. Oikawa, T. Onuma, H. Niwa, and M. Yoshii, “Interferometric imaging of acoustical phenomena using high-speed polarization camera and 4-step parallel phase-shifting technique,” Proc. SPIE 10328, 103280I (2017).
44. Y. Oikawa, K. Yatabe, K. Ishikawa, and Y. Ikeda, “Optical sound field measurement and imaging using laser and high-speed camera,” Proc. 45th Int. Congr. Noise Control Eng. (INTER-NOISE 2016) pp. 258–266 (2016).
45. K. Ishikawa, K. Yatabe, Y. Ikeda, Y. Oikawa, T. Onuma, H. Niwa, and M. Yoshii, “Optical sensing of sound fields: Non-contact, quantitative, and single-shot imaging of sound using high-speed polarization camera,” Proc. Meet. Acoust. (POMA) 29, 030005 (2016). [CrossRef]
46. D. Malacara, M. Servín, and Z. Malacara, Interferogram Analysis For Optical Testing (CRC, 2005), 2nd ed. [CrossRef]
47. K. Yatabe, K. Ishikawa, and Y. Oikawa, “Compensation of fringe distortion for phase-shifting three-dimensional shape measurement by inverse map estimation,” Appl. Opt. 55, 6017–6024 (2016). [CrossRef] [PubMed]
49. N. Chitanont, K. Yatabe, K. Ishikawa, and Y. Oikawa, “Spatio-temporal filter bank for visualizing audible sound field by Schlieren method,” Appl. Acoust. 115, 109–120 (2017). [CrossRef]
50. K. Yatabe, K. Ishikawa, and Y. Oikawa, “Acousto-optic back-projection: Physical-model-based sound field reconstruction from optical projections,” J. Sound Vib. 394, 171–184 (2017). [CrossRef]
51. Y. Koyano, K. Yatabe, and Y. Oikawa, “Infinite-dimensional SVD for revealing microphone array’s characteristics,” Appl. Acoust. 129, 116–125 (2018). [CrossRef]