## Abstract

In most phase unwrapping algorithms, the image reconstruction results are obtained by shifting the phase jumps in the wrapped phase map by 2π. The performance of such algorithms is degraded by the presence of speckle noise, residual noise, noise at the height discontinuities, and holes in the wrapped phase map. Thus, a filtering operation is performed prior to the unwrapping process in order to remove the noise. However, the filtering process smears the phase jumps in the wrapped phase map and therefore causes a phase shifting error during the reconstruction process. Moreover, the noise errors, hole errors and shifting errors accumulate path-by-path during unwrapping. Accordingly, the present study proposes a new rotation algorithm for phase unwrapping applications which resolves the noise errors, hole errors and shifting errors. Existing phase unwrapping algorithms are designed either to operate only on those pixels whose phase values are free of noise and holes, or to route their three-dimensional unwrapping paths in the row and column directions so as to avoid the noise or holes. By contrast, the rotation algorithm proposed in this study operates on all the pixels in the wrapped phase map, including those affected by noise or holes. As a result, the noise errors and hole errors produced in existing 2π phase shifting unwrapping algorithms are eliminated. Furthermore, since in the proposed approach the wrapped phase map is not filtered prior to the unwrapping process, the phase shifting errors induced in existing algorithms are also eliminated. The robustness of the proposed algorithm to various noise errors, hole errors and phase shifting errors is demonstrated both numerically and experimentally.

© 2012 OSA

## 1. Introduction

In optical interferometry, the quality of the 2D or 3D reconstruction results is seriously degraded by the presence of noise in the wrapped phase map. In general, this noise includes speckle noise [1–3], residual noise [4] and noise at the height discontinuities of the imaged object [5–9]. Speckle noise is usually produced by the laser source used for illumination purposes, while residual noise is caused by a contamination of either the sample or the optical system. Finally, noise at the lateral surfaces arises due to the inherent diffraction and depth-of-field limitations of the optical system [10–12]. Speckle noise is generally removed using digital methods [10–13] or by using a white light source or spatial filter in the experimental configuration [1,2]. Meanwhile, the effects of residual noise and noise at the height discontinuities are resolved by designing the phase unwrapping algorithm to avoid noisy pixels in the wrapped phase map [14,15] or by using a digital filtering scheme [10–12,16].

Generally speaking, existing phase unwrapping algorithms can be classified as temporal [14,17,18], spatial [15,19–26], or period-coding [27]. Spatial phase unwrapping algorithms can be further classified as path-dependent [20], path-independent [21–23], or miscellaneous [15,19,24–31]. In every case, the unwrapping process involves shifting the phase jumps in the wrapped phase map by 2π. First, MACY [20], a well-known path-dependent algorithm, has a fast runtime. It applies a 2π phase shift whenever the difference between two neighboring pixels is greater than π. Consequently, if the noise is greater than π, MACY treats the noise as a phase jump, and the unwrapping result fails. Second, the cellular automata (CA) algorithm [21–23], a path-independent algorithm, yields a better unwrapping performance than MACY since it routes its 3D unwrapping paths in the row and column directions so as to avoid the noisy pixels in the wrapped phase map. However, CA is time-consuming and may fail to converge if the wrapped phase map includes noise greater than π [11,12,24]. Finally, miscellaneous algorithms continue to be proposed in order to resolve one of the three types of noise (i.e., speckle noise, residual noise, and noise at the height discontinuities) [15,24–31]. As with CA, the robust spinning algorithm [15] also routes suitable unwrapping paths in the row and column directions to avoid the noisy pixels (i.e., residual noise) in the wrapped phase map. Compared to the CA algorithm, the spinning algorithm achieves an improved unwrapping performance since it selects correct, noise-free paths both faster and more reliably. Branch cutting is a common method for handling noise in phase unwrapping [25,26]; the inconsistent pixels marked by the branch cuts can then be avoided by the corresponding phase unwrapping algorithms.
Nevertheless, high speckle noise or high residual noise greater than π still easily invalidates the branch cuts [27], and noise at the height discontinuities also easily causes the branch cuts to fail [28]. Accordingly, in order to reduce the influence of high speckle noise, phase unwrapping and linear-filtering algorithms have been performed simultaneously [27,29–31]. Similarly, noise at the height discontinuities is addressed by the method of minimum weighted discontinuity, which routes paths in the row and column directions to find the discontinuities with the minimum weighted sum [26]. Furthermore, speckle noise and noise at the height discontinuities are treated simultaneously in [24], which finds the correct phase jumps located within the interval of the calculated wrap regions.

Unfortunately, the algorithms described above [15,24,26] can only be applied to 3D wrapped phase maps. That is, in a 3D wrapped phase map, each coordinate comprises row, column and phase information, and thus the unwrapping algorithm can avoid noise in either the row direction or the column direction. Importantly, the unwrapping algorithm does not work effectively unless it finds correct, noise-free paths in the row and column directions simultaneously. By contrast, in a 2D wrapped phase map, each coordinate comprises only row and phase information, and the unwrapping algorithm can therefore use only a row unwrapping path. Thus, if this path contains noisy pixels, a reconstruction error inevitably occurs.

The performance of existing phase unwrapping algorithms is commonly improved by using a filter to remove the noise from the wrapped phase map prior to unwrapping. Filtering algorithms can be broadly classified as either linear [13] or nonlinear [10–12,16]. Unfortunately, both types of algorithm smear the phase jumps in the wrapped phase map, and therefore result in a phase shifting error during the unwrapping process. Generally speaking, the smearing effect induced by linear filters is worse than that induced by nonlinear filters. For both types of filter, the smearing effect is reduced if the noise is uniformly distributed in the wrapped phase map (i.e., as in the case of speckle noise) [13,16]. However, if the noise is localized only in certain regions of the wrapped phase map (e.g., as in the case of residual noise or noise at the height discontinuities), a significant smearing effect is induced. As a result, a requirement exists for smart nonlinear filters capable of removing such noise without an excessive smearing of the phase jumps in the wrapped phase map [10–12].

In addition to speckle noise, residual noise and noise at the height discontinuities of the scanned object, the wrapped phase map may also include “holes”, i.e., regions of the phase map in which the pixels have a very low value. In practice, these holes correspond to concave features on the surface of the scanned object, e.g., holes, dips, grooves, and so on. As with serious noise greater than π, hole depths greater than π also cause most existing phase unwrapping algorithms to fail. As a result, in most existing phase unwrapping algorithms based on 2π shifts of the phase jumps, the reconstruction results are highly sensitive to the effects of noise, phase shifting errors and holes. In addition, although a filtering algorithm can effectively remove the noise, it also destroys the real data in the holes.

Accordingly, this study proposes a novel rotation unwrapping algorithm which operates not only on those pixels with correct phase values, but also on those pixels whose phase values are corrupted by noise or hole errors. In other words, in contrast to existing methods, a filtering operation is not performed prior to the unwrapping process. Rather, the effects of noise and hole errors are eliminated by optionally applying a filter to the reconstruction results after the rotation algorithm has terminated. Importantly, the proposed rotation algorithm is not based on shifting 2π phase jumps. As a result, the phase shifting error induced in existing unwrapping methods is also eliminated. Before the proposed rotation algorithm is performed, the phase jump pixels are detected by the detection scheme of [10], and the remaining, missed phase jump pixels are then detected by an additional phase jump detection scheme. The proposed rotation algorithm comprises two parts: a relative rotation procedure and an absolute rotation procedure. The relative rotation procedure performs 2D reconstruction and is introduced in this study; the absolute rotation procedure, which performs 3D reconstruction, will be introduced in future work. The remainder of this paper is organized as follows. Section 2 introduces the basic principles of the proposed rotation algorithm. Section 3 describes the robustness of the proposed algorithm toward noise errors, hole errors and shifting errors. Sections 4 and 5 present the simulation and experimental results, respectively. Finally, Section 6 presents some brief concluding remarks.

## 2. Principles of proposed rotation algorithm

As described in the Introduction, the noise and phase jumps in the wrapped phase map are detected before the rotation unwrapping process using the scheme proposed in [10].

### 2.1 Two detection schemes for phase jumps

#### 2.1.1 Noise and phase jump detection scheme

In accordance with the method proposed in [10], the noise and phase jump positions in the wrapped phase map are detected by means of four parameters, S1, S2, S3 and S4, defined as

where $\varphi$ is the corresponding phase value in the phase map; $[\,\cdot\,]$ indicates a rounding operation; and ${\sigma}_{{\rm A}}$ is the noise and phase jump detection threshold parameter, which is in the range $0<{\sigma}_{{\rm A}}<\pi$. It is noted that Eq. (1) locates the noise and phase jump positions in the 3D coordinate system, composed of the row pixel, the column pixel, and the phase. Meanwhile, in the 2D coordinate system, the column pixel $j+1$ is the same as the column pixel $j$ in Eq. (1). For simplicity, a column pixel equal to 1 is chosen as the 2D case, and Eq. (1) can thus be expressed as

where ${\sigma}_{{\rm A}}$ is still in the range $0<{\sigma}_{{\rm A}}<\pi$. In [10], $\pi -{\sigma}_{{\rm A}}$ is defined as the absolute-maximum phase difference in the whole phase map. Meanwhile, $\left|PD\right|$ is the absolute phase difference between any two neighboring pixels, i.e., $\varphi (i+1,j)-\varphi (i,j)$, $\varphi (i+1,j+1)-\varphi (i+1,j)$, $\varphi (i,j+1)-\varphi (i+1,j+1)$, and $\varphi (i,j)-\varphi (i,j+1)$. By comparing the magnitudes of $\left|PD\right|$ and $\pi -{\sigma}_{{\rm A}}$, all pixels can be classified as bad pixels (i.e., noise), good pixels, or phase jump pixels, i.e.,

- (1) Good pixel: the current pixel is defined as a “good” pixel if all of S1-S4 are equal to zero.
- (2) Bad pixel: the current pixel is defined as a “bad” (i.e., noisy) pixel if the sum of S1-S4 is not equal to zero.
- (3) Phase jump pixel: the current pixel is defined as a “phase jump” pixel if one or more of S1-S4 is not equal to zero, but the sum of S1-S4 is equal to zero.

Provided that ${\sigma}_{{\rm A}}$ is properly chosen, both the phase jump positions and the noise positions are correctly detected.
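The three-way classification rule above can be sketched in a few lines of Python. This is a minimal illustration of the stated rule only; the computation of S1-S4 themselves from Eq. (1) is not reproduced here, and the function name is ours:

```python
def classify_pixel(s1, s2, s3, s4):
    """Classify a pixel from the four detection parameters S1-S4.

    Rule (Sec. 2.1.1): all four terms zero -> good pixel; a nonzero
    sum around the 2x2 loop -> bad (noisy) pixel; some terms nonzero
    but a zero sum -> genuine 2*pi phase jump pixel.
    """
    s = (s1, s2, s3, s4)
    if all(v == 0 for v in s):
        return "good"
    if sum(s) != 0:
        return "bad"        # inconsistent loop: noise
    return "phase jump"     # consistent nonzero loop: 2*pi jump
```

The three branches are exhaustive, so every pixel in the wrapped phase map receives exactly one of the three labels.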

#### 2.1.2 Additional phase jump detection scheme for the proposed rotation algorithm

One phase jump should contain one 2π-high-position pixel and one 0-low-position pixel. Unfortunately, only one of the high- or low-position pixels in a phase jump is detected by Eq. (1) or Eq. (2). In this study, the other, missed pixel of the phase jump is obtained from the first term of $S1(i,j)$ in Eq. (1) or the first term of $S1(i,1)$ in Eq. (2). It is noted that if the 2π-high-position pixel $\varphi (i,j)$ is detected by Eq. (1), the corresponding 0-low-position pixel $\varphi (i+1,j)$ yields $[\frac{\varphi (i+1,j)-\varphi (i,j)-{\sigma}_{{\rm A}}}{2\pi}]=-1$. By contrast, if the 0-low-position pixel $\varphi (i,j)$ is detected by Eq. (1), the corresponding 2π-high-position pixel $\varphi (i+1,j)$ yields $[\frac{\varphi (i+1,j)-\varphi (i,j)-{\sigma}_{{\rm A}}}{2\pi}]=1$. Similarly, for the 2π-high-position pixel $\varphi (i,1)$ and the corresponding 0-low-position pixel $\varphi (i+1,1)$, $[\frac{\varphi (i+1,1)-\varphi (i,1)-{\sigma}_{{\rm A}}}{2\pi}]=-1$ is obtained; and for the 0-low-position pixel $\varphi (i,1)$ and the corresponding 2π-high-position pixel $\varphi (i+1,1)$, $[\frac{\varphi (i+1,1)-\varphi (i,1)-{\sigma}_{{\rm A}}}{2\pi}]=1$ is obtained. In other words, Eq. (3) and Eq. (4) below identify the corresponding phase jump pixels (i.e., 2π-high-position or 0-low-position) using the first term of $S1(i,j)$ in Eq. (1) in the 3D coordinate system and the first term of $S1(i,1)$ in Eq. (2) in the 2D coordinate system, respectively.

For the 3D coordinate system:

$$\left[\frac{\varphi (i+1,j)-\varphi (i,j)-{\sigma}_{{\rm A}}}{2\pi}\right]=\begin{cases}-1, & \varphi (i,j)\ \text{is the }2\pi\text{-high-position pixel}\\ +1, & \varphi (i,j)\ \text{is the 0-low-position pixel}\end{cases} \qquad (3)$$

and for the 2D coordinate system:

$$\left[\frac{\varphi (i+1,1)-\varphi (i,1)-{\sigma}_{{\rm A}}}{2\pi}\right]=\begin{cases}-1, & \varphi (i,1)\ \text{is the }2\pi\text{-high-position pixel}\\ +1, & \varphi (i,1)\ \text{is the 0-low-position pixel}\end{cases} \qquad (4)$$

where $[\,\cdot\,]$ is again the rounding operation and ${\sigma}_{{\rm A}}$ is the detection threshold. Therefore, the additional phase jump detection scheme introduced in this study can detect the other, otherwise-missed pixel in each phase jump, which improves upon the detection scheme in [10].
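The rounding test underlying this detection scheme can be sketched as follows, using the worked values of Section 4.1 as a check. The helper name and the convention of returning 0 when no partner is detected are our assumptions:

```python
import math

def jump_partner(phi_i, phi_next, sigma_a):
    """First term of S1: locate the missed partner pixel of a jump.

    Returns -1 when phi_i is the 2*pi-high pixel (phi_next is then
    the 0-low partner), +1 in the opposite case, and 0 when the
    rounded term detects no jump between the two neighbors.
    """
    return round((phi_next - phi_i - sigma_a) / (2 * math.pi))

# Values from Sec. 4.1.1 with sigma_a = 2.8:
# phi(80,1) = 6.258 (2*pi-high), phi(81,1) = 0.0643 (0-low)
```

With these values the function returns −1 in the forward direction and +1 in the reverse direction, matching Eqs. (3) and (4).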

### 2.2 Relative rotation scheme

Figure 1 shows a typical 2D wrapped phase map (i.e., row vs. phase). As shown, the map contains two 2π phase jumps. Pixels D2 and D4 are located at the highest extremities of the 2π phase jumps, while pixels D1 and D3 are located at the lowest extremities. Phase 1 comprises D2 together with all of the phase values, noise and holes to the left of D2, while Phase 2 extends fully from D3 to D4. C1 and C2 denote the proposed evaluating centers of Phase 1 and Phase 2, respectively, and will be introduced later. It is seen that a height discontinuity exists between C1 and D2. Finally, the red triangles represent noise or holes. (Note that in Fig. 1, the row pixel coordinates are integers.)

The relative rotation scheme in the proposed unwrapping algorithm comprises three steps, namely a first relative rotation procedure, a second relative rotation procedure, and a stitching procedure. The first relative rotation procedure rotates all of the phase values (including those affected by noise and holes) in Phase 1 and Phase 2 separately. In the second relative rotation procedure, all of the values in Phase 1 are rotated while those in Phase 2 are kept unchanged. Finally, in the third step, the Phase 1 and Phase 2 obtained from the second relative rotation procedure are stitched together in order to produce the final 2D reconstruction results.

#### 2.2.1 First relative rotation procedure

As described above, the phase values, noise and holes associated with each pixel from D1 to D2 (or from D3 to D4) are used to determine the center position in the row direction and the corresponding phase center value in accordance with Eq. (5) and Eq. (6), respectively. That is,

The basic steps in the first relative rotation procedure are as follows:

- i. **Determine the two center pixels C1 and C2**: The positions and phase values of pixels C1 and C2, corresponding to the center positions of Phase 1 and Phase 2, respectively, are determined using Eq. (5) and Eq. (6), and are expressed as $C1(row\_center1,phase\_center1)$ and $C2(row\_center2,phase\_center2)$ (row, phase).
- ii. **Determine vectors** $\overrightarrow{{D}_{1}{D}_{2}}$, $\overrightarrow{{D}_{3}{D}_{4}}$ and $\overrightarrow{{C}_{1}{C}_{2}}$: Pixels D1, D2, D3, D4, C1 and C2 are all known. Thus, the three vectors $\overrightarrow{{D}_{1}{D}_{2}}$, $\overrightarrow{{D}_{3}{D}_{4}}$ and $\overrightarrow{{C}_{1}{C}_{2}}$ can be directly obtained.
- iii. **Determine the angle** ${\theta}_{fix}$ **between vectors** $\overrightarrow{{D}_{1}{D}_{2}}$ and $\overrightarrow{{D}_{3}{D}_{4}}$: As shown in Fig. 1, vectors $\overrightarrow{{D}_{1}{D}_{2}}$ and $\overrightarrow{{D}_{3}{D}_{4}}$ have an angle ${\theta}_{fix}$ between them before the first relative rotation operation is performed. When the first and second relative rotation procedures have both been completed, the value of ${\theta}_{fix}$ is unchanged (see the related discussions in Section 3 and the subsection 4.5). The value of ${\theta}_{fix}$ is recorded in the first relative rotation operation, and is then used in the second relative rotation operation.
- iv. **Determine** ${\theta}_{1}$ from $\overrightarrow{{D}_{1}{D}_{2}}$ and $\overrightarrow{{C}_{1}{C}_{2}}$, and ${\theta}_{2}$ from $\overrightarrow{{D}_{3}{D}_{4}}$ and $\overrightarrow{{C}_{1}{C}_{2}}$: The angles ${\theta}_{1}$ and ${\theta}_{2}$ between vectors $\overrightarrow{{D}_{1}{D}_{2}}$ and $\overrightarrow{{C}_{1}{C}_{2}}$ and between vectors $\overrightarrow{{D}_{3}{D}_{4}}$ and $\overrightarrow{{C}_{1}{C}_{2}}$, respectively, are determined.
- v. **Rotate Phase 2 through angle** ${\theta}_{2}$ **and then rotate Phase 1 through angle** ${\theta}_{1}$: Phase 2 is rotated through an angle ${\theta}_{2}$ about center pixel C2 in a clockwise direction (i.e., toward vector $\overrightarrow{{C}_{1}{C}_{2}}$). Similarly, Phase 1 is rotated through an angle ${\theta}_{1}$ about center pixel C1 toward vector $\overrightarrow{{C}_{1}{C}_{2}}$. (Note that the clockwise direction is defined as positive rotation, and is denoted as ${\theta}_{1}{}^{+}$ and ${\theta}_{2}{}^{+}$ in Fig. 1.)
- vi. **Complete the first relative rotation procedure**: The first relative rotation procedure terminates once Phase 1 and Phase 2 have been rotated through angles ${\theta}_{1}$ and ${\theta}_{2}$ toward vector $\overrightarrow{{C}_{1}{C}_{2}}$, respectively. Following rotation through ${\theta}_{2}$, the position of Phase 2 shifts from pixels D3 and D4 (see Fig. 1) to new pixels D3′ and D4′ (see the blue line in Fig. 2). Similarly, following rotation through ${\theta}_{1}$, Phase 1 shifts from pixels D1 and D2 to new pixels D1′ and D2′ (see the red line in Fig. 2).
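Steps (ii)–(v) reduce to elementary plane geometry: signed angles between vectors and rotation of points about a center. The Python sketch below illustrates these two operations under the paper's sign convention (clockwise positive); the helper names are ours, and Eqs. (5)–(6) for the centers are not reproduced:

```python
import math

def angle_between(v1, v2):
    """Signed angle from v1 to v2, with clockwise taken as positive
    (the paper's convention in Step (v))."""
    cross = v1[0] * v2[1] - v1[1] * v2[0]
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return -math.atan2(cross, dot)  # negate: clockwise -> positive

def rotate_about(point, center, theta):
    """Rotate a (row, phase) point clockwise by theta about center."""
    x, y = point[0] - center[0], point[1] - center[1]
    c, s = math.cos(theta), math.sin(theta)
    # clockwise rotation: (x, y) -> (c*x + s*y, -s*x + c*y)
    return (center[0] + c * x + s * y, center[1] - s * x + c * y)
```

Step (v) then amounts to applying `rotate_about(p, c2, theta2)` to every pixel `p` of Phase 2, and likewise rotating every pixel of Phase 1 about C1 through `theta1`.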

#### 2.2.2 Second relative rotation procedure

A second relative rotation procedure is performed to further rotate Phase 1 so that the angle between Phase 1 and Phase 2 corresponds to the known, correct angle ${\theta}_{fix}$.

The basic steps in the second relative rotation procedure are as follows:

- vii. **Record the angle** $\theta {\text{'}}_{fix}$ **between the two vectors** $\overrightarrow{{D}_{1}\text{'}{D}_{2}\text{'}}$ and $\overrightarrow{{D}_{3}\text{'}{D}_{4}\text{'}}$: In Fig. 2, the positions of pixels D1′, D2′, D3′ and D4′ are all known following the first relative rotation operation. The angle $\theta {\text{'}}_{fix}$ between the vectors $\overrightarrow{{D}_{1}\text{'}{D}_{2}\text{'}}$ and $\overrightarrow{{D}_{3}\text{'}{D}_{4}\text{'}}$ can thus be directly determined. Note that in Figs. 1 and 2, the positions of C1 and C2 are unchanged since pixels C1 and C2 serve as the centers of rotation for Phase 1 and Phase 2, respectively, in the first relative rotation operation. As a result, vector $\overrightarrow{{C}_{1}{C}_{2}}$ is also unchanged following the first rotation procedure.
- viii. **Rotate Phase 1 through angle** $\theta \text{'}{\text{'}}_{fix}$ **about fixed rotating center C1**: The angle ${\theta}_{fix}$ is already known from Step (iii). It remains to determine the angle $\theta \text{'}{\text{'}}_{fix}$ through which Phase 1 must be rotated about rotating center C1. As described in the following, the value of $\theta \text{'}{\text{'}}_{fix}$ can be determined directly from the known angles ${\theta}_{fix}$ and $\theta {\text{'}}_{fix}$ since all three angles are referred to the unchanged vector $\overrightarrow{{C}_{1}{C}_{2}}$. Specifically, if the tangent function of the unit vector of $\overrightarrow{{D}_{1}{D}_{2}}$ is greater than that of the unit vector of $\overrightarrow{{D}_{3}{D}_{4}}$ at D2′, then $\theta \text{'}{\text{'}}_{fix}$ is obtained as $\theta \text{'}{\text{'}}_{fix}=\theta {\text{'}}_{fix}-{\theta}_{fix}$. Conversely, if the tangent function of the unit vector of $\overrightarrow{{D}_{1}{D}_{2}}$ is less than or equal to that of the unit vector of $\overrightarrow{{D}_{3}{D}_{4}}$ at D2′, then $\theta \text{'}{\text{'}}_{fix}={\theta}_{fix}-\theta {\text{'}}_{fix}$. Having determined the value of $\theta \text{'}{\text{'}}_{fix}$, Phase 1 is rotated about pixel C1 through the angle $\theta \text{'}{\text{'}}_{fix}$, while Phase 2 is unchanged. Following the rotation process, the position of Phase 1 moves from pixels D1′ to D2′ (see Fig. 2) to new pixels D1″ to D2″, as shown in Fig. 3.
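The sign rule of Step (viii) can be expressed compactly. The function name and argument order are ours; `tan1` and `tan2` stand for the tangent functions (slopes) of the unit vectors of $\overrightarrow{{D}_{1}{D}_{2}}$ and $\overrightarrow{{D}_{3}{D}_{4}}$:

```python
def second_rotation_angle(theta_fix, theta_fix_p, tan1, tan2):
    """Angle for the second relative rotation of Phase 1 about C1.

    Step (viii): if the slope of D1D2 exceeds that of D3D4 at D2',
    theta''_fix = theta'_fix - theta_fix; otherwise the difference
    is taken the other way round.
    """
    if tan1 > tan2:
        return theta_fix_p - theta_fix
    return theta_fix - theta_fix_p
```

With the worked values of Section 4.1 (theta_fix = 0.68°, theta'_fix = 0°, slopes 0.0782 < 0.0902), the rule gives the +0.68° rotation reported there.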

#### 2.2.3 Stitching process

Following the second relative rotation procedure, rotation results are obtained for both Phase 1 and Phase 2. A stitching process is then performed to stitch the two phase lines together. The basic steps in the stitching process are as follows:

- ix. **Shift Phase 1 in the y-direction and stitch to Phase 2**: As shown in Fig. 3, Phase 1, extending from D1″ to D2″ (marked by the red line), is shifted as a whole in the y-direction in order to stitch it to pixel D3′. Finally, the unwrapped phase composed of Phase 1 and Phase 2 is obtained. Note that the vector $\overrightarrow{{D}_{1}\text{'}\text{'}{D}_{2}\text{'}\text{'}}$ is unchanged before and after Phase 1 is shifted in the y-direction. Therefore, the angle $\theta \text{'}{\text{'}}_{fix}$ between vectors $\overrightarrow{{D}_{1}\text{'}\text{'}{D}_{2}\text{'}\text{'}}$ and $\overrightarrow{{D}_{3}\text{'}{D}_{4}\text{'}}$ is also the angle in the unwrapped phase composed of Phase 1 and Phase 2, whereas the angle ${\theta}_{fix}$ is the angle between vectors $\overrightarrow{{D}_{1}{D}_{2}}$ and $\overrightarrow{{D}_{3}{D}_{4}}$ in the wrapped phase. The values of the angles $\theta \text{'}{\text{'}}_{fix}$ and ${\theta}_{fix}$ are discussed in the subsection 4.5.
- x. **Finish the stitching process and start a new first relative rotation procedure**: Figure 4 shows the stitching result obtained in the previous step for Phase 1 and Phase 2 (see the blue line). Note that the blue line can be regarded as the 2D unwrapping result. According to the definition of Phase 1 provided in the subsection 2.2.1, the blue line can also be regarded as a new Phase 1. Thus, the first relative rotation procedure can be repeated using the new Phase 1 and a new Phase 2 extending from new pixels D3 to D4. Note that in the new Phase 1, pixels D1 and D2 are taken from pixels D3′ and D4′, respectively. Furthermore, the position and phase value of the new center position, C1, are computed from Eq. (5) and Eq. (6), respectively, based on these new values of D1 and D2.
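The y-shift of Step (ix) is simply a constant offset applied to every Phase 1 pixel. A minimal sketch under our reading that Phase 1's end pixel is brought onto pixel D3′; the function name and the numbers in the check are illustrative:

```python
def stitch_phase1(phase1, end_phase, d3p_phase):
    """Shift every (row, phase) pixel of Phase 1 by a constant
    y-offset so that its end pixel (phase value end_phase) lands
    on Phase 2's pixel D3' (phase value d3p_phase)."""
    offset = d3p_phase - end_phase
    return [(row, phase + offset) for row, phase in phase1]
```

Because the shift is the same for every pixel, the direction vector of Phase 1, and hence the stitched angle $\theta \text{'}{\text{'}}_{fix}$, is unchanged, as noted in Step (ix).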

## 3. Effect of proposed rotation algorithm on noise errors, hole errors and phase shifting errors

Figures 5(a)–5(c) show three typical problems encountered when performing phase unwrapping. Note that the yellow and green triangles represent noise and holes in the wrapped phase map, respectively. Note also that D2 and D3 denote the 2π-high and 0-low positions of the phase jump, respectively. Figure 5(a) shows the situation in which the noise and holes are located outside of the phase jump region. Figure 5(b) shows the case where the noise and holes straddle the two phase jump positions. Finally, Fig. 5(c) shows the situation where the phase jump contains a shifting error. In typical 2π phase shifting algorithms, all three types of error are accumulated pixel-by-pixel along the unwrapping path, and thus the quality of the unwrapping results is seriously degraded.

### 3.1 Characteristic of angle ${\theta}_{fix}$ in the presence of the above errors

As shown in Figs. 5(a), 5(b) and 5(c), the vector $\overrightarrow{{D}_{1}{D}_{2}}$ obtained from pixels D1 and D2 is indicated by the solid line, while the vector $\overrightarrow{{D}_{3}{D}_{4}}$ obtained from pixels D3 and D4 is indicated by the dashed line. Furthermore, the large arrows indicate the unwrapping path used in a typical 2π phase shifting algorithm. In Figs. 5(a) and 5(b), the arrow encounters the yellow triangles (i.e., noisy pixels), and thus the phase unwrapping process fails. However, under the hypothetical assumption that the unwrapping algorithm does not fail, the unwrapped phase in the two cases is marked by the red line. Similarly, in Fig. 5(c), the arrow passes through a shifting error which exists at D3, and thus the unwrapped phase also contains a shifting error, since 2π phase shifting algorithms are incapable of removing this error during the unwrapping process (see the subsection 4.3). It is noted that the unwrapping process preserves the orientation of vector $\overrightarrow{{D}_{3}{D}_{4}}$. In other words, the direction of vector $\overrightarrow{{D}_{3}{D}_{4}}$ after the unwrapping process (brown-solid arrow) is parallel to that of vector $\overrightarrow{{D}_{3}{D}_{4}}$ prior to the unwrapping process (brown-dashed arrow). In Figs. 5(a), 5(b) and 5(c), vectors $\overrightarrow{{D}_{1}{D}_{2}}$ and $\overrightarrow{{D}_{3}{D}_{4}}$ are separated by an angle ${\theta}_{fix}$. Importantly, the vectors can be shifted to any position in the phase map without changing their direction in space. In other words, the yellow and green triangles (i.e., the noise or holes) do not change the directions of vectors $\overrightarrow{{D}_{1}{D}_{2}}$ and $\overrightarrow{{D}_{3}{D}_{4}}$ in each figure. As a result, the angle ${\theta}_{fix}$ remains the same before and after phase unwrapping in all three figures.
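This invariance can be checked numerically: a vector is defined by the difference of its endpoints, so rigidly shifting a segment (as a 2π phase shift does) leaves its direction, and hence ${\theta}_{fix}$, unchanged. A quick Python check using the pixel values of Section 4.1 (helper names are ours):

```python
import math

def seg_vec(p1, p2):
    """Direction vector of the segment from p1 to p2 (row, phase)."""
    return (p2[0] - p1[0], p2[1] - p1[1])

def angle(v1, v2):
    """Unsigned angle between two vectors, in radians."""
    cross = v1[0] * v2[1] - v1[1] * v2[0]
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return abs(math.atan2(cross, dot))

# Endpoints of the two phase segments (Sec. 4.1): D1-D2 and D3-D4
d1, d2 = (1.0, 0.09), (80.0, 6.258)
d3, d4 = (81.0, 0.0643), (150.0, 6.274)
before = angle(seg_vec(d1, d2), seg_vec(d3, d4))

# Rigidly shift D3-D4 upward by 2*pi, as a 2*pi phase shift would:
# both endpoints move equally, so the direction vector is unchanged.
d3s, d4s = (d3[0], d3[1] + 2 * math.pi), (d4[0], d4[1] + 2 * math.pi)
after = angle(seg_vec(d1, d2), seg_vec(d3s, d4s))
```

The angle before the shift already equals the ${\theta}_{fix}$ of about 0.68° reported in Section 4.1, and it is numerically identical after the shift.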

### 3.2 Advantage of vector $\overrightarrow{{C}_{1}{C}_{2}}$ in the proposed rotation algorithm

As discussed above, the reconstruction results obtained using unwrapping algorithms based on shifting 2π phase jumps invariably contain phase shifting errors. However, as described in the following, this problem is successfully resolved by the rotation algorithm proposed in this study. Recall that in Fig. 1, vector *$\overrightarrow{{C}_{1}{C}_{2}}$* is the central vector of Phase 1 and Phase 2. Importantly, this vector is not directly influenced by the shifting error since the evaluation of C1 and C2 in Eq. (5) and Eq. (6) does not include the shifting error. In the first relative rotation procedure, Phase 1 and Phase 2 are rotated in the counterclockwise or clockwise directions toward vector $\overrightarrow{{C}_{1}{C}_{2}}$ and are then stitched together to form a single line with no shifting error. Importantly, C1 and C2 are recalculated at the beginning of each relative rotation cycle. Therefore, vector *$\overrightarrow{{C}_{1}{C}_{2}}$* reduces the shifting error more effectively than traditional algorithms based on shifting 2π phase jumps.

## 4. Simulation results

The unwrapping performance of the proposed rotation algorithm was evaluated by performing a series of MATLAB simulations on a PC equipped with an AMD Athlon 64 X2 Dual Core Processor 4400+ (2.31 GHz) and 2 GB of RAM.

### 4.1 2D non-noisy linear case (demonstrating the total process of Section 2 and presenting the case of Fig. 5(a))

#### 4.1.1 Finding all of the phase jumps

The raw wrapped phase was produced by a self-written MATLAB program applying the Hariharan algorithm [32–34]; it contained one shallow hole of depth less than π, one deep hole of depth greater than π, and two height discontinuities, as illustrated in Fig. 6(a). Note that this subsection not only demonstrates the total process of Section 2, but also presents the case of Fig. 5(a), in which the wrapped phase contains holes which do not straddle a phase jump.

First, the detection scheme of Eq. (2) with ${\sigma}_{{\rm A}}=2.8$ detects the phase jump positions (see the subsection 2.1.1) in Fig. 6(a); the results are marked by two green symbols. For example, the first green symbol is located at pixel row 80 ($i=80$), and the corresponding phase values are $\varphi (i,1)=\varphi (80,1)=6.258$ and $\varphi (i+1,1)=\varphi (81,1)=0.0643$. Substituting $\varphi (80,1)=6.258$ and $\varphi (81,1)=0.0643$ into Eq. (2) gives

#### 4.1.2 Performing the relative rotation scheme

The first relative rotation procedure, Steps (i) to (vi) of the subsection 2.2.1, is performed first. Phase 1 (from D1 to D2) and Phase 2 (from D3 to D4) can be obtained since Fig. 6(b) gives the known pixels D1 (1, 0.09), D2 (80, 6.258), D3 (81, 0.0643), and D4 (150, 6.274) (row, phase), as shown in Fig. 7(a). In Step (i) (see the subsection 2.2.1), center pixels C1 and C2 are determined via Eqs. (5) and (6); the calculation yields C1 (53.725, 4.8352) and C2 (127.0933, 4.2127) (row, phase), marked as two blue points in Fig. 7(a). In Step (ii), the vectors $\overrightarrow{{D}_{1}{D}_{2}}$, $\overrightarrow{{D}_{3}{D}_{4}}$ and $\overrightarrow{{C}_{1}{C}_{2}}$ are directly determined from the known positions D1, D2, D3, D4, C1, and C2. In Step (iii), the angle ${\theta}_{fix}$ between vectors $\overrightarrow{{D}_{1}{D}_{2}}$ and $\overrightarrow{{D}_{3}{D}_{4}}$ is determined; the calculated result is 0.68°. In Step (iv), the known vectors $\overrightarrow{{D}_{1}{D}_{2}}$ and $\overrightarrow{{C}_{1}{C}_{2}}$ yield the angle ${\theta}_{1}$ = 4.95°, and the known vectors $\overrightarrow{{D}_{3}{D}_{4}}$ and $\overrightarrow{{C}_{1}{C}_{2}}$ yield the angle ${\theta}_{2}$ = 5.63°. In Step (v), Phase 2 and Phase 1 are rotated through angles ${\theta}_{2}{}^{+}$ ( = +5.63°) and ${\theta}_{1}{}^{+}$ ( = +4.95°) about C2 and C1 toward vector $\overrightarrow{{C}_{1}{C}_{2}}$, respectively (note that the clockwise direction is positive). In Step (vi), the first relative rotation procedure terminates, yielding the black-dashed line (Phase 1) and black-solid line (Phase 2) shown in Fig. 7(a).
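These worked angles can be reproduced from the listed pixel coordinates alone. The sketch below (helper name ours) recovers ${\theta}_{fix}$, ${\theta}_{1}$ and ${\theta}_{2}$ to within rounding:

```python
import math

def ang_deg(v1, v2):
    """Unsigned angle between two vectors, in degrees."""
    cross = v1[0] * v2[1] - v1[1] * v2[0]
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return abs(math.degrees(math.atan2(cross, dot)))

# Direction vectors from the pixels given above (row, phase)
d1d2 = (80 - 1, 6.258 - 0.09)
d3d4 = (150 - 81, 6.274 - 0.0643)
c1c2 = (127.0933 - 53.725, 4.2127 - 4.8352)

theta_fix = ang_deg(d1d2, d3d4)  # ~0.68 degrees
theta_1 = ang_deg(d1d2, c1c2)    # ~4.95 degrees
theta_2 = ang_deg(d3d4, c1c2)    # ~5.63 degrees
```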

After the first relative rotation procedure is finished, the second relative rotation procedure starts; pixels D1′, D2′, D3′, and D4′ are now (0.7888, 5.284), (81.01, 3.673), (80.82, 4.605), and (150.1, 4.018) (row, phase), respectively. In Step (vii), the angle $\theta {\text{'}}_{fix}$ between the two known vectors $\overrightarrow{{D}_{1}\text{'}{D}_{2}\text{'}}$ and $\overrightarrow{{D}_{3}\text{'}{D}_{4}\text{'}}$ is found to be $\theta {\text{'}}_{fix}$ = 0°. Additionally, the tangent function of the unit vector of $\overrightarrow{{D}_{1}{D}_{2}}$ is equal to 0.0782 and that of $\overrightarrow{{D}_{3}{D}_{4}}$ is equal to 0.0902. According to Step (viii), since the tangent function of the unit vector of $\overrightarrow{{D}_{1}{D}_{2}}$ is less than that of $\overrightarrow{{D}_{3}{D}_{4}}$ at D2′, the corresponding equation is $\theta \text{'}{\text{'}}_{fix}={\theta}_{fix}-\theta {\text{'}}_{fix}$. Substituting ${\theta}_{fix}$ = 0.68° and $\theta {\text{'}}_{fix}$ = 0° gives $\theta \text{'}{\text{'}}_{fix}$ = 0.68° − 0° = +0.68°. Thus, as shown in Fig. 7(b), the black-dashed line is rotated clockwise through the angle $\theta \text{'}{\text{'}}_{fix}{}^{+}$ ( = +0.68°) about the fixed center C1, and the rotation result is marked as the green line.

Figure 8(a) presents the stitching results (Steps (ix) and (x)), in which the green line (i.e., Phase 1) in Fig. 7(b) is shifted in the y-direction to a new position (indicated by the red line) and stitched to Phase 2. The relative rotation cycle is then repeated using the red line and black-solid line as the new Phase 1 (from pixel row 1 through new D1 (80.81, 4.221) toward new D2 (150.1, 4.413)). As shown in Fig. 8(b), the two blue lines represent the new Phase 1 (as presented above) and the new Phase 2 (from new D3 (151, 0.0812) to new D4 (209, 6.244)), respectively. Similarly, in Step (i), Eq. (5) and Eq. (6) give the new center pixels C1 (114.6391, 4.3134) and C2 (189.9587, 4.522). From Step (ii) to Step (iv), the six known pixels D1, D2, D3, D4, C1, and C2 are used to determine $\overrightarrow{{D}_{1}{D}_{2}}$, $\overrightarrow{{D}_{3}{D}_{4}}$, $\overrightarrow{{C}_{1}{C}_{2}}$, ${\theta}_{fix}$, ${\theta}_{1}$, and ${\theta}_{2}$. The calculation results are ${\theta}_{fix}$ = 0.92°, ${\theta}_{1}$ = −0.64°, and ${\theta}_{2}$ = 5.91°, as shown in Fig. 8(b). In the following Step (v), since the rotations through ${\theta}_{1}$ and ${\theta}_{2}$ must be toward the vector $\overrightarrow{{C}_{1}{C}_{2}}$, the new Phase 2 is rotated through ${\theta}_{2}{}^{+}$ ( = +5.91°) in a clockwise direction and the new Phase 1 is rotated through ${\theta}_{1}{}^{-}$ ( = −0.64°) in a counter-clockwise direction, as shown in Fig. 8(b). After the first relative rotation procedure terminates, one black-dashed line and one black-solid line are presented in Fig. 8(b).

Subsequently, the second relative rotation procedure is performed. In Step (vii), the two known vectors $\overrightarrow{{D}_{1}\text{'}{D}_{2}\text{'}}$ and $\overrightarrow{{D}_{3}\text{'}{D}_{4}\text{'}}$ give $\theta {\text{'}}_{fix}$ = 0°. In Step (viii), the tangent of the unit vector of $\overrightarrow{{D}_{1}{D}_{2}}$ ( = 0.0902) is less than that of the unit vector of $\overrightarrow{{D}_{3}{D}_{4}}$ ( = 0.1066). Thus, substituting ${\theta}_{fix}$ = 0.92° and $\theta {\text{'}}_{fix}$ = 0° into $\theta \text{'}{\text{'}}_{fix}={\theta}_{fix}-\theta {\text{'}}_{fix}$ gives $\theta \text{'}{\text{'}}_{fix}$ = +0.92°. After the black-dashed line is rotated through $\theta \text{'}{\text{'}}_{fix}{}^{+}$ ( = 0.92°) in a clockwise direction, the green line shown in Fig. 9(a) is obtained. Finally, the stitching process is performed. Figure 9(b) shows the result of Step (ix), in which the green line is shifted in the y-direction to the position indicated by the red line and stitched to Phase 2. Note that Step (x) is not performed since the unwrapped phase has already been obtained.
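The stitching of Step (ix) shifts the rotated Phase 1 in the y-direction until it joins Phase 2 continuously. A simplified 1D sketch, assuming the shift is chosen so that the boundary samples of the two segments meet (the helper name is hypothetical):

```python
import numpy as np

def stitch(phase1, phase2):
    """Shift phase1 vertically so its last sample meets phase2's first sample,
    then concatenate the two segments into one unwrapped line."""
    offset = phase2[0] - phase1[-1]
    return np.concatenate([np.asarray(phase1) + offset, np.asarray(phase2)])
```

In the text the shift is applied to the whole green line at once, which this sketch mirrors by adding a single constant offset.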

Figures 10(a) and 10(b) present the phase unwrapping results obtained using the MACY algorithm and the CA algorithm, respectively. Note that in both figures, the blue line represents the raw wrapped phase, while the black line indicates the unwrapping result. It is seen that both unwrapping algorithms convert the concave-deep hole into a convex-shallow hole. By contrast, the proposed rotation algorithm retains both the shallow and the deep hole in the reconstruction results (see Fig. 9(b)). In other words, the proposed algorithm provides a more effective unwrapping solution for the case where the wrapped phase contains holes which do not straddle a phase jump (i.e., the condition shown in Fig. 5(a)). The runtimes of MACY and CA were found to be 0.9 seconds and 4.3 seconds, respectively, while the runtime of the proposed rotation algorithm was 1.5 seconds, i.e., between the two.

#### 4.2 2D noisy linear case (containing speckle noise, residual noise, and holes, and presenting the cases in Figs. 5(a) and 5(b))

The five original interferograms converted into the wrapped phase in Fig. 6(a) were reused in this subsection, with noise and holes added. These five interferograms were then merged to form a single raw wrapped phase (see Fig. 11(a), in which the noise and holes are labeled as "Shallow hole", "Deep hole", "Residual noise" and "Speckle noise (distributed over the whole phase)"). The speckle noise was random noise mixed with the interference pattern within the map and was generated using the MATLAB "imnoise" function with an intensity parameter setting of 0.3 (i.e., *"imnoise(each of five interferograms, 'speckle', 0.3)"*). In practical interferometry systems, interferograms contain residual noise due to environmental or experimental contamination. In the present simulations, this residual noise was reproduced using the MATLAB "imnoise" function with salt-and-pepper noise and an intensity parameter of 0.25, written as *"imnoise(each of five interferograms, 'salt & pepper', 0.25)"*. Since one residual noise feature was located outside the phase jump region and another straddled the phase jump region, this subsection presents the cases shown in Figs. 5(a) and 5(b).
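The two MATLAB `imnoise` calls above can be approximated in NumPy as follows. This is a sketch, not the paper's code: MATLAB's 'speckle' option adds uniformly distributed multiplicative noise with mean 0 and the given variance, which is reproduced here; the function names and the seed are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_speckle(img, variance=0.3):
    """Analogue of imnoise(I,'speckle',v): J = I + n*I, with n uniformly
    distributed, zero mean, variance v."""
    a = np.sqrt(3.0 * variance)            # uniform on [-a, a] has variance a^2 / 3
    n = rng.uniform(-a, a, img.shape)
    return img + img * n

def add_salt_pepper(img, density=0.25, lo=0.0, hi=1.0):
    """Analogue of imnoise(I,'salt & pepper',d): a fraction d of the pixels
    is replaced by lo ('pepper') or hi ('salt') with equal probability."""
    out = img.copy()
    mask = rng.random(img.shape) < density
    out[mask] = rng.choice([lo, hi], size=int(mask.sum()))
    return out
```

Applying both functions to each of the five simulated interferograms before phase extraction reproduces the noisy wrapped phase of Fig. 11(a) in spirit.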

As shown in Fig. 11(a), the green symbols indicate the phase jump pixels and the black symbols indicate the bad pixels (i.e., the noise), as detected by the detection scheme of Eq. (2) with ${\sigma}_{{\rm A}}$ = 1. An inspection of Fig. 11(a) shows that the detected phase jump pixels are incorrect and too few bad pixels are found. As shown in Fig. 11(b), when the threshold parameter is set to ${\sigma}_{{\rm A}}$ = 2.8, the two green symbols are located at the correct phase jump positions and the black symbols are distributed over the noise positions. Therefore, the threshold parameter was set to ${\sigma}_{{\rm A}}=2.8$ in this subsection. Figure 12(a) shows the corresponding phase jump pixels detected by Eq. (4), marked as red symbols. An inspection of Fig. 12(a) reveals that even severe noise does not cause Eq. (4) to produce any detection error.
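Eq. (2) itself is not reproduced in this excerpt; the sketch below only illustrates the generic idea of a $\sigma_{\rm A}$-scaled threshold test on the first differences of a wrapped phase line (all names hypothetical, not the paper's detection scheme):

```python
import numpy as np

def detect_jumps(phase, sigma_a=2.8):
    """Flag candidate phase jump pixels: indices where the first difference of
    the wrapped phase exceeds sigma_a times the standard deviation of all
    first differences. (Illustrative only; not the paper's Eq. (2).)"""
    d = np.diff(phase)
    return np.where(np.abs(d) > sigma_a * np.std(d))[0]
```

Raising the threshold from $\sigma_{\rm A}$ = 1 toward 2.8, as in the text, suppresses spurious detections on noisy pixels while keeping the true 2π jumps.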

Subsequently, the wrapped phase in Fig. 12(a) is processed by the relative rotation algorithm. As a result, the unwrapped phase in Fig. 12(b) retains one shallow hole, one deep hole, one residual noise feature outside the phase jump region, one residual noise feature straddling the phase jump region, and the speckle noise. The runtime was found to be 1.6 seconds. By contrast, when the wrapped phase in Fig. 11(a) is processed by the MACY algorithm and the CA algorithm, the unwrapped results shown in Figs. 13(a) and 13(b), respectively, are obtained. Clearly, the unwrapping process fails since the unwrapping path encounters the two residual noise features and the deep hole. Overall, the proposed rotation algorithm successfully retains all of the noise and holes.

#### 4.3 2D filtered linear case (presenting shifting error in Fig. 5(c))

To demonstrate the shifting error produced by a filtering algorithm, Filter A [11] was applied 10 times to the raw wrapped phase shown in Fig. 6(a). Filter A comprises the detection scheme proposed in [10] and the adaptive median filter proposed in [16]. Briefly, Filter A removes the noise while preserving most of the phase jumps. After 10 filtering passes, a significant shifting error is produced as a result of the smearing of the phase jumps in the wrapped phase. Figure 14 shows the filtered phase (red line) together with the original wrapped phase (blue-dashed line) from Fig. 6(a). Comparing the two phases, it can be seen that Filter A not only destroys the two holes, but also smears all of the phase jumps, e.g., the filtered-high phase jump position (80, 6.168) and the filtered-low phase jump position (81, 0.1543). In other words, a phase shifting error is produced. Note that the difference between the filtered phase values at pixels (80, 6.168) and (81, 0.1543) is equal to 6.01 rather than 2π because the filter smears the 2π phase jump; the shifting error is therefore equal to −0.27 (i.e., 6.01 − 2π). The filtered phase values were processed using MACY, CA and the relative rotation procedure, respectively. Although the filtered-deep hole causes MACY and CA to fail locally, no accumulated error is produced that affects the other phase values in the unwrapped phase. Therefore, the phases unwrapped by MACY or CA are acceptable except at the filtered-deep hole. The corresponding phase differences between pixel row 80 and pixel row 81 are summarized in Table 1. It is seen that the MACY and CA algorithms yield a phase difference of −0.27, equal to the shifting error; in other words, MACY and CA simply retain the shifting error during the unwrapping process. By contrast, the phase difference between the two pixels following the relative rotation procedure is equal to just 0.05. In other words, the proposed relative rotation algorithm reduces the shifting error, and the problem shown in Fig. 5(c) is resolved. The runtime of the relative rotation procedure was found to be 1.2 seconds.
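The shifting error quoted above follows directly from the two filtered jump values reported in the text:

```python
import numpy as np

high = 6.168    # filtered phase at pixel row 80 (value quoted in the text)
low = 0.1543    # filtered phase at pixel row 81

jump = high - low                    # ~6.01, instead of the ideal 2*pi jump
shifting_error = jump - 2 * np.pi    # ~ -0.27, the error retained by MACY and CA
```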

#### 4.4 2D non-noisy non-linear case (presenting sine-wave case)

The raw wrapped phase was again obtained by applying the Hariharan algorithm [33,34] to five interferograms. In this case, the wrapped phase was generated using a self-written MATLAB program and contained the sine-wave line shown in Fig. 15(a). In Figs. 6(a) and 11(b), the phase jump pixels detected by the detection scheme of Eq. (2), marked as green symbols, are always located at the 2π-high positions. By contrast, as shown in Fig. 15(a), after performing the detection scheme with a threshold parameter setting of ${\sigma}_{{\rm A}}$ = 2.8, the detected phase jump pixels (green symbols) are located not only at the 2π-high positions but also at the 0-low positions. Moreover, the additional phase jump detection scheme of Eq. (4) detects the corresponding 2π-high positions and 0-low positions, marked as red symbols in Fig. 15(b). The wrapped phase shown in Fig. 15(b) is then processed by the relative rotation procedure, and the unwrapped result is shown in Fig. 15(c). An inspection of Fig. 15(c) reveals that the sine-wave line is successfully reconstructed. The runtime was found to be 1.1 seconds. Collectively, the detection scheme of Eq. (2) and the additional phase jump detection scheme of Eq. (4) detect all of the phase jump pixels (i.e., both 0 and 2π positions), and the relative rotation procedure can be applied not only to linear cases but also to non-linear (i.e., sine-wave) cases.
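A sine-wave test case of this kind can be reproduced generically as follows (hypothetical amplitude and period; `np.unwrap` serves here only as a noise-free reference, not as the paper's rotation procedure):

```python
import numpy as np

rows = np.arange(200)
true_phase = 8.0 * np.sin(2 * np.pi * rows / 200.0)   # hypothetical sine-wave profile (rad)
wrapped = np.mod(true_phase, 2 * np.pi)               # wrap into [0, 2*pi)
# Jumps now appear at both the 2*pi-high and the 0-low positions, which is
# why the detection scheme flags two kinds of jump pixels in this case.

reference = np.unwrap(wrapped)                        # noise-free reference unwrap
```

In the noise-free case the reference unwrap recovers the sine-wave shape up to a constant offset, which is the behavior any correct unwrapping procedure must reproduce here.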

#### 4.5 Discussion of rotation angles

Table 2 summarizes the angles ${\theta}_{fix}$, ${\theta}_{1}$, ${\theta}_{2}$, $\theta {\text{'}}_{fix}$, and $\theta \text{'}{\text{'}}_{fix}$ used in one repeat cycle (i.e., Cycle 2) of the relative rotation procedure introduced in subsections 4.1.2, 4.2, 4.3, and 4.4, respectively. Positive angle values denote clockwise rotation, while negative angle values denote counter-clockwise rotation. The wrapped phases shown in Figs. 6(a), 11(a), and 14 are regarded as having the same shape, containing one shallow hole, one deep hole, and two height discontinuities. The wrapped phase shown in Fig. 11(a) is the noisy wrapped phase obtained from the original non-noisy phase in Fig. 6(a), whereas the wrapped phase shown in Fig. 14 is the filtered wrapped phase obtained by applying Filter A [11] to the original non-noisy wrapped phase in Fig. 6(a). It is seen that the noise reduces the angle ${\theta}_{fix}$ from 0.92° in the original non-noisy phase to 0.83° in the noisy phase. By contrast, ${\theta}_{fix}$ is equal to 0.89° in the filtered wrapped phase, which is only slightly less than the original value of 0.92°. Therefore, compared with the filter, the noise more seriously influences the vectors $\overrightarrow{{D}_{1}{D}_{2}}$ and $\overrightarrow{{D}_{3}{D}_{4}}$ (i.e., the positions of these four pixels).

An inspection of the angles ${\theta}_{1}$ and ${\theta}_{2}$ in Table 2 reveals that ${\theta}_{1}$ is negative (i.e., counter-clockwise) in the original non-noisy phase and the noisy phase, but positive (i.e., clockwise) in the filtered phase. As shown in Fig. 14, the two original holes (blue-dashed line) are seriously destroyed by Filter A; the displaced hole positions (red-solid lines) therefore shift the original positions of the centers C1 and C2. The new positions of C1 and C2 directly influence the angles ${\theta}_{1}$ and ${\theta}_{2}$. Consequently, ${\theta}_{1}$ in the filtered phase is greater than ${\theta}_{1}$ in the original non-noisy phase and the noisy phase. In addition, in Table 2, the angles ${\theta}_{1}$ and ${\theta}_{2}$ in the sine-wave phase are greater than the corresponding angles in the other three phases. The other three phases contain only detected green symbols located at the 2π-high positions, while the sine-wave phase contains detected green symbols located at both the 2π-high positions and the 0-low positions. Accordingly, in the sine-wave case, the relative rotation procedure must rotate through greater angles ${\theta}_{1}$ and ${\theta}_{2}$.

Finally, as shown in Table 2, the four corresponding angles $\theta {\text{'}}_{fix}$ are all equal to 0. When $\theta {\text{'}}_{fix}$ is equal to 0, both vectors $\overrightarrow{{D}_{1}\text{'}{D}_{2}\text{'}}$ and $\overrightarrow{{D}_{3}\text{'}{D}_{4}\text{'}}$ are parallel to the vector $\overrightarrow{{C}_{1}{C}_{2}}$. Thus, the absolute value of $\theta \text{'}{\text{'}}_{fix}$ is equal to the value of ${\theta}_{fix}$, as obtained from the equation $\theta \text{'}{\text{'}}_{fix}=\theta {\text{'}}_{fix}-{\theta}_{fix}$ or $\theta \text{'}{\text{'}}_{fix}={\theta}_{fix}-\theta {\text{'}}_{fix}$. According to Step (ix), ${\theta}_{fix}$ represents the angle in the wrapped phase, while $\theta \text{'}{\text{'}}_{fix}$ represents the angle in the unwrapped phase. In Table 2, comparing ${\theta}_{fix}$ and $\theta \text{'}{\text{'}}_{fix}$, the corresponding absolute values are identical, namely 0.92°, 0.83°, 0.89°, and 5.67°. Therefore, the proposed rotation algorithm based on operating on the angle ${\theta}_{fix}$ is theoretically sound and is more robust than most existing 2π phase shifting unwrapping algorithms.

#### 4.6 Basic process in 3D wrapped phase map

The influence of noise or filtering on the angle ${\theta}_{fix}$ is described in subsection 4.5. Accordingly, in the 3D wrapped phase map (i.e., (row, column, phase)), different 2D wrapped phase lines (i.e., (row, phase)) contain different noise distributions, and thus the corresponding angles ${\theta}_{fix}$ differ line-by-line. Clearly, the 3D image reconstruction cannot be obtained by simply performing the relative rotation algorithm line-by-line. To reconstruct the 3D unwrapped phase map using the proposed rotation algorithm, an absolute rotation algorithm, which will be introduced in future work, is proposed to operate on the 2D unwrapped phase line-by-line. Briefly, after the 3D wrapped phase map is processed by the detection scheme of Eq. (1), a noise map and a phase jump map are produced. By comparing the noise map and the phase jump map, a suitable threshold parameter setting ${\sigma}_{{\rm A}}$ can be decided. The additional phase jump detection scheme of Eq. (3) is then used to detect the corresponding phase jump pixels missed by Eq. (1). After all of the phase jumps have been detected by Eq. (1) and Eq. (3), the relative rotation procedure is performed line-by-line. Finally, the absolute rotation procedure maps the unwrapped phase of each 2D coordinate in the relative rotation reconstruction result to the corresponding 3D coordinate, thereby producing the 3D unwrapped phase map. Moreover, the absolute rotation procedure allows the 3D unwrapped phase map to be effectively filtered by Filter B [11] if required. Filter B simply replaces the pixels in the noise map constructed by the detection scheme in [11] with the median phase value of an *N* x *N* mask centered on the noisy pixel.
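Filter B's median replacement, as described above, can be sketched as follows (hypothetical function, assuming a boolean noise mask produced by the detection scheme; not the paper's implementation):

```python
import numpy as np

def filter_b(phase_map, noise_mask, n=9):
    """Replace each flagged noisy pixel with the median phase value of an
    n x n window centred on it, leaving all other pixels untouched."""
    out = phase_map.copy()
    h = n // 2
    rows, cols = np.nonzero(noise_mask)
    for r, c in zip(rows, cols):
        # Clip the window at the map borders
        r0, r1 = max(r - h, 0), min(r + h + 1, phase_map.shape[0])
        c0, c1 = max(c - h, 0), min(c + h + 1, phase_map.shape[1])
        out[r, c] = np.median(phase_map[r0:r1, c0:c1])
    return out
```

Because only the pixels in the noise map are rewritten, legitimate height discontinuities outside the mask survive the filtering, consistent with the behavior reported for Filter B.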

## 5. Experimental results

The effectiveness of the proposed unwrapping methodology was demonstrated experimentally by performing a series of interferometry trials using various representative samples. The trials were conducted using a Mirau interferometer configuration comprising a microscope (OLYMPUS BH2-UMA) fitted with a 20XDL objective lens (NA 0.4, WD 4.7, Nikon, Japan), a CCD camera (JAI CV-A11, monochrome progressive scan, 1/3 inch), and a lens (6-60 mm, F:1.6, Kenko, Japan). In every case, the sample was illuminated by a white-light LED (LP-1201H-3-IO, EXLITE) with a filter central wavelength of 633 nm or by a 632.8 nm He-Ne laser (MODEL 1135P, JDS Uniphase). For each sample, five interferograms were obtained and converted into a single raw wrapped phase map using the Hariharan algorithm [33,34]. The spatial resolution of the wrapped phase map was approximately 0.42 µm per pixel in both the row and column directions.
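The five-step Hariharan algorithm [34] computes the wrapped phase from five interferograms acquired at phase shifts of −π, −π/2, 0, π/2, and π as φ = arctan[2(I₂ − I₄)/(2I₃ − I₁ − I₅)]. A direct NumPy sketch (the function name is hypothetical):

```python
import numpy as np

def hariharan_phase(I1, I2, I3, I4, I5):
    """Hariharan five-step phase: phi = arctan(2*(I2 - I4) / (2*I3 - I1 - I5)),
    evaluated with arctan2 so the wrapped phase covers the full (-pi, pi] range."""
    return np.arctan2(2.0 * (I2 - I4), 2.0 * I3 - I1 - I5)
```

Applying this pixel-wise to the five recorded interferograms yields the raw wrapped phase map used throughout the experiments.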

#### 5.1 Retention of holes in proposed relative rotation algorithm

The ability of the proposed relative rotation algorithm to preserve holes in the sample surface in the reconstruction results was evaluated using a Si sample containing two holes. Figure 16(a) presents the corresponding raw wrapped phase with dimensions of 200 × 1 (row × column). Moreover, the green and red symbols shown in Fig. 16(a) indicate the detection results obtained using Eq. (2) and Eq. (4) with a detection threshold parameter setting of ${\sigma}_{{\rm A}}$ = 2.75. Note that the holes are located away from the phase jumps (i.e., the condition illustrated in Fig. 5(a)). Figure 16(b) shows the reconstruction result obtained from the relative rotation procedure, marked as the black-solid line. Clearly, Fig. 16(b) shows that the rotation algorithm successfully preserves the two holes in the sample surface when unwrapping the wrapped phase. Thus, in this particular example, the unwrapping process can terminate upon completion of the relative rotation algorithm, without any filtering. The runtime was found to be 1.5 seconds.

#### 5.2 Retention of noise in proposed relative rotation algorithm

The robustness of the proposed rotation algorithm toward noise was evaluated using a rough Si sample. Figure 17(a) shows that the raw wrapped phase (with dimensions of 337 × 1 (row × column)) contains residual noise caused by the rough sample surface and speckle noise caused by the laser source. The green and red symbols were obtained using a detection threshold parameter setting of ${\sigma}_{{\rm A}}$ = 2.65. Figure 17(b) shows the reconstruction result obtained from the relative rotation procedure. It can be seen that the residual noise and speckle noise are retained; these noise features can be removed by a subsequent filtering process if required. The runtime of the relative rotation procedure was found to be 1.2 seconds.

#### 5.3 Robustness of proposed relative and absolute rotation algorithms toward noise

This subsection demonstrates that the absolute rotation algorithm can successfully operate on all of the 2D phase lines unwrapped by the relative rotation algorithm and thereby reconstruct the 3D unwrapped phase map. The robustness of the proposed rotation algorithm toward noise was evaluated using a rough TaSiN step-height sample with a single height discontinuity. Figure 18(a) shows the raw wrapped phase map (with dimensions of 265 × 265 (row × column)), while Fig. 18(b) shows the noise map obtained using a detection threshold parameter setting of ${\sigma}_{{\rm A}}$ = 2.75. The results presented in Fig. 18(b) show that the wrapped phase map contains both residual noise caused by the rough sample surface and obvious noise at the single height discontinuity. Figure 19(a) shows the reconstruction result obtained from the absolute rotation procedure. The runtime was found to be 35 seconds. It can be seen that the residual noise and the noise at the single height discontinuity are retained, but are regarded here as adverse phase distributions. Thus, Filter B with a 9 × 9 mask and a detection scheme setting of ${\sigma}_{{\rm B}}$ = 0.5 was applied twice to remove the residual noise and the noise at the single height discontinuity in Fig. 19(a). Figure 19(b) shows the filtered unwrapped phase map. The filtering runtime was found to be 32 seconds. It is seen that the filter removes the residual noise and the noise at the single height discontinuity while preserving the height discontinuity feature. The filtered reconstruction results are qualitatively clearer than the original (unfiltered) results. Thus, in this particular case, the unwrapping procedure should terminate after the filtering process.

## 6. Conclusions

This study has proposed a novel rotation algorithm for phase unwrapping applications. The proposed algorithm comprises two stages, namely a relative phase rotation stage and an absolute phase rotation stage. The relative phase rotation stage yields an initial reconstruction result using the phase jump pixels detected by Eqs. (1)–(4). The absolute phase rotation stage refines the initial reconstruction result such that it is suitable for further processing using a noise reduction filter, which will be introduced in future work. In contrast to traditional unwrapping algorithms, the raw wrapped phase map is not filtered prior to the unwrapping process. That is, the reconstruction results retain the noise and hole errors in the unwrapped phase map. However, a filter [11] can optionally be applied to the reconstruction results to remove these errors if required. Since the filtering process is applied directly to the unwrapped phase rather than the raw wrapped phase, the phase shifting error induced in traditional 2π phase jump unwrapping schemes is avoided.

The effectiveness of the proposed unwrapping methodology has been demonstrated numerically for various common reconstruction problems. In general, the 2D reconstruction results have shown that while the proposed algorithm retains the noise and hole errors in the reconstruction results, these errors do not prevent the algorithm from obtaining a reliable solution. By contrast, in the MACY and CA algorithms, the presence of noise and holes in the wrapped phase map prevents the reconstruction process from obtaining reliable solutions. Furthermore, since a filtering process is performed prior to the unwrapping process, both algorithms generate a phase shifting error.

The 2D experimental results have confirmed the ability of the proposed relative rotation algorithm to retain holes, speckle noise, and residual noise in the sample surface, as demonstrated using Si samples. Moreover, the 3D experimental result has confirmed the robustness of the absolute rotation algorithm toward residual noise and noise at a single height discontinuity, as demonstrated using a rough TaSiN sample. It has been shown that an additional filtering process is beneficial in improving the quality of the 3D reconstruction result for the TaSiN sample.

In conclusion, since the angle value of $\theta \text{'}{\text{'}}_{fix}$ is equal to that of ${\theta}_{fix}$, the rotation theory based on operating on the angle ${\theta}_{fix}$ is sound and more effective than most existing 2π phase shifting unwrapping algorithms; it is not only robust toward the presence of noise and hole errors in the wrapped phase, but also avoids the phase shifting error induced in traditional 2π phase jump unwrapping schemes such as MACY and CA. For the simulation and experimental examples considered in this study, a successful (i.e., reliable) reconstruction result was obtained within 1~2 s for 2D coordinates and within 35~67 s for 3D coordinates when using a general PC (AMD Athlon 64 × 2 Dual Core Processor 4400+, 2.31 GHz, and 2 GB of RAM). An inspection of the 2D results reveals that the runtime of the proposed relative rotation algorithm is shorter than that of CA and longer than that of MACY.

## Acknowledgments

The authors gratefully acknowledge the financial support provided to this study by the National Science Council of Taiwan under grant NSC 98-2221-E-006-053-MY3.

## References and links

**1. **B. F. Pouet and S. Krishnaswamy, “Technique for the removal of speckle phase in electronic speckle interferometry,” Opt. Lett. **20**(3), 318–320 (1995). [CrossRef] [PubMed]

**2. **I. Moon and B. Javidi, “Three-dimensional speckle-noise reduction by using coherent integral imaging,” Opt. Lett. **34**(8), 1246–1248 (2009). [CrossRef] [PubMed]

**3. **M. J. Huang and J. K. Liou, “Retrieving ESPI map of discontinuous objects via a novel phase unwrapping algorithm,” Strain **44**(3), 239–247 (2008). [CrossRef]

**4. **R. Yamaki and A. Hirose, “Singularity-spreading phase unwrapping,” IEEE Trans. Geosci. Rem. Sens. **45**(10), 3240–3251 (2007). [CrossRef]

**5. **E. H. Kim, J. Hahn, H. Kim, and B. Lee, “Profilometry without phase unwrapping using multi-frequency and four-step phase-shift sinusoidal fringe projection,” Opt. Express **17**(10), 7818–7830 (2009). [CrossRef] [PubMed]

**6. **W. H. Su, K. Shi, Z. Liu, B. Wang, K. Reichard, and S. Yin, “A large-depth-of-field projected fringe profilometry using supercontinuum light illumination,” Opt. Express **13**(3), 1025–1032 (2005). [CrossRef] [PubMed]

**7. **P. Potuluri, M. Fetterman, and D. Brady, “High depth of field microscopic imaging using an interferometric camera,” Opt. Express **8**(11), 624–630 (2001). [CrossRef] [PubMed]

**8. **H. O. Saldner and J. M. Huntley, “Temporal phase unwrapping: application to surface profiling of discontinuous objects,” Appl. Opt. **36**(13), 2770–2775 (1997). [CrossRef] [PubMed]

**9. **A. Wada, M. Kato, and Y. Ishii, “Large step-height measurements using multiple-wavelength holographic interferometry with tunable laser diodes,” J. Opt. Soc. Am. A **25**(12), 3013–3020 (2008). [CrossRef] [PubMed]

**10. **J. F. Weng and Y. L. Lo, “Robust detection scheme on noise and phase jump for phase maps of objects with height discontinuities--theory and experiment,” Opt. Express **19**(4), 3086–3105 (2011). [CrossRef] [PubMed]

**11. **J. F. Weng and Y. L. Lo, “Integration of robust filters and phase unwrapping algorithms for image reconstruction of objects containing height discontinuities,” Opt. Express **20**(10), 10896–10920 (2012). [CrossRef] [PubMed]

**12. **J. Jiang, J. Cheng, and B. Luong, “Unsupervised-clustering-driven noise-residue filter for phase images,” Appl. Opt. **49**(11), 2143–2150 (2010). [CrossRef] [PubMed]

**13. **H. A. Aebischer and S. Waldner, “A simple and effective method for filtering speckle-interferometric phase fringe patterns,” Opt. Commun. **162**(4–6), 205–210 (1999). [CrossRef]

**14. **H. O. Saldner and J. M. Huntley, “Temporal phase unwrapping: application to surface profiling of discontinuous objects,” Appl. Opt. **36**(13), 2770–2775 (1997). [CrossRef] [PubMed]

**15. **S. Yuqing, “Robust phase unwrapping by spinning iteration,” Opt. Express **15**(13), 8059–8064 (2007). [CrossRef] [PubMed]

**16. **A. Capanni, L. Pezzati, D. Bertani, M. Cetica, and F. Francini, “Phase-shifting speckle interferometry: a noise reduction filter for phase unwrapping,” Opt. Eng. **36**(9), 2466–2472 (1997). [CrossRef]

**17. **J. M. Huntley and H. Saldner, “Temporal phase-unwrapping algorithm for automated interferogram analysis,” Appl. Opt. **32**(17), 3047–3052 (1993). [CrossRef] [PubMed]

**18. **D. S. Mehta, S. K. Dubey, M. M. Hossain, and C. Shakher, “Simple multifrequency and phase-shifting fringe-projection system based on two-wavelength lateral shearing interferometry for three-dimensional profilometry,” Appl. Opt. **44**(35), 7515–7521 (2005). [CrossRef] [PubMed]

**19. **S. Zhang, X. L. Li, and S. T. Yau, “Multilevel quality-guided phase unwrapping algorithm for real-time three-dimensional shape reconstruction,” Appl. Opt. **46**(1), 50–57 (2007). [CrossRef] [PubMed]

**20. **W. W. Macy Jr., “Two-dimensional fringe-pattern analysis,” Appl. Opt. **22**(23), 3898–3901 (1983). [CrossRef] [PubMed]

**21. **D. C. Ghiglia, G. Mastin, and L. A. Romero, “Cellular-automata method for phase unwrapping,” J. Opt. Soc. Am. A **4**(1), 267–280 (1987). [CrossRef]

**22. **A. Spik and D. W. Robinson, “Investigation of the cellular automata method for phase unwrapping and its implementation on an array processor,” Opt. Lasers Eng. **14**(1), 25–37 (1991). [CrossRef]

**23. **H. Y. Chang, C. W. Chen, C. K. Lee, and C. P. Hu, “The Tapestry Cellular Automata phase unwrapping algorithm for interferogram analysis,” Opt. Lasers Eng. **30**(6), 487–502 (1998). [CrossRef]

**24. **K. A. Stetson, J. Wahid, and P. Gauthier, “Noise-immune phase unwrapping by use of calculated wrap regions,” Appl. Opt. **36**(20), 4830–4838 (1997). [CrossRef] [PubMed]

**25. **R. Goldstein, H. Zebker, and C. Werner, “Satellite radar interferometry: two-dimensional phase unwrapping,” Radio Sci. **23**(4), 713–720 (1988). [CrossRef]

**26. **T. J. Flynn, “Two-dimensional phase unwrapping with minimum weighted discontinuity,” J. Opt. Soc. Am. A **14**(10), 2692–2701 (1997). [CrossRef]

**27. **M. A. Navarro, J. C. Estrada, M. Servin, J. A. Quiroga, and J. Vargas, “Fast two-dimensional simultaneous phase unwrapping and low-pass filtering,” Opt. Express **20**(3), 2556–2561 (2012). [CrossRef] [PubMed]

**28. **H. Cui, W. Liao, N. Dai, and X. Cheng, “Reliability-guided phase-unwrapping algorithm for the measurement of discontinuous three-dimensional objects,” Opt. Eng. **50**(6), 063602 (2011). [CrossRef]

**29. **J. C. Estrada, M. Servin, and J. Vargas, “2D simultaneous phase unwrapping and filtering: a review and comparison,” Opt. Eng. **50**(8), 1026–1029 (2011).

**30. **X. Xianming and P. Yiming, “Multi-baseline phase unwrapping algorithm based on the unscented Kalman filter,” IET Radar Sonar Nav. **5**(3), 296–304 (2011). [CrossRef]

**31. **J. J. Martinez-Espla, T. Martinez-Marin, and J. M. Lopez-Sanchez, “Using a Grid-Based Filter to Solve InSAR Phase Unwrapping,” IEEE Geosci. Remote Sens. **5**(2), 147–151 (2008). [CrossRef]

**32. **K. Liu, Y. C. Wang, D. L. Lau, Q. Hao, and L. G. Hassebrook, “Dual-frequency pattern scheme for high-speed 3-D shape measurement,” Opt. Express **18**(5), 5229–5244 (2010). [CrossRef] [PubMed]

**33. **K. Creath, “Phase-shifting speckle interferometry,” Appl. Opt. **24**(18), 3053–3058 (1985). [CrossRef] [PubMed]

**34. **P. Hariharan, B. F. Oreb, and T. Eiju, “Digital phase-shifting interferometry: a simple error-compensating phase calculation algorithm,” Appl. Opt. **26**(13), 2504–2506 (1987). [CrossRef] [PubMed]