
Convex optimization-based windowed Fourier filtering with multiple windows for wrapped-phase denoising

Open Access

Abstract

The windowed Fourier filtering (WFF), defined as a thresholding operation in the windowed Fourier transform (WFT) domain, is a successful method for denoising a phase map and analyzing a fringe pattern. However, it has some shortcomings, such as extremely high redundancy, which results in high computational cost, and difficulty in selecting an appropriate window size. In this paper, an extension of WFF for denoising a wrapped-phase map is proposed. It is formulated as a convex optimization problem using Gabor frames instead of WFT. Two Gabor frames with differently sized windows are used simultaneously so that the above-mentioned issues are resolved. In addition, a differential operator is combined with a Gabor frame in order to better preserve discontinuities of the underlying phase map. Numerical experiments demonstrate that the proposed method is able to reconstruct a wrapped-phase map even in severely contaminated situations.

© 2016 Optical Society of America

1. INTRODUCTION

Interferometry-based measurement techniques generally observe the information of interest as a wrapped-phase map. The so-called wrapped phase is a restriction of the original phase to the interval $[-\pi, \pi)$ obtained by adding or subtracting an integer multiple of $2\pi$. The process of recovering the original phase from its wrapped version by removing the modulo-$2\pi$ ambiguities, called phase unwrapping, is necessary to acquire the information. If the magnitude of the phase difference between adjacent pixels is less than $\pi$ (i.e., if the Itoh condition [1] is satisfied), the unwrapped phase can be obtained exactly by a path-following algorithm. However, the presence of measurement noise often violates this condition, and phase unwrapping becomes an ill-posed problem. Thus, many unwrapping techniques have been proposed during the last two decades to overcome such difficulties [2–8].

An effective strategy for eliminating the issue is to integrate a denoising filter before or within the unwrapping process [9–18]. Such a denoising process is usually desirable because the unwrapping problem becomes considerably easier as the noise level becomes lower. One successful approach is the windowed Fourier transform (WFT)-based filter, often called windowed Fourier filtering (WFF) [19–24]. The procedure of WFF is as follows. First, a wrapped-phase map $\varphi$ is converted into the exponential phase field (EPF) $e^{i\varphi}$, where $i = \sqrt{-1}$. Second, the EPF is represented by a linear combination of windowed sinusoidal functions via WFT. Then, a thresholding operator is employed to enforce sparsity on the noisy EPF in the WFT domain. Finally, the inverse WFT provides the denoised EPF, and the corresponding wrapped phase is obtained by calculating the principal value of its complex argument. Conversion of the wrapped phase to the EPF is necessary to remove the difficulty caused by the $2\pi$ discontinuity of the wrapping effect. Since the EPF is a sinusoidal function, a Fourier-type transformation, including WFT, allows sparse representation of the noiseless EPF. At the same time, the noisy EPF is generally not sparse in the WFT domain. This sparsity-based characterization makes WFF a reasonable and effective approach for denoising the wrapped phase.

Although WFF has achieved great success, it has some shortcomings. The first is its high computational complexity. WFF is usually based on the full WFT, which results in an extremely redundant representation. In general, redundancy in the transformed domain should be controlled properly in order to avoid unnecessary computation. Another point, closely related to the first one, is that WFF offers little freedom in the choice of the window function. There are infinitely many pairs of a window function and its dual counterpart, which should be chosen depending on the application. Nevertheless, the ordinary WFF seems to adopt only the Gaussian window, whose support is not compact. The last but most important point is that there is a trade-off between the effectiveness of noise removal and the preservation of details [25,26]. When the window size is large, random noise can be removed effectively. However, essential details of a phase map, such as rapid change or discontinuity, may easily be damaged due to oversmoothing. On the other hand, a small window, which can preserve such details, is usually less effective for noise reduction. Although some methods using adaptive windows have been proposed to resolve this trade-off [13,14], adaptation of the window size may suffer from instability because the adaptation itself is another difficult task in the presence of noise.

In this paper, a denoising method for a wrapped-phase map based on sparse convex optimization techniques is proposed. It can be viewed as an improved version of WFF, since it relies upon the same sparsity principle, while the proposed method allows simultaneous use of multiple windows with controllability of computational cost. Also, it allows one to ignore low-frequency components that often degrade performance of WFF. The main contributions of this paper can be summarized as follows:

  • Use of a Gabor frame instead of WFT. The two-dimensional Gabor frame representation, which is a mathematical framework for representing a two-dimensional function by a combination of windowed Fourier functions, allows us to control the redundancy and to choose any window function whose dual window is reasonably available. Therefore, our formulation can easily balance computational cost and effectiveness of noise reduction by varying the redundancy.
  • Formulation of the wrapped-phase denoising problem as a convex optimization problem involving an infimal convolution type regularization term. The regularizer is designed to enforce sparsity on the EPF in the Gabor frame domains computed with multiple windows. Thus, the above-mentioned trade-off due to the window size can be resolved by using windows of different sizes simultaneously. It also allows us to compose additional linear transformations (a difference operator in this paper) that can enhance the performance of noise reduction.
  • An efficient iterative algorithm for solving the optimization problem. The proposed algorithm is based on a recently developed accelerated linearized preconditioned alternating direction method of multipliers (ALP-ADMM), so that no matrix inversion is involved in each step. Hence, the proposed method can be implemented as a matrix-free scheme, which greatly saves computational resources and time compared to a full matrix implementation. If we ignore the computation of the Gabor frame, the computational complexity of the proposed algorithm is O(N), where N is the total number of pixels. Therefore, the Gabor frame representation, which can be computed in O(K log M) time by using the fast Fourier transform, is the main computational cost, and that cost can be reduced by decreasing the redundancy, where K and M depend on the number of frequency channels and the shifting width.

This paper is organized in the following way. First, an interpretation of WFF in terms of sparse optimization is introduced in Section 2. Then, the proposed method is described as an extension of WFF in Section 3, with a brief explanation of the Gabor frame and convex optimization. Its experimental results are presented in Section 4, and finally, the paper concludes with Section 5.

2. INTERPRETATION OF WFF AS OPTIMIZATION

In this section, the standard WFF is explained as a procedure solving a sparse optimization problem. This optimization point of view will clarify the relationship between WFF and the proposed method in the subsequent section.

A. Two-Dimensional WFT

The two-dimensional WFT is a Fourier-based integral transform often defined as

$$(W_g f)(\mathbf{y},\mathbf{k}) = \int_{\mathbb{R}^2} f(\mathbf{x})\,\overline{g(\mathbf{x}-\mathbf{y})}\,e^{-2\pi i\langle\mathbf{k},\mathbf{x}\rangle}\,d\mathbf{x}, \tag{1}$$
where $\mathbf{x},\mathbf{y}\in\mathbb{R}^2$ are position vectors, $g$ is a window function whose $L^2$-norm is normalized to one, $\overline{z}$ is the complex conjugate of $z$, $\langle\cdot,\cdot\rangle$ is the standard inner product, and $\mathbf{k}\in\mathbb{R}^2$ is a spatial frequency vector (for simplicity, we omit some mathematical details that are not important for describing the proposed method, such as $f,g\in L^2(\mathbb{R}^2)$). Its inverse transform,
$$f(\mathbf{x}) = \int_{\mathbb{R}^2}\int_{\mathbb{R}^2} (W_g f)(\mathbf{y},\mathbf{k})\,g(\mathbf{x}-\mathbf{y})\,e^{2\pi i\langle\mathbf{k},\mathbf{x}\rangle}\,d\mathbf{y}\,d\mathbf{k}, \tag{2}$$
allows reconstruction of $f$ from its representation $W_g f$. For notational convenience, Eq. (2) will be denoted as
$$f = W_{\mathrm{inv}}^{g} W_g f. \tag{3}$$
A window function g, usually the Gaussian function in the context of WFF, provides spatially local information of f. Compared to the global nature of the Fourier transform, this localization property is preferable for most applications because frequency components of practical data usually vary in position.

B. WFF

WFF is a denoising method for a fringe pattern and wrapped phase. It is defined as a thresholding operation in the WFT domain:

$$\tilde{f} = W_{\mathrm{inv}}^{g}\, T_{\mathrm{hard}}^{\lambda}[W_g f], \tag{4}$$
where, for a threshold value $\lambda > 0$,
$$T_{\mathrm{hard}}^{\lambda}[z]_n = \begin{cases} z_n & (|z_n| \geq \lambda)\\ 0 & (|z_n| < \lambda) \end{cases} \tag{5}$$
is the element-wise hard-thresholding operator. In the case when WFF is applied to a wrapped-phase map $\varphi$, conversion between the wrapped phase and the EPF is required before and after the filtering, i.e.,
$$\tilde{\varphi} = \mathrm{Arg}\big[\, W_{\mathrm{inv}}^{g}\, T_{\mathrm{hard}}^{\lambda}[W_g e^{i\varphi}]\, \big], \tag{6}$$
where the symbol $\mathrm{Arg}[z]$ denotes the principal value of the complex argument of $z$. The thresholding operator converts a vector into a sparser one, i.e., a vector consisting of more zeros. Therefore, thresholding has the ability to recover the clean EPF, which is sparsely represented in the WFT domain because of its sinusoidal nature, from the noisy EPF, whose WFT domain representation is usually not sparse.
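As a concrete illustration of Eqs. (4)–(6), a minimal MATLAB sketch of the hard-thresholding WFF applied to a wrapped-phase map is given below; the forward and inverse WFT handles Wg and Winvg are placeholders for any suitable discretized implementation, not part of a particular toolbox.

```matlab
% Hard-thresholding WFF applied to a wrapped-phase map, following Eqs. (4)-(6).
% Wg and Winvg are user-supplied function handles for a discretized forward and
% inverse WFT; they are placeholders here, not a specific implementation.
function phi_denoised = wff_hard(phi, lambda, Wg, Winvg)
    f = exp(1i * phi);                 % EPF of the wrapped-phase map
    c = Wg(f);                         % coefficients in the WFT domain
    c(abs(c) < lambda) = 0;            % element-wise hard thresholding, Eq. (5)
    f_denoised = Winvg(c);             % inverse transform, Eq. (4)
    phi_denoised = angle(f_denoised);  % principal value Arg[.], Eq. (6)
end
```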

C. Optimization View of WFF

The hard-thresholding operator provides a solution to the following minimization problem [27]:

$$T_{\mathrm{hard}}^{\lambda}[z] \in \arg\min_{x}\big[\, \|x - z\|_2^2 + \lambda^2 \|x\|_0 \,\big], \tag{7}$$
where $\|z\|_0$ is the number of nonzero elements of a vector $z$. Therefore, with a suitable discretization of WFT, WFF can be viewed as an optimization procedure solving Eq. (7) in the discrete WFT domain.

This optimization problem naturally arises in sparse modeling, since $\|\cdot\|_0$ is the most direct realization of a sparsity-enforcing penalty function. More generally, replacing $\|\cdot\|_0$ in Eq. (7) with any function promoting sparsity can recover a sparse signal to some extent. Among such penalty functions, probably the most famous and popular one is the $\ell_1$-norm $\|z\|_1 = \sum_n |z_n|$, which is the convex relaxation of $\|\cdot\|_0$, resulting in the following optimization problem: finding

$$x \in \arg\min_{x}\Big[\, \tfrac{1}{2}\|x - z\|_2^2 + \lambda \|x\|_1 \,\Big], \tag{8}$$
whose solution is unique and obtained by the element-wise soft-thresholding operation,
$$T_{\mathrm{soft}}^{\lambda}[z]_n = \begin{cases} (1 - \lambda/|z_n|)\,z_n & (|z_n| \geq \lambda)\\ 0 & (|z_n| < \lambda). \end{cases} \tag{9}$$
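In MATLAB, Eq. (9) reduces to a one-liner; the max(·,0) form below is a standard way to avoid the division by zero at $z_n = 0$ and works for complex-valued coefficients.

```matlab
% Element-wise soft-thresholding operator of Eq. (9), valid for complex input.
Tsoft = @(z, lambda) max(abs(z) - lambda, 0) .* exp(1i * angle(z));
```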

3. CONVEX OPTIMIZATION-BASED WFF

In this section, the proposed method is described as an extension of WFF. Its building blocks, the Gabor frame and convex optimization techniques, are also introduced briefly.

A. Gabor Frame Representation

Calculation of WFT is time-consuming because it is an extremely redundant representation. Instead, a Gabor frame is adopted for the proposed method so that unnecessary computation can be eliminated by controlling redundancy.

A two-dimensional Gabor system for fixed step sizes $a, b > 0$ is defined as

$$\big\{\, g(\mathbf{x} - a\mathbf{n})\, e^{2\pi i\langle b\mathbf{m},\mathbf{x}\rangle} \,\big\}_{\mathbf{n},\mathbf{m}\in\mathbb{Z}^2}. \tag{10}$$
It is a Gabor frame if there exists a dual window $g_{\mathrm{d}}$ such that the following reconstruction formula holds [28]:
$$f(\mathbf{x}) = \sum_{\mathbf{n},\mathbf{m}\in\mathbb{Z}^2} (F_g f)(\mathbf{n},\mathbf{m})\, g_{\mathrm{d}}(\mathbf{x} - a\mathbf{n})\, e^{2\pi i\langle b\mathbf{m},\mathbf{x}\rangle}, \tag{11}$$
where
$$(F_g f)(\mathbf{n},\mathbf{m}) = \int_{\mathbb{R}^2} f(\mathbf{x})\, \overline{g(\mathbf{x} - a\mathbf{n})}\, e^{-2\pi i\langle b\mathbf{m},\mathbf{x}\rangle}\, d\mathbf{x}. \tag{12}$$
Similar to Eq. (3), Eq. (11) will be denoted as
$$f = F_{\mathrm{inv}}^{g_{\mathrm{d}}} F_g f. \tag{13}$$
By comparing Eqs. (12) and (11) with Eqs. (1) and (2), it is clear that the Gabor frame can be interpreted as a discretely sampled version of WFT. Therefore, redundancy of Gabor frame representation can be decreased by changing the sampling intervals as long as a reasonable dual window exists. A computationally efficient version of WFF can be written as
$$\tilde{f} = F_{\mathrm{inv}}^{g_{\mathrm{d}}}\, T_{\mathrm{hard}}^{\lambda}[F_g f], \tag{14}$$
which utilizes a Gabor frame instead of WFT.

For a window function $g$, there are infinitely many choices for its dual $g_{\mathrm{d}}$ satisfying Eq. (11). A tight window is a special class of window function whose dual window is itself. In this study, the canonical tight window $g_{\mathrm{t}}$, whose frame bound is normalized to one, is adopted [28].
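For reference, a sketch of how such a tight-window Gabor-frame WFF might be set up with LTFAT [40,41] is given below. The calls to firwin, gabtight, dgt2, and idgt2 and their argument order (window, shift a, channel count M) are assumptions based on the toolbox conventions and should be checked against the LTFAT documentation; the parameters happen to match the 15-channel, 5-pixel-shift Hamming system used later in Section 4.

```matlab
% Sketch of the Gabor-frame WFF of Eq. (14) with a canonical tight window,
% using LTFAT. The signatures of firwin, gabtight, dgt2, and idgt2 shown here
% are assumptions; consult the LTFAT documentation before use.
% (phi: noisy wrapped-phase map, lambda: threshold, both assumed defined.)
a = 5; M = 15;                 % shift and number of channels per direction
g  = firwin('hamming', 15);    % 15-tap Hamming prototype window
gt = gabtight(g, a, M);        % canonical tight window (its own dual)

f = exp(1i * phi);             % EPF of the noisy wrapped-phase map
c = dgt2(f, gt, a, M);         % Gabor coefficients; redundancy M^2/a^2 = 9
c(abs(c) < lambda) = 0;        % hard thresholding as in Eq. (14)
f_tilde = idgt2(c, gt, a);     % synthesis with the same tight window
phi_tilde = angle(f_tilde);
```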

B. Convex Optimization

Convex optimization is a fundamental tool for solving many engineering problems in signal and image processing. It is often formulated as the following minimization problem: finding

$$x \in \arg\min_{x}\big[\, F_d(x) + \lambda G(Lx) \,\big], \tag{15}$$
where $F_d$ and $G$ are convex functions, respectively called the data fidelity term and the regularization term, $\lambda > 0$ is a tuning parameter balancing their importance, and $L$ is a linear operator. The regularization term is used to implement prior knowledge, such as sparsity in a transformed domain, while the data fidelity term keeps the solution close to the observed data $d$. Recent advances in the field allow $G$ and/or $F_d$ to be nondifferentiable. Therefore, the above formulation can include hard constraints onto convex sets as well as structured priors necessary for modern signal processing methods [29].

Among the several advantages of a convex optimization-based formulation, probably the most important one is the ease of achieving a global solution. When the objective function and constraints are all convex, a local minimum of the minimization problem is also a global minimum. Therefore, a convex formulation is less sensitive to the initial value and to noise than a nonconvex one. Convex relaxation is a well-accepted strategy for obtaining this benefit. Since globally solving a nonconvex optimization problem is practically impossible, except in some very special cases, a convex approximation of the original problem is solved instead. An example of such a relaxation is the $\ell_1$-norm, which is a convex function approximating $\|\cdot\|_0$, as introduced in Section 2.C.

C. Proposed Method

To extend WFF for overcoming some difficulties mentioned in Section 1, we first start with a convexly relaxed version of WFF: finding

$$x \in \arg\min_{x}\Big[\, \tfrac{1}{2}\|x - d\|_2^2 + \lambda \|F_{g_{\mathrm{t}}} x\|_1 \,\Big], \tag{16}$$
which seeks a solution $x$ that has a sparse Gabor frame representation $F_{g_{\mathrm{t}}} x$ and is close to the observed data $d$. The degree of sparsity is controlled by the regularization parameter $\lambda > 0$; a larger $\lambda$ provides a sparser solution. (Strictly speaking, Eq. (16) and the soft-thresholding version of WFF are similar but not equivalent, since data fidelity is measured in a different domain; see [30].)

In order to incorporate multiple windows, the sparse regularization term is divided into two terms:

$$\min_{x,y}\Big[\, \tfrac{1}{2}\|x - d\|_2^2 + \lambda\big\{ \alpha \|F_{g_{\mathrm{t}1}}(x - y)\|_1 + (1 - \alpha)\|F_{g_{\mathrm{t}2}} y\|_1 \big\} \Big], \tag{17}$$
where $y$ is an auxiliary variable, and $\alpha\in(0,1)$ is a balancing parameter typically chosen around 0.5. This splitting using the auxiliary variable is in the form of an infimal convolution [31]; thus, we shall call it "infimal convolutional WFF" with two windows. Such a decomposition penalizes each part of the solution by whichever regularization term takes the smaller value. The penalty is designed so that the smooth and detailed parts are handled separately by the two terms. For example, when $g_{\mathrm{t}1}$ is a large window and $g_{\mathrm{t}2}$ is a smaller one, the smooth part of a phase map will be penalized with $\|F_{g_{\mathrm{t}1}}\cdot\|_1$, and the detailed part will be penalized with $\|F_{g_{\mathrm{t}2}}\cdot\|_1$. Therefore, the window size is automatically and continuously balanced between the two, depending on the position, in the process of optimization. Obviously, the formulation can easily be extended to three or more windows in the same manner.

Although the above formulation seemingly works, we modify the second part of the regularizer slightly in order to achieve better results. A known issue of WFF is that its performance is degraded if low-frequency components are contained in each local area, because their side lobes violate the sparsity assumption of a clean EPF in the windowed frequency domain. However, such low-frequency components arise quite frequently due to windowing. Therefore, a high-pass filter, or difference operator $D$, is combined with the penalty term; our proposed method is to find

$$x \in \arg\min_{x,y}\Big[\, \tfrac{1}{2}\|x - d\|_2^2 + \lambda\big\{ \alpha \|F_{g_{\mathrm{t}1}}(x - y)\|_1 + (1 - \alpha)\|F_{g_{\mathrm{t}2}} D y\|_1 \big\} \Big]. \tag{18}$$
Each term of the regularizer now ignores low-frequency components, as they are canceled by $y$, whose low-frequency components are not penalized by any term. Thus, the sparsity assumption should be fulfilled in every local area, and the effectiveness of noise reduction is ensured. In addition, the inclusion of the high-pass filter treats discontinuities and details of a clean phase map better because it enhances such nonsmooth patterns, which are dominated by high-frequency components.
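For concreteness, the objective of Eq. (18) can be evaluated in a few lines of MATLAB once the transforms are available as function handles; F1, F2, Dv, and Dh below are hypothetical placeholders for the two Gabor analysis operators and the vertical and horizontal first-order differences (treating the two difference directions separately anticipates the operator L defined in Section 3.E).

```matlab
% Objective value of Eq. (18) for a candidate pair (x, y). F1, F2, Dv, and Dh
% are placeholder function handles; only the cost is computed here.
cost18 = @(x, y, d, lambda, alpha, F1, F2, Dv, Dh) ...
    0.5 * norm(x(:) - d(:))^2 ...
    + lambda * (       alpha  * sum(abs(reshape(F1(x - y), [], 1))) ...
                + (1 - alpha) * ( sum(abs(reshape(F2(Dv(y)), [], 1))) ...
                                + sum(abs(reshape(F2(Dh(y)), [], 1))) ) );
```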

D. ALP-ADMM

Among many convex optimization algorithms [32,33], ALP-ADMM [34], a recently proposed accelerated variant of the well-known ADMM [35], is chosen for solving Eq. (18) in this paper. Here, as usual in convex analysis [31], $\Gamma_0(\mathbb{C}^N)$ denotes the set of all proper lower-semicontinuous convex functions over the Euclidean space $\mathbb{C}^N$.

Let us restate the convex optimization problem in Eq. (15): finding

$$x \in \arg\min_{x\in B\subset\mathbb{C}^N}\big[\, F(x) + G(Lx) \,\big], \tag{19}$$
where $F\in\Gamma_0(\mathbb{C}^N)$ is differentiable with a $\beta$-Lipschitzian gradient (there exists a positive constant $\beta\in(0,\infty)$ such that $\|\nabla F(x) - \nabla F(y)\| \leq \beta\|x - y\|$ for all $x, y\in\mathbb{C}^N$), $G\in\Gamma_0(\mathbb{C}^M)$, $L\in\mathbb{C}^{M\times N}$, and $B$ is a set of box constraints bounding the modulus of each element of $x$: $|x_n|\leq b_n$ ($b_n > 0$). Then, its solution can be obtained approximately by iterating
$$\begin{aligned}
\tilde{x} &\leftarrow \Big(1 - \tfrac{2}{k+1}\Big)\hat{x} + \tfrac{2}{k+1}\,x,\\
x &\leftarrow P_B\Big[\, x - k\tau\Big\{ L^{*}\Big(\tfrac{\rho K}{k}(Lx - z) + r\Big) + \nabla F(\tilde{x}) \Big\} \Big],\\
z &\leftarrow \mathrm{prox}_{\frac{k}{\rho K}G}\Big[\, Lx + \tfrac{k}{\rho K}\,r \Big],\\
r &\leftarrow r + \tfrac{\rho k}{K}(Lx - z),\\
\hat{x} &\leftarrow \Big(1 - \tfrac{2}{k+1}\Big)\hat{x} + \tfrac{2}{k+1}\,x,
\end{aligned} \tag{20}$$
from initial values $x$, $z\,(= Lx)$, $\hat{x}\,(= x)$, and $r = 0$, where $k = 1,\ldots,K$ is the iteration index, $\rho$ is a parameter typically chosen around 1, $\tau = 2/(\beta + \rho K\|L\|_{S}^{2})$, $\|L\|_{S}$ is the spectral norm of $L$, $L^{*}$ denotes the adjoint of $L$, and $\mathrm{prox}_{G}[\cdot]$ denotes the proximity operator [33],
$$\mathrm{prox}_{\lambda G}[x] = \arg\min_{y\in\mathbb{C}^N}\Big[\, G(y) + \tfrac{1}{2\lambda}\|x - y\|_2^2 \,\Big], \tag{21}$$
and $P_B$ is the projection operator onto the convex set $B$,
$$P_B[x]_n = \begin{cases} b_n\,x_n/|x_n| & (|x_n|\geq b_n)\\ x_n & (|x_n| < b_n). \end{cases} \tag{22}$$
We should note that Eq. (20) is a simplified version modified so that it is easier to apply to the proposed method. ALP-ADMM can solve more general problems than Eq. (19) with more freedom on choice of the step-size parameters [34].
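A generic MATLAB skeleton of the iteration in Eq. (20) is sketched below under the stated simplifications; the handles gradF, proxG, Lop, and Ladj, the Lipschitz constant beta, the bound normL2 on $\|L\|_S^2$, and the box bound b are all assumed to be supplied by the caller, and the loop mirrors Eq. (20) as written above rather than the full algorithm of [34].

```matlab
% Generic skeleton of the simplified ALP-ADMM iteration in Eq. (20). All inputs
% are assumed to be supplied by the caller: gradF (gradient of F), proxG
% (called as proxG(v, gamma), returning prox_{gamma*G}(v)), Lop/Ladj (the
% operator L and its adjoint), beta (Lipschitz constant of the gradient of F),
% normL2 (an upper bound on ||L||_S^2), and b (box bound per element).
function xhat = alp_admm_sketch(x0, gradF, proxG, Lop, Ladj, beta, rho, K, normL2, b)
    x = x0; xhat = x0;
    z = Lop(x);
    r = zeros(size(z));
    tau = 2 / (beta + rho * K * normL2);
    for k = 1:K
        w  = 2 / (k + 1);
        xt = (1 - w) * xhat + w * x;                       % extrapolated point
        Lx = Lop(x);
        grad = Ladj((rho * K / k) * (Lx - z) + r) + gradF(xt);
        x  = proj_box(x - k * tau * grad, b);              % projected primal step
        Lx = Lop(x);
        z  = proxG(Lx + (k / (rho * K)) * r, k / (rho * K));
        r  = r + (rho * k / K) * (Lx - z);                 % dual update
        xhat = (1 - w) * xhat + w * x;                     % averaged iterate
    end
end

function y = proj_box(x, b)
    % Projection of Eq. (22): clip the modulus of each entry to at most b.
    y = min(abs(x), b) .* exp(1i * angle(x));
end
```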

Algorithm 1. Proposed algorithm (ALP-ADMM)

E. Proposed Algorithm

In order to apply ALP-ADMM to the proposed method of Eq. (18), we will restate the problem more precisely.

The observation $d\in\mathbb{C}^N$ is the EPF $e^{i\varphi}$ converted from a noisy wrapped-phase map $\varphi$ that consists of $N = N_{\mathrm{v}}\times N_{\mathrm{h}}$ pixels, where $N_{\mathrm{v}}$ and $N_{\mathrm{h}}$ are, respectively, the numbers of vertical and horizontal pixels. Let the two Gabor frame transformations $F_{g_{\mathrm{t}1}}:\mathbb{C}^N\to\mathbb{C}^{R_1 N}$ and $F_{g_{\mathrm{t}2}}:\mathbb{C}^N\to\mathbb{C}^{R_2 N}$ be simply written as $F_{[1]}$ and $F_{[2]}$, where $R_1$ and $R_2$ are the redundancies of the transformations. The first-order difference operator with the Neumann boundary condition $D:\mathbb{C}^N\ni x\mapsto(x_{\mathrm{v}}, x_{\mathrm{h}})\in\mathbb{C}^{2N}$ and its adjoint $D^{*}$ are defined as usual [36], where $x_{\mathrm{v}}$ and $x_{\mathrm{h}}$ are the vertical and horizontal differences of $x$. By defining a linear operator $L:\mathbb{C}^{2N}\ni(x,y)\mapsto(F_{[1]}(x-y),\, F_{[2]}y_{\mathrm{v}},\, F_{[2]}y_{\mathrm{h}})\in\mathbb{C}^{R_1 N + 2R_2 N}$ and two convex functions $F:\mathbb{C}^{2N}\ni(x,y)\mapsto\frac{1}{2}\|x-d\|_2^2\in\mathbb{R}$ and $G:\mathbb{C}^{R_1 N + 2R_2 N}\ni(x,y)\mapsto\lambda\{\alpha\|x\|_1 + (1-\alpha)\|y\|_1\}\in\mathbb{R}$, the proposed formulation of Eq. (18) can be viewed as an instance of the convex optimization problem in Eq. (19). Thus, Algorithm 1 can be derived by directly applying the ALP-ADMM iteration in Eq. (20), where $P_1$ is the projection of Eq. (22) with $b_n = 1$ for all $n$. After iterating the algorithm for some $K$, the denoised wrapped-phase map is obtained as $\tilde{\varphi} = \mathrm{Arg}[\hat{x}]$.
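A common way to realize the difference operator and its adjoint in MATLAB is sketched below; stacking the vertical and horizontal differences along a third array dimension is an implementation choice made here for convenience, and the adjoint relation can be verified numerically by comparing the inner products $\langle Dx, y\rangle$ and $\langle x, D^{*}y\rangle$ for random inputs.

```matlab
% First-order difference operator D with the Neumann boundary condition and
% its adjoint D*, as commonly used for total-variation-type regularizers [36].
% The (vertical, horizontal) differences are stacked along a third dimension;
% this stacking is an implementation choice, not prescribed by the paper.
function y = Dop(x)
    xv = [diff(x, 1, 1); zeros(1, size(x, 2))];   % vertical differences (last row 0)
    xh = [diff(x, 1, 2), zeros(size(x, 1), 1)];   % horizontal differences (last col 0)
    y  = cat(3, xv, xh);
end

function x = Dadj(y)
    yv = y(:, :, 1); yh = y(:, :, 2);
    x  = [-yv(1, :); -diff(yv(1:end-1, :), 1, 1); yv(end-1, :)] ...
       + [-yh(:, 1), -diff(yh(:, 1:end-1), 1, 2), yh(:, end-1)];
end
```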

F. Some Remarks

Some notes on the proposed method are presented in this subsection.

First, it is well known that a sparse solution obtained from an $\ell_1$-norm regularized problem contains a bias error: each nonzero element has a smaller modulus compared to the original observation. This phenomenon has triggered a great deal of research on reducing this error [27,37]. The proposed method also contains such bias, and thus a solution $\hat{x}$ has a smaller magnitude than one obtained by a hard-thresholding type method. However, this bias does not matter for the wrapped-phase denoising problem, because its objective is to recover the complex argument, which does not depend on the magnitude (see the experiments in Section 4). Therefore, the proposed method can enjoy the benefits of the $\ell_1$-norm, for example, robustness to initial values and noise thanks to its convexity, without much concern about its usual disadvantage.

Second, the proposed method treats the gradient terms $y_{\mathrm{v}}$ and $y_{\mathrm{h}}$ separately, which results in anisotropic thresholding, often considered undesirable in many applications. An isotropic variant of the proposed method can be obtained by simply replacing the soft-thresholder $T_{\mathrm{soft}}$ in line 14 of Algorithm 1 with the block soft-thresholding operator [38]. However, we empirically found that Algorithm 1 works better than its isotropic variant in terms of the signal-to-noise ratio, perhaps because treating the directional difference terms separately leads to a sparser Gabor representation, which can be recovered more efficiently by the thresholding. Therefore, we kept the sparsity-inducing regularizer as the $\ell_1$-norm instead of a mixed norm.

Finally, it is also well known that considering higher-order difference operators instead of only the first-order difference $D$ can improve the smoothness of a solution. However, simply replacing $D$ with a second-order difference performed worse in our case, presumably for the same sparsity-related reason explained in the previous paragraph. There are several methods for incorporating higher-order differences using infimal convolution, and the proposed method could be improved by them [39]. In this paper, we chose only the first-order difference because of its cheaper computational cost. For the same reason, although there is no limitation on using more than two Gabor frames, such a possible extension is not examined here.

4. EXPERIMENTAL RESULTS

In this paper, all experiments were performed in MATLAB 2015b on an Intel Core i7 3.0 GHz processor. Every calculation of the Gabor frame representation is done with the large time-frequency analysis toolbox (LTFAT), which is openly available software for frame-based signal processing [40,41].

A. Gabor Frame WFF

Before testing the proposed method, performance of the Gabor frame based WFF in Eq. (14) was investigated to show how redundancy is related to computational cost and denoising ability.

Figure 1 shows the computational time of Gabor frame representation for several redundancies obtained by varying the number of channels (frequency components) and sliding width of a window. Note that redundancy can take only some discrete values because the parameters (channel number and shifting width) are both integers. Since discrete WFT can be regarded as a fully redundant Gabor frame representation, it is obvious that replacing WFT with a Gabor frame can easily reduce the computational cost of any WFT-based method.

Fig. 1. Measured computational time of Gabor frame representation for an image of 64×64 pixels. It includes both forward and inverse transform as in Eq. (13). For reference, computational time of two WFT functions implemented by Kemao (wft2 [20] and wft2f [42]) are also shown as red dots.

The denoising ability of the Gabor frame-based WFF in Eq. (14) with both hard- and soft-thresholding is illustrated in Fig. 2. The noisy wrapped-phase map is obtained by adding complex-valued Gaussian noise of variance $2\sigma^2$ (the real and imaginary parts are independent Gaussians with variance $\sigma^2$) to the EPF of a Gaussian function,

$$14\pi\, e^{-n^2/200 - m^2/450}, \tag{23}$$
as in [14], where $n, m\in\mathbb{Z}$ are the horizontal and vertical indices representing the position of a pixel in an image whose origin is at the center of the image. The denoising performance was measured by the improvement in signal-to-noise ratio (ISNR),
$$\mathrm{ISNR} = 10\log_{10}\!\left[ \frac{\|e^{i\varphi} - e^{i\varphi_{\mathrm{true}}}\|_2^2}{\|e^{i\tilde{\varphi}} - e^{i\varphi_{\mathrm{true}}}\|_2^2} \right], \tag{24}$$
where $\varphi$ is a noisy wrapped-phase map, $\tilde{\varphi}$ is its denoised version, and $\varphi_{\mathrm{true}}$ is the noiseless one. Since the performance depends on both the redundancy and the threshold $\lambda$, every combination of them is tested. The second row of Fig. 2 shows the results for hard-thresholding, while the third row shows those for soft-thresholding.
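The test data and the quality measure can be generated in a few lines of MATLAB, as sketched below; the exact centering of the integer grid on the 100×100 image is an assumption, and the variable names are arbitrary.

```matlab
% Test data of Eq. (23) on a 100x100 grid, complex Gaussian noise of variance
% 2*sigma^2 added to the EPF, and the ISNR of Eq. (24). The centering of the
% index grid is an assumption; variable names are arbitrary.
[n, m]    = meshgrid(-50:49);                   % n: horizontal, m: vertical index
phi_true  = 14 * pi * exp(-n.^2 / 200 - m.^2 / 450);
sigma     = 0.5;
epf_noisy = exp(1i * phi_true) ...
          + sigma * (randn(size(phi_true)) + 1i * randn(size(phi_true)));
phi_noisy = angle(epf_noisy);                   % noisy wrapped-phase map

% ISNR of Eq. (24): pn = noisy, pd = denoised, pt = true wrapped-phase map.
isnr = @(pn, pd, pt) 10 * log10( ...
    norm(exp(1i*pn(:)) - exp(1i*pt(:)))^2 / ...
    norm(exp(1i*pd(:)) - exp(1i*pt(:)))^2 );
```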

Fig. 2. Comparison of the denoising ability of hard- and soft-thresholding operators combined with Gabor frames as in Eq. (14). The noisy 100×100 image of the Gaussian function in Eq. (23) (noise level σ=0.5) in (a) is used as test data. The channel number [how many frequency components are used for each direction (vertical and horizontal)] and shifting step (how many pixels are shifted to compute the adjacent window) of a Gabor frame are adjusted to control the redundancy shown in (b). Their computational times are depicted in (c). The second row is the result for hard-thresholding, while the third row is for soft-thresholding. The results for the lower left part in (b) are omitted because their redundancies are less than or equal to 1, which means they are not a frame [a forward and inverse transform pair cannot reconstruct a function as in Eq. (13)]. (d) and (g) are the ISNR obtained by varying the threshold λ in Eqs. (5) and (9), where each line corresponds to a pixel in (b). The maximum values of each line in (d) and (g), which are the best achievable ISNR for (a), are illustrated in (e), (f) and (h), (i), respectively.

From Figs. 2(f) and 2(i), it can be confirmed that the performances of the hard- and soft-thresholders are almost the same when they are combined with the fully redundant Gabor frame, i.e., WFT. On the other hand, their performances differ greatly when the redundancy is cut down: soft-thresholding is more stable against decreasing redundancy than hard-thresholding. This result indicates that WFF with a Gabor frame and the soft-thresholding operator can be accelerated by reducing its redundancy without much loss in denoising performance. It also justifies the use of the soft-thresholding operator instead of hard-thresholding in the proposed method. Note that soft-thresholding also has an advantage from the optimization point of view: the corresponding cost function, the $\ell_1$-norm, is convex, which is an important property, as explained in Section 3.B.

Based on these results, for comparison to the proposed method, WFF is performed with the soft-thresholding operator in the next experiment.

B. Proposed Method

The denoising performance of the proposed method is compared with WFF and PEARLS (phase estimation using adaptive regularization based on local smoothing), which is one of the state-of-the-art methods for wrapped-phase denoising using window-size adaptation [14]. The test data are the same Gaussian function as in Eq. (23), except for a quarter region where the Gaussian is replaced by zero. This is chosen to check the ability to preserve phase discontinuity.

The two window functions $g_{\mathrm{t}1}$ and $g_{\mathrm{t}2}$ used in the Gabor frames of the proposed method are shown in Figs. 3(a) and 3(b), respectively. $g_{\mathrm{t}1}$ is the canonical tight window for a 15-channel Gabor system with 5-pixel shifting constructed from the Hamming window, whose support is 15×15 pixels, while $g_{\mathrm{t}2}$ is that for a 4-channel system with 2-pixel shifting constructed from the iterated sine window of 3×3 pixels [43]. Their redundancies are $R_1 = 9$ and $R_2 = 4$. These windows were chosen intuitively, with the hope that the Gabor system corresponding to $g_{\mathrm{t}1}$ is adapted to the smooth part, while that of $g_{\mathrm{t}2}$ is adapted to the detailed part of the phase. There should be a better pair of windows for the test data because, in general, the best window function depends on the application and data; we do not consider such adaptation of windows here.

Fig. 3. Window functions used for the proposed method. The canonical tight windows gt1 and gt2 are illustrated in (a) and (b), respectively. For comparison, the Gaussian window used for calculating WFF is also depicted in (c); only part of it is shown in (c) for visual convenience since its support is not compact. These functions are symmetric (rotating the 25×25 pixel plane 90 deg does not change their appearance).

Figure 4 summarizes the denoising performances for noise levels σ = 0.25, 0.5, 0.75, and 1, which correspond approximately to SNR = 9, 3, −0.5, and −3 dB, respectively. WFF was performed by soft-thresholding with threshold λ on a fully redundant Gabor system whose redundancy is 10,000. Its window function was the Gaussian function shown in Fig. 3(c), which was chosen to have a shape similar to $g_{\mathrm{t}1}$ in Fig. 3(a). The filtering step of PEARLS was tested by using the MATLAB code provided in [44]. It adaptively changes the window size so that the smooth part is treated by a large window while the rapidly changing part is treated by a smaller one. Γ is a parameter that controls this adaptation; see [14] for details. The proposed method was run for 100 iterations (K = 100) with α = 0.5 and ρ = 1, and the initial value of $x$ was set to $F_{\mathrm{inv}}^{[1]} T_{\mathrm{soft}}^{\lambda\alpha} F_{[1]} d$.

Fig. 4. ISNR for several noise levels. (a) is the results for WFF using soft-thresholding. (b) is for PEARLS, whose tuning parameter Γ is described in [14], and (c) is for the proposed method. Each line corresponds to testing data with different noise levels: σ=0.25 (blue), 0.5 (red), 0.75 (yellow), and 1 (green). Some illustrative examples of these data are shown in Figs. 5 and 6.

Figures 5 and 6 show visual examples of the results. The parameters for obtaining them are listed in Table 1; they were chosen based on Fig. 4 so that nearly the best ISNR was achieved for each situation. The corresponding unwrapped results were obtained by the phase-unwrapping max-flow (PUMA) algorithm [4].


Table 1. Parameters for Obtaining Figs. 5 and 6


Table 2. Computational Time for Executing Each Method Once

Fig. 5. Some examples of the results for mildly contaminated data. The test data are obtained by replacing a quarter part of Eq. (23) with zero. Each column shows (from left to right) noisy data to be denoised, results for WFF, results for PEARLS, and results for the proposed method. Each row shows (from top to bottom) wrapped-phase maps, error of wrapped phase, and corresponding unwrapped phase obtained by PUMA algorithm [4] for noise levels σ=0.25, 0.5. The near side of 3D graphs of error and unwrapped phases corresponds to the upper left corner of the wrapped-phase maps. The parameters used for each method are listed in Table 1.

Fig. 6. Some examples of the results for severely contaminated data. The test data is obtained by replacing a quarter part of Eq. (23) with zero. Each column shows (from left to right) noisy data to be denoised, results for WFF, results for PEARLS, and results for the proposed method. Each row shows (from top to bottom) wrapped phase maps, error of wrapped phase, and corresponding unwrapped phase obtained by PUMA algorithm [4] for noise levels σ=0.75, 1. The near side of 3D graphs of error and unwrapped phases corresponds to the upper left corner of the wrapped-phase maps. The parameters used for each method are listed in Table 1.

Let us point out a few things about the proposed method:

  • Comparison to WFF. Since the proposed method is an extended version of WFF, their experimental results are similar in terms of ISNR. However, there are notable differences in the presence of artifacts. The results for WFF contain some low-frequency patterns that deform the flat part of the original phase, as in Figs. 5 and 6. This is caused by violation of the sparsity assumption owing to artificial low-frequency components created by windowing. Such components can be ignored if a very large window is used, but that causes another problem: oversmoothing. On the other hand, the results for the proposed method contain fewer artifacts thanks to the high-pass filter $D$ in the penalty term. In addition, the use of multiple windows preserved discontinuities, or edges, more clearly than WFF.
  • Comparison to PEARLS. Since the core feature of PEARLS is to choose the window size adaptively depending on the smoothness of the underlying phase, estimation of the window size is the most important step in achieving a good result. Indeed, it works quite well when the noise level is moderate. However, the window size cannot be estimated reasonably for severely contaminated data because PEARLS then tries to preserve noise patterns as detail. Although this phenomenon can be eliminated by choosing larger windows more frequently, such a strategy cannot preserve discontinuity and detail, which makes PEARLS less attractive. In contrast, the proposed method is more robust in highly noisy situations. This is presumably because it is formulated as a convex optimization problem that does not change the window size directly but combines multiple windows in the form of an infimal convolution.
  • Computational time. Table 2 shows the computational time of each method. Although the proposed method iterated the loop 100 times, which requires computing the forward and inverse Gabor frame transformations more than 400 times, it can be performed faster than WFF. This is because the redundancies of the Gabor frames, $R_1 = 9$ and $R_2 = 4$, are much smaller than that of WFF (10,000). Note that the balance between computational cost and denoising performance of the proposed method can be adjusted by changing the redundancies and the number of iterations $K$. Also, it is possible to increase its performance for a fixed $K$ by tuning the parameters α and ρ. Moreover, implementation in a faster language, such as C, could further decrease the execution time of the proposed method, which is implemented in MATLAB here.

5. CONCLUSIONS

In this paper, a convex optimization-based method for denoising a wrapped-phase map has been proposed. It can be regarded as an extension of the well-known WFF in three senses: (1) replacing WFT with a Gabor frame decreases the computational cost; (2) the formulation based on infimal convolution allows simultaneous use of multiple windows; and (3) incorporating a difference operator enhances sparsity and improves qualitative performance. The experimental results showed that the proposed method obtains better results, with fewer artifacts, than the ordinary WFF and PEARLS.

We should note that automatic estimation of a good combination of regularization parameters is a highly desirable feature for practical use. There is much research on such automatic tuning, and that topic should be investigated in the future for the proposed method as well.

Funding

Japan Society for the Promotion of Science (JSPS) (15J08043).

REFERENCES

1. K. Itoh, “Analysis of the phase unwrapping algorithm,” Appl. Opt. 21, 2470 (1982). [CrossRef]  

2. D. C. Ghiglia and M. D. Pritt, Two-Dimensional Phase Unwrapping: Theory, Algorithms and Software (Wiley, 1998).

3. J. M. Bioucas-Dias and J. M. N. Leitao, “The ZπM algorithm: a method for interferometric image reconstruction in SAR/SAS,” IEEE Trans. Image Process. 11, 408–422 (2002). [CrossRef]  

4. J. M. Bioucas-Dias and G. Valadao, “Phase unwrapping via graph cuts,” IEEE Trans. Image Process. 16, 698–709 (2007). [CrossRef]  

5. R. Yamaki and A. Hirose, “Singularity-spreading phase unwrapping,” IEEE Trans. Geosci. Remote Sens. 45, 3240–3251 (2007). [CrossRef]  

6. S. Tomioka and S. Nishiyama, “Phase unwrapping for noisy phase map using localized compensator,” Appl. Opt. 51, 4984-4994 (2012). [CrossRef]  

7. D. Kitahara, M. Yamagishi, and I. Yamada, “A virtual resampling technique for algebraic two-dimensional phase unwrapping,” in Proceedings of the IEEE International Conference on Acoustics, Speech Signal Processing (ICASSP) (IEEE, 2015), pp. 3871–3875.

8. D. Kitahara and I. Yamada, “Algebraic phase unwrapping based on two-dimensional spline smoothing over triangles,” IEEE Trans. Signal Process. 64, 2103–2118 (2016). [CrossRef]  

9. A. Capanni, L. Pezzati, D. Bertani, M. Cetica, and F. Francini, “Phase-shifting speckle interferometry: a noise reduction filter for phase unwrapping,” Opt. Eng. 36, 2466–2472 (1997). [CrossRef]  

10. H. A. Aebischer and S. Waldner, “A simple and effective method for filtering speckle-interferometric phase fringe patterns,” Opt. Commun. 162, 205–210 (1999). [CrossRef]  

11. K. Qian, S. H. Soon, and A. Asundi, “A simple phase unwrapping approach based on filtering by windowed Fourier transform,” Opt. Laser Technol. 37, 458–462 (2005). [CrossRef]  

12. Q. Kemao, W. Gao, and H. Wang, “Windowed Fourier-filtered and quality-guided phase-unwrapping algorithm,” Appl. Opt. 47, 5420-5428 (2008). [CrossRef]  

13. V. Katkovnik, J. Astola, and K. Egiazarian, “Phase local approximation (PhaseLa) technique for phase unwrap from noisy data,” IEEE Trans. Image Process. 17, 833–846 (2008). [CrossRef]  

14. J. Bioucas-Dias, V. Katkovnik, J. Astola, and K. Egiazarian, “Absolute phase estimation: adaptive local denoising and global unwrapping,” Appl. Opt. 47, 5358–5369 (2008). [CrossRef]  

15. M. A. Navarro, J. C. Estrada, M. Servin, J. A. Quiroga, and J. Vargas, “Fast two-dimensional simultaneous phase unwrapping and low-pass filtering,” Opt. Express 20, 2556–2561 (2012). [CrossRef]  

16. J.-F. Weng and Y.-L. Lo, “Integration of robust filters and phase unwrapping algorithms for image reconstruction of objects containing height discontinuities,” Opt. Express 20, 10896–10920 (2012). [CrossRef]  

17. H. Y. H. Huang, L. Tian, Z. Zhang, Y. Liu, Z. Chen, and G. Barbastathis, “Path-independent phase unwrapping using phase gradient and total-variation (TV) denoising,” Opt. Express 20, 14075–14089 (2012). [CrossRef]  

18. X. Xie and Y. Li, “Enhanced phase unwrapping algorithm based on unscented Kalman filter, enhanced phase gradient estimator, and path-following strategy,” Appl. Opt. 53, 4049–4060 (2014). [CrossRef]  

19. Q. Kemao, “Windowed Fourier transform for fringe pattern analysis,” Appl. Opt. 43, 2695–2702 (2004). [CrossRef]  

20. Q. Kemao, “Two-dimensional windowed Fourier transform for fringe pattern analysis: principles, applications and implementations,” Opt. Lasers Eng. 45, 304–317 (2007). [CrossRef]  

21. Q. Kemao, L. T. H. Nam, L. Feng, and S. H. Soon, “Comparative analysis on some filters for wrapped phase maps,” Appl. Opt. 46, 7412–7418 (2007). [CrossRef]  

22. Q. Kemao, H. Wang, and W. Gao, “Windowed Fourier transform for fringe pattern analysis: theoretical analyses,” Appl. Opt. 47, 5408–5419 (2008). [CrossRef]  

23. C. Quan, H. Niu, and C. Tay, “An improved windowed Fourier transform for fringe demodulation,” Opt. Laser Technol. 42, 126–131 (2010). [CrossRef]  

24. Q. Kemao, “Applications of windowed Fourier fringe analysis in optical measurement: a review,” Opt. Lasers Eng. 66, 67–73 (2015). [CrossRef]  

25. Q. Kemao, “On window size selection in the windowed Fourier ridges algorithm,” Opt. Lasers Eng. 45, 1186–1192 (2007). [CrossRef]  

26. Q. Kemao, “On window size selection in the windowed Fourier ridges algorithm: addendum,” Opt. Lasers Eng. 45, 1193–1195 (2007). [CrossRef]  

27. R. Chartrand, “Shrinkage mappings and their induced penalty functions,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (IEEE, 2014), pp. 1026–1029.

28. O. Christensen, Frames and Bases: An Introductory Course (Springer, 2008).

29. F. Bach, R. Jenatton, J. Mairal, and G. Obozinski, “Optimization with sparsity-inducing penalties,” Found. Trends Mach. Learn. 4, 1–106 (2011). [CrossRef]  

30. M. Elad, P. Milanfar, and R. Rubinstein, “Analysis versus synthesis in signal priors,” Inv. Probl. 23, 947–968 (2007). [CrossRef]  

31. H. H. Bauschke and P. L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces (Springer, 2011).

32. P. L. Combettes and J.-C. Pesquet, “Proximal splitting methods in signal processing,” in Fixed-Point Algorithms for Inverse Problems in Science and Engineering, H. H. Bauschke, R. Burachik, P. L. Combettes, V. Elser, D. R. Luke, and H. Wolkowicz, eds. (Springer, 2011), pp. 185–212.

33. N. Parikh and S. Boyd, “Proximal algorithms,” Found. Trends Optim. 1, 127–239 (2014). [CrossRef]  

34. Y. Ouyang, Y. Chen, G. Lan, and J. Eduardo Pasiliao, “An accelerated linearized alternating direction method of multipliers,” SIAM J. Imaging Sci. 8, 644–681 (2015). [CrossRef]  

35. S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Found. Trends Mach. Learn. 3, 1–122 (2010). [CrossRef]  

36. A. Chambolle, “An algorithm for total variation minimization and applications,” J. Math. Imaging Vis. 20, 73–87 (2004). [CrossRef]  

37. H. Zou, “The adaptive lasso and its oracle properties,” J. Am. Stat. Assoc. 101, 1418–1429 (2006). [CrossRef]  

38. S. Ono and I. Yamada, “Signal recovery with certain involved convex data-fidelity constraints,” IEEE Trans. Signal Process. 63, 6149–6163 (2015). [CrossRef]  

39. S. Setzer, G. Steidl, and T. Teuber, “Infimal convolution regularizations with discrete l1-type functionals,” Commun. Math. Sci. 9, 797–827 (2011). [CrossRef]  

40. P. L. Søndergaard, B. Torrésani, and P. Balazs, “The linear time frequency analysis toolbox,” Int. J. Wavelets Multiresolut. Inf. Process. 10, 1250032 (2012). [CrossRef]  

41. Z. Průša, P. L. Søndergaard, N. Holighaus, C. Wiesmeyr, and P. Balazs, “The large time-frequency analysis toolbox 2.0,” in Sound, Music, and Motion, M. Aramaki, O. Derrien, R. Kronland-Martinet, and S. Ystad, eds. (Springer, 2014), pp. 419–442.

42. Q. Kemao, “Windowed Fourier transform for fringe pattern analysis” (2009) [Online]. Available: http://www.mathworks.com/matlabcentral/fileexchange/24852.

43. E. Wesfreid and M. V. Wickerhauser, “Adapted local trigonometric transforms and speech processing,” IEEE Trans. Signal Process. 41, 3596–3600 (1993). [CrossRef]  

44. J. Bioucas-Dias, “Code” (2012) [Online] Available: http://www.lx.it.pt/~bioucas/code.htm.
