Optica Publishing Group

Boundary-aware single fringe pattern demodulation

Open Access

Abstract

Optical interferometric techniques offer non-contact, high-accuracy and full-field measurements, which are very attractive in various research and application fields. Single fringe pattern processing is often needed when measuring fast phenomena. However, several difficulties are encountered in phase retrieval, among which the discontinuity problem of the fringe pattern is challenging and requires attention due to the increasing complexity of manufactured pieces. In this paper, we propose a complete flowchart for discontinuous single fringe pattern processing, which uses segmentation as universal pre-processing for all discontinuous fringe pattern problems. Within the flowchart, we also propose a systematic way to introduce boundary-awareness into demodulation methods: a masking function to improve demodulation accuracy and a quality-guided scanning strategy with a novel composite quality map to improve demodulation robustness. To the best of our knowledge, this is the first time a complete solution has been proposed for single and discontinuous fringe pattern processing. Three typical demodulation methods, the frequency-guided regularized phase tracker with quadratic phase matching, the windowed Fourier ridges, and the spiral phase quadrature transform, are used to demonstrate how demodulation methods can be made boundary-aware. The proposed methods are verified by successful demodulation results from both simulated and experimental fringe patterns.

© 2017 Optical Society of America

1. Introduction

Efficient and accurate measurement is required in various research and application fields, such as mechanical engineering, material engineering, non-destructive testing, etc. Optical interferometric techniques are very attractive because they offer non-contact, highly sensitive and full-field measurements. Fringe patterns, the measurement results of most optical interferometric techniques, can be modeled as [1]

f(x,y)=a(x,y)+b(x,y)cos[φ(x,y)]+n(x,y),
where f(x,y) is the measured and known intensity; a(x,y), b(x,y), and φ(x,y) are the unknown background intensity, fringe amplitude and phase distribution, respectively; n(x,y) represents noise. The purpose of fringe processing is to retrieve the phase φ(x,y), for which multiple phase-shifted fringe patterns or an extra carrier frequency are often introduced with specific experimental settings [1]. However, such settings are hard to carry out when measuring a fast phenomenon, and single fringe pattern processing, especially single closed fringe pattern processing, is then needed. The difficulties of single fringe pattern processing can be summarized as: (D1) ill-posedness: for each pixel, there are three unknowns, a(x,y), b(x,y), and φ(x,y), but only one known intensity value f(x,y); (D2) sign ambiguity: both φ(x,y) and -φ(x,y) are qualified solutions; (D3) order ambiguity: φ(x,y) + 2kπ with an arbitrary integer k is a qualified solution; (D4) inevitable noise; (D5) discontinuity: a(x,y), b(x,y), and φ(x,y) can be discontinuous.

There are abundant methods addressing the difficulties (D1-D4) [2–11]. For example, background removal and amplitude normalization are optional pre-processing steps to address the (D1) difficulty [2–4]; demodulation methods such as frequency-guided regularized phase tracker, windowed Fourier ridges, and Hilbert transform provide reliable solutions to some or all of the (D1-D4) difficulties [5–7]; various phase unwrapping methods provide solutions to the (D3) difficulty [8–10]; denoising has been widely applied for the (D4) difficulty [6, 11].

In contrast, there are limited discussions on the (D5) difficulty, although discontinuities are often encountered as manufactured pieces nowadays are becoming more complicated with multiple parts. In single fringe pattern processing, due to the lack of information such as phase-shifted fringe patterns and carrier frequency, one has to make assumptions on the spatial variations of the background, amplitude and phase. For example, assuming the background and amplitude to be locally constant and the phase locally linear has been a common practice in single fringe pattern processing. Such assumptions, however, are invalid at discontinuities, and consequently, methods based on these assumptions will fail. This requires demodulation methods to treat discontinuities with special consideration, for which there are mainly two approaches. An explicit and straightforward approach is to recognize the discontinuities before further processing. A snake-assisted quality-guided phase unwrapping [12] incorporates the gradient vector flow (GVF) snake model to produce a clear boundary for piece-wise unwrapping, although human interaction is required for better results. A local orientation coherence based fringe segmentation (LOCS) method also divides discontinuous fringe patterns into continuous segments, based on which a boundary-aware coherence enhancing diffusion (BCED) is established for discontinuous fringe pattern denoising [13]. Another, more implicit, approach attempts to integrate the discontinuity information into the designed methods. For example, a second-order robust regularization cost function has been used to deal with discontinuity by preserving the edge information [14]. The Hilbert-assisted wavelet transform [15] and the Shearlet transform [16] are used to reduce the phase extraction error at phase-discontinuous points due to the localized nature of these transforms.

In this paper, we follow the first approach because of the following merits. First, a clear-boundary segmentation of the fringe pattern can work as a universal pre-processing step for various discontinuous fringe pattern processing tasks, including denoising, demodulation, unwrapping and so on. Second, with the identified boundaries, many state-of-the-art methods can be easily developed into a boundary-aware version. By incorporating this idea, a complete flowchart for discontinuous single fringe pattern processing is proposed in Fig. 1 as our first contribution. Although the flowchart seems simple and natural, such a complete solution has not yet been seen in the literature. The success of this technique is partially due to our earlier satisfactory segmentation work [13], which will be briefly introduced in Sec. 2 along with other preparatory processing steps. As the second contribution, we propose a systematic way to introduce boundary-awareness into demodulation methods in Sec. 3. A masking function is presented to exclude irrelevant pixels to improve processing accuracy (Sec. 3.1); a quality-guided scanning strategy with a novel composite quality map is proposed to postpone the processing of boundary pixels so as to improve the processing robustness (Sec. 3.2). Lastly, the masking function and scanning strategy are applied to three typical demodulation methods, the frequency-guided regularized phase tracker with quadratic phase matching (QFGRPT), the windowed Fourier ridges (WFR) and the spiral phase quadrature transform (SQT), in Secs. 4.1-4.3, respectively. The scanning strategy is also applied to phase unwrapping in Sec. 4.4. Results and discussions are given in Sec. 5 and a conclusion is drawn in Sec. 6.

Fig. 1 Flowchart of boundary-aware single fringe pattern processing (dotted squares indicate optional processing steps)

2. Preparatory processing for demodulation of discontinuous fringe patterns

In this section, the preparatory processing steps applied in this paper before fringe demodulation are briefly introduced, including the LOCS method for fringe segmentation and the BCED method for fringe denoising from our earlier segmentation work [13], as well as a boundary-aware spatial envelope scanning for background removal and fringe normalization.

2.1 Fringe segmentation by LOCS

LOCS recognizes boundaries based on the fringe property that a continuous fringe pattern has a smooth flow-like structure, which is interrupted by discontinuity. This property can be picked up by eigenvalue analysis of local orientation coherence in the form of a structure tensor. Given a fringe pattern f(x,y), the structure tensor for a local block centering at pixel (x,y) is constructed

T(x,y)=\begin{bmatrix}\sum_{(\varepsilon,\eta)\in N_{x,y}}\rho(\varepsilon,\eta)\,[f_x^{\sigma}(\varepsilon,\eta)]^{2} & \sum_{(\varepsilon,\eta)\in N_{x,y}}\rho(\varepsilon,\eta)\,f_x^{\sigma}(\varepsilon,\eta)\,f_y^{\sigma}(\varepsilon,\eta)\\ \sum_{(\varepsilon,\eta)\in N_{x,y}}\rho(\varepsilon,\eta)\,f_x^{\sigma}(\varepsilon,\eta)\,f_y^{\sigma}(\varepsilon,\eta) & \sum_{(\varepsilon,\eta)\in N_{x,y}}\rho(\varepsilon,\eta)\,[f_y^{\sigma}(\varepsilon,\eta)]^{2}\end{bmatrix},
where N_{x,y} is the neighborhood region centering at (x,y) and the coordinates of the pixels in this region are denoted as (ε,η); ρ(ε,η) is a fixed window function which is set to a unit value here, but its values will be changed in the refinement step [13]; f_x^σ and f_y^σ are the first-order derivatives of f along x and y after the fringe pattern is diffused by a Gaussian with a kernel size of σ. The eigenvalues of this matrix, λ1 and λ2, can be easily calculated. If the fringe in the block is continuous, there is only one dominant orientation, i.e., λ1>0, λ2≈0; while if the block includes a discontinuity, there are at least two dominant orientations, i.e., λ1>λ2>0. Thus discontinuities can be detected according to λ2 as follows,
D(x,y)=[λ2(x,y)>thr],
where 1 and 0 in D(x,y) indicate discontinuity and continuity, respectively; thr is a threshold whose value can be determined automatically by evaluating the histogram of λ2 [13]. After the thresholding, the discontinuous regions are identified. Thinning is applied on D(x,y) to obtain one-pixel-wide boundaries.
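The λ2-based discontinuity detection above can be sketched as follows. This is a minimal illustration with parameters of our own choosing (a separable box filter stands in for both the Gaussian smoothing and the unit window ρ; function names are ours, not the paper's):

```python
import numpy as np

def box_blur(a, k=5):
    """Separable box filter; a simple stand-in for the Gaussian/window pair."""
    kern = np.ones(k) / k
    for axis in (0, 1):
        a = np.apply_along_axis(lambda v: np.convolve(v, kern, mode="same"), axis, a)
    return a

def lambda2_map(f, k=5):
    """Smaller eigenvalue of the local structure tensor at every pixel.

    A continuous fringe block has one dominant orientation (lambda2 ~ 0);
    a block containing a discontinuity has at least two (lambda2 > 0).
    """
    fs = box_blur(f, k)                       # pre-smoothed fringe pattern
    fx, fy = np.gradient(fs)                  # first-order derivatives
    jxx = box_blur(fx * fx, k)                # block averages of the tensor
    jxy = box_blur(fx * fy, k)                # entries (proportional to the
    jyy = box_blur(fy * fy, k)                # sums, which is all that matters)
    tr, det = jxx + jyy, jxx * jyy - jxy ** 2
    disc = np.sqrt(np.maximum((tr / 2) ** 2 - det, 0.0))
    return tr / 2 - disc                      # lambda2, the smaller eigenvalue
```

Thresholding λ2 (e.g., from its histogram as in [13]) and thinning then give the one-pixel-wide boundaries.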

The boundaries so obtained have two imperfections. First, some boundary pixels are missing when two sides of a boundary have similar fringe orientations. Second, the detected boundary pixels are often not precise since the true boundary is not always located at the center of the discontinuous region. To deal with these problems, a spline representation of the discontinuous boundaries is established first to fill in the missing gaps. Next, the control points of these spline curves are pushed/pulled to adjust the location of the boundary pixels to improve the accuracy. The details of the refinement can be found in [13].

To see the results of LOCS, two fringe patterns of size 256 × 256 containing linear or circular boundaries are simulated and shown in Figs. 2(a) and 2(b), respectively. The former is corrupted by additive noise with a mean of 0 and a standard deviation of 1.0, and the latter by speckle noise, as shown in Figs. 2(c) and 2(d), respectively. The simulated fringe patterns are segmented using the LOCS method. The segmentation results are shown in Fig. 3, where the red lines indicate the ideal boundaries in Figs. 3(a) and 3(b) and the satisfactorily identified boundaries in Figs. 3(c) and 3(d). Quantitatively, 327 and 599 pixels are classified to the wrong segment in Figs. 3(c) and 3(d), respectively. For both segmentations, the maximum deviation of the identified boundary from the true boundary is 2 pixels.

Fig. 2 Simulated fringe patterns. (a) A simulated fringe pattern with linear boundary; (b) a simulated fringe pattern with circular boundary; (c) Fig. (a) with additive noise; (d) Fig. (b) with speckle noise.

Fig. 3 Segmentation results of (a) Fig. 2(a); (b) Fig. 2(b); (c) Fig. 2(c); (d) Fig. 2(d).

2.2 Fringe denoising by BCED

Denoising is often required before single fringe pattern demodulation when the noise level is high. Many existing denoising techniques can potentially be used; BCED [13] is chosen in this paper. It is a boundary-aware partial differential equation based method that also requires prior knowledge of the discontinuous boundaries. The details of BCED are skipped to keep the paper concise and focused on our proposed demodulation methods in the next two sections. The denoising results are shown in Fig. 4, which demonstrate that even very heavy noise can be effectively suppressed with boundaries preserved.

Fig. 4 Denoising results of (a) Fig. 2(c); (b) Fig. 2(d).

2.3 Background removal and fringe normalization

To address the difficulty (D1) in fringe demodulation, background removal and amplitude normalization are sometimes required before demodulation. After the background is removed, the fringe becomes

fa(x,y)=b(x,y)cos[φ(x,y)];
when the amplitude is further normalized, the fringe becomes

fn(x,y)=cos[φ(x,y)].

The background and amplitude can be estimated from the fringe pattern being processed or directly measured in advance. Because of the slow-varying nature of the background and amplitude, they can be estimated either globally or region-wise. There are a number of techniques for this purpose such as high-pass filtering [2], Hilbert transform [3], and spatial envelope scanning [4]. In this paper, we use spatial envelope scanning to perform the background removal and fringe normalization, with the derivative-sign method for extrema estimation and biharmonic spline interpolation for envelope fitting. Spatial envelope scanning is a region-wise method and can be directly applied to each fringe segment after segmentation, so that no boundary is crossed. The normalization results of Fig. 4 are shown in Fig. 5. Demodulation is then performed on fa(x,y) or fn(x,y) in the rest of the paper.
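To make the role of normalization concrete, here is a deliberately crude sketch. It is NOT the paper's spatial envelope scanning (no extrema detection or spline fitting); it instead exploits the fact that, for a locally constant amplitude b and several fringes per block, the local mean of fa² is about b²/2. All names and parameters are ours:

```python
import numpy as np

def normalize_crude(fa, k=15):
    """Approximate fn = cos(phi) from fa = b*cos(phi) by estimating the
    local amplitude as b ~ sqrt(2 * local_mean(fa^2))."""
    kern = np.ones(k) / k
    m2 = fa * fa
    for axis in (0, 1):                    # separable local mean of fa^2
        m2 = np.apply_along_axis(lambda v: np.convolve(v, kern, mode="same"), axis, m2)
    b = np.sqrt(2.0 * m2)                  # local amplitude estimate
    return fa / np.maximum(b, 1e-12)       # guard against division by zero
```

The estimate degrades near image borders and near boundaries, which is exactly why the paper applies its envelope scanning per segment.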

Fig. 5 Background removal and amplitude normalization results of (a) Fig. 4(a); (b) Fig. 4(b).

3. Dealing with boundaries

As mentioned earlier, for effective demodulation of single fringe patterns, the phase is usually assumed to be locally linear or quadratic. Unfortunately, such an assumption is invalid at discontinuities and often causes demodulation to fail. If we demodulate the fringe pattern segment by segment, the phase inside each segment becomes continuous. Thus the above assumption becomes valid again, which is the motivation of our boundary-aware fringe demodulation methods. Two key issues are addressed in this section: a general masking function to adapt to boundaries and a composite quality map to guide the demodulation.

3.1 Masking function for boundary adaption

Assume that a segment s of a fringe pattern f(x,y) is to be demodulated. We pad s by zeros until this segment has the same size as the whole fringe pattern and denote the padded segment as S, which can be expressed as

S(x,y)=f(x,y)M(x,y),
where a mask M is introduced as
M(x,y)=\begin{cases}1, & (x,y)\in s\\ 0, & (x,y)\notin s.\end{cases}
This mask function is able to recognize boundaries: for an inner pixel of the segment, the mask values of its neighbors will all be 1, while for a pixel near the boundary, some mask values of its neighbors will be 1 and others 0. Consequently, the fringe segment after background removal and normalization can be denoted as Sa(x,y) and Sn(x,y), respectively. As we will see later, most demodulation methods are local processing methods. When one pixel is demodulated, its neighboring pixels are involved in a form of summation. The mask can then be used to weight the contributions of the pixels in the summation for boundary adaptation. In detail, consider a function of the form
h(x,y)=\sum_{(\varepsilon,\eta)\in N_{x,y}} r(\varepsilon,\eta),
where r(ε,η) differs from one demodulation method to another. Equation (8) can be used as a general representation of many demodulation methods in which a summation is involved, including spatial domain methods such as the regularized phase tracker [2, 5] and frequency domain methods involving the Fourier transform [6, 7]. Such examples will be given in Sec. 4. The function can be easily adapted to a segment as
h_B(x,y)=\sum_{(\varepsilon,\eta)\in N_{x,y}} M(\varepsilon,\eta)\,r(\varepsilon,\eta)
to null the contribution from irrelevant pixels outside the segment s, with M(ε,η) defined in Eq. (7). Consequently, the masking function can improve the demodulation accuracy at pixels around the boundary. Although this adaptation looks simple (indeed it is very easy to implement), it is powerful, as it can universally adapt all the demodulation methods covered in this paper.

3.2 Composite quality map for boundary-aware scanning strategy

Due to the dependency between pixels, single fringe pattern demodulation is often carried out pixel by pixel in a sequential manner. The information estimated from the current pixel can be used as the reference for the next pixel. A good scanning strategy is required to minimize the possibility of failure caused by “bad” pixels and consequently maximize the robustness of fringe processing. A quality-guided scanning path processes a fringe pattern from high-quality pixels to low-quality ones and has been widely adopted. A quality map Q(x,y) has various options, such as image quality, fringe frequency and so on [1]. In previous works on continuous fringe patterns, boundaries are rarely considered in the quality map construction. However, in the discontinuous case, the boundary cannot be ignored. Because many demodulation methods are block-based, when processing the boundary pixels, the block will cover pixels outside the region being processed. Thus the boundary pixels are deemed low-quality pixels, and this should be reflected in the quality map. In this paper, we propose a general scanning strategy for both demodulation and unwrapping with a composite quality map, which augments the original quality with boundary information. The composite quality map is defined as

P(x,y)=Q(x,y)+Q_{\max}\sum_{(\varepsilon,\eta)\in N_{x,y}} M(\varepsilon,\eta),
where Qmax is the maximum value of Q(x,y) if available, or an estimated maximum. The first term inherits the original quality map while the second term makes sure that a pixel closer to a boundary is processed later. This composite quality is also applicable to continuous fringe patterns, where the pixels around image borders are postponed for processing. The scanning strategy is outlined as follows.

  • 1) Choose a seed pixel at the inner part of a segment manually or randomly, or with the highest Q(x,y) if it is available in advance. Set it as the current pixel;
  • 2) Process the current pixel(s), calculate their composite quality values P(x,y) and push the pixel(s) into an adjoin list;
  • 3) Select the pixel with highest priority in the adjoin list and remove it from the list. Among its four or eight adjacent pixels, take the ones that are not yet processed and with M(x,y) = 1 as the current pixels;
  • 4) Repeat steps 2 and 3 until all pixels in the segment are processed.

We summarize that, first, the above scanning strategy has already been widely used, but the composite quality map is novel; second, as the boundary pixels are processed last, error propagation from these pixels is avoided and the robustness is improved.
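The scanning strategy above can be sketched with a priority queue standing in for the adjoin list. This is our own minimal rendering (pixels are marked when they enter the list, which is one common way to realize steps 2-3), not the authors' code:

```python
import heapq
import numpy as np

def quality_guided_order(P, mask, seed):
    """Return segment pixels in processing order: highest composite
    quality P first, growing outward from the seed through 4-neighbors."""
    h, w = P.shape
    entered = np.zeros((h, w), dtype=bool)
    adjoin = [(-float(P[seed]), seed)]        # max-priority via negation
    entered[seed] = True
    order = []
    while adjoin:
        _, (i, j) = heapq.heappop(adjoin)
        order.append((i, j))                  # "process" the pixel here
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and mask[ni, nj] and not entered[ni, nj]:
                entered[ni, nj] = True
                heapq.heappush(adjoin, (-float(P[ni, nj]), (ni, nj)))
    return order
```

Because boundary pixels receive a low composite quality, they sink to the bottom of the queue and are popped last, exactly as intended.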

4. Boundary-aware demodulation methods

The masking function and boundary-aware scanning strategy provide new and important ingredients for boundary-aware demodulation. In this section, three demodulation methods, QFGRPT, WFR and SQT, are adapted to fringe segments with boundary-awareness for demonstration in Secs. 4.1 to 4.3, respectively. Boundary-aware phase unwrapping is also briefly discussed in Sec. 4.4.

4.1 The boundary-aware and frequency-guided regularized phase tracker with quadratic phase matching (BQFGRPT)

(a) Original method

QFGRPT is a demodulation method with high accuracy, robustness and speed [5]. It consists of two essential components: 1) a locally quadratic phase assumption and subsequent matching to improve the accuracy; 2) a frequency guided demodulation path to improve the robustness. It estimates the phase by minimizing an energy function as

U(x,y)=\sum_{(\varepsilon,\eta)\in N_{x,y}}\left(\{f_n(\varepsilon,\eta)-\cos[\varphi_e(x,y;\varepsilon,\eta)]\}^{2}+\delta[\varphi_0(\varepsilon,\eta)-\varphi_e(x,y;\varepsilon,\eta)]^{2}\,m(\varepsilon,\eta)\right),
where fn(x,y) is the normalized fringe pattern as in Eq. (5); φe(x,y;ε,η) is the locally quadratic phase at (ε,η) ∈ N_{x,y} based on the information at pixel (x,y). The quadratic phase is expressed as
\varphi_e(x,y;\varepsilon,\eta)=\varphi_0(x,y)+\omega_x(x,y)(\varepsilon-x)+\omega_y(x,y)(\eta-y)+c_{xx}(x,y)(\varepsilon-x)^{2}/2+c_{yy}(x,y)(\eta-y)^{2}/2+c_{xy}(x,y)(\varepsilon-x)(\eta-y),
where φ0(x,y) is the phase estimation for pixel (x,y); ωx(x,y) and ωy(x,y) are the local frequencies in the x and y directions, respectively; cxx(x,y), cyy(x,y) and cxy(x,y) are the second-order derivatives of the phase; m(ε,η) is defined to be one if a pixel has been demodulated and zero otherwise. The first term in Eq. (11) indicates the fidelity of the assumed φe(x,y;ε,η) and thus also the fidelity of the parameters being estimated, i.e., φ0(x,y), ωx(x,y), ωy(x,y), cxx(x,y), cyy(x,y) and cxy(x,y). The second term in Eq. (11) controls the smoothness between the phase being estimated and its already estimated neighbors’ phases. The second term is often called a regularization term, with δ as the regularizing parameter. The energy function is minimized pixel-by-pixel with respect to φ0(x,y), ωx(x,y), ωy(x,y), cxx(x,y), cyy(x,y) and cxy(x,y). The demodulation path is guided by the total local frequency defined as

\omega(x,y)=\sqrt{\omega_x^{2}(x,y)+\omega_y^{2}(x,y)}.

(b) Boundary adaptation

Block-based energy function minimization [5, 17–19] is commonly used in spatial domain demodulation. As a block-based method shown in Eq. (11), the performance of QFGRPT is affected by discontinuities. However, comparing Eq. (11) with Eq. (8), it is easy to recognize that

r(\varepsilon,\eta)=\{f_n(\varepsilon,\eta)-\cos[\varphi_e(x,y;\varepsilon,\eta)]\}^{2}+\delta[\varphi_0(\varepsilon,\eta)-\varphi_e(x,y;\varepsilon,\eta)]^{2}\,m(\varepsilon,\eta).
Thus it is straightforward to adapt the energy function to make it boundary-aware, i.e.,
U_B(x,y)=\sum_{(\varepsilon,\eta)\in N_{x,y}} M(\varepsilon,\eta)\left(\{S_n(\varepsilon,\eta)-\cos[\varphi_e(x,y;\varepsilon,\eta)]\}^{2}+\delta[\varphi_0(\varepsilon,\eta)-\varphi_e(x,y;\varepsilon,\eta)]^{2}\,m(\varepsilon,\eta)\right),
where M(x,y) is the same as in Eq. (7), indicating that only pixels belonging to the segment being processed contribute to the fidelity and smoothness. The composite quality map follows Eq. (10) where Q(x,y) = ω(x,y) with ω(x,y) as the total local frequency defined in Eq. (13).
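For concreteness, the boundary-aware cost U_B above can be evaluated for one candidate parameter set as follows. This is an illustrative sketch with our own names and block size; the actual method would call such an evaluation inside its per-pixel minimization loop:

```python
import numpy as np

def ub_energy(Sn, M, m, phi0_map, params, x, y, half=11, delta=0.1):
    """Boundary-aware QFGRPT cost at pixel (x, y).

    params = (phi0, wx, wy, cxx, cyy, cxy): local quadratic phase model.
    Sn: normalized padded segment, M: segment mask, m: 1 where already
    demodulated, phi0_map: phases estimated so far."""
    phi0, wx, wy, cxx, cyy, cxy = params
    E, N = np.meshgrid(np.arange(x - half, x + half + 1),
                       np.arange(y - half, y + half + 1), indexing="ij")
    dx, dy = E - x, N - y
    phie = (phi0 + wx * dx + wy * dy
            + 0.5 * cxx * dx ** 2 + 0.5 * cyy * dy ** 2 + cxy * dx * dy)
    sl = np.s_[x - half:x + half + 1, y - half:y + half + 1]
    fidelity = (Sn[sl] - np.cos(phie)) ** 2
    smoothness = delta * (phi0_map[sl] - phie) ** 2 * m[sl]
    return float(np.sum(M[sl] * (fidelity + smoothness)))
```

Note that the mask multiplies both terms, so out-of-segment pixels influence neither the fidelity nor the smoothness.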

(c) Results

QFGRPT and BQFGRPT are applied to the normalized fringe patterns in Fig. 5 with a 23 × 23 neighboring region and δ = 0.1. The results are shown in Fig. 6. Since QFGRPT requires the phase to be continuous, the method fails to converge at boundary pixels and propagates the errors to the rest of the image. As expected, BQFGRPT produces satisfactory results, as shown by the wrapped results in Figs. 6(b) and 6(e) and the unwrapped results in Figs. 6(c) and 6(f). Note that QFGRPT and BQFGRPT output unwrapped phase maps as their results, while we also show wrapped phase maps for better visualization. All the wrapped phases shown in this paper are in the range of [-π, π).

Fig. 6 Demodulation results of (a) Fig. 5(a) using QFGRPT; (b) Fig. 5(a) using BQFGRPT (wrapped); (c) Fig. 5(a) using BQFGRPT (unwrapped); (d) Fig. 5(b) using QFGRPT; (e) Fig. 5(b) using BQFGRPT (wrapped); (f) Fig. 5(b) using BQFGRPT (unwrapped).

4.2 The boundary-aware windowed Fourier ridges (BWFR)

(a) Original method

WFR is a windowed Fourier transform (WFT) based single fringe pattern demodulation method that has high robustness, accuracy and noise tolerance [1]. WFT is in the form of

S_f(u,v;\xi_x,\xi_y)=\iint f_a(x,y)\,g^{*}_{u,v;\xi_x,\xi_y}(x,y)\,\mathrm{d}x\,\mathrm{d}y,
and the inverse WFT (IWFT) as
f_a(x,y)=\frac{1}{4\pi^{2}}\iiiint S_f(u,v;\xi_x,\xi_y)\,g_{u,v;\xi_x,\xi_y}(x,y)\,\mathrm{d}\xi_x\,\mathrm{d}\xi_y\,\mathrm{d}u\,\mathrm{d}v,
where gu,v;ξx,ξy(x,y) is a kernel; * is the complex conjugate operator; the kernel gu,v;ξx,ξy(x,y) is defined as
g_{u,v;\xi_x,\xi_y}(x,y)=g(x-u,y-v)\exp(j\xi_x x+j\xi_y y),
where j=\sqrt{-1} and g(x-u,y-v) is the window function, which can be chosen as a normalized Gaussian function,
g(x,y)=\frac{1}{\sqrt{\pi\sigma_x\sigma_y}}\exp\left(-\frac{x^{2}}{2\sigma_x^{2}}-\frac{y^{2}}{2\sigma_y^{2}}\right).
As the WFT is localized in space by the window function g(x−u,y−v), it can estimate the frequency information for each pixel of a fringe pattern and process the fringe pattern by manipulating these frequencies. Different values of (ξx,ξy) in g_{u,v;ξx,ξy}(x,y) yield different Sf(u,v;ξx,ξy), among which the one that maximizes |Sf(u,v;ξx,ξy)|, i.e., at the ridge of |Sf(u,v;ξx,ξy)|, gives the local frequencies at (u,v) [1],
[\omega_{ax}(u,v),\omega_{ay}(u,v)]=\arg\max_{\xi_x,\xi_y}\left|S_f(u,v;\xi_x,\xi_y)\right|,
where ω_{ax}(u,v) and ω_{ay}(u,v) are the estimated local frequencies at (u,v) along the x and y directions, respectively. However, the true frequency [ω_x,ω_y] could be either [ω_{ax},ω_{ay}] or [−ω_{ax},−ω_{ay}], i.e., [ω_x,ω_y]=sign(u,v)[ω_{ax},ω_{ay}]. Because local frequencies of neighboring pixels should be similar or continuous, a sign determination formula is developed as
\mathrm{sign}(u,v)=\begin{cases}1, & [\omega_{ax}(u,v),\omega_{ay}(u,v)]\cdot[\omega_x(u_p,v_p),\omega_y(u_p,v_p)]\ge 0\\ -1, & \text{otherwise},\end{cases}
where (up,vp) is the coordinate of neighboring pixel that has already been demodulated. A frequency-guided scanning strategy is used [1]. After the local frequencies are estimated, the wrapped phase can be reconstructed as

\varphi_w(u,v)=\mathrm{angle}\{S_f[u,v;\omega_x(u,v),\omega_y(u,v)]\}+\omega_x(u,v)\,u+\omega_y(u,v)\,v.

(b) Boundary adaptation

WFR is a typical frequency-domain demodulation method. The processing of WFR can be separated into three parts: the estimation of [ω_{ax},ω_{ay}], the determination of sign(u,v), and the calculation of φw(u,v). Firstly, the estimation of [ω_{ax},ω_{ay}], as Eqs. (16) and (20) show, depends on neighboring information and is thus boundary-sensitive. One advantage of WFR is that its frequency estimation is not affected by zero padding [1]. Thus, after the discontinuous boundary is identified and the discontinuous fringe pattern is made into continuous fringe segments, WFR can be applied to the padded segment as

S_f^{B}(u,v;\xi_x,\xi_y)=\iint M(x,y)f_a(x,y)\,g^{*}_{u,v;\xi_x,\xi_y}(x,y)\,\mathrm{d}x\,\mathrm{d}y=\iint S_a(x,y)\,g^{*}_{u,v;\xi_x,\xi_y}(x,y)\,\mathrm{d}x\,\mathrm{d}y.
Secondly, for the sign determination, the frequency-guided strategy has been used, which can be made boundary-aware using the composite quality map in Eq. (10) with Q(x,y) = ω(x,y). Thirdly, for the calculation of φw(u,v), as it is pixel-wise and not affected by the boundary, no adaptation is required.
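A brute-force sketch of the masked ridge search above at a single pixel (discrete, unnormalized, with our own names and candidate set; a practical WFR implementation would sweep a frequency grid with FFTs [1]):

```python
import numpy as np

def bwfr_pixel(Sa, M, u, v, freqs, sigma=5.0, half=10):
    """Find the candidate frequency (xi_x, xi_y) maximizing |S_f^B(u, v)|.

    Sa: background-removed padded segment, M: segment mask,
    freqs: iterable of candidate (xi_x, xi_y) pairs.
    Returns (wx, wy, Sf) at the ridge."""
    e = np.arange(-half, half + 1)
    X, Y = np.meshgrid(e, e, indexing="ij")            # window-local coords
    g = np.exp(-(X ** 2 + Y ** 2) / (2 * sigma ** 2))  # Gaussian window
    sl = np.s_[u - half:u + half + 1, v - half:v + half + 1]
    patch = M[sl] * Sa[sl]                             # masked segment data
    best = max(
        ((np.sum(patch * g * np.exp(-1j * (xx * (X + u) + xy * (Y + v)))), xx, xy)
         for xx, xy in freqs),
        key=lambda t: abs(t[0]))                       # ridge of |S_f^B|
    Sf, wx, wy = best
    return wx, wy, Sf
```

The wrapped phase then follows from angle(Sf) plus ω_x u + ω_y v, as in the phase reconstruction formula above.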

(c) Results

WFR and BWFR are applied to the normalized fringe patterns in Fig. 5, with the demodulation results shown in Fig. 7. Both algorithms output wrapped phases. Although WFR is less influenced by the discontinuity than QFGRPT, the result in Fig. 7(c) is partially wrong. The discontinuity takes effect in two ways. First, the ridge phase near the boundary may be wrongly estimated from the phase in the other segment, reducing the demodulation accuracy. Second, the determination of the frequency sign can produce wrong results at discontinuous points that propagate to the pixels processed afterwards, reducing the demodulation robustness. BWFR adapts well to fringe segments and produces better results than WFR.

Fig. 7 Wrapped demodulation results of (a) Fig. 5(a) using WFR; (b) Fig. 5(a) using BWFR; (c) Fig. 5(b) using WFR; (d) Fig. 5(b) using BWFR.

4.3 The boundary-aware spiral phase quadrature transform (BSQT)

(a) Original method

SQT is a fast single fringe pattern demodulation method [7]. It can be regarded as a two-dimensional Hilbert transform for analytic signal estimation. For a fringe pattern fa(x,y), its SQT is [7]

V(f_a)=-j\exp[-j\vartheta(x,y)]\,F^{-1}\{\exp[j\phi(u,v)]\,F[f_a(x,y)]\}=b(x,y)\sin[\varphi(x,y)],
where \exp[j\phi(u,v)]=(u+jv)/\sqrt{u^{2}+v^{2}} is the spiral phase function; F(\cdot) and F^{-1}(\cdot) are the Fourier transform and inverse Fourier transform, respectively; \vartheta(x,y)\in(-\pi,\pi] is the fringe direction. Thereafter the exponential phase field can be obtained and subsequently the phase is computed as

\varphi_w(x,y)=\arctan2[V(f_a),f_a(x,y)].

A pre-determination of ϑ is critical to the SQT method. Current methods usually only estimate the fringe orientation θ(x,y) ∈ (−π/2, π/2] [20]. Among various orientation estimation methods, the gradient-based orientation estimation method [21] is used as an example in this paper:

\theta(x,y)=\frac{1}{2}\arctan2\left\{\sum_{(\varepsilon,\eta)\in N_{x,y}}2f_x^{\sigma}(\varepsilon,\eta)f_y^{\sigma}(\varepsilon,\eta),\ \sum_{(\varepsilon,\eta)\in N_{x,y}}\left\{[f_y^{\sigma}(\varepsilon,\eta)]^{2}-[f_x^{\sigma}(\varepsilon,\eta)]^{2}\right\}\right\}.
To find the modulo 2π fringe direction ϑ(x,y) from the modulo π fringe orientation θ(x,y), a density-guided orientation unwrapping method has been developed [20] as follows. First, θ(x,y) is unwrapped by comparing with one of its neighbors,
\vartheta(x,y)=\begin{cases}\theta(x,y), & \{\cos[\theta(x,y)],\sin[\theta(x,y)]\}\cdot\{\cos[\vartheta(x_p,y_p)],\sin[\vartheta(x_p,y_p)]\}\ge 0\\ M_{2\pi}[\theta(x,y)+\pi], & \text{otherwise},\end{cases}
where M2π() is the modulo 2π operation that adds or subtracts 2π so that the output is within the range of (−π, π]; (xp,yp) is the coordinate of a neighboring pixel that has already been determined. Second, it is noticed that for a pixel with the phase derivatives or local frequencies close to zero, its direction is actually undetermined. Errors then occur if the orientation unwrapping passes through these points and propagate to all the pixels unwrapped afterwards. A density-guided scanning strategy is thus involved in the orientation unwrapping with density represented by total local frequency ω(x,y) in Eq. (13) or calculated from the structure tensor [20].
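One step of the direction-unwrapping rule above can be sketched as follows (names are ours; the agreement test is the dot product of the two unit direction vectors):

```python
import numpy as np

def unwrap_direction(theta_xy, dir_ref):
    """Lift orientation theta (modulo pi) to direction (modulo 2*pi):
    keep theta if its unit vector agrees with the already-determined
    neighbor direction dir_ref, otherwise flip by pi and wrap back
    into (-pi, pi]."""
    agree = np.cos(theta_xy) * np.cos(dir_ref) + np.sin(theta_xy) * np.sin(dir_ref)
    if agree >= 0:
        return theta_xy
    d = theta_xy + np.pi
    return d - 2 * np.pi if d > np.pi else d   # the modulo-2*pi wrap M_2pi
```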

(b) Boundary adaptation

We perform SQT segment by segment with zero padding, similarly to BWFR, simply by replacing fa in Eqs. (24) and (25) with Sa. SQT heavily relies on the quality of the fringe direction. Since the direction estimation at a pixel depends on its neighbors, both the orientation estimation and the orientation unwrapping need to be boundary-aware. The orientation estimation in Eq. (26) is modified by incorporating the masking function to introduce boundary-awareness as

\theta_B(x,y)=\frac{1}{2}\arctan2\left\{\sum_{(\varepsilon,\eta)\in N_{x,y}}2M(\varepsilon,\eta)S_x^{\sigma}(\varepsilon,\eta)S_y^{\sigma}(\varepsilon,\eta),\ \sum_{(\varepsilon,\eta)\in N_{x,y}}M(\varepsilon,\eta)\left\{[S_y^{\sigma}(\varepsilon,\eta)]^{2}-[S_x^{\sigma}(\varepsilon,\eta)]^{2}\right\}\right\},
where S_x^σ and S_y^σ are the derivatives of the padded segment S after Gaussian filtering with kernel size σ. Other orientation estimation methods can also be made boundary-aware accordingly. Finally, the quality map for orientation unwrapping is adapted from the original density map to the composite quality map with the boundary information.
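A sketch of the boundary-aware orientation θ_B above, assuming the smoothed derivatives have already been computed (names and the neighborhood size are ours):

```python
import numpy as np

def orientation_ba(Sx, Sy, M, i, j, half=5):
    """Boundary-aware gradient-based fringe orientation at pixel (i, j).

    Sx, Sy: derivatives of the (Gaussian-smoothed) padded segment;
    M: segment mask. Returns theta in (-pi/2, pi/2]."""
    sl = np.s_[i - half:i + half + 1, j - half:j + half + 1]
    num = np.sum(M[sl] * 2.0 * Sx[sl] * Sy[sl])
    den = np.sum(M[sl] * (Sy[sl] ** 2 - Sx[sl] ** 2))
    return 0.5 * np.arctan2(num, den)
```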

(c) Results

SQT and BSQT are applied to the normalized fringe patterns in Fig. 5, with the demodulation results shown in Fig. 8. Both algorithms output wrapped phases. Similar to the WFR method, although SQT is less influenced by the discontinuity than QFGRPT, a partially wrong result is found in Fig. 8(c). BSQT adapts well to fringe segments and produces better results than SQT. In SQT, the discontinuity affects both the orientation estimation and the orientation unwrapping. The former reduces the demodulation accuracy while the latter affects the demodulation robustness. These problems are avoided in BSQT.

Fig. 8 Wrapped demodulation results of (a) Fig. 5(a) using SQT; (b) Fig. 5(a) using BSQT; (c) Fig. 5(b) using SQT; (d) Fig. 5(b) using BSQT.

4.4 The boundary-aware phase unwrapping

Among the three demodulation methods discussed in Sec. 4.1-4.3, only the first one directly produces an unwrapped phase map. Phase unwrapping is needed by the other two methods and thus briefly discussed. Phase unwrapping has been investigated for more than three decades. Various methods have been studied and compared [8–10]. Among them, quality-guided phase unwrapping is very effective [22]. The typical unwrapping formula is as follows

\varphi(x,y)=\varphi_w(x,y)+\mathrm{round}\{[\varphi(x_p,y_p)-\varphi_w(x,y)]/2\pi\}\cdot 2\pi.

To avoid errors caused by low-quality pixels, the unwrapping path is critical to the unwrapping success. As shown in Figs. 7 and 8, for sufficiently denoised phase maps, the phase quality is high. Thus, the noise level is not a big concern in quality selection. However, it is well known that both WFR and SQT have lower performance in low-density regions. Thus, in this paper, we use the total local frequency ω(x,y) as the quality map, i.e., Q(x,y) = ω(x,y). To make the unwrapping work for discontinuous fringe patterns, the composite quality map with boundary information is constructed to guide phase unwrapping. The phase results of BWFR and BSQT are unwrapped for demonstration, as shown in Fig. 9.
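One quality-guided unwrapping step of the formula above is simply (a sketch with our own names):

```python
import numpy as np

def unwrap_pixel(phi_w_xy, phi_ref):
    """Shift the wrapped phase at (x, y) by the 2*pi multiple that brings
    it closest to the already-unwrapped neighbor value phi_ref."""
    return phi_w_xy + np.round((phi_ref - phi_w_xy) / (2.0 * np.pi)) * 2.0 * np.pi
```

Applied along the boundary-aware scanning path of Sec. 3.2, this yields the piece-wise unwrapping of each segment.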

Fig. 9 Continuous demodulation results after boundary-aware unwrapping of (a) Fig. 7(b); (b) Fig. 7(d); (c) Fig. 8(b); (d) Fig. 8(d).

5. Results and discussions

5.1 Quantitative evaluation

In this section, we quantitatively evaluate the performance of BQFGRPT, BWFR and BSQT. All experiments are carried out using MATLAB on a computer equipped with an Intel® Xeon® CPU E5-2630 v3@2.4GHz. In addition to the demodulation results of Figs. 5(a) and 5(b) shown in Figs. 6-9, BQFGRPT, BWFR and BSQT are also applied to Figs. 2(a) and 2(b) with ideal segmentation for evaluation and comparison.

To reveal the performance differences between boundary pixels and non-boundary pixels, pixels away from the ideal boundary by d pixels are collected. The root mean square error (RMSE) of these pixels is calculated between the demodulated phase and the true phase, and denoted as RMSE(d). We have computed RMSE(d) for 1 ≤ d ≤ 15 and show them in Fig. 10. We can observe that: (1) the non-boundary pixels are accurately demodulated in all these cases, showing the effectiveness of these demodulation methods; (2) comparing the non-boundary pixels between the first row and the second row of Fig. 10 shows the effectiveness of the denoising and normalization methods used in this paper; (3) the boundary pixels in the first row show the effectiveness of the boundary-aware strategy, as the RMSEs are low; (4) comparing the boundary pixels between the first row and the second row shows the effectiveness of the LOCS segmentation method, as the large errors occur only for small d; (5) the boundary pixels have larger demodulation errors in all these cases, especially in the second row, showing that the demodulation results of the boundary pixels are not reliable and should be used with caution; (6) as a consequence, we can define the boundary pixels as those with 1 ≤ d ≤ 4. As for the computation speed, for a fringe pattern of size 256 × 256, the time costs are about 2 minutes, 38 seconds and 21 seconds for BQFGRPT, BWFR and BSQT, respectively.
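The RMSE(d) evaluation can be sketched as follows (a brute-force distance computation with our own names; it assumes every distance bin from 1 to dmax is non-empty):

```python
import numpy as np

def rmse_by_distance(phi, phi_true, boundary_pts, dmax=15):
    """RMSE of the phase error, binned by the rounded Euclidean distance
    of each pixel to the nearest ideal-boundary point."""
    ii, jj = np.indices(phi.shape)
    bi, bj = np.asarray(boundary_pts, dtype=float).T
    d2 = (ii[..., None] - bi) ** 2 + (jj[..., None] - bj) ** 2
    d = np.round(np.sqrt(d2.min(axis=-1))).astype(int)
    err2 = (phi - phi_true) ** 2
    return [float(np.sqrt(err2[d == k].mean())) for k in range(1, dmax + 1)]
```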


Fig. 10 RMSEs of demodulation results of (a) Fig. 2(a); (b) Fig. 2(b); (c) Fig. 5(a); (d) Fig. 5(b).


5.2 Experimental results

An experimental example from fringe projection profilometry, shown in Fig. 11(a), is used to demonstrate the performance of the BQFGRPT, BWFR, and BSQT methods. Its segmentation and normalization results are shown in Figs. 11(b) and 11(c), respectively. Figure 12 shows both the wrapped and unwrapped phases demodulated by the BQFGRPT, BWFR, and BSQT methods. All three methods adapt well to the discontinuity and produce satisfactory results.


Fig. 11 Experimental fringe pattern. (a) An experimental fringe pattern from fringe projection profilometry; (b) the segmentation result; (c) the normalization result.



Fig. 12 Demodulation results of Fig. 11(c). Wrapped phase of (a) BQFGRPT; (b) BWFR; (c) BSQT; unwrapped phase of (d) BQFGRPT; (e) BWFR; (f) BSQT.


The BQFGRPT, BWFR, and BSQT methods are applied to another experimental example from electronic speckle pattern shearing interferometry as shown in Fig. 13(a). Its segmentation result and normalization result are shown in Figs. 13(b) and 13(c), respectively. Figure 14 shows both the wrapped and unwrapped phases demodulated by the BQFGRPT, BWFR, and BSQT methods. Again, the results are satisfactory.


Fig. 13 Experimental fringe pattern. (a) An experimental fringe pattern from electronic speckle pattern shearing interferometry; (b) the segmentation result; (c) the normalization result.



Fig. 14 Demodulation results of Fig. 13(c). Wrapped phase of (a) BQFGRPT; (b) BWFR; (c) BSQT; unwrapped phase of (d) BQFGRPT; (e) BWFR; (f) BSQT.


6. Conclusion

With the increasing complexity of manufactured pieces, discontinuities commonly appear in fringe patterns, and techniques to demodulate a single fringe pattern with discontinuities are in urgent demand. We propose a complete flowchart for discontinuous single fringe pattern processing, where the local orientation coherence based fringe segmentation (LOCS) method is used as a universal pre-processing step for all discontinuous fringe pattern problems. Furthermore, we propose a systematic way to introduce boundary-awareness into demodulation methods: a masking function improves demodulation accuracy, and a quality-guided scanning strategy with a novel composite quality map improves demodulation robustness. Three typical demodulation methods, the frequency-guided regularized phase tracker with quadratic phase matching, the windowed Fourier ridges, and the spiral phase quadrature transform, are used to demonstrate the effectiveness of the proposed approach. Note that within this flowchart, the LOCS method can be replaced by other effective segmentation methods, and boundary-awareness can similarly be introduced into other demodulation methods.
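As a minimal illustration of the masking idea summarized above — excluding pixels outside the segmented region from any windowed computation, as in the masked sum h_B — the following sketch assumes a square neighborhood and illustrative numpy arrays r (a per-pixel quantity) and M (the binary mask). It is not the paper's implementation.

```python
import numpy as np

def windowed_sum_boundary_aware(r, M, half=2):
    """Boundary-aware windowed sum: h_B(x, y) is the sum of M*r over the
    (2*half+1) x (2*half+1) neighborhood of (x, y), so pixels with
    M = 0 (outside the segmented region) contribute nothing."""
    H, W = r.shape
    masked = r * M                       # zero out pixels outside the region
    h_B = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            # Clip the window at the image border.
            y0, y1 = max(0, y - half), min(H, y + half + 1)
            x0, x1 = max(0, x - half), min(W, x + half + 1)
            h_B[y, x] = masked[y0:y1, x0:x1].sum()
    return h_B
```

The same pattern — multiply by M inside the window, and renormalize by the number of valid pixels where appropriate — is how any of the windowed operations in BQFGRPT, BWFR and BSQT can be made boundary-aware.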

Funding

National Natural Science Foundation of China (NSFC) (61602414, 61527808); Singapore Academic Research Fund Tier 1 (RG28/15).

References and links

1. Q. Kemao, Windowed Fringe Pattern Analysis (SPIE, 2013).

2. M. Servin, J. L. Marroquin, and F. J. Cuevas, “Fringe-follower regularized phase tracker for demodulation of closed-fringe interferograms,” J. Opt. Soc. Am. A 18(3), 689–695 (2001).

3. J. A. Quiroga, J. A. G. Pedrero, and A. G. Botella, “Algorithm for fringe pattern normalization,” Opt. Commun. 197(1), 43–51 (2001).

4. Q. Yu, K. Andresen, W. Osten, and W. Jueptner, “Noise-free normalized fringe patterns and local pixel transforms for strain extraction,” Appl. Opt. 35(20), 3783–3790 (1996).

5. H. Wang, K. Li, and Q. Kemao, “Frequency guided methods for demodulation of a single fringe pattern with quadratic phase matching,” Opt. Lasers Eng. 49(4), 564–569 (2011).

6. Q. Kemao, H. Wang, and W. Gao, “Windowed Fourier transform for fringe pattern analysis: theoretical analyses,” Appl. Opt. 47(29), 5408–5419 (2008).

7. K. G. Larkin, D. J. Bone, and M. A. Oldfield, “Natural demodulation of two-dimensional fringe patterns. I. General background of the spiral phase quadrature transform,” J. Opt. Soc. Am. A 18(8), 1862–1870 (2001).

8. E. Zappa and G. Busca, “Comparison of eight unwrapping algorithms applied to Fourier-transform profilometry,” Opt. Lasers Eng. 46(2), 106–116 (2008).

9. J. Parkhurst, G. Price, P. Sharrock, and C. Moore, “Phase unwrapping algorithms for use in a true real-time optical body sensor system for use during radiotherapy,” Appl. Opt. 50(35), 6430–6439 (2011).

10. J. C. Estrada, M. Servin, and J. Vargas, “2D simultaneous phase unwrapping and filtering: a review and comparison,” Opt. Lasers Eng. 50(8), 1026–1029 (2012).

11. H. Wang, Q. Kemao, W. Gao, F. Lin, and H. S. Seah, “Fringe pattern denoising using coherence-enhancing diffusion,” Opt. Lett. 34(8), 1141–1143 (2009).

12. M. Zhao, H. Wang, and Q. Kemao, “Snake-assisted quality-guided phase unwrapping for discontinuous phase fields,” Appl. Opt. 54(24), 7462–7470 (2015).

13. H. Wang and Q. Kemao, “Local orientation coherence based segmentation and boundary-aware diffusion for discontinuous fringe patterns,” Opt. Express 24(14), 15609–15619 (2016).

14. C. Galvan and M. Rivera, “Second-order robust regularization cost function for detecting and reconstructing phase discontinuities,” Appl. Opt. 45(2), 353–359 (2006).

15. S. Li, X. Su, and W. Chen, “Hilbert assisted wavelet transform method of optical fringe pattern phase reconstruction for optical profilometry and interferometry,” Optik (Stuttg.) 123(1), 6–10 (2012).

16. B. Li, C. Tang, X. Zhu, Y. Su, and W. Xu, “Shearlet transform for phase extraction in fringe projection profilometry with edges discontinuity,” Opt. Lasers Eng. 78, 91–98 (2016).

17. J. C. Estrada, M. Servín, J. A. Quiroga, and J. L. Marroquín, “Path independent demodulation method for single image interferograms with closed fringes within the function space C(2),” Opt. Express 14(21), 9687–9698 (2006).

18. Q. Kemao and S. Hock Soon, “Sequential demodulation of a single fringe pattern guided by local frequencies,” Opt. Lett. 32(2), 127–129 (2007).

19. L. Kai and Q. Kemao, “Fast frequency-guided sequential demodulation of a single fringe pattern,” Opt. Lett. 35(22), 3718–3720 (2010).

20. H. Wang and Q. Kemao, “Quality-guided orientation unwrapping for fringe direction estimation,” Appl. Opt. 51(4), 413–421 (2012).

21. X. Zhou, J. P. Baird, and J. F. Arnold, “Fringe-orientation estimation by use of a Gaussian gradient filter and neighboring-direction averaging,” Appl. Opt. 38(5), 795–804 (1999).

22. Q. Kemao, W. Gao, and H. Wang, “Windowed Fourier-filtered and quality-guided phase-unwrapping algorithm,” Appl. Opt. 47(29), 5420–5428 (2008).



Figures (14)

Fig. 1 Flowchart of boundary-aware single fringe pattern processing (dotted squares indicate optional processing steps).
Fig. 2 Simulated fringe patterns. (a) A simulated fringe pattern with linear boundary; (b) a simulated fringe pattern with circular boundary; (c) Fig. (a) with additive noise; (d) Fig. (b) with speckle noise.
Fig. 3 Segmentation results of (a) Fig. 2(a); (b) Fig. 2(b); (c) Fig. 2(c); (d) Fig. 2(d).
Fig. 4 Denoising results of (a) Fig. 2(c); (b) Fig. 2(d).
Fig. 5 Background removal and amplitude normalization results of (a) Fig. 4(a); (b) Fig. 4(b).
Fig. 6 Demodulation results of (a) Fig. 5(a) using QFGRPT; (b) Fig. 5(a) using BQFGRPT (wrapped); (c) Fig. 5(a) using BQFGRPT (unwrapped); (d) Fig. 5(b) using QFGRPT; (e) Fig. 5(b) using BQFGRPT (wrapped); (f) Fig. 5(b) using BQFGRPT (unwrapped).
Fig. 7 Wrapped demodulation results of (a) Fig. 5(a) using WFR; (b) Fig. 5(a) using BWFR; (c) Fig. 5(b) using WFR; (d) Fig. 5(b) using BWFR.
Fig. 8 Wrapped demodulation results of (a) Fig. 5(a) using SQT; (b) Fig. 5(a) using BSQT; (c) Fig. 5(b) using SQT; (d) Fig. 5(b) using BSQT.
Fig. 9 Continuous demodulation results after boundary-aware unwrapping of (a) Fig. 7(b); (b) Fig. 7(d); (c) Fig. 8(b); (d) Fig. 8(d).
Fig. 10 RMSEs of demodulation results of (a) Fig. 2(a); (b) Fig. 2(b); (c) Fig. 5(a); (d) Fig. 5(b).
Fig. 11 Experimental fringe pattern. (a) An experimental fringe pattern from fringe projection profilometry; (b) the segmentation result; (c) the normalization result.
Fig. 12 Demodulation results of Fig. 11(c). Wrapped phase of (a) BQFGRPT; (b) BWFR; (c) BSQT; unwrapped phase of (d) BQFGRPT; (e) BWFR; (f) BSQT.
Fig. 13 Experimental fringe pattern. (a) An experimental fringe pattern from electronic speckle pattern shearing interferometry; (b) the segmentation result; (c) the normalization result.
Fig. 14 Demodulation results of Fig. 13(c). Wrapped phase of (a) BQFGRPT; (b) BWFR; (c) BSQT; unwrapped phase of (d) BQFGRPT; (e) BWFR; (f) BSQT.

Equations (29)


f(x,y) = a(x,y) + b(x,y)\cos[\varphi(x,y)] + n(x,y),
T(x,y) = \begin{bmatrix} \sum_{(\varepsilon,\eta)\in N_{x,y}} \rho(\varepsilon,\eta) f_{x\sigma}^2(\varepsilon,\eta) & \sum_{(\varepsilon,\eta)\in N_{x,y}} \rho(\varepsilon,\eta) f_{x\sigma}(\varepsilon,\eta) f_{y\sigma}(\varepsilon,\eta) \\ \sum_{(\varepsilon,\eta)\in N_{x,y}} \rho(\varepsilon,\eta) f_{x\sigma}(\varepsilon,\eta) f_{y\sigma}(\varepsilon,\eta) & \sum_{(\varepsilon,\eta)\in N_{x,y}} \rho(\varepsilon,\eta) f_{y\sigma}^2(\varepsilon,\eta) \end{bmatrix},
D(x,y) = [\lambda_2(x,y) > thr],
f_a(x,y) = b(x,y)\cos[\varphi(x,y)];
f_n(x,y) = \cos[\varphi(x,y)].
S(x,y) = f(x,y) M(x,y),
M(x,y) = \begin{cases} 1, & (x,y) \in s \\ 0, & (x,y) \notin s \end{cases}.
h(x,y) = \sum_{(\varepsilon,\eta)\in N_{x,y}} r(\varepsilon,\eta),
h_B(x,y) = \sum_{(\varepsilon,\eta)\in N_{x,y}} M(\varepsilon,\eta) r(\varepsilon,\eta),
P(x,y) = Q(x,y) + Q_{\max} \sum_{(\varepsilon,\eta)\in N_{x,y}} M(\varepsilon,\eta),
U(x,y) = \sum_{(\varepsilon,\eta)\in N_{x,y}} \left( \{ f_n(\varepsilon,\eta) - \cos[\varphi_e(x,y;\varepsilon,\eta)] \}^2 + \delta [\varphi_0(\varepsilon,\eta) - \varphi_e(x,y;\varepsilon,\eta)]^2 \right) m(\varepsilon,\eta),
\varphi_e(x,y;\varepsilon,\eta) = \varphi_0(x,y) + \omega_x(x,y)(\varepsilon - x) + \omega_y(x,y)(\eta - y) + c_{xx}(x,y)(\varepsilon - x)^2/2 + c_{yy}(x,y)(\eta - y)^2/2 + c_{xy}(x,y)(\varepsilon - x)(\eta - y),
\omega(x,y) = \sqrt{\omega_x^2(x,y) + \omega_y^2(x,y)}.
r(\varepsilon,\eta) = \left( \{ f_n(\varepsilon,\eta) - \cos[\varphi_e(x,y;\varepsilon,\eta)] \}^2 + \delta [\varphi_0(\varepsilon,\eta) - \varphi_e(x,y;\varepsilon,\eta)]^2 \right) m(\varepsilon,\eta).
U_B(x,y) = \sum_{(\varepsilon,\eta)\in N_{x,y}} M(\varepsilon,\eta) \left( \{ S_n(\varepsilon,\eta) - \cos[\varphi_e(x,y;\varepsilon,\eta)] \}^2 + \delta [\varphi_0(\varepsilon,\eta) - \varphi_e(x,y;\varepsilon,\eta)]^2 \right) m(\varepsilon,\eta),
Sf(u,v;\xi_x,\xi_y) = \iint f_a(x,y)\, g_{u,v;\xi_x,\xi_y}^{*}(x,y)\, \mathrm{d}x\, \mathrm{d}y,
f_a(x,y) = \frac{1}{4\pi^2} \iiiint Sf(u,v;\xi_x,\xi_y)\, g_{u,v;\xi_x,\xi_y}(x,y)\, \mathrm{d}\xi_x\, \mathrm{d}\xi_y\, \mathrm{d}u\, \mathrm{d}v,
g_{u,v;\xi_x,\xi_y}(x,y) = g(x-u, y-v) \exp(\mathrm{j}\xi_x x + \mathrm{j}\xi_y y),
g(x,y) = \frac{1}{\sqrt{\pi \sigma_x \sigma_y}} \exp\left( -\frac{x^2}{2\sigma_x^2} - \frac{y^2}{2\sigma_y^2} \right).
[\omega_{ax}(u,v), \omega_{ay}(u,v)] = \arg\max_{\xi_x,\xi_y} | Sf(u,v;\xi_x,\xi_y) |,
\mathrm{sign}(u,v) = \begin{cases} 1, & [\omega_{ax}(u,v), \omega_{ay}(u,v)] \cdot [\omega_x(u_p,v_p), \omega_y(u_p,v_p)] \ge 0 \\ -1, & \text{otherwise} \end{cases},
\varphi_w(u,v) = \mathrm{angle}\{ Sf[u,v; \omega_x(u,v), \omega_y(u,v)] \} + \omega_x(u,v) u + \omega_y(u,v) v.
Sf_B(u,v;\xi_x,\xi_y) = \iint M(x,y) f_a(x,y)\, g_{u,v;\xi_x,\xi_y}^{*}(x,y)\, \mathrm{d}x\, \mathrm{d}y = \iint S_a(x,y)\, g_{u,v;\xi_x,\xi_y}^{*}(x,y)\, \mathrm{d}x\, \mathrm{d}y,
V(f_a) = -\mathrm{j} \exp[-\mathrm{j}\vartheta(x,y)]\, \mathcal{F}^{-1}\{ \exp[\mathrm{j}\phi(u,v)]\, \mathcal{F}[f_a(x,y)] \} = b(x,y)\sin[\varphi(x,y)],
\varphi_w(x,y) = \arctan2[ V(f_a), f_a(x,y) ],
\theta(x,y) = \frac{1}{2} \arctan2\left\{ \sum_{(\varepsilon,\eta)\in N_{x,y}} 2 f_{x\sigma}(\varepsilon,\eta) f_{y\sigma}(\varepsilon,\eta),\ \sum_{(\varepsilon,\eta)\in N_{x,y}} [ f_{y\sigma}^2(\varepsilon,\eta) - f_{x\sigma}^2(\varepsilon,\eta) ] \right\}.
\vartheta(x,y) = \begin{cases} \theta(x,y), & \{ \cos[\theta(x,y)], \sin[\theta(x,y)] \} \cdot \{ \cos[\theta(x_p,y_p)], \sin[\theta(x_p,y_p)] \} \ge 0 \\ M_{2\pi}[\theta(x,y) + \pi], & \text{otherwise} \end{cases},
\theta_B(x,y) = \frac{1}{2} \arctan2\left\{ \sum_{(\varepsilon,\eta)\in N_{x,y}} 2 M(\varepsilon,\eta) S_{x\sigma}(\varepsilon,\eta) S_{y\sigma}(\varepsilon,\eta),\ \sum_{(\varepsilon,\eta)\in N_{x,y}} M(\varepsilon,\eta) [ S_{y\sigma}^2(\varepsilon,\eta) - S_{x\sigma}^2(\varepsilon,\eta) ] \right\},
\varphi(x,y) = \varphi_w(x,y) + \mathrm{round}\{ [ \varphi(x_p,y_p) - \varphi_w(x,y) ] / 2\pi \} \cdot 2\pi.
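A minimal sketch of quality-guided scanning combined with the unwrapping update above: each pixel is unwrapped against an already-processed neighbor (x_p, y_p), pixels are visited in order of decreasing quality, and the scan never leaves the segmented region M. The names phi_w, Q, M and seed are illustrative, and a generic quality array Q stands in for the paper's composite quality map; this is not the authors' implementation.

```python
import heapq
import numpy as np

def quality_guided_unwrap(phi_w, Q, M, seed):
    """Unwrap the wrapped phase phi_w by a quality-guided flood fill.

    Q    : per-pixel quality (higher is better), guides the visiting order.
    M    : bool mask of the segmented region; pixels with M = False
           are never visited, so the scan cannot cross a discontinuity.
    seed : (row, col) starting pixel inside the region.
    """
    H, W = phi_w.shape
    phi = phi_w.copy()
    done = np.zeros((H, W), bool)
    done[seed] = True
    heap = [(-Q[seed], seed)]            # max-quality first
    while heap:
        _, (y, x) = heapq.heappop(heap)
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < H and 0 <= nx < W and M[ny, nx] and not done[ny, nx]:
                # phi = phi_w + round((phi_p - phi_w) / 2pi) * 2pi
                k = np.round((phi[y, x] - phi_w[ny, nx]) / (2 * np.pi))
                phi[ny, nx] = phi_w[ny, nx] + 2 * np.pi * k
                done[ny, nx] = True
                heapq.heappush(heap, (-Q[ny, nx], (ny, nx)))
    return phi
```

Because the heap always pops the highest-quality unprocessed pixel, low-quality (e.g. near-boundary) pixels are deferred until last, which is the robustness mechanism the quality-guided strategy relies on.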