Interval type-II fuzzy anisotropic diffusion algorithm for speckle noise reduction in optical coherence tomography images

Open Access

Abstract

A novel speckle noise reduction algorithm based on the combination of Anisotropic Diffusion (AD) filtering and Interval Type-II fuzzy sets was developed for reducing speckle noise in Optical Coherence Tomography (OCT) images. Unlike regular AD, the new Type-II fuzzy AD algorithm considers the uncertainty in the calculated diffusion coefficient and makes appropriate adjustments to the coefficient. The new algorithm offers flexibility in optimizing the trade-off between two image metrics, signal-to-noise ratio (SNR) and edginess, which are directly related to the structure of the imaged object. Application of the Type-II fuzzy AD algorithm to OCT tomograms acquired in-vivo from a human finger tip and human retina shows reduction of the speckle noise with very little edge blurring and about 13 dB and 7 dB image SNR improvement, respectively. Comparison with Wiener, adaptive Lee, and regular AD filters applied to the same images demonstrates the superior performance of the Type-II fuzzy AD algorithm in terms of image SNR and edge preservation improvement.

©2009 Optical Society of America

1. Introduction

Optical Coherence Tomography (OCT) is a modern biomedical imaging technology that allows for in vivo, non-invasive, high-resolution imaging of biological tissues [1]. As with any imaging technique based on the detection of coherent waves, for example synthetic aperture radar (SAR) or ultrasound, OCT images are subject to speckle. Speckle in OCT tomograms depends both on the wavelength of the imaging beam and on the structural details of the imaged object [2]. Therefore, speckle carries information about the morphology of the imaged object as well as a noise component, and the latter is responsible for the grainy appearance of OCT images. Development of fast and effective speckle noise reduction algorithms is of great importance, not only for improvement of the overall image quality of OCT tomograms, but also for improving the ease and performance of subsequently applied image segmentation and pattern recognition algorithms.

Development of successful speckle noise reduction algorithms for OCT is particularly challenging because of the difficulty in separating the noise and the information components of a speckle pattern. Over the past 10 years a number of techniques for speckle noise reduction have been proposed, based either on numerical image processing algorithms or on alternative detection schemes in the OCT system design. For example, a number of speckle reduction techniques such as adaptive filters [2] and the Rotating Kernel Transformation [3] have been developed or adapted specifically for improving the image quality of OCT tomograms. Filtering techniques based on the rotating kernel transform can produce good contrast enhancement of image features; however, they result in significant edge blurring when strong noise reduction is required [4]. Most of these algorithms are limited in the amount of speckle that can be reduced because of their complexity. In addition to developing image processing algorithms, modifications to the OCT system design have been suggested to reduce speckle noise. One of the techniques uses a multimode fiber and a partially spatially coherent broadband light source to reduce speckle noise [5]. Spatial and frequency compounding are alternative techniques for reducing speckle noise in OCT images; however, these techniques require significant modifications to the design of the OCT system. A frequency compounding technique, based on summing two incoherent, independent interferometric signals, was investigated in reference [6] and the results show increased image contrast without loss of resolution. Recently, a spatial diversity technique based on shifting the focal plane of the probe beam was introduced to suppress speckle noise [7]. In a different study, an angular compounding technique based on path length encoding was presented to reduce speckle in OCT images [8]. Speckle noise reduction in OCT images was also recently achieved by use of angular compounding at multiple backscattering angles [9].

Anisotropic Diffusion (AD) has been applied previously for speckle noise reduction in OCT images. For example, in references [10–12] the image gradient is used for calculation of the diffusion coefficient with no consideration of the actual noise present. In reference [13] the performance of two algorithms based on regular AD was compared when applied to speckled OCT images. The main problem with any image processing algorithm based on regular AD is the large number of iterations necessary to reach a steady-state solution [14]. As a result, longer computational time is required, along with more blurring, resulting in loss of sharpness at edges that correspond to real image features. Also, the effectiveness of regular AD algorithms is limited for images with a large noise component [14]. The wavelet-based algorithm described in reference [4] provides ~7 dB improvement in the image signal-to-noise ratio (SNR) and does not reduce the image sharpness significantly, but the execution time of the algorithm on one 2D tomogram is about 7 min for a Matlab implementation. Recently, a novel speckle reduction algorithm based on interval Type-II fuzzy sets was developed [15], which results in better image metric (SNR, CNR, edginess, etc.) improvement compared to other wavelet-based algorithms and requires significantly less processing time for the same image size (40 s).

In this paper, we present a novel, robust speckle noise reduction algorithm based on nonlinear Anisotropic Diffusion. The novelty of the algorithm is in the use of Interval Type-II fuzzy sets to determine the uncertainty in the diffusion process and to adjust the diffusion coefficient appropriately. This results in significant speckle noise reduction with minimal blurring of image feature edges and much shorter image processing time.

2. Theory

2.1 Isotropic diffusion

The mathematical model of isotropic diffusion can be explained best through an example. Consider the diffusion of heat throughout a material. Heat diffuses over time and its time dynamics can be modelled by the partial differential “heat equation”, Eq. (1).

$$\frac{\partial u(x,y,t)}{\partial t} = c(t)\,\Delta u(x,y,t) \tag{1}$$

In Eq. (1), Δ represents the Laplace operator and c(t) is the “diffusion rate constant” or the “diffusion coefficient”, which is spatially invariant. When the origin of heat is a single spike in time and space, the solution to Eq. (1) is a time progression of Gaussian functions, with progressively larger σ as time goes on. Furthermore, any heat distribution can be represented as a combination of heat spikes. Since the differential operators are linear, the evolution of the heat distribution with time can be represented as a superposition of a group of spreading Gaussian functions.

The spreading of a Gaussian function is equivalent to convolving it with a Gaussian kernel, which in image processing can describe a blurring or smoothing process. Now, consider an image as a function indicating the temperature distribution in two-dimensional space. One way to blur the image is to let it evolve using the heat equation. Koenderink has shown that computing the convolution of an image with a Gaussian kernel is in fact equivalent to the solution of the standard heat conduction equation [16]. Therefore, we can consider that in Eq. (1), u is the 2D image and Δ represents the Laplace operator applied to the image. The diffusion process described so far is isotropic. One drawback of using isotropic diffusion for image processing is that although it filters out noise, it also blurs the edges of the image, which reduces the image sharpness. The image blur can be minimized by varying the rate of diffusion in space so that diffusion is slower near the edges. Mathematically, anisotropic diffusion can be described with a spatially varying diffusion coefficient, c(x, y, t).
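As a quick illustration of Koenderink's observation (not part of the original paper), the following minimal sketch, assuming a grayscale image stored as a NumPy float array, compares a few steps of explicit isotropic diffusion with a single Gaussian blur; diffusing to time T is roughly equivalent to blurring with σ = √(2T).

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def isotropic_diffusion(img, n_iter=20, dt=0.2):
    """Explicit Euler integration of du/dt = c * Laplacian(u) with c = 1."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        u += dt * laplace(u, mode='nearest')
    return u

# Both results smooth noise and edges alike, which is the drawback of
# isotropic diffusion discussed above.
img = np.random.rand(128, 128)
diffused = isotropic_diffusion(img, n_iter=20, dt=0.2)
blurred = gaussian_filter(img, sigma=np.sqrt(2 * 20 * 0.2))
```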

2.2 Anisotropic diffusion

Anisotropic Diffusion (AD) is a multi-scale smoothing and edge enhancement image processing technique proposed by Perona and Malik [17]. In AD image processing algorithms, the diffusion rate constant c(t) becomes a function of the image space:

$$c(x,y,t) = \begin{cases} \text{large} & \text{in homogeneous image regions} \\ \text{small} & \text{near image edges} \end{cases} \tag{2}$$

By choosing a smaller diffusion coefficient near edges, the smoothing of edge regions in the image can be avoided. If blurring takes place separately in each image region with no interaction between regions, the region boundaries will remain sharp. Such an approach can enhance image regions corrupted by speckle noise. Edges in an image can be found using the gradient of the image. Thus, to construct a spatially varying diffusion coefficient c(x, y, t), the magnitude of the image gradient can be utilized. The gradient can be computed using the first difference. However, in this manuscript, the Prewitt operator, defined by the convolution kernels given in Eq. (3), is used to obtain the image gradients. The Prewitt operator is less sensitive to noise than the first-difference operators.

$$\nabla_x = \begin{bmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{bmatrix} \quad\text{and}\quad \nabla_y = \begin{bmatrix} -1 & -1 & -1 \\ 0 & 0 & 0 \\ 1 & 1 & 1 \end{bmatrix} \tag{3}$$
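For reference, a minimal sketch of the Prewitt-based gradient magnitude of Eq. (3), assuming a 2D NumPy image; this is an illustrative helper, not the authors' exact implementation.

```python
import numpy as np
from scipy.ndimage import convolve

# Standard Prewitt kernels, matching Eq. (3).
PREWITT_X = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]], dtype=float)
PREWITT_Y = PREWITT_X.T  # horizontal-edge kernel is the transpose

def prewitt_gradient_magnitude(img):
    """Return the gradient magnitude |grad I| computed with Prewitt kernels."""
    gx = convolve(img.astype(float), PREWITT_X, mode='nearest')
    gy = convolve(img.astype(float), PREWITT_Y, mode='nearest')
    return np.sqrt(gx ** 2 + gy ** 2)
```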

According to Lin [18], in addition to the image gradient, the second derivative of the image can also be considered to preserve thin edges. However, we will consider a performance comparison of the two preservation techniques in future work. To achieve a more stable performance, Catté et al. [19] proposed to calculate the image gradient after the image has been smoothed. This way, the noise does not significantly affect the calculation of the image gradient. According to Song et al. [20], post-processing the detected edges further enhances them. A Gaussian and a median filter were therefore applied to the output of the Prewitt operator to obtain edges with fine neighbourhoods. Thus, the new edge-based AD coefficient is given by Eq. (4).

$$c(x,y,t) = g\!\left(G_\sigma * \operatorname{med}\!\left\langle \sqrt{\big[\nabla_x * (G_\sigma * I)\big]^2 + \big[\nabla_y * (G_\sigma * I)\big]^2} \right\rangle\right) \tag{4}$$

In Eq. (4), * denotes the convolution operator, Gσ is the Gaussian kernel, and med⟨·⟩ is the median filter. When the argument of the function g(·) is small (in homogeneous regions), the diffusion coefficient c(x, y, t) is large; conversely, when the argument is large (at or near edges), the diffusion coefficient is small, which corresponds to slower diffusion rates. The diffusion coefficient is thus a function of the image space. An example diffusion function proposed by Perona and Malik [17] and modified by Lin and Shi [18] is utilized in this manuscript:

$$g(x) = \frac{1}{1 + \left(\dfrac{x}{K}\right)^2} \tag{5}$$

When applied to speckled images, the function g(x) results in blurring of small discontinuities and sharpening of edges. The parameter K in Eq. (5) is chosen according to the speckle noise level and the edge strength. In any OCT image, the noise level is not known a priori with certainty. Furthermore, because of the inherent speckle noise, the calculated gradient-based edge strength is not a true reflection of the edge strength. Therefore, the ambiguity in choosing a suitable value for the parameter K, and thus the uncertainty in the diffusion coefficient, justifies the use of fuzzy set theory in such situations.
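A minimal sketch of the diffusion function of Eq. (5); the role of K, whose uncertain choice motivates the fuzzy treatment below, is noted in the comments.

```python
import numpy as np

def diffusion_function(x, K):
    """Perona-Malik style diffusivity g(x) = 1 / (1 + (x/K)^2), Eq. (5)."""
    return 1.0 / (1.0 + (x / K) ** 2)

# Small gradients (x << K) diffuse freely (g ~ 1), while strong edges
# (x >> K) are protected (g -> 0).  The appropriate K depends on the
# unknown speckle noise level, which is exactly the uncertainty the
# interval Type-II fuzzy machinery is designed to handle.
```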

2.3 Interval type II fuzzy Anisotropic Diffusion

Fuzzy set theory is a soft computing technique that deals with the imprecision and vagueness of human reasoning. It is able to directly model and minimize the effect of uncertainty in the chosen parameter K. Fuzzy set theory was first introduced by L. A. Zadeh in 1965 [21] and has since been used extensively in image processing. By using fuzzy set theory, more specifically a fuzzy rule-based approach, the AD algorithm can be expressed as a fuzzy anisotropic diffusion (FAD) algorithm. In the past, Type-I Fuzzy Anisotropic Diffusion (TIFAD) was applied for speckle noise reduction by a number of research groups [20, 22, 23] for images acquired with imaging modalities different from OCT. Song [20] and Aja [22] utilized Type-I fuzzy sets with more rules than necessary, while Sanchez-Ortiz [23] used a clustering approach to enhance cardiac MR images. By applying an Interval Type-II fuzzy set, all the rules presented in references [20] and [22] can be reduced to a single rule. The uncertainty introduced by reducing the number of rule sets in Type-I fuzzy is incorporated into the Type-II fuzzy set. In this paper, a novel algorithm based on Interval Type-II fuzzy Anisotropic Diffusion is implemented. This novel algorithm considers the uncertainty that is involved in reducing the many rule sets in TIFAD. The Interval Type-II fuzzy AD algorithm has five main steps, which are shown in a block diagram in Fig. 1 and discussed in detail below.

Step 1: Image features corresponding to linguistic labels are established. Using the appropriate membership function (MF), each feature is fuzzified. The diffusion coefficient, c(x,y,t), is chosen to be largest in homogeneous and noisy regions and smallest at edge regions. Therefore, at time t, at the image location (x,y), if the edge value is low and the noise level is high, then this pixel must be smoothed and the diffusion coefficient should be set high. From this statement two linguistic variables can be identified: the edge value and the noise level at pixel location (x,y). Using these linguistic labels, two fuzzy variables are defined as follows:

Edginess Measure:

$$E(x,y,t) = G_\sigma * \operatorname{med}\!\left\langle \sqrt{\big[\nabla_x * (G_\sigma * I)\big]^2 + \big[\nabla_y * (G_\sigma * I)\big]^2} \right\rangle$$

Noisiness Measure according to reference [20]:

$$N(x,y,t) = \left| I(x,y,t) - \frac{1}{(2K+1)^2 - 1} \sum_{i=-K}^{K} \sum_{j=-K}^{K} I_p(x+i,\,y+j,\,t) \right|$$

The edginess measure E(x,y,t) is based on median filtering the gradient of a smoothed version of the image I(x,y) at time t, as defined above. Song et al. [20] used the standard deviation as a noisiness measure for each image pixel, calculated as the absolute difference between the pixel intensity and the local mean of its neighbourhood. In this paper, a similar noisiness measure N(x,y,t) is implemented. A 7×7 window was used to calculate the local mean of the image (K = 3). It is important to note that a high noisiness measure at (x,y,t) corresponds to a high noise level at (x,y,t). Both the edginess and noisiness measures were normalized to values between 0 and 1.
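A sketch of the two fuzzy variables, reusing the Prewitt helper sketched above and SciPy's Gaussian, median, and uniform filters; the window sizes and σ are illustrative choices, not the authors' exact settings, and the local mean here includes the central pixel for brevity.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter, uniform_filter

def edginess(img, sigma=1.0, med_size=3):
    """E(x,y): Gaussian-smoothed, median-filtered Prewitt gradient magnitude."""
    smoothed = gaussian_filter(img.astype(float), sigma)
    grad = prewitt_gradient_magnitude(smoothed)      # helper defined earlier
    e = gaussian_filter(median_filter(grad, size=med_size), sigma)
    return e / e.max()                               # normalize to [0, 1]

def noisiness(img, K=3):
    """N(x,y): |pixel - local mean| over a (2K+1)x(2K+1) window (7x7 for K=3)."""
    local_mean = uniform_filter(img.astype(float), size=2 * K + 1)
    n = np.abs(img - local_mean)
    return n / n.max()                               # normalize to [0, 1]
```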

Fig. 1. Block diagram of the Fuzzy Type II Anisotropic Diffusion algorithm.

Fig. 2. Interval Type II Fuzzy Membership functions for the fuzzy variables a) “low edginess measure” and b) “high noisiness measure”.

Two corresponding interval Type-II fuzzy sets can be assigned to these fuzzy variables. Type-II fuzzy set A is related to the low edginess measure and Type-II fuzzy set B corresponds to the high noisiness measure (i.e., high noise level). The corresponding Type-II membership functions are presented in Figs. 2(a) and 2(b). The shapes of the membership functions are obtained via Eq. (4). The upper and lower membership functions are determined based on the work of Lin et al. [18], where K is chosen so that 80%–90% of the values of ∥∇(Gσ * I)∥ are less than K. Lin’s estimation of the K value is based on the original Anisotropic Diffusion algorithm of Perona and Malik [17], who use the “noise estimator” described by Canny to obtain a value for K. Since the K value is obtained via a noise estimator, it is applicable to the speckle reduction technique. Therefore, the K value of the upper MF of fuzzy set A is chosen such that 90% of the values of ∥∇(Gσ * I)∥ are less than K. Similarly, the K value of the lower MF of fuzzy set A is chosen such that 80% of the values of ∥∇(Gσ * I)∥ are less than K. Finally, the K values of the upper and lower MFs of fuzzy set B are chosen from the background noise variance in the image.

The shaded regions in Figs. 2(a) and 2(b) correspond to the areas where the uncertainty lies in determining the diffusion coefficient c(x,y,t). In Fig. 2(a), if the value of E(x, y, t) is low, corresponding to low edge values, the fuzzy membership value is near one. Conversely, if the value of E(x, y, t) is large, the fuzzy membership value is near zero. However, if the value of E(x, y, t) is neither large nor small, the fuzzy membership value takes on a range of values defined by the upper and lower membership values μUpperA and μLowerA. Similarly, in Fig. 2(b), if the value of N(x, y, t) is high, corresponding to a high noise level, the fuzzy membership value is near one. Conversely, if the value of N(x, y, t) is low, the fuzzy membership value is near zero. If the value of N(x, y, t) is neither high nor low, the fuzzy membership value takes on the range of values defined by the upper and lower membership values μUpperB and μLowerB. A graphical representation of these rules for both fuzzy sets is presented in Figs. 2(a) and 2(b). The uncertainty in choosing a value for the parameter K in Eq. (5) lies in this region. The membership values represent the degree by which the pixel value located at (x,y) needs to be smoothed: a high membership value corresponds to a high degree of smoothing, while a low membership value corresponds to a low degree of smoothing.
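One possible encoding of the two interval membership functions is sketched below, assuming the shape of Eq. (5) for the decreasing "low edginess" set and its complement for the increasing "high noisiness" set; the exact functional form of the upper/lower envelopes is our reading of Fig. 2 and should be treated as illustrative.

```python
import numpy as np

def interval_mf(x, K1, K2, increasing=False):
    """Interval Type-II MF built from two Type-I curves of the Eq. (5) shape.

    The two K values bound the uncertainty region (the shaded footprint in
    Fig. 2).  Returns the (lower, upper) membership envelopes."""
    g1 = 1.0 / (1.0 + (x / K1) ** 2)
    g2 = 1.0 / (1.0 + (x / K2) ** 2)
    if increasing:                       # 'high noisiness' style MF
        g1, g2 = 1.0 - g1, 1.0 - g2
    return np.minimum(g1, g2), np.maximum(g1, g2)

# Fuzzy set A ('low edginess'): K pair from the 80th / 90th percentiles of E,
# following the percentile rule described above, e.g.
#   mu_A_lo, mu_A_up = interval_mf(E, np.percentile(E, 80), np.percentile(E, 90))
# Fuzzy set B ('high noisiness'): K pair derived from the background noise
# variance, e.g. interval_mf(N, K_noise_1, K_noise_2, increasing=True).
```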

Step 2: The next step in the Fuzzy Type-II AD algorithm is to set up a knowledge base composed of fuzzy if-then rules. Making use of the fuzzy variables representing the image features, the following fuzzy rule is obtained:

Rule 1: If the edginess measure E(x, y, t) is low AND the noisiness measure N(x,y,t) is high, THEN the diffusion coefficient c(x, y, t) has a high value. It is important to remember that a higher diffusion coefficient at (x, y) is required when the edge value is low and the noise level is high.

Step 3: For the above rule, the fuzzified inputs are combined and rule strength is obtained. This rule strength is used as the consequence of the rule. Applying the triangular norm to “E(x, y, t) is a low value AND N(x, y, t) is a high value,” the following truth value is obtained:

$$\gamma^{\mathrm{Upper}}_{x,y,t} = \mu_A^{\mathrm{Upper}}\!\big(E(x,y,t)\big) \cdot \mu_B^{\mathrm{Upper}}\!\big(N(x,y,t)\big) \tag{7.1}$$
$$\gamma^{\mathrm{Lower}}_{x,y,t} = \mu_A^{\mathrm{Lower}}\!\big(E(x,y,t)\big) \cdot \mu_B^{\mathrm{Lower}}\!\big(N(x,y,t)\big) \tag{7.2}$$

There are two consequences for the rule, corresponding to the upper and lower limits. These consequences represent the degree by which the pixel value at (x, y) should be smoothed. More specifically, Eqs. (7.1) and (7.2) represent the degree of activation (also known as the index of ultrafuzziness) of Rule 1 at pixel location (x, y) at time t. This range of values $(\gamma^{\mathrm{Upper}}_{x,y,t},\ \gamma^{\mathrm{Lower}}_{x,y,t})$ indicates the degree of membership in the fuzzy set “diffusion coefficient c(x, y, t) is a high value”. If the range of values is close to 1, the diffusion coefficient is also close to 1, indicating maximum smoothing, while a value close to 0 indicates that the diffusion coefficient is close to 0, i.e., no smoothing.

Step 4: Once the consequences from the rules are obtained, they are combined to obtain an output distribution range. This step is skipped in this study because only one fuzzy rule is used. In general, the probabilistic sum triangular conorm is used to combine multiple fuzzy rules.

Step 5: A single value representing the diffusion coefficient is obtained by type-reducing and de-fuzzifying the output from Step 4. Eqs. (7.1) and (7.2) represent the index of ultrafuzziness, which is an interval. In order to use the ultrafuzziness in the anisotropic diffusion algorithm, it must be transformed into an index of fuzziness (type reduced). Using the average of the upper and lower fuzziness values, the ultrafuzziness is converted into an index of fuzziness. In interval Type-II fuzzy anisotropic diffusion, the new diffusion coefficient is therefore defined as:

$$c(x,y,t) = \frac{\gamma^{\mathrm{Upper}}_{x,y,t} + \gamma^{\mathrm{Lower}}_{x,y,t}}{2}$$
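Putting Steps 1, 3, and 5 together, a hedged sketch of how the interval rule strengths (product t-norm, Eqs. (7.1) and (7.2)) are type-reduced by averaging into the scalar diffusion coefficient above; it assumes the `interval_mf` helper sketched earlier and illustrative K pairs.

```python
import numpy as np

def fuzzy_diffusion_coefficient(E, N, K_edge, K_noise):
    """Interval Type-II fuzzy diffusion coefficient c(x, y, t).

    E, N    : normalized edginess and noisiness maps in [0, 1]
    K_edge  : (K1, K2) pair for fuzzy set A ('edginess is low')
    K_noise : (K1, K2) pair for fuzzy set B ('noisiness is high')
    """
    mu_A_lo, mu_A_up = interval_mf(E, *K_edge)                    # Step 1
    mu_B_lo, mu_B_up = interval_mf(N, *K_noise, increasing=True)  # Step 1
    gamma_up = mu_A_up * mu_B_up     # Step 3: product t-norm, upper bound
    gamma_lo = mu_A_lo * mu_B_lo     # Step 3: product t-norm, lower bound
    return 0.5 * (gamma_up + gamma_lo)   # Step 5: type reduction by averaging
```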

Using the new diffusion coefficient, the fuzzy implementation of the anisotropic diffusion is as follows. Starting with the partial differential Eq. (1), replacing u with the image I, and allowing the diffusion coefficient to vary over the image space [28]:

$$\frac{\partial I(x,y,t)}{\partial t} = \operatorname{div}\!\big[c(x,y,t)\,\nabla I(x,y,t)\big] = \frac{\partial}{\partial x}\!\left[c(x,y,t)\,\frac{\partial I(x,y,t)}{\partial x}\right] + \frac{\partial}{\partial y}\!\left[c(x,y,t)\,\frac{\partial I(x,y,t)}{\partial y}\right]$$

In the above equation, div denotes the divergence operator and ∇ denotes the gradient of the image. The diffusion coefficient c(x, y, t), in conjunction with the image gradient, describes the diffusion flow between the image pixels, i.e., $c(x,y,t)\,\nabla I(x,y,t)$. Applying central differencing in x and y results in the following expression [14, 28]:

$$\begin{aligned}\frac{\partial I(x,y,t)}{\partial t} = {} & \frac{1}{\Delta x^{2}}\,c\!\left(x+\tfrac{\Delta x}{2},y,t\right)\big[I(x+\Delta x,y,t)-I(x,y,t)\big] - \frac{1}{\Delta x^{2}}\,c\!\left(x-\tfrac{\Delta x}{2},y,t\right)\big[I(x,y,t)-I(x-\Delta x,y,t)\big] \\ & + \frac{1}{\Delta y^{2}}\,c\!\left(x,y+\tfrac{\Delta y}{2},t\right)\big[I(x,y+\Delta y,t)-I(x,y,t)\big] - \frac{1}{\Delta y^{2}}\,c\!\left(x,y-\tfrac{\Delta y}{2},t\right)\big[I(x,y,t)-I(x,y-\Delta y,t)\big]\end{aligned}$$

For the anisotropic diffusion represented by the above equation, the local flow is a function of the diffusion coefficient in every direction and of the difference between the central pixel and each of its 4-connected neighbours. Approximating $\partial I(x,y,t)/\partial t$ by $\big[I(x,y,t+\Delta t)-I(x,y,t)\big]/\Delta t$, the final equation describing the diffusion process becomes:

$$I(x,y,t+\Delta t) \approx I(x,y,t) + \frac{\partial I(x,y,t)}{\partial t}\,\Delta t \tag{11}$$

Equation (11) describes the iterative process of the anisotropic diffusion. At each new time step t+Δt a new image is obtained from the previous image at time t. Note that the time variable t in the 2D discrete implementation represents the iteration number n. Δt is an integration constant that determines the stability of the iterative approximation. According to reference [17], there is no limitation on the lower bound; however, a small value results in a good approximation at the cost of many iteration steps. A value of 0.25 was used throughout the algorithm, based on the upper bound given in reference [17]. In Eq. (11), the number of iterations to perform is not known in advance. Thus, a stopping parameter, NS, controls the number of iterations required for the fuzzy Anisotropic Diffusion and, in essence, acts as a stopping time. By adjusting the parameter NS, the amount of speckle removed can be varied, resulting in more or less smoothing, depending on the input image used and the amount of speckle present. There is a trade-off between the integration constant Δt and the iteration parameter NS [28]. A small value of Δt will yield a good approximation of the partial differential equation of the fuzzy AD; however, it requires many iteration steps, i.e., a large NS value.
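For orientation, the sketch below strings the pieces above into the iterative update of Eq. (11) with Δx = Δy = 1 and Δt = 0.25; it assumes the `edginess`, `noisiness`, and `fuzzy_diffusion_coefficient` helpers sketched earlier. For brevity, the diffusion coefficient is evaluated at the central pixel rather than at half-grid positions, the boundaries wrap, and the noise-derived K pair is a placeholder, so this is illustrative rather than the authors' exact implementation.

```python
import numpy as np

def type2_fuzzy_ad(img, n_iter=15, dt=0.25, sigma=1.0, window_K=3):
    """Iterative interval Type-II fuzzy anisotropic diffusion (sketch)."""
    I = img.astype(float).copy()
    for _ in range(n_iter):                              # NS iterations
        E = edginess(I, sigma=sigma)
        N = noisiness(I, K=window_K)
        K_edge = (np.percentile(E, 80), np.percentile(E, 90))
        K_noise = (0.05, 0.10)        # placeholder: derive from background noise
        c = fuzzy_diffusion_coefficient(E, N, K_edge, K_noise)

        # Differences to the four 4-connected neighbours.
        dN = np.roll(I, -1, axis=0) - I
        dS = np.roll(I,  1, axis=0) - I
        dE = np.roll(I, -1, axis=1) - I
        dW = np.roll(I,  1, axis=1) - I

        # Flow = diffusion coefficient times intensity difference (Eq. (11)).
        I += dt * c * (dN + dS + dE + dW)
    return I
```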

3. Results and discussion

To demonstrate the effectiveness of the Interval Type-II fuzzy AD algorithm, it was applied to images acquired in-vivo from a human finger tip and human retina. The images were acquired with a high-speed, high-resolution Fourier Domain OCT system (FD-OCT) operating in the 1060 nm wavelength region, described in detail in reference [29]. Briefly, the FD-OCT system is powered with a superluminescent diode (λc = 1020 nm, Δλ = 108 nm, Pout = 10 mW) that provides 3.5 μm (axial) and 10 μm (lateral) resolution in biological tissue. The FD-OCT system is equipped with a fast InGaAs linear-array, 1024-pixel CCD camera (SUI, Goodrich) with a readout rate of 47 kHz. Two-dimensional images (1000 A-scans × 512 pixels) of a human finger tip (human retina) were acquired with 1.3 mW (1 mW) incident power and 100 dB (97 dB) SNR, respectively. The human and rat retina images were acquired in accordance with approved ethics regulations.

All images were processed with the interval Type-II fuzzy AD algorithm, implemented in MATLAB (v. 7.0.1). Standard performance metrics such as image SNR, contrast-to-noise ratio (CNR), equivalent number of looks (ENL), and edge preservation (η) were used to evaluate quantitatively the performance of the algorithm. The image quality metrics we used are the same as the metrics used in Refs. [4, 13, and 24], and are defined as follows:

$$\mathrm{SNR} = 10\,\log_{10}\!\left(\frac{\max\!\big(I^{2}\big)}{\sigma_n^{2}}\right)$$
$$\mathrm{ENL} = \frac{1}{H}\left(\sum_{h=1}^{H}\frac{\mu_h^{2}}{\sigma_h^{2}}\right)$$
$$\mathrm{CNR} = \frac{1}{R}\left(\sum_{r=1}^{R}\frac{\mu_r-\mu_b}{\sqrt{\sigma_r^{2}+\sigma_b^{2}}}\right)$$
$$\eta = \frac{\displaystyle\sum_i\sum_j\big(\Delta I_{i,j}-\overline{\Delta I}_{i,j}\big)\,\big(\Delta \hat I_{i,j}-\overline{\Delta \hat I}_{i,j}\big)}{\sqrt{\displaystyle\sum_i\sum_j\big(\Delta I_{i,j}-\overline{\Delta I}_{i,j}\big)^{2}\,\sum_i\sum_j\big(\Delta \hat I_{i,j}-\overline{\Delta \hat I}_{i,j}\big)^{2}}}$$

ENL measures the smoothness of a homogeneous region of interest, while CNR measures the difference between an area containing an image feature and an area of background noise. Edge preservation is a correlation measure that indicates how much the edges in the image are degraded; a value close to 1 indicates that the filtered image is similar to the reference image. In the expression for SNR, I and σ_n² represent the linear magnitude image and the variance of the background noise region in the linear magnitude image, respectively. In the expression for ENL, μ_h and σ_h² represent the mean and variance of the h-th homogeneous region of interest. In the definition of CNR, μ_b and σ_b² represent the mean and variance of the same background noise region as in SNR, and μ_r and σ_r² represent the mean and variance of the r-th region of interest, which includes the homogeneous regions. In the edge preservation measure, ΔI and ΔÎ represent the Laplace operator applied to the original image I and the filtered image Î, respectively, while ΔĪ and ΔÎ̄ represent the mean values over a 3×3 neighbourhood around pixel location (i, j) of ΔI and ΔÎ, respectively.
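A minimal sketch of the SNR, ENL, and CNR metrics defined above, assuming the regions of interest and the background noise region are supplied as boolean masks (the edge preservation measure η can be written analogously from the Laplacians of the original and filtered images).

```python
import numpy as np

def snr_db(lin_img, noise_mask):
    """SNR = 10 log10( max(I^2) / var(background noise) ), linear magnitude image."""
    return 10 * np.log10(lin_img.max() ** 2 / lin_img[noise_mask].var())

def enl(img, homog_masks):
    """Equivalent number of looks, averaged over the H homogeneous regions."""
    return np.mean([img[m].mean() ** 2 / img[m].var() for m in homog_masks])

def cnr(img, roi_masks, noise_mask):
    """Contrast-to-noise ratio, averaged over the R regions of interest."""
    mu_b, var_b = img[noise_mask].mean(), img[noise_mask].var()
    return np.mean([(img[m].mean() - mu_b) / np.sqrt(img[m].var() + var_b)
                    for m in roi_masks])
```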

Anisotropic Diffusion algorithms were originally designed for additive noise [25], whereas speckle noise is multiplicative in nature. However, multiplicative noise can be transformed into additive noise by a logarithmic transformation. Therefore, the first step in our image enhancement algorithm is transformation of the image into the log domain. All calculations were performed in the logarithmic domain of the OCT image except for the SNR calculation. Several regions of interest (R), homogeneous (H) and non-homogeneous (NH), were defined for each of the fingertip and retina images. The parameters K and NS in the image processing algorithm were chosen to yield the best image SNR. These parameter values were used for generating the filtered images and for calculating the image metrics.
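A sketch of the overall processing order described above (log transform, filter, inverse transform), assuming the `type2_fuzzy_ad` sketch from the previous section; the exact scaling and normalization used by the authors are not specified, so the details here are illustrative.

```python
import numpy as np

def despeckle_oct(lin_img, **ad_kwargs):
    """Log-transform the linear-magnitude OCT image, filter, transform back.

    Multiplicative speckle becomes approximately additive in the log domain,
    where the Type-II fuzzy AD filter (sketched above) is applied."""
    eps = 1e-12                          # avoid log(0)
    log_img = np.log(lin_img + eps)
    filtered_log = type2_fuzzy_ad(log_img, **ad_kwargs)
    return np.exp(filtered_log)          # back to the linear domain
```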

To compare the performance of the Type-II fuzzy AD algorithm with other, previously published speckle noise reduction algorithms, a Wiener filter [26] provided by Matlab (via the wiener2() function) was applied to the OCT fingertip and retina images as well. A modified adaptive Lee filter, implemented using an algorithm described in reference [27], was also applied to the images. The Wiener filter and the adaptive Lee filter have previously been applied to OCT images and produced promising results [24]. In addition, a Type-I fuzzy AD algorithm without Interval Type-II fuzzy sets was applied to the fingertip and retina images, in order to directly evaluate the effect of the Type-II fuzzy set when combined with anisotropic diffusion. The Type-I fuzzy AD algorithm utilized the conclusions of a recently published study that compared various PDE-based non-linear diffusion approaches for image enhancement and denoising of OCT tomograms [13]. Finally, the Interval Type-II fuzzy set based wavelet algorithm developed recently [15] was applied to the images to compare its performance to the Interval Type-II fuzzy anisotropic diffusion algorithm.

Figure 3 shows the original and processed images of the human fingertip. The unprocessed OCT tomogram in Fig. 3(a) has a grainy appearance due to the presence of speckle noise. Six regions of interest (H = 3, labeled 2, 4 and 6; NH = 3, labeled 3, 5 and 7) were selected in the fingertip image and marked with green-line boxes. A “background” region, used for calculation of the image SNR, was marked with a red-line box in an area of the image above the biological tissue surface. A region in the image, marked with a blue-line box and containing what we believe are sweat glands, was enlarged by 2x and shown as an inset in Fig. 3(a). Figure 3(b) shows the same image after Wiener filtering and a similar inset shows the reduced appearance of the speckle pattern. Red arrows in the processed images mark the locations of sweat glands in the epidermal layer of the fingertip. Figure 3(c) shows the OCT image after adaptive Lee filtering. Both the Wiener and Lee filters are based on the optimal minimum mean square error estimator of the true image. Both filters appear to reduce the speckle pattern, yet in homogeneous regions some speckle patterns are still clearly visible. Figures 3(d) and 3(e) present the images processed with the Type-I fuzzy AD and interval Type-II fuzzy wavelet algorithms respectively; they show no residual speckle noise pattern, however significant blurring of the edges of image features is seen. The Type-II fuzzy AD filtered image, shown in Fig. 3(f), exhibits clear edges of tissue features (for example, see the sweat glands) and significantly reduced appearance of speckle patterns as compared to the other filters. Note that the same gray scale was used in all filtered images and the scale was not altered after the application of the Lee, Wiener, and the Type-I and Type-II fuzzy filters.

Fig. 3. Human finger tip image, 1000 × 512 pixels, corresponding to 1 mm × 0.8 mm, processed with the following filters: Original image (a), Wiener (b), Adaptive Lee (c), Type I Fuzzy AD (d), Type II Fuzzy Wavelet (e) and Type II Fuzzy AD (f). The red and green line boxes in (a) mark the selected regions for the image metrics evaluation. The red arrows in (b)–(f) point at sweat glands in the epidermis. The blue line box in (a) marks a region containing sweat glands that has been enlarged 2x and shown as an inset in (a). Similar insets are shown in the processed images (b)–(f). Enlarged copies of all insets are shown in (g)–(l) for close comparison of the performance of the five speckle reduction algorithms.

Enlarged views of the 6 insets are presented in Fig. 3(g) (unprocessed image), Fig. 3(h) (Wiener filter), Fig. 3(i) (adaptive Lee filter), Fig. 3(j) (Type-I fuzzy AD filter), Fig. 3(k) (Interval Type-II fuzzy wavelet filter) and Fig. 3(l) (Type-II fuzzy AD filter) for direct comparison of the effect of filtering on the presence of a speckle pattern. It is clear from Figs. 3(g)–3(l) that both the Wiener and adaptive Lee filters reduce speckle noise in the images while preserving the edges of the sweat gland. However, some residual speckle pattern is still visible in the background tissue in Figs. 3(h) and 3(i). The Type-I fuzzy AD filtered image (Fig. 3(j)) and the Type-II fuzzy wavelet filtered image (Fig. 3(k)) show practically no residual speckle noise, achieved at the expense of significant blurring of the edges of tissue features such as the sweat glands. The Type-II fuzzy AD filter shows significant reduction of the speckle pattern with very good preservation of image feature edges. Note that both the Type-I and Type-II fuzzy AD filters were run with the same parameters, optimized for the best SNR.

Table 1 shows the image quality metric values for the original and the filtered human fingertip images. The use of both the Wiener and Lee filters results in some improvement in the image SNR (3.63 dB and 4.96 dB respectively) and CNR. The application of both the Type-I and Type-II fuzzy AD filters results in a more significant image quality improvement (6.67 dB and 13.07 dB respectively), which is also apparent from the change in image contrast in Figs. 3(d) and 3(f). The SNR improvement achieved with the Type-II fuzzy AD algorithm is better than the one obtained with the Type-II fuzzy wavelet. Also, the Type-II fuzzy AD filtered image is closest to the original, as its edge preservation value is the greatest.

Table 1. Image quality metrics evaluated for the human finger tip image. Values are relative to the original image.

All image processing algorithms mentioned above were also applied to an OCT image of a human retina (Fig. 4) with dimensions 1000 × 512 (A-scans × pixels), acquired in-vivo near the optic disc. Figure 4(a) shows the unprocessed retinal tomogram, which has a grainy appearance due to the presence of speckle. Five regions of interest (green line) and a background region (red line) above the image were selected and marked on the image. A region in the image marked with a blue-line box and containing a choroidal blood vessel (red arrow), as well as the junction between the inner and outer segments of the photoreceptor layer (blue arrow), was enlarged to show the effect of filtering. Figure 4(b) shows the same image after Wiener filtering and a similar inset (Fig. 4(h)) shows the reduced appearance of the speckle pattern. Figure 4(c) shows the OCT image after adaptive Lee filtering. Both the Wiener and adaptive Lee filters appear to reduce the speckle pattern, yet in homogeneous regions some speckle patterns are still clearly visible. Figure 4(d) presents the image processed with the Type-I fuzzy AD algorithm, which shows no residual speckle noise pattern but significant blurring of the edges of image features. The Type-II fuzzy wavelet filtered image is shown in Fig. 4(e), which also has blurred edges. The Type-II fuzzy AD filtered image, shown in Fig. 4(f), exhibits clear edges in the case of the inner–outer segment junction of the photoreceptor layer, the Retinal Pigmented Epithelium (RPE) and the walls of the small choroidal blood vessel, as well as significantly reduced appearance of speckle patterns as compared to the Wiener and Lee filtered images. Note that the same gray scale was used in all filtered images and the scale was not altered after the application of the processing algorithms.

Fig. 4. An OCT image of a human retina (1000 × 512), processed with the following filters: Original image (a), Wiener (b), Adaptive Lee (c), Type I Fuzzy AD (d), Type II Fuzzy Wavelet (e) and Type II Fuzzy AD (f). The blue-line box in (a) marks a region containing a choroidal blood vessel (red arrows). Similar regions were selected for the filtered images and enlarged copies of them are shown in (g)–(l) for close comparison of the performance of all image processing algorithms. Blue arrows mark the inner–outer segment junction in the photoreceptor layer.

Enlarged views of the 6 magnified regions containing the choroidal blood vessel are presented in Fig. 4(g) (unprocessed image), Fig. 4(h) (Wiener filter), Fig. 4(i) (adaptive Lee filter), Fig. 4(j) (Type-I fuzzy AD), Fig. 4(k) (Type-II fuzzy wavelet) and Fig. 4(l) (Type-II fuzzy AD filter) for easy comparison of the effect of filtering on the presence of a speckle pattern. It is clear from Figs. 4(g)–4(l) that both the Wiener and adaptive Lee filters significantly reduce speckle noise in the images while preserving the edges of the blood vessel walls. However, some residual speckle pattern is still visible in the background tissue in Figs. 4(h) and 4(i). The Type-I fuzzy AD filtered image (Fig. 4(j)) shows practically no residual speckle noise, achieved at the expense of significant blurring of the edges of tissue features such as the blood vessel. The Type-II fuzzy AD filter shows significantly improved reduction of the speckle pattern with very good preservation of image feature edges. Note that both the Type-I fuzzy AD and the Type-II fuzzy AD filters were run with the same parameters.

SNR improvements of 4.94, 6.80, 6.84, 10.23, and 6.88 dB were obtained for the Wiener, adaptive Lee, Type-I fuzzy AD, interval Type-II fuzzy wavelet, and interval Type-II fuzzy AD filters, respectively. Even though the Interval Type-II fuzzy wavelet algorithm resulted in similar SNR improvement as compared to the interval Type-II fuzzy AD, the edge preservation of the Type-II fuzzy AD algorithm is significantly better, as seen from the metric values listed in Table 2. CPU processing times for all algorithms are similar to the ones listed in Table 1.

Table 2. Image quality metrics evaluated for the human retina image. Values are relative to the original image.

The Matlab implementation of the novel Type-II fuzzy AD algorithm required approximately 32 seconds on a computer with an Intel Core 2 Duo processor and 2.0 GB of RAM for filtering a test image with dimensions of 1000 × 512 (lateral × axial) pixels with 15 iterations. Compared to the Wiener and adaptive Lee approaches, our algorithm requires a longer run time, while providing overall better image quality as defined by the image metric functions. Note that the processing time of the Type-II fuzzy AD is significantly shorter than that of a previously published algorithm, which required approximately 7 min of Matlab run time on a Pentium IV for a test image of similar dimensions [4].

4. Conclusion

We have developed a novel algorithm for reducing speckle noise in OCT images, based on interval Type-II fuzzy Anisotropic Diffusion. When compared to previously published algorithms such as the Wiener and Adaptive Lee filters and Type-I fuzzy AD, our technique shows superior image quality improvement for a fairly short processing time. The effectiveness of the novel algorithm in reducing speckle noise was demonstrated on OCT tomograms of a human finger tip and retina. The novel algorithm resulted in SNR improvement of ~13 dB for the human finger tip and ~7 dB for the human retina. The image SNR improvement depends both on the overall noise level of the OCT tomograms and on the type of the biological tissue, which results in a particular speckle pattern. The interval Type-II fuzzy AD can be easily modified to include other information from the image to further improve the speckle noise reduction performance. For example, by adding new fuzzy rules that take into account the image features of interest, the algorithm can be tuned specifically for different types of images, corresponding to different biological tissue types and features. This novel algorithm could be of particular use in ophthalmic OCT, where it can improve not only the overall appearance of retinal OCT tomograms, but also the performance of image segmentation, rendering and pattern recognition algorithms as applied to ophthalmic OCT tomograms.

Acknowledgments

The authors wish to thank P. Siva for the helpful suggestions. We gratefully acknowledge financial support from NSERC and University of Waterloo.

References and links

1. W. Drexler, “Ultrahigh-resolution optical coherence tomography,” J. Biomed. Opt. 9, 47–74 (2004). [CrossRef]

2. J. M. Schmitt, S. H. Xiang, and K. M. Yung, “Speckle in optical coherence tomography,” J. Biomed. Opt. 4, 95–105 (1999). [CrossRef]

3. J. Rogowska and M. E. Brezinski, “Evaluation of the adaptive speckle suppression filter for coronary optical coherence tomography imaging,” IEEE Trans. Med. Imaging 19, 1261–1266 (2000). [CrossRef]  

4. D. C. Adler, T. H. Ko, and J. G. Fujimoto, “Speckle reduction in optical coherence tomography images by use of a spatially adaptive wavelet filter,” Opt. Lett. 29, 2878–2880 (2004). [CrossRef]

5. J. Kim, D. T. Miller, E. Kim, S. Oh, J. Oh, and T. E. Milner, “Optical coherence tomography speckle reduction by a partially spatially coherent source,” J. Biomed. Opt. 10, 64034–9 (2005). [CrossRef]

6. M. Pircher, E. Götzinger, R. Leitgeb, A.F. Fercher, and C. K. Hitzenberger, “Speckle reduction in optical coherence tomography by frequency compounding,” J. Biomed. Opt. 8, 565–569 (2003). [CrossRef]   [PubMed]  

7. T. M. Jorgensen, L. Thrane, M. Mogensen, F. Pedersen, and P. E. Andersen, “Speckle reduction in optical coherence tomography images of human skin by a spatial diversity method,” in Proc. SPIE 6627, Munich, Germany, 66270P (2007).

8. N. Iftimia, B. E. Bouma, and G. J. Tearney, “Speckle reduction in optical coherence tomography by path length encoded angular compounding,” J. Biomed. Opt. 8, 260–263 (2003). [CrossRef]

9. A. E. Desjardins, B. J. Vakoc, W. Y. Oh, S. M. R. Motaghiannezam, G. J. Tearney, and B. E. Bouma, “Angle-resolved Optical Coherence Tomography with sequential angular selectivity for speckle reduction,” Opt. Express 15, 6200–6209 (2007). [CrossRef]   [PubMed]  

10. D. C. Fernandez and H. M. Salinas, “Evaluation of a nonlinear diffusion process for segmentation and quantification of lesions in optical coherence tomography images,” Proc. SPIE 5747, 1834–1843 (2005). [CrossRef]  

11. D. C. Fernandez, H. M. Salinas, and C. A. Puliafito, “Automated detection of retinal layer structures on optical coherence tomography images,” Opt. Express 13, 10200–10216 (2005). [CrossRef]  

12. R. K. Wang, “Reduction of speckle noise for optical coherence tomography by the use of nonlinear anisotropic diffusion,” Proc. SPIE 5690, 380–385 (2005). [CrossRef]  

13. H. M. Salinas and D. C. Fernandez, “Comparison of pde-based nonlinear diffusion approaches for image enhancement and denoising in optical coherence tomography,” IEEE Trans. Med. Imaging 26, 761–771 (2007). [CrossRef]   [PubMed]  

14. S. Aja, C. Alberola, and J. Ruiz, “Fuzzy anisotropic diffusion for speckle filtering,” in Proc. IEEE ICASSP, 2, 1261–1264 (2001).

15. P. Puvanathasan and K. Bizheva, “Speckle noise reduction algorithm for optical coherence tomography based on interval type II fuzzy set,” Opt. Express 15, 15747–15758 (2007). [CrossRef]   [PubMed]

16. J. Koenderink, “The structure of images,” Biol. Cybern. 50, 363–370 (1984). [CrossRef]   [PubMed]  

17. P. Perona and J. Malik, “Scale-space and edge detection using anisotropic diffusion,” IEEE Trans. Pattern Anal. and Mach. Intell. 12, 629–639 (1990). [CrossRef]  

18. Z. Lin and Q. Shi, “An anisotropic diffusion PDE for noise reduction and thin edge preservation,” in Proc. 10th Int. Conf. Image Analysis and Processing, 102–107 (1999).

19. F. Catte, P. L. Lions, J. M. Morel, and T. Coll, “Image selective smoothing and edge detection by nonlinear diffusion,” SIAM J. Numer. Anal. 29, 182–193 (1992). [CrossRef]  

20. J. Song and H. R. Tizhoosh, “Fuzzy Anisotropic Diffusion: A Rule-based Approach,” in Proc. SCI, 4, 241–246 (2003).

21. L. A. Zadeh, “Fuzzy sets,” Information Control 8, 338–353 (1965). [CrossRef]  

22. S. Aja, C. Alberola, and J. Ruiz, “Fuzzy anisotropic diffusion for speckle filtering,” in IEEE Int. Conf. Acoust. Speech Signal Process. 2, 1261–1264 (2001).

23. G. Sanchez-Ortiz and A. Noble, “Fuzzy clustering driven anisotropic diffusion: Enhancement and segmentation of cardiac MR images,” in IEEE Nuclear Symp. and Med. Imag. Conf. 3, 1873–1874 (1998).

24. A. Ozcan, A. Bilenca, A. E. Desjardins, B. E. Bouma, and G. J. Tearney, “Speckle reduction in optical coherence tomography images using digital filtering,” J. Opt. Soc. Am. A. 24, 1901–1910 (2007). [CrossRef]  

25. S. Intajag, V. Tipsuwanpon, and F. Cheevasuwit, “Anisotropic diffusion in synthetic aperture radars,” in Proc. CCECE 2005, 277–280 (2005).

26. S. J. Lim, Two-Dimensional Signal and Image Processing (Prentice Hall, 1990).

27. Y. H. Lu, S. Y. Tan, T. S. Yeo, W. E. Ng, I. Lim, and C. B. Zhang, “Adaptive filtering algorithms for SAR speckle reduction,” in Proc. IGARSS, 1, 67–69 (1996)

28. G. Gerig, O. Kubler, R. Kikinis, and F. A. Jolesz, “Nonlinear anisotropic filtering of mri data,” IEEE Trans. Med. Imaging 11, 221–232 (1992). [CrossRef]   [PubMed]  

29. P. Puvanathasan, P. Forbes, Z. Ren, D. Malchow, S. Boyd, and K. Bizheva, “High-speed, high-resolution Fourier-domain optical coherence tomography system for retinal imaging in the 1060 nm wavelength region,” Opt. Lett. 33, 2479–2481 (2008). [PubMed]  
