Optica Publishing Group

Three-dimensional curvelet-based dictionary learning for speckle noise removal of optical coherence tomography

Open Access

Abstract

Optical coherence tomography (OCT) is a recently emerging non-invasive diagnostic tool useful in several medical applications such as ophthalmology, cardiology, gastroenterology and dermatology. One of the major problems with OCT pertains to its low contrast due to the presence of multiplicative speckle noise, which limits the signal-to-noise ratio (SNR) and obscures low-intensity and small features. In this paper, we propose a new method using the 3D curvelet-based K-singular value decomposition (K-SVD) algorithm for speckle noise reduction and contrast enhancement of the intra-retinal layers of 3D Spectral-Domain OCT (3D-SDOCT) images. In order to benefit from the near-optimum properties of the curvelet transform (such as good directional selectivity) on top of dictionary learning, we propose a new plan in dictionary learning by using the curvelet atoms as the initial dictionary. To this end, the curvelet transform of the noisy image is taken and then the noisy coefficient matrix in each scale, rotation and spatial coordinate is passed through the K-SVD denoising algorithm with a predefined 3D initial dictionary that is adaptively selected from thresholded coefficients in the same subband of the image. During the denoising of curvelet coefficients, we can also modify them for the purpose of contrast enhancement of intra-retinal layers. We demonstrate the ability of our proposed algorithm in the speckle noise reduction of 17 publicly available 3D OCT data sets, each of which contains 100 B-scans of size 512×1000, with and without neovascular age-related macular degeneration (AMD), acquired using SDOCT Bioptigen imaging systems. Experimental results show that an improvement from 1.27 to 7.81 in contrast-to-noise ratio (CNR), and from 38.09 to 1983.07 in equivalent number of looks (ENL) is achieved, outperforming existing state-of-the-art OCT despeckling methods.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

Corrections

10 April 2020: A typographical correction was made to the author listing.

1. Introduction

Retinal Optical Coherence Tomography (OCT) is a recently developed imaging technique, which provides cross-sectional high-resolution images of the retinal microstructures and a non-invasive 3D view of the layered structure of the retina [1]. This allows precise monitoring of diseases like Age-related Macular Degeneration (AMD) and retinopathy [2]. However, because of the coherent detection nature of OCT, the captured images suffer from speckle noise, which hinders the accurate recognition of morphological characteristics extracted from OCT tomograms, such as the thickness of intra-retinal layers, and the shape of structural features like drusen, macular holes, macular edema, nerve fiber atrophy and cysts, which can be used as biomarkers in the clinical investigation and diagnosis of retinal diseases [3,4].

Many methods have been proposed over the years to address speckle noise reduction in OCT images. An extensive literature review on OCT denoising methods has been provided by Kafieh et al. [5], which is summarized in Table 1 (see [5], and references therein, for more details). OCT despeckling methods are categorized as OCT signal denoising in the complex domain (before producing the magnitude of the OCT interference signal) or in the magnitude domain (named hardware/software-based methods in Table 1, respectively). The magnitude domain techniques can be applied either directly to the raw OCT image or to the transformed data (using an appropriate sparse representation). The traditional speckle filtering methods in the raw image domain, such as median and Lee filtering [6–9] and adaptive median and Wiener filtering [10,11], provide inadequate noise reduction under high-level speckle noise, as well as cause loss of meaningful subtle features. In recent years, numerous more advanced methods have been proposed for speckle noise reduction, such as anisotropic diffusion-based techniques [12–14], wavelet-based methods [15], denoising using the dual-tree complex wavelet transform [16] and the curvelet transform [17], sparsity-based denoising [18,19], the complex wavelet-based K-SVD dictionary learning technique (CWDL) [5], deep convolutional neural network based methods [20–22] and a robust principal component analysis (RPCA)-based method [23].


Table 1. Available denoising methods in OCT images [5, a]

In this paper, a novel speckle noise reduction algorithm is developed, which is optimized to reduce the speckle noise of OCT images while preserving edge sharpness. For this purpose, we introduce a new K-SVD dictionary learning strategy in the curvelet transform domain for speckle noise reduction of 3-D OCT images. By taking advantage of this sparse multiscale directional transform, we propose a new plan in dictionary learning by denoising and modifying each noisy curvelet subband with a pre-defined initial 3-D sparse dictionary. The 3-D initial dictionary for each 3-D curvelet subband is independently selected from thresholded coefficients in the same scale, rotation and spatial coordinates of the image. Unlike other works [18,19], this method does not need any high-SNR scans (averaged versions of repeated scans) for dictionary learning.

The paper is organized as follows. Section 2 provides an introduction to 3-D digital curvelet transform (3DCUT). In Section 3 our proposed method, which is a dictionary learning method by using 3DCUT atoms as start dictionary, is described and the results and performance evaluation are presented in Section 4. Finally, the conclusions are given in Section 5.

2. Three-dimensional (3D) digital curvelet transform

The curvelet transform provides a sparse representation of objects in natural images. The basis functions of this joint time–frequency transform have good directional selectivity and are highly anisotropic [53]. The directional selectivity of curvelets and the spatially localized property of each curvelet can be employed to preserve the image features along specific directions in each subband. Following this reasoning, curvelets are appropriate basis elements (atoms) for representing multidimensional objects, which are smooth apart from singularities along smooth manifolds of codimension 1 [54].

Although directly analyzing 3-D data as a volume, taking its 3-D geometrical nature into account, is computationally expensive, it has been shown that 3-D analysis of 3-D data outperforms 2-D slice-by-slice analysis [55]. The parabolic scaling, good directional selectivity, tightness and sparse representation properties of 3-D curvelets provide new opportunities for processing and analysis of large-scale medical data sets. 3-D curvelet elements are plate-like shapes with length about $2^{-\textrm{j}/2}$ in two directions and width about $2^{-\textrm{j}}$ in the orthogonal direction, and are suitable for representing smooth 3D objects with singularities along smooth surfaces, which is well adapted to representing smooth intra-retinal surfaces in 3D-OCT images. The 3-D discrete curvelet transform is a 3-D extension of the 2-D curvelet transform proposed by Candes et al. [54]. In 2-D, the curvelet dictionary is generated by translation ($\textrm{b} \in {\Re ^2}$) and rotation ($\theta $) of the basic element ${\phi _{{\textrm{a}},0,0}}$:

$${\phi _{\textrm{a},\textrm{b},\theta }}({\textrm{x}}) = {\phi _{\textrm{a},0,0}}{\kern 1pt} ({\textrm{R}_\theta }(\textrm{x} - \textrm{b}))$$
where ${\textrm{R}_\theta } = \left( {\begin{array}{cc} {\cos \theta }&{ - \sin \theta }\\ {\sin \theta }&{\cos \theta } \end{array}} \right)$ is the 2×2 rotation matrix with angle $\theta$. ${\phi _{\textrm{a},0,0}} \in {\textrm{L}^2}({\Re ^2})$, whose Fourier transform (${\hat{\phi }_{\textrm{a},0,0}}(\xi ) = {\textrm{U}_{\textrm{a}}}(\xi )$) is a polar wedge, is used for building the curvelet atoms [56] as follows:
$${\hat{\phi }_{\textrm{a},\textrm{b},\theta }}(\xi ) = {\textrm{e}^{ - \textrm{i}\left\langle {\textrm{b},\xi } \right\rangle }}{\hat{\phi }_{\textrm{a},0,0}}({\textrm{R}_\theta }\xi ) = {\textrm{e}^{ - \textrm{i}\left\langle {\textrm{b},\xi } \right\rangle }}{\textrm{U}_{\textrm{a}}}({\textrm{R}_\theta }\xi )$$
By sampling scales ${\textrm{a}_\textrm{j}} = {2^{ - \textrm{j}}},\;\textrm{j} \ge 0$, orientations ${\theta _{\textrm{j},\textrm{l}}} = \frac{{\pi \textrm{l}\,{2^{ - \lceil {{\textrm{j}}/2} \rceil }}}}{2},\;{\textrm{l}} = 0,1,\ldots ,4 \cdot {2^{\lceil {{\textrm{j}}/2} \rceil }} - 1$ (where $\lceil \textrm{x} \rceil$ denotes the smallest integer greater than or equal to x, and $\lfloor \textrm{x} \rfloor$ the largest integer less than or equal to x), and locations $\textrm{b}_\textrm{k}^{\textrm{j},\textrm{l}} = \textrm{b}_{\textrm{k}1,\textrm{k}2}^{\textrm{j},\textrm{l}} = \textrm{R}_{{\theta _{\textrm{j},\textrm{l}}}}^{ - 1}{(\frac{{{\textrm{k}_1}}}{{{2^{\textrm{j}}}}},\frac{{{\textrm{k}_2}}}{{{2^{\textrm{j}/2}}}})^{\textrm{T}}},\;{\textrm{k}_1},{\textrm{k}_2} \in {\rm Z}$, the curvelet coefficients are defined as:
$${\textrm{c}_{\textrm{j},\textrm{k},\textrm{l}}}(\textrm{f}) = \int\limits_{{\textrm{R}^2}} {\hat{\textrm{f}}(\xi )\;} {\textrm{U}_\textrm{j}}({\textrm{R}_{{\theta _{\textrm{j},\;\textrm{l}}}}}\xi ){\textrm{e}^{\textrm{i}\left\langle {\textrm{b}_\textrm{k}^{\textrm{j},\;\textrm{l}},\xi } \right\rangle }}\textrm{d}\xi {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} $$
The a-scaled window ${\textrm{U}_\textrm{a}}$ provides a polar tiling of the frequency plane, while Cartesian arrays are desired for the digital analysis of images. On this basis, the Cartesian window ${\tilde{\textrm{U}}_{\textrm{j}}}(\xi )$ is proposed [56], which isolates the frequencies in the trapezoid $\{{({\xi_1},{\xi_2}):{2^{\textrm{j} - 1}} \le {\xi_1} \le {2^{\textrm{j} + 1}}, - {2^{ - \lfloor{\textrm{j}/2} \rfloor }} \le \frac{{3{\xi_2}}}{{2{\xi_1}}} \le {2^{ - \lfloor{\textrm{j}/2} \rfloor }}} \}$. So, the Cartesian counterpart of the curvelet coefficients can be obtained as:
$${\tilde{\textrm{c}}_{\textrm{j},\textrm{k},\textrm{l}}}(\textrm{f}) = \int\limits_{{\textrm{R}^2}} {\hat{\textrm{f}}(\xi )\;} {\tilde{\textrm{U}}_{\textrm{j}}}(\textrm{S}_{{\theta _{\textrm{j},\;\textrm{l}}}}^{ - 1}\xi ){\textrm{e}^{\textrm{i}\left\langle {\tilde{\textrm{b}}_{\textrm{k}}^{\textrm{j},\;\textrm{l}},\xi } \right\rangle }}\textrm{d}\xi $$
where ${\textrm{k}_\textrm{j}} = {({\textrm{k}_1}\;{2^{ - \textrm{j}}},{\textrm{k}_2}\;{2^{ - \lfloor{{{\textrm{j}} \mathord{\left/ {\vphantom {\textrm{j} 2}} \right.} 2}} \rfloor }})^{\textrm{T}}},{({\textrm{k}_1},{\textrm{k}_2})^{\textrm{T}}} \in \;{{\rm Z}^2}$ and $\tilde{\textrm{b}}_{\textrm{k}}^{\textrm{j},\;\textrm{l}} = \textrm{S}_{{\theta _{{\textrm{j}},\;\textrm{l}}}}^{ - \textrm{T}}({\textrm{k}_1}{2^{ - \textrm{j}}},{\textrm{k}_2}{2^{ - \lfloor{\textrm{j}/2} \rfloor }}) = \textrm{S}_{{\theta _{\textrm{j},\;\textrm{l}}}}^{ - \textrm{T}}{\textrm{k}_\textrm{j}}$ and ${\textrm{S}_\theta } = \left( {\begin{array}{cc} 1&0\\ { - \tan \theta }&1 \end{array}} \right)$. Using common rectangular grid instead of tilted grid, the Cartesian curvelets can be calculated as:
$${\tilde{c}_{\textrm{j},\textrm{k},\textrm{l}}}(\textrm{f}) = \int\limits_{{\textrm{R}^2}} {\hat{\textrm{f}}(\xi )\;} {\tilde{\textrm {U}}_{\textrm{j}}}(\textrm{S}_{{\theta _{\textrm{j},\;\textrm{l}}}}^{ - 1}\xi ){\textrm{e}^{\textrm{i}\left\langle {{\textrm{k}_\textrm{j}},\xi } \right\rangle }}\textrm{d}\xi {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} $$
where ${\textrm{k}_\textrm{j}} = {({\textrm{k}_1}\;{2^{ - \textrm{j}}},{\textrm{k}_2}\;{2^{ - \lfloor{\textrm{j}/2} \rfloor }})^{\textrm{T}}}$, taking values on a rectangular grid, is substituted for $\tilde{\textrm{b}}_{\textrm{k}}^{\textrm{j},\;\textrm{l}} = \textrm{S}_{{\theta _{\textrm{j},\;\textrm{l}}}}^{ - \textrm{T}}{\textrm{k}_\textrm{j}}$ in Eq. (4).

Similar to 2-D, the Cartesian window ${\tilde{\textrm {U}}_{\textrm{j}}}(\xi )$ in 3-D is defined by ${\tilde{\textrm {U}}_{\textrm{j}}}(\xi ) = {\tilde{\textrm {U}}_{\textrm{j}}}({\xi _1},{\xi _2},{\xi _3})$ that isolates the frequencies in the truncated pyramid

$$\left\{ {{{({\xi_1},{\xi_2},{\xi_3})}^{\textrm{T}}}:{2^{\textrm{j} - 1}} \le {\xi_1} \le {2^{\textrm{j} + 1}}, - {2^{ - \lfloor{{\textrm{j}}/2} \rfloor }} \le \frac{{3{\xi_2}}}{{2{\xi_1}}} \le {2^{ - \lfloor{{\textrm{j}}/2} \rfloor }}, - {2^{ - \lfloor{{\textrm{j}}/2} \rfloor }} \le \frac{{3{\xi_3}}}{{2{\xi_1}}} \le {2^{ - \lfloor{\textrm{j}/2} \rfloor }}} \right\}.$$
With the angles ${\theta _{\textrm{j},\textrm{l}}}$ and ${\upsilon _{\textrm{j},\textrm{m}}}$ the 3D shear matrix is defined as ${\textrm{S}_{{\theta _{\textrm{j},\textrm{l}}},{\upsilon _{\textrm{j},\textrm{m}}}}} = \left( {\begin{array}{ccc} 1&0&0\\ { - \tan {\theta_{\textrm{j},\textrm{l}}}}&1&0\\ { - \tan {\upsilon_{\textrm{j},\textrm{m}}}}&0&1 \end{array}} \right)$ where
$$\tan {\theta _{\textrm{j},\textrm{l}}} = {\textrm{l}}{\kern 1pt} {\kern 1pt} {\kern 1pt} {2^{ - \lfloor{\textrm{j}/2} \rfloor }}{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} \textrm{l} ={-} {2^{\lfloor{\textrm{j}/2} \rfloor }} + 1,\ldots ,{2^{\lfloor{\textrm{j}/2} \rfloor }} + 1,$$
$$\tan {\upsilon _{\textrm{j},\textrm{m}}} = \textrm{m}{\kern 1pt} {\kern 1pt} {2^{ - \lfloor{\textrm{j}/2} \rfloor }}{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} \textrm{m} ={-} {2^{\lfloor{\textrm{j}/2} \rfloor }} + 1,\ldots ,{2^{\lfloor{\textrm{j}/2} \rfloor }} + 1,$$
and ${\textrm{k}_\textrm{j}} = {({\textrm{k}_1}{2^{ - \textrm{j}}},{\textrm{k}_2}{2^{ - \lfloor{\textrm{j}/2} \rfloor }},{\textrm{k}_3}{2^{ - \lfloor{\textrm{j}/2} \rfloor }})^{\textrm{T}}}{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} \,\,{\kern 1pt} {\kern 1pt} {\kern 1pt} ,{({\textrm{k}_1},{\textrm{k}_2},{\textrm{k}_3})^{\textrm{T}}} \in {{\rm Z}^3}.$

In 3-D, according to the 6 faces of the unit cube, each Cartesian corona has 6 components, regularly partitioned into wedges of the same volume (Fig. 1).
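To make the discrete sampling concrete, the following short sketch (our own illustration; the function name and the use of `math.atan` to recover the angles are assumptions, with the index ranges taken from the equations above) enumerates the shear parameters $(\textrm{l},\textrm{m})$ of one face at scale j:

```python
import math

def curvelet_shear_params(j):
    """Enumerate the shear parameters (l, m) and the corresponding angles
    at scale j, following tan(theta_{j,l}) = l * 2**(-floor(j/2)) and
    tan(upsilon_{j,m}) = m * 2**(-floor(j/2)) for one face of the cube."""
    half = 2 ** (j // 2)  # 2**floor(j/2)
    params = []
    # l, m = -2^{floor(j/2)}+1, ..., 2^{floor(j/2)}+1 (as in the text)
    for l in range(-half + 1, half + 2):
        for m in range(-half + 1, half + 2):
            theta = math.atan(l / half)     # theta_{j,l}
            upsilon = math.atan(m / half)   # upsilon_{j,m}
            params.append((l, m, theta, upsilon))
    return params

# At scale j one face has (2 * 2**floor(j/2) + 1)**2 shear pairs
print(len(curvelet_shear_params(4)))  # floor(4/2)=2 -> half=4 -> 9*9 = 81
```

The number of orientations thus grows with scale in accordance with the parabolic scaling, roughly doubling for every other scale.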


Fig. 1. 3D rendering of a curvelet atom in (a) space domain, (b) frequency domain, and (c) discrete frequency domain. The shaded area indicates the proposed 3D wedge associated with the curvelet atom.


The curvelet function in the cone $\left\{ {({\xi_1},{\xi_2},{\xi_3}):{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} 0 < {\xi_1}{\kern 1pt} {\kern 1pt} {\kern 1pt} ,{\kern 1pt} {\kern 1pt} - 1 \le \frac{{{\xi_2}}}{{{\xi_1}}} < 1,{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} - 1 \le \frac{{{\xi_3}}}{{{\xi_1}}} < 1} \right\}$ is given by:

$${\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\tilde{\varphi }_{\textrm{j},\textrm{k},\textrm{l},\textrm{m}}} = {\tilde{\varphi }_{\textrm{j},0,0,0}}({\textrm{S}^{\textrm{T}}}_{{\theta _{\textrm{j},\textrm{l}}},{\upsilon _{\textrm{j},\textrm{m}}}}(\textrm{x} - \tilde{\textrm{b}}_{\textrm{k}}^{\textrm{j},\textrm{l},\textrm{m}})){\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} $$
So, its Fourier transform would be:
$${\hat{\tilde{\phi }}_{\textrm{j},\textrm{k},\textrm{l},\textrm{m}}} = {\textrm{e}^{ - \textrm{i}\left\langle {\tilde{\textrm{b}}_{\textrm{k}}^{\textrm{j},\textrm{l},\textrm{m}},\xi } \right\rangle }}{\hat{\tilde{\phi }}_{\textrm{j},0,0,0}}(\textrm{S}_{{\theta _{\textrm{j},\textrm{l}}}{\upsilon _{\textrm{j},\textrm{m}}}}^{ - 1}\xi ) = {\textrm{e}^{ - \textrm{i}\left\langle {\tilde{\textrm{b}}_{\textrm{k}}^{\textrm{j},\textrm{l},\textrm{m}},\xi } \right\rangle }}{\tilde{\textrm {U}}_{\textrm{j}}}(\textrm{S}_{{\theta _{\textrm{j},\textrm{l}}}{\upsilon _{\textrm{j},\textrm{m}}}}^{ - 1}\xi )$$
where ${\hat{\tilde{\varphi }}_{\textrm{j},0,0,0}}(\xi ) = {\tilde{\textrm {U}}_{\textrm{j}}}(\xi )$. Analogously, as in [5], the curvelet coefficients are calculated as follows:
$${\tilde{\textrm{C}}_{\textrm{j},\textrm{k},\textrm{l},\textrm{m}}}(\textrm{f}) = \left\langle {\textrm{f},{{\tilde{\phi }}_{\textrm{j},\textrm{k},\textrm{l},\textrm{m}}}} \right\rangle = \int\limits_{{\textrm{R}^3}} {\hat{\textrm{f}}(\xi )} {\tilde{\textrm {U}}_{\textrm{j}}}(\textrm{S}_{{\theta _{\textrm{j},\textrm{l}}},{\upsilon _{\textrm{j},\textrm{m}}}}^{ - 1}\xi ){\textrm{e}^{\textrm{i}\left\langle {\tilde{\textrm{b}}_{\textrm{k}}^{\textrm{j},\textrm{l},\textrm{m}},\xi } \right\rangle }}\textrm{d}\xi = \int\limits_{{\textrm{R}^3}} {\hat{\textrm{f}}({\textrm{S}_{{\theta _{\textrm{j},\textrm{l}}},{\upsilon _{\textrm{j},\textrm{m}}}}}\xi )} {\tilde{\textrm {U}}_{\textrm{j}}}(\xi ){\textrm{e}^{\textrm{i}\left\langle {{\textrm{k}_\textrm{j}},\xi } \right\rangle }}\textrm{d}\xi $$
In this paper, a new implementation of the 3-D fast curvelet transform (3DFCT) [57,58] is proposed, with a reduced redundancy factor and strong directional selectivity at the finest scale (compared to the wrapping-based implementation provided in the CurveLab toolbox [53,54]).

For this purpose, the curvelet coefficients are computed in two steps:

  • 1. Cartesian coronization is performed to decompose the object into dyadic coronae based on concentric cubes. Each corona is subdivided into trapezoidal regions conforming to the usual parabolic scaling, as shown in Fig. 1.
  • 2. The 3-D coefficients are obtained by applying an inverse 3-D FFT to each wrapped wedge as shown in Fig. 1, which appropriately fits into a 3-D rectangular parallelepiped of dimensions $\sim({2^{\textrm{j}}},{2^{\textrm{j}/2}},{2^{\textrm{j}/2}})$ centered at the origin.
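The two steps above can be sketched as follows. This is a deliberately simplified illustration (a hard spherical indicator window for one dyadic corona, with no angular wedge windows and no per-wedge wrapping), not the authors' 3DFCT implementation:

```python
import numpy as np

def corona_coefficients(volume, j):
    """Minimal sketch of Cartesian coronization + inverse FFT: isolate the
    dyadic frequency corona 2**(j-1) <= |xi| <= 2**(j+1) of a 3-D volume
    with a hard indicator window and return its spatial-domain coefficients.
    (A real 3DFCT uses smooth overlapping windows, per-wedge angular
    localization, and wrapping into parallelepipeds.)"""
    F = np.fft.fftshift(np.fft.fftn(volume))
    n1, n2, n3 = volume.shape
    z, y, x = np.meshgrid(
        np.arange(n1) - n1 // 2,
        np.arange(n2) - n2 // 2,
        np.arange(n3) - n3 // 2,
        indexing="ij",
    )
    r = np.sqrt(x**2 + y**2 + z**2)  # radial frequency
    window = ((r >= 2 ** (j - 1)) & (r <= 2 ** (j + 1))).astype(float)
    return np.fft.ifftn(np.fft.ifftshift(F * window))

vol = np.random.rand(32, 32, 32)
coeffs = corona_coefficients(vol, 3)
print(coeffs.shape)  # (32, 32, 32)
```

Summing such coronae over all scales (with smooth partition-of-unity windows instead of the hard indicator used here) would reconstruct the volume, which is the tightness property mentioned above.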

3. Proposed OCT denoising

OCT denoising can improve image quality, enabling accurate analysis of image information such as interpretation of intra-retinal layers (e.g., accurate detection of these layers depends on edge enhancement through image denoising [59,60]). Unprocessed OCT images, similar to ultrasound images, have a rough appearance due to the presence of speckle [48], which contaminates the image features. Speckle is not pure noise: it may carry important correlated information, which should be separated from the noise. However, much of the speckle noise can be suppressed by applying an appropriate despeckling algorithm, making image features clearer and resulting in more accurate OCT image analysis.

In our proposed method we show the ability of dictionary learning for image denoising in transform domain instead of the image domain. First, 3D curvelet transform is applied to the 3D noisy OCT image, and then in each subband in the 3D curvelet domain, the coefficient matrix is denoised using K-SVD for dictionary learning.

Since the curvelet coefficients give an almost optimal sparse representation of the image, we have only a few large coefficients, which reflect the basic structures of the image and the remaining coefficients are close to zero [54]. This transform maps signals and noise into different regions and the total energy of the signal is concentrated in a small number of curvelet coefficients.

The choice of the start dictionary plays an important role in the performance of the aforesaid model. To prevent the solution of the optimization problem from falling into local minima because of the non-convexity of the cost function, it is important to start from a wisely selected dictionary [18]. Since Spectral-Domain OCT (SDOCT) images are affected by speckle noise, the quality of the dictionary trained from the noisy image may be degraded due to the inefficiency of an initial dictionary extracted from the corrupted image itself, which subsequently results in suboptimal image restoration. Another way to reach a near-optimal solution is to learn the dictionary from noiseless/high-SNR images. Since in practice such an ideal noise-free image may not be available, the thresholded curvelet coefficients of the noisy image in each subband are selected as the initial dictionary for K-SVD based denoising of the 3-D curvelet coefficients in the same subband. Consequently, each initial dictionary has a different size, depending on the size of the coefficient matrix in each scale, rotation and spatial coordinate. Selecting an initial dictionary that is variable in size, instead of the traditional fixed forms, effectively represents different structures in the image. As the scale of the curvelet coefficient matrix increases (i.e., its resolution decreases), the block size also increases, while at high resolutions the block size is reduced, resulting in better representation of particular structures of the image.
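A minimal sketch of this per-subband initial-dictionary construction is given below. The k-sigma threshold $T = k\sigma_1\sigma_2$ follows the strategy used in the proposed method; the robust median-based estimate of $\sigma_2$, the magnitude-based thresholding, and the column-reshaping scheme are our assumptions for illustration:

```python
import numpy as np

def ksigma_threshold(subband, sigma1, k=3.0):
    """Initial denoising of one curvelet subband with T = k*sigma1*sigma2:
    sigma1 is the noise std from a feature-free background region, sigma2
    the noise std within the subband (here a robust median-based estimate,
    our assumption). Coefficients below T in magnitude are zeroed."""
    sigma2 = np.median(np.abs(subband)) / 0.6745  # robust std estimate
    T = k * sigma1 * sigma2
    return np.where(np.abs(subband) >= T, subband, 0.0)

def initial_dictionary(thresholded, atom_len):
    """Reshape the thresholded subband into unit-norm columns, so each
    subband yields a dictionary whose size tracks its coefficient-matrix
    size (a sketch of the variable-size selection described above)."""
    flat = thresholded.ravel()
    n_atoms = flat.size // atom_len
    D = flat[: n_atoms * atom_len].reshape(atom_len, n_atoms).astype(float)
    norms = np.linalg.norm(D, axis=0)
    norms[norms == 0] = 1.0
    return D / norms  # unit-norm atoms
```

Larger (coarser-scale) subbands thus automatically produce larger dictionaries, matching the variable-block-size behavior described above.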

The proposed method for the initial dictionary selection is as follows:

  • Forward DCUT: Produce the curvelet coefficients C(j,l,p) (j is the scale, l is the orientation, and p is the spatial coordinates) by applying 3-D curvelet transform.
  • Initial Denoising: Apply a threshold Tj,l,p to each curvelet coefficient such that:
    $$\textrm{C}(\textrm{j},\textrm{l},\textrm{p}) = \left\{ {\begin{array}{cc} {\textrm{C}(\textrm{j},\textrm{l},\textrm{p})}&{\textrm{if}\;\textrm{C}(\textrm{j},\textrm{l},\textrm{p}) \ge {\textrm{T}_{\textrm{j},\textrm{l},\textrm{p}}}}\\ 0&{\textrm{otherwise}} \end{array}} \right.$$
    To set the threshold Tj,l,p, we use a traditional strategy called k-sigma method [61], in which Tj,l,p=kσ1σ2, where k is a tunable parameter, σ1 is the noise standard deviation obtained from a region that does not have any image features (background region), and σ2 is the standard deviation of noise in the related curvelet subband [61].

    At this stage by taking the inverse 3-D curvelet transform of the thresholded curvelet coefficients, the initial high SNR enhanced image is obtained as shown in Fig. 2. In order to get a substantial increase in SNR without taking inverse 3-D curvelet transform, the initial dictionary is selected as follows:

  • Initial Dictionary: Each thresholded 3D-coefficient matrix in each scale, rotation and spatial coordinates is selected as the initial dictionary of the same subband. Since a global dictionary is not effective for representing different structures, we select the initial dictionary from the thresholded curvelet coefficients in different scales, rotations, and spatial coordinates. After finding the appropriate 3-D initial dictionary, D, for each subband, the curvelet coefficient matrix of the noisy image in the same scale, rotation and spatial coordinates is despeckled as follows:
  • Final K-SVD Denoising: If ${\alpha _{\textrm{i}}}$ represents a sparse vector of coefficients for the ith denoised patch and Ri shows the matrix extracting patch Ci from the curvelet coefficients C at location i, and Y and C indicate the noisy and denoised version of the curvelet coefficients respectively, the following problem should be solved:
    $${\hat{\alpha }_\textrm{i}},\hat{\textrm{C}} = \mathop {\textrm{argmin}}\limits_{{\alpha _\textrm{i}},\textrm{C}} \left( {\lambda ||{\textrm{C} - \textrm{Y}} ||_2^2 + \sum\limits_\textrm{i} {||{\textrm{D}{\alpha _\textrm{i}} - {\textrm{R}_\textrm{i}}\textrm{C}} ||_\textrm{F}^2} + \sum\limits_\textrm{i} {{\mu _\textrm{i}}{{||{{\alpha _\textrm{i}}} ||}_0}} } \right)$$
    where λ and µi are scalar multipliers for minimization of the cost function.
    • - Initialization with setting:
      • • C = Y (noisy curvelet coefficient matrix),
      • • D = Thresholded matrix C.
    • - Repeat J times (J is the number of training iterations):
      • • Sparse coding based on Orthogonal Matching Pursuit (OMP) to compute the representation vectors ${{\alpha }_\textrm{i}}$ for each patch ${\textrm{R}_\textrm{i}}\textrm{C}$.
        $${\forall _\textrm{i}}\quad \mathop {\min }\limits_{{\alpha _\textrm{i}}} {||{{\alpha _\textrm{i}}} ||_0}\quad \textrm{s}.\textrm{t}.\quad ||{{\textrm{R}_\textrm{i}}\textrm{C} - \textrm{D}{\alpha _\textrm{i}}} ||_2^2 \le {(\textrm{g}\sigma )^2}$$

        The parameter σ is the noise level and g is the noise gain set to g = 1.15.

      • • Using the extracted representation vectors in the above stage, the dictionary D is updated (one column at each time) based on the K-SVD algorithm.
      • • Set each coefficient matrix in scale j, rotation l and spatial coordinate p with:
        $$\textrm{C}(\textrm{j,l,p}) = {(\lambda \textrm{I} + \sum\limits_\textrm{i} {\textrm{R}_\textrm{i}^{\textrm{T}}} {\textrm{R}_\textrm{i}})^{ - 1}}(\lambda \textrm{Y} + \sum\limits_\textrm{i} {\textrm{R}_\textrm{i}^{\textrm{T}}} \textrm{D}{\alpha _{\textrm{i}}})$$

        The parameter λ is dependent on the noise level and is set to λ=30/σ.

  • Contrast enhancement: Since the curvelet transform is able to successfully deal with curve singularities and edge discontinuities, it can be employed for edge enhancement in natural images. So, in order to enhance the contrast of intra-retinal layer boundaries, before taking the 3D inverse discrete curvelet transform (3D-IDCUT), the denoised curvelet coefficients are modified for the purpose of edge enhancement [62,63]. To this end, the following function ${\textrm{K}_\textrm{c}}({\textrm{C}_{\textrm{j},\textrm{l},\textrm{p}}})$ is defined to modify the values of the curvelet coefficients:
    $${\textrm{K}_\textrm{c}}(\textrm{A}) = \left\{ {\begin{array}{cc} {2\textrm{A}}&{\textrm{if}\;\textrm{A} < \textrm{N}}\\ {{{\left| {\frac{\textrm{A}}{\textrm{N}}} \right|}^{0.3}}\textrm{A}}&{\textrm{if}\;\textrm{N} \le \textrm{A} < 3\textrm{N}}\\ {{{\left| {\frac{\textrm{M}}{\textrm{A}}} \right|}^{0.5}}\textrm{A}}&{\textrm{if}\;3\textrm{N} \le \textrm{A}} \end{array}} \right.$$

    In this equation N = 0.2M, where M is the maximum value of curvelet coefficients in the relative subband.

  • Converting to image domain: Then, the enhanced image is reconstructed from the denoised and modified curvelet coefficients by applying IDCUT.
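As a concrete illustration of the contrast-enhancement step, the modification curve ${\textrm{K}_\textrm{c}}$ can be applied elementwise to the coefficients of one subband. This is a sketch: applying it to coefficient magnitudes (rather than signed values) is our assumption, while N = 0.2M follows the text:

```python
import numpy as np

def k_c(A, M):
    """Elementwise sketch of the enhancement curve K_c: amplify small
    coefficients, mildly boost mid-range ones, and attenuate the largest.
    A holds coefficient magnitudes of one denoised subband; M is the
    subband's maximum magnitude and N = 0.2*M (per the text)."""
    N = 0.2 * M
    A = np.asarray(A, dtype=float)
    small = 2.0 * A                                      # A < N
    mid = np.abs(A / N) ** 0.3 * A                       # N <= A < 3N
    large = np.abs(M / np.maximum(A, 1e-12)) ** 0.5 * A  # A >= 3N
    return np.where(A < N, small, np.where(A < 3.0 * N, mid, large))
```

Because edges concentrate in mid-range curvelet magnitudes, this nonlinearity sharpens the intra-retinal layer boundaries while damping the very largest responses.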

The outline of the whole denoising procedure is shown in Fig. 3.


Fig. 2. Results of reconstruction of 3D-OCT images from the thresholded curvelet coefficients. (a) Initial image; (b) image extracted by taking the inverse 3D curvelet transform of the thresholded coefficients.



Fig. 3. The outline of the proposed method.


A summary of the proposed algorithm for despeckling and enhancing of OCT images is as follows:

[Algorithm summary image: boe-11-2-586-i002]
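The sparse-coding and dictionary-update steps of the despeckling loop above can be sketched in plain NumPy. This is a minimal illustration of OMP and the K-SVD rank-1 update, not the authors' implementation; patch extraction from the subbands and the final weighted averaging of Eq. (11) are omitted for brevity:

```python
import numpy as np

def omp(D, y, err):
    """Greedy Orthogonal Matching Pursuit (sketch): add atoms until the
    squared residual norm drops below err, as in the sparse-coding step."""
    residual = y.astype(float).copy()
    idx, coef = [], np.zeros(0)
    while residual @ residual > err and len(idx) < D.shape[0]:
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k in idx:  # no progress; stop
            break
        idx.append(k)
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def ksvd_update(D, X, A):
    """One K-SVD dictionary-update sweep: for each atom, refit the atom and
    the coefficients using it via a rank-1 SVD of the restricted residual."""
    for k in range(D.shape[1]):
        users = np.nonzero(A[k])[0]  # signals whose code uses atom k
        if users.size == 0:
            continue
        E = X[:, users] - D @ A[:, users] + np.outer(D[:, k], A[k, users])
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, k] = U[:, 0]            # updated unit-norm atom
        A[k, users] = s[0] * Vt[0]   # updated coefficients
    return D, A
```

In the proposed method, `omp` would be run on each patch ${\textrm{R}_\textrm{i}}\textrm{C}$ with error bound ${(\textrm{g}\sigma )^2}$, and `ksvd_update` would refine the subband's initial dictionary over J training iterations.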

4. Results

We tested our algorithm on 17 publicly available 2D and 17 3D OCT images [19,64] with and without non-neovascular AMD. All volumetric scans, each a square ∼6.6 × 6.6 mm volume with 100 B-scans of size 512×1000 (1000 A-scans per B-scan), were acquired using SDOCT Bioptigen imaging systems (in the 3D implementation of our proposed method, to save computation time and memory, selected B-scans are resized to 256×512). For comparison, a 2D implementation of the proposed method [65] is also applied to the 2D images: we take the 2D curvelet transform of the noisy image, and each coefficient matrix is then despeckled by 2D curvelet-based K-SVD dictionary learning (selecting each thresholded curvelet coefficient matrix of the noisy image in each scale and rotation as the initial dictionary of the same subband). Before taking the inverse curvelet transform, the despeckled coefficients are also modified to further enhance the intra-retinal layer boundaries. Figure 4 shows samples of the 2D initial dictionaries used for the 2D K-SVD based denoising of each curvelet subband. The dictionary size in (a) and (b) is 16×128, used for denoising the curvelet coefficient matrix at scale 6, and in (c) and (d) it is 16×256 for the coefficient matrix at scale 7.

Fig. 4. Samples of the selected 2D initial dictionaries used for denoising the curvelet coefficient matrix at each scale and rotation. The dictionary size in (a) and (b) is 16×128 for scale 6 with two different orientations (l = 5 and 7, respectively), and in (c) and (d) it is 16×256 for scale 7 with two different orientations (l = 5 and 7, respectively).
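The per-subband denoising described above relies on the sparse-coding stage of K-SVD. As a minimal illustration of that stage, the sketch below implements orthogonal matching pursuit (OMP) over a generic dictionary; the helper `omp` and its interface are ours for illustration, not the paper's implementation, and the dictionary columns are assumed unit-norm.

```python
import numpy as np

def omp(D, x, k):
    """Greedy OMP: select k atoms of dictionary D (unit-norm columns)
    that best approximate signal x, refitting the coefficients by
    least squares after each selection."""
    residual = x.astype(float).copy()
    support, coef = [], np.zeros(0)
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    return support, coef
```

In K-SVD-based denoising, each noisy coefficient patch is sparse-coded this way against the current dictionary before the dictionary atoms themselves are updated.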

In order to quantitatively compare the efficiency of different denoising algorithms, the following parameters are computed:

  • Contrast-to-Noise Ratio (CNR), which measures the contrast between a feature of interest and the background noise [66].
  • Texture Preservation (TP), which measures how well texture is retained in a region of interest (ROI) [67] (TP approaches 0 for a severely flattened image and, in the best case, remains close to 1).
  • Equivalent Number of Looks (ENL), which measures smoothness in areas that should appear homogeneous but are contaminated by speckle; strong speckle smoothing leads to a large ENL [67].
  • Edge Preservation (EP), which indicates the degree of edge blurring inside the ROI, based on the methods discussed in [67].
  • Mean-to-Standard-deviation Ratio (MSR), the ratio of the mean to the standard deviation of the foreground regions [68].
  • Structural Similarity Index (SSIM), a perceptual metric that quantifies the image quality degradation caused by processing [69].
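The first three of these measures admit compact closed forms. The sketch below gives one common formulation in plain Python over lists of ROI pixel intensities; exact definitions vary slightly across the cited papers (e.g., CNR is sometimes reported in dB), so treat it as illustrative:

```python
import math
from statistics import mean, pstdev

def cnr(roi, bg):
    # Contrast between a foreground ROI and the background ROI,
    # normalized by their pooled noise level (linear scale, not dB).
    return abs(mean(roi) - mean(bg)) / math.sqrt(pstdev(roi) ** 2 + pstdev(bg) ** 2)

def msr(region):
    # Mean-to-standard-deviation ratio of a foreground region.
    return mean(region) / pstdev(region)

def enl(region):
    # Equivalent number of looks of a nominally homogeneous region;
    # stronger speckle smoothing yields a larger ENL.
    return mean(region) ** 2 / pstdev(region) ** 2
```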
Figure 5 shows the qualitative performance of the proposed method on two slices selected from two retinal 3D-SDOCT volumes.

Fig. 5. Results of reconstruction of 3D-OCT images from the enhanced curvelet coefficients. (a,c) Initial images, and (b,d) images obtained by the proposed method.

The 3D implementation of the proposed method is applied to the 17 3D data sets, and the MSR, CNR, TP, ENL and EP values obtained from ten ROIs in each B-scan (similar to the foreground ellipses #2-11 in Fig. 6) are calculated and averaged. The 2D implementation of the proposed method is also evaluated on the 17 available 2D images and on each slice of the 17 available 3D data sets from AMD patients.

Fig. 6. Selected background and foreground ROIs for evaluation. The image shows 11 selected regions; the bigger ellipse outside the retinal region is used as the background ROI, and the other circles represent the foreground ROIs.

Table 2 shows the mean and std of the CNR, MSR, ENL, TP and EP for different despeckling techniques against our proposed method [19]. The EP and TP values in this table show the ability of the proposed method to preserve edges and maintain image structures; these measures approach 0 when the edges inside the ROI are more blurred and the image structures are more flattened. The MATLAB implementation of the algorithm requires around 31 minutes of computation time to denoise each 3D image (with 100 B-scans of size 256×512) on a desktop with an Intel(R) Core i7 CPU and 4 GB of RAM.


Table 2. Mean and std of the CNR, MSR, ENL, TP and EP for 17 SDOCT retinal images using the Tikhonov [15], K-SVD [38], MSBTD [37], NWSR [68] and 3D CWDL [5] methods and the proposed method.

We also computed the SSIM of the despeckled images produced by our proposed method for the 17 subjects whose averaged (high-SNR) B-scans are available. Figure 7 shows the local structural similarity map calculated between an image denoised by our algorithm and its corresponding averaged (high-SNR) B-scan (large values of local SSIM appear as bright pixels). The average global SSIM over these 17 subjects was 0.71.
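The global SSIM follows directly from its definition in [69]; the sketch below is a minimal single-window version over flattened intensity lists (the local SSIM map of Fig. 7 evaluates the same quantity in a sliding window instead):

```python
def ssim(x, y, L=255):
    # Global SSIM between two equal-length intensity sequences
    # (flattened images), with the standard stabilizing constants
    # C1 = (0.01 L)^2 and C2 = (0.03 L)^2 from Wang et al.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))
```

Identical inputs give SSIM = 1; any structural mismatch pulls the value below 1, which is what the bright/dark pattern of the SSIM map visualizes.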

Fig. 7. Local structural similarity map. (a) Original noisy image, (b) average image, (c) denoised image, (d) SSIM map.

In order to examine the effect of the type of transform, we substituted the curvelet transform with the shearlet transform [70] and denoised each shearlet subband by applying a hard thresholding function. As in the strategy proposed in the previous section, the threshold is chosen with the k-sigma method [61]. As shown in Fig. 8, some ringing artifacts appear in the image reconstructed from the thresholded coefficients. Just as in our proposed method, the thresholded coefficients at each scale and direction are selected as the initial dictionaries in our K-SVD-based denoising method. To this end, each image is decomposed using a four-level shearlet decomposition with shearing parameters l = 3, 3, 4 and 4 at the successive levels. Figure 9 shows the decomposed coefficients in the eight directional subbands (N = 2^l) of the first level.

Fig. 8. Image reconstructed from the thresholded shearlet coefficients.

Fig. 9. The shearlet coefficients of the first level in the detail subbands.

In the available implementation of the shearlet toolbox [71], the size of the coefficient matrix in each subband is the same as that of the original image. Consequently, selecting this matrix as the initial dictionary increases the computational time of K-SVD denoising, and the image reconstructed by applying the inverse shearlet transform to the K-SVD-denoised coefficients suffers from a blurring effect, as shown in Fig. 10.

Fig. 10. The result of the shearlet-based K-SVD denoising method. (a) Initial image, (b) reconstructed image obtained by applying the inverse shearlet transform to the K-SVD-denoised coefficients.

5. Discussion

In this paper, we introduced a new atomic representation for speckle noise removal from 3D OCT images and showed the advantage of the proposed method over other prevalent methods. Taking the curvelet transform, decomposing the image into different scales and applying K-SVD dictionary learning in the transform domain for each subband represents retinal layers with different thicknesses and orientations well. Moreover, selecting an initial dictionary of variable size, instead of the traditional fixed form in K-SVD dictionary learning, effectively represents the different structures in the image. Along with noise suppression, the intra-retinal boundaries are enhanced using Eq. (16). Compared with the recently published 2D and 3D complex wavelet based dictionary learning (CWDL) method [5], which uses an initial fixed-size complex wavelet-based dictionary, the drawback of the method in [5] is its time complexity (it takes more than 5 hours to denoise each 3D OCT image comprising 50 B-scans of size 256×512). A quantitative comparison of our algorithm with the method in [5] indicates similar performance in CNR (7.81 for ours vs. 7.31 in [5]), SNR (14.36 vs. 14.45) and EP (0.96 vs. 0.91). However, the TP of the proposed algorithm is higher (0.7534 vs. 0.41 in [5]), which indicates the absence of unwanted flattening and edge blurring inside the regions of interest. Figure 11 shows the results of our proposed method for sample B-scans corresponding to AMD eyes. As shown in this figure, the visual appearance of the results (e.g., at the intra-retinal layer boundaries) also confirms the superiority of the proposed method over other OCT denoising methods.
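For completeness, the dictionary-update half of K-SVD refits one atom at a time via a rank-1 SVD of the residual restricted to the signals that use that atom. The sketch below is our own illustrative NumPy helper (not the authors' code); `Gamma` holds the sparse codes column-wise for the training signals in `X`:

```python
import numpy as np

def ksvd_update_atom(D, X, Gamma, j):
    """One K-SVD dictionary-update step: refit atom j of D (and its
    coefficients in Gamma) by a rank-1 SVD of the restricted residual."""
    users = np.nonzero(Gamma[j, :])[0]          # signals that use atom j
    if users.size == 0:
        return D, Gamma
    # Residual with atom j's contribution added back in.
    E = X[:, users] - D @ Gamma[:, users] + np.outer(D[:, j], Gamma[j, users])
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, j] = U[:, 0]                           # best rank-1 atom
    Gamma[j, users] = s[0] * Vt[0, :]           # matching coefficients
    return D, Gamma
```

Initializing `D` from thresholded curvelet coefficients, as the proposed method does, simply changes the starting point of this update loop, which is why a less noisy initialization helps the non-convex optimization avoid poor local minima.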

Fig. 11. Visual comparison of different SDOCT retinal image denoising methods. (a) The original noisy image. (b) Denoising results using the K-SVD method [41]. (c) Denoising results using the Tikhonov method [33]. (d) Denoising results using the MSBTD method [19]. (e) Denoising results using the AWBF method [47]. (f) Denoising results using the NWSR algorithm [72]. (g) Results of the proposed method.

The proposed method does not require high-SNR scans for learning a noise-free dictionary, nor any repeated scans or averaged versions of scans as used in other works [18,19]. In addition, Fig. 12 shows the results of applying 3D dictionary learning in the image domain for different values of σ (σ = 5, 15, 25) compared with our proposed method. As shown in this figure, the selection of σ strongly influences the results of dictionary learning in the image domain.

Fig. 12. Visual comparison of dictionary learning for image denoising in the image and transform domains. (a) Original noisy image. (b) Image denoised by our proposed method. (c), (d), (e) Results of K-SVD-based denoising in the image domain with σ = 5, 15 and 25, respectively.

The threshold Tj,l,p in Eq. (12) is determined by the traditional k-sigma method. As shown in Fig. 13, increasing this threshold (k = 2) suppresses more noise but also removes thin edges in the image, whereas decreasing it (k = 0.01) leaves ringing artifacts in the reconstructed image.
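The k-sigma rule itself is a one-liner. A plain-Python sketch of hard thresholding a list of subband coefficients, with σ1 the noise estimate and σ2 the per-subband scale factor as in the caption of Fig. 13:

```python
def hard_threshold(coeffs, k, sigma1, sigma2):
    # Zero every coefficient whose magnitude falls below T = k * sigma1 * sigma2;
    # larger k removes more noise but can also erase thin edges.
    t = k * sigma1 * sigma2
    return [c if abs(c) >= t else 0.0 for c in coeffs]
```

With k = 0.01 the threshold is nearly zero (almost nothing is suppressed, so noise-induced ringing survives), while k = 2 is aggressive enough to clip weak edge coefficients along with the noise.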

Fig. 13. Image reconstructed by thresholding the curvelet coefficients of each subband with Tj,l,p = kσ1σ2. (a) Original noisy image, (b) reconstructed image with k = 0.01 and (c) with k = 2.

By taking the curvelet transform of an image and modifying its coefficients, light areas become lighter and dark areas become darker, which generally increases the contrast of the reconstructed image. As shown in Fig. 12, the sigma parameter influences the image produced by the K-SVD algorithm: larger values cause loss of structural information and over-smoothing of the layers (especially the RPE and choroidal layers) in the image reconstructed from the modified curvelet coefficients. Selecting an initial dictionary of variable size, instead of the traditional fixed form, effectively represents the different structures in the image. As the scale of the curvelet coefficient matrix increases (i.e., the resolution decreases), the block size is increased, whereas at high resolutions the block size is reduced, resulting in a better representation of the particular structures of the image.
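As one hypothetical instance of such a coefficient modification (the paper's actual Eq. (16) is not reproduced here), standard curvelet-domain enhancement functions boost moderate-magnitude coefficients relative to a reference level m while passing strong edge coefficients through unchanged:

```python
def enhance_coefficient(c, m, p=0.5):
    # Illustrative gain curve (not the paper's Eq. (16)): coefficients
    # weaker than the reference magnitude m are amplified by (m/|c|)**p,
    # stronger ones are left unchanged.
    a = abs(c)
    if a == 0.0:
        return 0.0
    return ((m / a) ** p if a < m else 1.0) * c
```

Applied after denoising, a gain of this shape lifts weak layer-boundary responses toward the level of strong edges, which is the contrast-stretching effect described above.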

The decomposition level and the number of scales used in the curvelet transform change the shape of the wedge-like curvelets at each subband, and the size of the coefficient matrix decreases as the decomposition level increases. Fine details in an image are well represented by decreasing the size of the coefficient matrix at each scale and rotation. Reducing the size of the coefficient matrix also speeds up our proposed K-SVD-based denoising algorithm, in which the initial dictionary is selected from the thresholded coefficient matrix at each scale. Figure 14 shows the effect of the decomposition level of the curvelet transform on the enhanced image produced by the proposed method.

Fig. 14. The effect of the decomposition level on the despeckled image reconstructed by our proposed method. (a) Initial noisy image, (b) denoising result using 4-level and (c) 8-level curvelet decomposition in our denoising method.

As shown in Fig. 15, the proposed algorithm (with no changes in its parameters) effectively suppresses the noise in images taken by various OCT imaging systems such as Topcon, Nidek and Cirrus HD-OCT (Carl Zeiss Meditec).

Fig. 15. Visual performance of the proposed method on images taken by various OCT imaging systems. (a), (c) and (e) show the initial noisy images taken by Topcon, Nidek and Zeiss imaging systems, respectively, and (b), (d) and (f) illustrate the results of our proposed method.

6. Conclusion

Speckle noise in OCT tomograms leads to incorrect recognition and interpretation of morphological characteristics such as the width of intra-retinal layers, and of structural features such as drusen, macular holes, macular edema, nerve fiber atrophy and cysts (which can be used as imaging markers in the clinical investigation and diagnosis of retinal diseases). Therefore, in order to suppress noise while preserving and enhancing edges, and to preserve the geometric properties of the main structures of the image by exploiting the regularity of edges, we introduced a new 3D curvelet-based K-SVD algorithm and demonstrated its effectiveness in the speckle reduction of OCT data sets. We discussed the application of dictionary learning together with the curvelet transform for denoising normal and AMD 3D SDOCT retinal images. Since the curvelet transform gives a sparse representation of objects and is computationally efficient in dealing with geometric features such as line and surface singularities, and in order to exploit the multiscale directional properties of this transform, we introduced a new dictionary learning strategy in the curvelet domain. The K-SVD algorithm may fall into local minima during the dictionary update because of the highly non-convex terms in its objective. Therefore, instead of redundant traditional dictionaries, we selected our initial dictionary from the thresholded curvelet coefficients, which are less affected by noise.

We also modified each curvelet coefficient after updating each coefficient matrix at each scale and direction in order to further enhance intra-retinal layer boundaries and abnormal features. Our proposed method does not need any high-SNR, repeated or averaged versions of scans for dictionary learning, since in some cases no such scans are available. Designing 3D dictionaries for 3D OCT images ensures a considerable improvement in the denoising results. In addition, applying the 3D curvelet transform, decomposing the full-size 3D image into small matrices (subbands), and denoising these subbands in the curvelet domain by K-SVD-based dictionary learning significantly reduces the computation time.

We observed that the proposed curvelet-based K-SVD algorithm outperforms other OCT despeckling methods. This method has the potential to increase the accuracy of available segmentation methods, especially for the automatic identification of abnormalities such as cystoid spaces in 3D OCT data, and the resulting images can be used to accurately detect intra-retinal layer boundaries.

Funding

Isfahan University of Medical Sciences (195084, 196076, 393733).

Acknowledgments

The authors would like to thank the Duke University Vision and Image Processing (VIP) Laboratory for providing the freely available image database used to evaluate the proposed method.

Disclosures

The authors declare no conflicts of interest.

References

1. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, "Optical coherence tomography," Science 254, 1178 (1991). [CrossRef]  

2. M. R. Hee, C. A. Puliafito, C. Wong, J. S. Duker, E. Reichel, B. Rutledge, J. S. Schuman, E. A. Swanson, and J. G. Fujimoto, “Quantitative assessment of macular edema with optical coherence tomography,” Arch. Ophthalmol. 113(8), 1019–1029 (1995). [CrossRef]  

3. J. M. Schmitt, S. H. Xiang, and K. M. Yung, “Speckle in optical coherence tomography,” J. Biomed. Opt. 4(1), 95–105 (1999). [CrossRef]  

4. A. G. Podoleanu, “Optical coherence tomography,” Br. J. Radiol. 78(935), 976–988 (2005). [CrossRef]  

5. R. Kafieh, H. Rabbani, and I. Selesnick, “Three dimensional data-driven multi scale atomic representation of optical coherence tomography,” IEEE Trans. Med. Imaging 34(5), 1042–1062 (2015). [CrossRef]  

6. A. George, J. L. Dillensinger, M. Weber, and A. Pechereau, “Optical coherence tomography image processing,” Invest. Ophthalmol. Visual Sci. 41(4), S173 (2000).

7. A. Herzog, K. L. Boyer, and C. Roberts, “Robust extraction of the optic nerve head in optical coherence tomography,” Computer Vision and Mathematical Methods in Medical and Biomedical Image Analysis 3117, 395–407 (2004). [CrossRef]  

8. D. Koozekanani, K. Boyer, and C. Roberts, “Retinal thickness measurements from optical coherence tomography using a Markov boundary model,” IEEE Trans. Med. Imaging 20(9), 900–916 (2001). [CrossRef]  

9. A. Ozcan, A. Bilenca, and A. E. Desjardins, “Speckle reduction in optical coherence tomography images using digital filtering,” J. Opt. Soc. Am. A 24, 1901–1910 (2007). [CrossRef]  

10. T. Loupas, W. N. Mcdicken, and P. L. Allan, “An Adaptive Weighted Median Filter for Speckle Suppression in Medical Ultrasonic Images,” IEEE Trans. Circuits Syst. 36(1), 129–135 (1989). [CrossRef]  

11. J. Portilla, V. Strella, M. J. Wainwright, and E. P. Simoncelli, “Adaptive Wiener denoising using a Gaussian scale mixture model in the wavelet domain,” in Proceedings of the 2001 International Conference on Image Processing, Vol. 2, 37–40 (2001).

12. S. Aja, C. Alberola, and J. Ruiz, “Fuzzy anisotropic diffusion for speckle filtering,” in Proceedings of the 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing, Vol. 2, 1261–1264 (2001).

13. P. Puvanathasan and K. Bizheva, “Interval type-II fuzzy anisotropic diffusion algorithm for speckle noise reduction in optical coherence tomography images,” Opt. Express 17(2), 733–746 (2009). [CrossRef]  

14. Y. Yu and S. T. Acton, “Speckle reducing anisotropic diffusion,” IEEE Trans. on Image Process. 11(11), 1260–1270 (2002). [CrossRef]  

15. F. Luisier, T. Blu, and M. Unser, “A new SURE approach to image denoising: interscale orthonormal wavelet thresholding,” IEEE Trans. on Image Process. 16(3), 593–606 (2007). [CrossRef]  

16. S. Chitchian, M. A. Fiddy, and N. M. Fried, “Denoising during optical coherence tomography of the prostate nerves via wavelet shrinkage using dual-tree complex wavelet transform,” J. Biomed. Opt. 14(1), 014031 (2009). [CrossRef]  

17. Z. Jian, Z. Yu, and L. Yu, “Speckle attenuation in optical coherence tomography by curvelet shrinkage,” Opt. Lett. 34(10), 1516–1518 (2009). [CrossRef]  

18. M. Elad and M. Aharon, “Image denoising via sparse and redundant representations over learned dictionaries,” IEEE Trans. on Image Process. 15(12), 3736–3745 (2006). [CrossRef]  

19. L. Fang, S. Li, Q. Nie, J. A. Izatt, C. A. Toth, and S. Farsiu, “Sparsity based denoising of spectral domain optical coherence tomography images,” Biomed. Opt. Express 3(5), 927–942 (2012). [CrossRef]  

20. A. Abbasi, A. Monadjemi, L. Fang, H. Rabbani, and Y. Yang, “Three-dimensional optical coherence tomography image denoising through multi-input fully-convolutional networks,” Comput. Biol. Med. 108, 1–8 (2019). [CrossRef]  

21. N. Gour and P. Khanna, “Speckle denoising in optical coherence tomography images using residual deep convolutional neural network,” Multimedia Tools and Applications, 1–17 (2019).

22. F. Shi, N. Cai, Y. Gu, D. Hu, Y. Ma, Y. Chen, and X. Chen, “DeSpecNet: a CNN-based method for speckle reduction in retinal optical coherence tomography images,” Phys. Med. Biol. 64(17), 175010 (2019). [CrossRef]  

23. F. Luan and Y. Wu, “Application of RPCA in optical coherence tomography for speckle noise reduction,” Laser Phys. Lett. 10(3), 035603 (2013). [CrossRef]  

24. M. Bashkansky and J. Reintjes, “Statistics and reduction of speckle in optical coherence tomography,” Opt. Lett. 25(8), 545–547 (2000). [CrossRef]

25. M. Hughes, M. Spring, and A. Podoleanu, “Speckle noise reduction in optical coherence tomography of paint layers,” Appl. Opt. 49(1), 99–107 (2010). [CrossRef]  

26. N. Iftimia, B. E. Bouma, and G. J. Tearney, “Speckle reduction in optical coherence tomography by “path length encoded” angular compounding,” J. Biomed. Opt. 8(2), 260–263 (2003). [CrossRef]  

27. L. Ramrath, G. Moreno, H. Mueller, T. Bonin, G. Huettmann, and A. Schweikard, “Towards multi-directional OCT for speckle noise reduction,” Med Image Comput Comput Assist Interv 11(Pt 1), 815–823 (2008).

28. A. E. Desjardins, B. J. Vakoc, G. J. Tearney, and B. E. Bouma, “Speckle reduction in OCT using massively-parallel detection and frequency-domain ranging,” Opt. Express 14(11), 4736–4745 (2006). [CrossRef]

29. M. Pircher, E. Götzinger, R. Leitgel, A. F. Fercher, and C. K. Hitzenberger, “Speckle reduction in optical coherence tomography by frequency compounding,” J. Biomed. Opt. 8(3), 565–569 (2003). [CrossRef]

30. B. Sander, M. Larsen, L. Thrane, J. Hougaard, and T. Jørgensen, “Enhanced optical coherence tomography imaging by multiple scan averaging,” Br. J. Ophthalmol. 89(2), 207–212 (2005). [CrossRef]  

31. E. Gotzinger, M. Pircher, and C. K. Hitzenberger, “High speed spectral domain polarization sensitive optical coherence tomography of the human retina,” Opt. Express 13(25), 10217–10229 (2005). [CrossRef]  

32. R. D. Ferguson, D. X. Hammer, L. A. Paunescu, S. Beaton, and J. S. Schuman, “Tracking optical coherence tomography,” Opt. Lett. 29(18), 2139–2141 (2004). [CrossRef]  

33. G. T. Chong, S. Farsiu, S. F. Freedman, N. Sarin, A. F. Koreishi, J. A. Izatt, and C. A. Toth, “Abnormal foveal morphology in ocular albinism imaged with spectral-domain optical coherence tomography,” Arch. Ophthalmol. 127(1), 37–44 (2009). [CrossRef]

34. M. R. Hee, J. A. Izatt, E. A. Swanson, D. Huang, J. S. Schuman, C. P. Lin, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography of the human retina,” Arch. Ophthalmol. 113(3), 325–332 (1995). [CrossRef]

35. H. Ishikawa, D. M. Stein, G. Wollstein, S. Beaton, J. G. Fujimoto, and J. S. Schuman, “Macular segmentation with optical coherence tomography,” Invest. Ophthalmol. Visual Sci. 46(6), 2012–2017 (2005). [CrossRef]

36. M. Mayer, R. Tornow, R. Bock, and F. Kruse, “Automatic nerve fiber layer segmentation and geometry correction on spectral domain OCT images using fuzzy C-means clustering,” Invest. Ophthalmol. Visual Sci. 49(13), 1880 (2008).

37. M. Baroni, P. Fortunato, and A. La Torre, “Towards quantitative analysis of retinal features in optical coherence tomography,” J. Biomed. Eng. 29(4), 432–441 (2007). [CrossRef]  

38. D. L. Marks, T. S. Ralston, and S. A. Boppart, “Speckle reduction by I-divergence regularization in optical coherence tomography,” J. Opt. Soc. Am. A 22(11), 2366–2371 (2005). [CrossRef]  

39. D. C. Fernandez, et al., “Comparing total macular volume changes measured by optical coherence tomography with retinal lesion volume estimated by active contours,” Invest. Ophthalmol. Visual Sci. 45(13), U61 (2004).

40. M. K. Garvin, M. D. Abràmoff, R. Kardon, S. R. Russell, X. Wu, and M. Sonka, “Intraretinal layer segmentation of macular optical coherence tomography images using optimal 3-D graph search,” IEEE Trans. Med. Imaging 27(10), 1495–1505 (2008). [CrossRef]

41. R. Bernardes, C. Maduro, and P. Serranho, “Improved adaptive complex diffusion despeckling filter,” Opt. Express 18(23), 24048–24059 (2010). [CrossRef]  

42. H. M. Salinas and D. C. Fernandez, “Comparison of PDE-based nonlinear diffusion approaches for image enhancement and denoising in optical coherence tomography,” IEEE Trans. Med. Imaging 26(6), 761–771 (2007). [CrossRef]  

43. A. M. Bagci, M. Shahidi, R. Ansari, M. Blair, N. P. Blair, and R. Zelkha, “Thickness Profiles of Retinal Layers by Optical Coherence Tomography Image Segmentation,” Am. J. Ophthalmol. 146(5), 679–687.e1 (2008). [CrossRef]  

44. J. Rogowska and M. E. Brezinski, “Evaluation of the adaptive speckle suppression filter for coronary optical coherence tomography imaging,” IEEE Trans. Med. Imaging 19(12), 1261–1266 (2000). [CrossRef]  

45. A. Mishra, A. Wong, K. Bizheva, and D. A. Clausi, “Intra-retinal layer segmentation in optical coherence tomography images,” Opt. Express 17(26), 23719–23728 (2009). [CrossRef]  

46. A. Fuller, R. Zawadzki, S. Choi, D. Wiley, J. Werner, and B. Hamann, “Segmentation of three-dimensional retinal image data,” IEEE Trans. Visual. Comput. Graphics 13(6), 1719–1726 (2007). [CrossRef]  

47. N. Anantrasirichai, L. Nicholson, J. Morgan, and I. Erchova, “Adaptive-weighted bilateral filtering for optical coherence tomography,” in 2013 20th IEEE International Conference on Image Processing (ICIP), 1110–1114 (2013).

48. A. Wong, A. Mishra, K. Bizheva, and D. A. Clausi, “General Bayesian estimation for speckle noise reduction in optical coherence tomography retinal imagery,” Opt. Express 18(8), 8338–8352 (2010). [CrossRef]  

49. L. Fang, S. Li, R. P. McNabb, Q. Nie, A. N. Kuo, C. A. Toth, J. A. Izatt, and S. Farsiu, “Fast acquisition and reconstruction of optical coherence tomography images via sparse representation,” IEEE Trans. Med. Imaging 32(11), 2034–2049 (2013). [CrossRef]  

50. A. Baghaie, R. M. D’Souza, and Z. Yu, “Application of Independent Component Analysis Techniques in Speckle Noise Reduction of Retinal OCT Images,” Optik 127(15), 5783–5791 (2016). [CrossRef]  

51. F. Zaki, Y. Wang, and H. Su, “Noise adaptive wavelet thresholding for speckle noise removal in optical coherence tomography,” Biomed. Opt. Express 8(5), 2720–2731 (2017). [CrossRef]  

52. R. Kafieh, H. Rabbani, M. Abramoff, and M. Sonka, “Curvature correction of retinal OCTs using graph-based geometry detection,” Phys. Med. Biol. 58(9), 2925–2938 (2013). [CrossRef]  

53. L. Ying, L. Demanet, and E. Candes, “3D discrete curvelet transform,” Applied and Computational Mathematics 50 (2005).

54. E. Candès, L. Demanet, D. Donoho, and L. Ying, “Fast discrete curvelet transforms,” Multiscale Model. Simul. 5(3), 861–899 (2006). [CrossRef]  

55. H. Rabbani, M. Sonka, and M. D. Abramoff, “Optical Coherence tomography noise reduction using anisotropic local bivariate Gaussian mixture prior in 3D complex wavelet domain,” Int. J. Biomed. Imaging 2013, 1–23 (2013). [CrossRef]  

56. J. Ma and G. Plonka, “A review of curvelets and recent applications,” IEEE Signal Process. Mag. 27(2), 118–133 (2010). [CrossRef]  

57. A. Woiselle, J. L. Starck, and J. Fadili, “3-D Data Denoising and Inpainting with the Low-Redundancy Fast Curvelet Transform,” J Math Imaging Vis 39(2), 121–139 (2011). [CrossRef]  

58. A. Woiselle, J. L. Starck, and J. Fadili, “3D curvelet transforms and astronomical data restoration,” Appl Comput Harmon A. 28(2), 171–188 (2010). [CrossRef]  

59. M. Bonesi, S. G. Proskurin, and I. V. Meglinski, “Imaging of subcutaneous blood vessels and flow velocity profiles by optical coherence tomography,” Laser Phys. 20(4), 891–899 (2010). [CrossRef]  

60. V. Kajić, B. Považay, B. Hermann, B. Hofer, D. Marshall, P. L. Rosin, and W. Drexler, “Robust segmentation of intraretinal layers in the normal human fovea using a novel statistical model based on texture and shape analysis,” Opt. Express 18(14), 14730–14744 (2010). [CrossRef]  

61. J. L. Starck, E. J. Candes, and D. L. Donoho, “The curvelet transform for image denoising,” IEEE Trans. on Image Process. 11(6), 670–684 (2002). [CrossRef]  

62. M. Esmaeili, H. Rabbani, and A. M. Dehnavi, “Automatic optic disk boundary extraction by the use of curvelet transform and deformable variational level set model,” Pattern Recognit. 45(7), 2832–2842 (2012). [CrossRef]  

63. M. Esmaeili, H. Rabbani, A. Mehri, and A. Deghani, “Extraction of Retinal Blood Vessels by Curvelet Transform,” in 2009 16th IEEE International Conference on Image Processing, Vols. 1–6, pp. 3353-+ (2009).

64. S. Farsiu, S. J. Chiu, R. V. O’Connell, F. A. Folgar, E. Yuan, J. A. Izatt, C. A. Toth, and Age-Related Eye Disease Study 2 Ancillary Spectral Domain Optical Coherence Tomography Study Group, “Quantitative Classification of Eyes with and without Intermediate Age-related Macular Degeneration Using Optical Coherence Tomography,” Ophthalmology 121(1), 162–172 (2014). [CrossRef]  

65. M. Esmaeili, A. M. Dehnavi, H. Rabbani, and F. Hajizadeh, “Speckle noise reduction in optical coherence tomography using two-dimensional curvelet-based dictionary learning,” J Med Signals Sens 7(2), 86 (2017). [CrossRef]  

66. P. Bao and L. Zhang, “Noise reduction for magnetic resonance images via adaptive multiscale products thresholding,” IEEE Trans. Med. Imaging 22(9), 1089–1099 (2003). [CrossRef]  

67. A. Pizurica, L. Jovanov, B. Huysmans, V. Zlokolica, P. De Keyser, F. Dhaenens, and W. Philips, “Multiresolution Denoising for Optical Coherence Tomography: A Review and Evaluation,” Curr. Med. Imaging Rev. 4(4), 270–284 (2008). [CrossRef]  

68. G. Cincotti, G. Loi, and M. Pappalardo, “Frequency decomposition and compounding of ultrasound medical images with wavelet packets,” IEEE Trans. Med. Imaging 20(8), 764–771 (2001). [CrossRef]  

69. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004). [CrossRef]  

70. P. S. Negi and D. Labate, “3-D discrete shearlet transform and video processing,” IEEE Trans. on Image Process. 21(6), 2944–2954 (2012). [CrossRef]  

71. G. Kutyniok, W.-Q. Lim, and R. Reisenhofer, “Shearlab 3D: Faithful digital shearlet transforms based on compactly supported shearlets,” arXiv preprint arXiv:1402.5670 (2014).

72. A. Abbasi, A. Monadjemi, L. Fang, and H. Rabbani, “Optical coherence tomography retinal image reconstruction via nonlocal weighted sparse representation,” J. Biomed. Opt. 23(03), 1–11 (2018). [CrossRef]  



Figures (15)

Fig. 1.
Fig. 1. 3D rendering of a curvelet atom in (a) space domain, (b) frequency domain, and (c) discrete frequency domain. The shaded area separates the proposed 3D wedge associated with curvelet atom.
Fig. 2.
Fig. 2. Results of reconstruction of 3D-OCT images from the thresholded curvelet coefficients.(a) initial image, (b) extracted image by taking inverse 3D curvelet transform of thresholded coefficients.
Fig. 3.
Fig. 3. The outline of the proposed method.
Fig. 4.
Fig. 4. Samples of the selected 2D-initial dictionaries used for denoising the curvelet coefficient matrix in each scale and rotation. The dictionary size in (a) and (b) is 16×128 in scale 6 with 2 different orientations (l = 5 and 7 respectively) and in (c),(d) is 16×256 for scale 7 with 2 different orientations (l = 5 and 7 respectively) of the coefficient matrix.
Fig. 5.
Fig. 5. Results of reconstruction of 3D-OCT images from the enhanced curvelet coefficients. (a,c) Initial images, and (b,d) Obtained images by proposed method.
Fig. 6.
Fig. 6. Selected background and foreground ROIs for evaluation. Image shows 11 selected regions, which bigger ellipse outside the retinal region is used as the background ROI and other circles represent the foreground ROIs.
Fig. 7.
Fig. 7. Local structure similarity map,(a) Original noisy image (b) Average image (c) Denoised image (d) SSIM map
Fig. 8.
Fig. 8. Reconstructed image by thresholded shearlet coefficients.
Fig. 9.
Fig. 9. The shearlet coefficients of first level in detailed subbands.
Fig. 10.
Fig. 10. The result of the shearlet based K-SVD denoising method (a) Initial image, (b) Reconstructed image by applying inverse shearlet transform to the K-SVD-based denoised coefficients
Fig. 11.
Fig. 11. Visual comparison of different SDOCT retinal image denoising methods. (a) The original noisy image (b) The denoising results using K-SVD method [41]. (c) The denoising results using the Tikhonov method [33]. (d) The denoising results using MSBTD method [19]. (e) The denoising results using AWBF method [47]. (f) The denoising results using NWSR algorithm [72]. (g) Results of the proposed method.
Fig. 12.
Fig. 12. Visual comparison of dictionary learning for image denoising in the image and transform domain. (a) Original noisy image. (b) The denoised image by our proposed method. (c), (d), (e) are the results of K-SVD-based denoising in the image domain with σ=5,15 and 25, respectively.
Fig. 13.
Fig. 13. Reconstructed image by thresholding curvelet coefficients at each subband by Tj,l,p=kσ1σ2 (a) Original noisy image,(b) Reconstructed image with k = 0.01 and (c) k = 2.
Fig. 14. Effect of the decomposition level on the despeckled image reconstructed by the proposed method. (a) Initial noisy image; denoising results using (b) a 4-level and (c) an 8-level curvelet decomposition in the proposed denoising method.
Fig. 15. Visual performance of the proposed method on images acquired with various OCT imaging systems. (a),(c),(e) Initial noisy images taken by Topcon, Nidek, and Zeiss imaging systems, respectively; (b),(d),(f) corresponding results of the proposed method.

Tables (2)


Table 1. Available denoising methods for OCT images [5]


Table 2. Mean and standard deviation of the CNR, MSR, ENL, TP, and EP values for 17 SDOCT retinal images using the Tikhonov [15], K-SVD [38], MSBTD [37], NWSR [68], and 3D CWDL [5] methods and the proposed method.

Equations (16)


$$\phi_{a,b,\theta}(x) = \phi_{a,0,0}\left(R_\theta(x-b)\right)$$
$$\hat{\phi}_{a,b,\theta}(\xi) = e^{i\langle b,\xi\rangle}\,\hat{\phi}_{a,0,0}(R_\theta\xi) = e^{i\langle b,\xi\rangle}\,U_a(R_\theta\xi)$$
$$c_{j,k,l}(f) = \int_{\mathbb{R}^2} \hat{f}(\xi)\, U_j(R_{\theta_{j,l}}\xi)\, e^{i\langle b_k^{j,l},\,\xi\rangle}\, d\xi$$
$$\tilde{c}_{j,k,l}(f) = \int_{\mathbb{R}^2} \hat{f}(\xi)\, \tilde{U}_j(S_{\theta_{j,l}}^{-1}\xi)\, e^{i\langle \tilde{b}_k^{j,l},\,\xi\rangle}\, d\xi$$
$$\tilde{c}_{j,k,l}(f) = \int_{\mathbb{R}^2} \hat{f}(\xi)\, \tilde{U}_j(S_{\theta_{j,l}}^{-1}\xi)\, e^{i\langle k_j,\,\xi\rangle}\, d\xi$$
$$\left\{ (\xi_1,\xi_2,\xi_3)^T : 2^{j-1}\le \xi_1 \le 2^{j+1},\ -2^{-j/2}\le \frac{3\xi_2}{2\xi_1} \le 2^{-j/2},\ -2^{-j/2}\le \frac{3\xi_3}{2\xi_1} \le 2^{-j/2} \right\}.$$
$$\tan\theta_{j,l} = l\cdot 2^{-j/2}, \qquad l = -2^{j/2}+1,\dots,2^{j/2}+1,$$
$$\tan\upsilon_{j,m} = m\cdot 2^{-j/2}, \qquad m = -2^{j/2}+1,\dots,2^{j/2}+1,$$
$$\tilde{\varphi}_{j,k,l,m} = \tilde{\varphi}_{j,0,0,0}\left(S^{T}_{\theta_{j,l},\upsilon_{j,m}}\left(x-\tilde{b}_k^{j,l,m}\right)\right)$$
$$\hat{\tilde{\phi}}_{j,k,l,m} = e^{i\langle \tilde{b}_k^{j,l,m},\,\xi\rangle}\, \hat{\tilde{\phi}}_{j,0,0,0}(S_{\theta_{j,l},\upsilon_{j,m}}^{-1}\xi) = e^{i\langle \tilde{b}_k^{j,l,m},\,\xi\rangle}\, \tilde{U}_j(S_{\theta_{j,l},\upsilon_{j,m}}^{-1}\xi)$$
$$\tilde{C}_{j,k,l,m}(f) = \langle f, \tilde{\phi}_{j,k,l,m}\rangle = \int_{\mathbb{R}^3} \hat{f}(\xi)\,\tilde{U}_j(S_{\theta_{j,l},\upsilon_{j,m}}^{-1}\xi)\, e^{i\langle \tilde{b}_k^{j,l,m},\,\xi\rangle}\, d\xi = \int_{\mathbb{R}^3} \hat{f}(S_{\theta_{j,l},\upsilon_{j,m}}\xi)\,\tilde{U}_j(\xi)\, e^{i\langle k_j,\,\xi\rangle}\, d\xi$$
$$C(j,l,p) = \begin{cases} C(j,l,p) & \text{if } |C(j,l,p)| \geq T_{j,l,p} \\ 0 & \text{otherwise} \end{cases}$$
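As a minimal sketch of the hard-thresholding rule above (assuming the subband coefficients are held in a NumPy array and `T` is the precomputed subband threshold; the array layout here is illustrative, not the authors' implementation):

```python
import numpy as np

def hard_threshold(C, T):
    """Keep each coefficient only when its magnitude reaches the
    subband threshold T; set it to zero otherwise (hard thresholding)."""
    return np.where(np.abs(C) >= T, C, 0.0)
```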
$$\{\hat{\alpha}_i, \hat{C}\} = \operatorname*{arg\,min}_{\alpha_i,\, C} \left( \lambda\,\|C - Y\|_2^2 + \sum_i \|D\alpha_i - R_i C\|_F^2 + \sum_i \mu_i \|\alpha_i\|_0 \right)$$
$$\forall i:\quad \min_{\alpha_i} \|\alpha_i\|_0 \quad \text{s.t.} \quad \|R_i C - D\alpha_i\|_2^2 \leq (g\sigma)^2$$
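This per-patch sparse-coding step is commonly solved with a greedy pursuit such as orthogonal matching pursuit (OMP). The following NumPy sketch (an assumption for illustration, not the authors' implementation) grows the support of the coefficient vector until the residual energy falls below the error tolerance, which plays the role of $(g\sigma)^2$:

```python
import numpy as np

def omp(D, y, err_tol):
    """Greedy OMP: find a sparse alpha with ||y - D @ alpha||^2 <= err_tol."""
    residual = y.copy()
    support = []
    alpha = np.zeros(D.shape[1])
    while residual @ residual > err_tol and len(support) < D.shape[1]:
        # Pick the dictionary atom most correlated with the current residual.
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k in support:
            break
        support.append(k)
        # Least-squares fit of y on the atoms selected so far.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        alpha[:] = 0.0
        alpha[support] = coef
        residual = y - D @ alpha
    return alpha
```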
$$C(j,l,p) = \left( \lambda I + \sum_i R_i^T R_i \right)^{-1} \left( \lambda Y + \sum_i R_i^T D\alpha_i \right)$$
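Because each $R_i^T R_i$ is diagonal (it simply counts how many extracted patches cover each pixel), the matrix inverse in the closed-form update above reduces to an elementwise division. A sketch under that observation, assuming the denoised patches $D\alpha_i$ and their top-left positions are given (hypothetical helper, not the authors' code):

```python
import numpy as np

def aggregate_patches(Y, patches, positions, patch_size, lam):
    """Closed-form C = (lam*I + sum R_i^T R_i)^{-1} (lam*Y + sum R_i^T D a_i),
    computed pixelwise: accumulate patch values and coverage counts."""
    num = lam * Y.astype(float).copy()      # lam*Y + sum of patch values
    den = lam * np.ones_like(num)           # lam + coverage count per pixel
    for (r, c), p in zip(positions, patches):
        num[r:r + patch_size, c:c + patch_size] += p
        den[r:r + patch_size, c:c + patch_size] += 1.0
    return num / den
```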
$$K_c(A) = \begin{cases} 2A & \text{if } |A| < N \\ \left|\dfrac{A}{N}\right|^{0.3} A & \text{if } N \leq |A| < 3N \\ \left|\dfrac{M}{A}\right|^{0.5} A & \text{if } 3N \leq |A| \end{cases}$$
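Assuming the piecewise gain $K_c$ takes the form $2A$ for $|A| < N$, $|A/N|^{0.3}A$ for $N \le |A| < 3N$, and $|M/A|^{0.5}A$ for $|A| \ge 3N$ (a reconstruction from the garbled rendering, so the exact exponents and band edges should be checked against the original), a vectorized NumPy sketch is:

```python
import numpy as np

def K_c(A, N, M):
    """Piecewise contrast-enhancement gain on curvelet coefficients:
    weak coefficients (|A| < N) are amplified, strong ones attenuated."""
    A = np.asarray(A, dtype=float)
    absA = np.abs(A)
    out = np.empty_like(A)
    lo = absA < N
    mid = (absA >= N) & (absA < 3 * N)
    hi = absA >= 3 * N
    out[lo] = 2.0 * A[lo]
    out[mid] = np.abs(A[mid] / N) ** 0.3 * A[mid]
    out[hi] = np.abs(M / A[hi]) ** 0.5 * A[hi]
    return out
```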