## Abstract

Optical coherence tomography (OCT) is an important imaging technique extensively applied in the medical sciences. However, OCT images often suffer from speckle noise, a kind of multiplicative noise inherent in coherent imaging systems. A speckle noise reduction algorithm with total variation (TV) regularization is proposed to restore speckled OCT images. It constructs a spatially adaptive regularization parameter and utilizes a tuning function. The proposed algorithm realizes sufficient speckle noise reduction and delicate edge preservation. Simulations demonstrate the performance of the proposed algorithm with respect to visual effects, processing time and the image quality metrics of signal-to-noise ratio (SNR), equivalent number of looks (ENL), contrast-to-noise ratio (CNR), relative mean square error (RMSE) and edge preservation factor. Compared with some classical and typical despeckling algorithms, the proposed algorithm exhibits good results in edge preservation, recovery error and time efficiency, and presents better SNR, ENL and CNR. The applicability of the proposed algorithm with regard to OCT in-device preprocessing is discussed in detail. Therefore, it promotes the application of OCT imaging in clinical diagnosis and analysis.

© 2015 Optical Society of America

## 1. Introduction

Optical coherence tomography (OCT) is an important imaging technique that acquires images with light in semitransparent and weakly scattering biological tissues [1,2]. It is capable of penetrating into the scattering media and obtaining micrometer-resolution 3D imaging in a non-contact way. Therefore, it has extensive applications in material and medical imaging [3–5]. OCT takes advantage of low-coherence interferometry, which inevitably introduces speckle noise. Speckle noise is caused by the destructive interference of multiply scattered waves, and it inherently exists in coherent imaging systems. Speckle noise appears grainy, with dark and bright spots on the imaged object, which severely degrades OCT imaging quality for clinical diagnosis and analysis. Physically, it is related to the biological structures of the imaged tissues and the coherence length of the imaging system. Mathematically, speckle noise is essentially a kind of multiplicative noise following a gamma distribution [6–9].

Many approaches have been proposed for despeckling in OCT and other coherent imaging systems, which can be mainly categorized as hardware-based despeckling and software-based despeckling. For hardware-based despeckling, the general idea is to take multiple images and average out the uncorrelated noise. One implementation is “angle averaging” [10–13], which samples the imaged signal at different beam angles to acquire multiple frames with uncorrelated noise. Another common approach is “spatial averaging” [14, 15], which acquires signal samples at different positions. However, these hardware-based approaches inevitably introduce more complicated optical designs and prolong the sampling period.

As a result, many software-based despeckling algorithms have been proposed over the past few decades. Basically, these algorithms focus on adaptive spatial window filters based on local statistics [16–18], wavelet decomposition [19–23] and diffusion methods [24–28]. Local window filters are fast, but they blur bright targets and fail to remove noise spots in dark areas. Wavelet algorithms decompose the image at different scales and suppress speckle noise by adjusting the wavelet coefficients in the respective wavelet sub-bands. In practice, wavelet decomposition demonstrates satisfactory speckle noise reduction. However, when the decomposed sub-bands are recomposed, they may introduce unwanted artefacts that degrade the original OCT image. Diffusion algorithms have better results with respect to artefacts, yet their despeckling performance is usually compromised. The aforementioned algorithms do not exploit enough spatial information about signal and noise, making it difficult to extract signals from the speckled OCT image.

Looking back at the task we face when we receive a speckled OCT image, we need to suppress speckle noise and preserve image edges simultaneously. Total variation (TV) regularization, proposed by Rudin *et al.* in [29], realizes good edge preservation. In [30], Candès *et al.* prove that it can recover exact information from incomplete observations under certain conditions. Many classical denoising algorithms with TV regularization have been proposed [31–33], and a tuning function has been introduced for the smoothing term [34]. It is worth mentioning that all these classical algorithms are based on energy minimization, and they are designed for additive noise with zero mean [35]. For OCT applications, the speckle noise does not have zero mean even if it is projected into the logarithm space to satisfy the “additive” condition.

Therefore, we propose a speckle noise reduction algorithm with TV regularization to deal with this problem in OCT. In our algorithm, we adopt the AA model proposed by Aubert and Aujol in [36] for multiplicative noise and add the TV regularization term. Traditional TV regularization applies a fixed (spatially invariant) regularization parameter, which causes artefacts known as “staircase effects”. To overcome this, we combine the construction model for the TV regularization parameter that we propose in [37] with the tuning function Φ [34] for the regularization term, making it spatially adaptive with sufficient noise suppression and delicate edge preservation. The physical noise model and mathematical derivations of our algorithm are described in detail. In the simulations, we present a simulated speckled image and a real speckled OCT image to exhibit the despeckling effects of the proposed algorithm. Image recovery with some classical and typical algorithms is also provided. Common image quality metrics (SNR, ENL, CNR, RMSE, edge preservation factor *η*) and processing time *T* of all these algorithms are listed for quantitative comparison and evaluation. The applicability of the proposed algorithm with regard to OCT in-device preprocessing is discussed and analyzed, and the parameter selection in our algorithm is also presented in detail.

## 2. Mathematical description of the proposed algorithm

#### 2.1. The noise model and derivations of the proposed algorithm

In OCT imaging, the speckle noise is modeled as [38]

$i(\mathbf{s})=o(\mathbf{s})\cdot n(\mathbf{s}),$ (1)

where *i*, *o* and *n* denote the intensities of the observed image, the original image and the speckle noise, respectively. The term “image” in this paper refers to its intensity in linear magnitude. In Eq. (1), **s** ∈ **S** represents a pixel in the image space **S**, generally represented as (*x*, *y*) in a 2D Cartesian coordinate system.

For the speckle noise *n*, it follows a gamma distribution [38]

${f}_{N}(n)=\frac{{L}^{L}{n}^{L-1}}{\mathrm{\Gamma}(L)}\,\mathrm{exp}(-Ln),\ n\ge 0,$ (2)

where *L* is the equivalent number of incoherently independent images taken to obtain the final image. Here, we take *L* = 1 and Eq. (2) becomes the negative exponential distribution:

${f}_{N}(n)=\mathrm{exp}(-n),\ n\ge 0.$ (3)

If the speckled image is projected into the logarithm space, the probability density function of *n′* = log(*n*) is *f*_{N′}(*n′*) = exp(*n′* − exp(*n′*)) [8]. Furthermore, its expectation E(*n′*) is −*γ* rather than 0, where *γ* = 0.5772... is the Euler constant. For this reason, classical algorithms with TV regularization designed for additive noise with zero mean are invalid [35]. We choose the AA model [36], which is based on maximum *a posteriori* (MAP) conditions. In this way, we aim to minimize the following functional:

$\mathcal{J}(o)={\int }_{\mathbf{S}}\left(\mathrm{log}(o)+\frac{i}{o}\right)d\mathbf{s}+\lambda {\int }_{\mathbf{S}}\Vert \nabla o\Vert \,d\mathbf{s},$ (4)

where ‖ · ‖ denotes the *ℓ*_{2}-norm and ∇ denotes the gradient operator.
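As a quick illustration of this noise model, the following sketch (assuming Python with NumPy; the constant test image and the random seed are arbitrary choices) draws unit-mean *L* = 1 speckle from the negative exponential distribution and applies the multiplicative model of Eq. (1):

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy noise-free image o (linear magnitude, strictly positive).
o = np.full((64, 64), 100.0)

# For L = 1 the speckle n follows the negative exponential distribution
# f(n) = exp(-n), n >= 0, which has unit mean, so E(i) = o.
n = rng.exponential(scale=1.0, size=o.shape)

# Multiplicative speckle model of Eq. (1): i = o * n.
i = o * n
```

Because the noise has unit mean, the speckled image is unbiased in expectation, yet individual pixels fluctuate strongly, which is exactly the grainy appearance described above.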

The TV regularization term is introduced in Eq. (4) to avoid the ill-posed problem, which is inherent in the inversion operation. However, a fixed (spatially invariant) TV regularization parameter *λ* causes staircase effects in the recovered image. Therefore, we apply the construction model and the tuning function Φ to deal with this problem. The functional *𝒥* in Eq. (4) becomes

$\mathcal{J}(o)={\int }_{\mathbf{S}}\left(\mathrm{log}(o)+\frac{i}{o}\right)d\mathbf{s}+{\int }_{\mathbf{S}}\lambda (\mathbf{s})\,\mathrm{\Phi}(\Vert \nabla o\Vert )\,d\mathbf{s}.$ (5)

We differentiate the functional *𝒥* in Eq. (5) and express it discretely as:

$\frac{\partial \mathcal{J}}{\partial o}=\frac{1}{o}-\frac{i}{{o}^{2}}-\mathrm{div}\left(\lambda (\mathbf{s})\,\frac{\mathrm{\Phi}^{\prime}(\Vert \nabla o\Vert )}{\Vert \nabla o\Vert }\,\nabla o\right).$ (6)

Further, we have the iterative equation for the original image *o* as:

${o}^{(k+1)}={o}^{(k)}+\delta \left[\mathrm{div}\left(\lambda (\mathbf{s})\,\frac{\mathrm{\Phi}^{\prime}(\Vert \nabla {o}^{(k)}\Vert )}{\Vert \nabla {o}^{(k)}\Vert }\,\nabla {o}^{(k)}\right)-\frac{1}{{o}^{(k)}}+\frac{i}{{({o}^{(k)})}^{2}}\right].$ (7)

In Eqs. (6) and (7), (*k* + 1) and (*k*) denote the iteration numbers, div(·) refers to the divergence operator, ‖ · ‖ represents the *ℓ*_{2}-norm, and *δ* is the iteration step size.

#### 2.2. Properties of λ(**s**) and Φ

TV regularization is required to be spatially adaptive, i.e., it should preserve delicate edges and suppress speckle noise simultaneously. Therefore, we need to discuss the properties of *λ*(**s**) and Φ in Eq. (5).

First, we apply the construction model for *λ*(**s**) that we propose in [37]. It is explicitly expressed as *λ*(**s**) = *λ*_{0} · *f*(*β* · *EI*(**s**)), where *f* is the constructor function, *EI*(**s**) is the edge indicator, *λ*_{0} is the intensity factor and *β* is the shaping factor. *f*, *EI*(**s**), *λ*_{0} and *β* are the four fundamental elements of the model.

Here in the proposed algorithm, we choose the difference eigenvalue edge indicator *P*(*u*) in [39] and modify it to be the edge indicator *EI*(**s**). The modification lies in the Hessian matrix of the image, where we convolve every element of the Hessian matrix with a Gaussian kernel to make it more robust to image noise. This edge indicator extracts sufficient local information to adapt the regularization parameter *λ*(**s**) in lower gradient areas and higher gradient areas. For the constructor function, we choose *f*(*x*) = 2/*π* · arccot(*x*) for its good performance of high iterative convergence speed and low relative mean square error [37].
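The λ(**s**) construction can be sketched as follows (assuming Python with NumPy; the difference-eigenvalue indicator of [39] is replaced here by a simple 3 × 3-averaged gradient magnitude, so `build_lambda` is an illustrative stand-in for the exact indicator):

```python
import numpy as np

def arccot(x):
    # arccot(x) = pi/2 - arctan(x) for x >= 0.
    return np.pi / 2.0 - np.arctan(x)

def build_lambda(img, lam0=5.0, beta=1.0):
    """lambda(s) = lam0 * f(beta * EI(s)) with f(x) = 2/pi * arccot(x).

    EI is a 3x3-averaged gradient magnitude standing in for the
    (Gaussian-smoothed) difference-eigenvalue edge indicator."""
    gx = np.gradient(img, axis=0)
    gy = np.gradient(img, axis=1)
    ei = np.sqrt(gx**2 + gy**2)
    # 3x3 box smoothing to make the indicator more robust to noise.
    pad = np.pad(ei, 1, mode="edge")
    sm = sum(pad[dx:dx + ei.shape[0], dy:dy + ei.shape[1]] / 9.0
             for dx in range(3) for dy in range(3))
    return lam0 * (2.0 / np.pi) * arccot(beta * sm)
```

Since *f*(0) = 1 and *f* is decreasing, λ(**s**) equals λ₀ in flat regions (strong smoothing) and shrinks near edges (weak smoothing), which is exactly the spatial adaptivity described above.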

Next follows the analysis of the tuning function Φ. The tuning function makes the optimization solution closer to a piecewise constant function, which appears as homogeneous regions bounded by sharp edges in recovered images. This agrees with the despeckling objective in OCT. Obviously, the first derivative of Φ should vanish at *x* = 0 to avoid singularity in Eq. (6). For TV regularization, we would like to increase regularization in lower gradient areas to suppress noise, and to decrease regularization in higher gradient areas to preserve image information. Therefore, for lower gradient areas, the diffusion coefficient should be positive to keep smoothing along the outer-pointing normal direction, while for higher gradient areas, it should be near zero to preserve edges. In summary, if Φ ∈ *𝒞*^{2}({0} ∪ ℝ^{+}), we have

$\mathrm{\Phi}^{\prime}(0)=0,$ (8)

$\underset{x\to {0}^{+}}{\mathrm{lim}}\,\mathrm{\Phi}^{\prime\prime}(x)>0,$ (9)

$\underset{x\to +\infty }{\mathrm{lim}}\,\mathrm{\Phi}^{\prime\prime}(x)=0.$ (10)

With the conditions for Φ″(*x*) in Eqs. (9) and (10), we assume that Φ(*x*) is convex, and further assume it is non-decreasing. Without loss of generality, we set Φ(0) = 0. Given all these conditions, a feasible option for Φ would be $\mathrm{\Phi}(x)=\sqrt{1+{x}^{2}}-1$.
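These properties can be verified numerically for this choice of Φ (a small sketch; the finite-difference step sizes are arbitrary):

```python
import numpy as np

def phi(x):
    """Tuning function Phi(x) = sqrt(1 + x^2) - 1, with Phi(0) = 0."""
    return np.sqrt(1.0 + x**2) - 1.0

# Phi'(0) = 0: Phi is even, so the central difference at 0 vanishes.
h = 1e-5
d1_at_0 = (phi(h) - phi(-h)) / (2.0 * h)

# Phi''(x) = (1 + x^2)^(-3/2): positive near 0, vanishing at infinity.
h2 = 1e-3
def d2(x):
    return (phi(x + h2) - 2.0 * phi(x) + phi(x - h2)) / h2**2
```

Here `d2(0.0)` is close to Φ″(0) = 1 (isotropic smoothing in flat areas), while `d2` at large arguments is close to 0 (edge preservation), matching conditions on Φ″ stated above.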

## 3. Simulations

In the simulations, we present two images to show the despeckling effects of our algorithm. Figure 1(a) is a noise-free standard test image (Lenna), to which we add simulated speckle noise before despeckling recovery. Figure 1(b) is an OCT image of the retina from Peking Union Medical College Hospital (used with permission from Dr. Lve Li, Senior Attending Ophthalmologist). Apart from the whole images, we also select some regions of interest (ROIs), marked with red boxes in Figs. 1(a) and 1(b), to demonstrate the performance of our algorithm. For comparison, we choose some classical and typical despeckling algorithms with high performance to recover the whole images and their respective ROIs: the Lee filter [16], the Frost filter [17], the Kuan filter [18], the 2D adaptive complex diffusion despeckling filter (2D-NCDF) [25], the anisotropic diffusion speckle filter (DPAD) [28] and the versatile wavelet domain noise filtration (Wavelet) [23]. The parameters of the compared algorithms are carefully selected to optimize their performance with regard to visual effects, image quality metrics and processing time.

#### 3.1. Recovery of a simulated noisy image

For the standard test image in Fig. 1(a), we add simulated speckle noise and recover it with different algorithms. The recovery of the whole image and the two enlarged ROIs is presented in Figs. 2–4, respectively. The parameters in our algorithm are set as *λ*_{0} = 5, *β* = 1, *δ* = 0.1, and the despeckling goes through 100 iterations.

In Figs. 2–4, the noisy images are seriously speckled, which simulates the speckle noise in OCT images. All the recovered versions exhibit speckle noise suppression and image enhancement. In Fig. 2, our algorithm demonstrates better overall recovery of the whole image. For the enlarged ROIs, the proposed algorithm preserves more details and better suppresses speckle noise in Fig. 3, and it realizes better despeckling and homogeneous-region preservation in Fig. 4. These results support the applicability of the proposed algorithm in OCT image processing.

In order to evaluate the despeckling results of different algorithms quantitatively, the image quality metrics of signal-to-noise ratio (SNR), equivalent number of looks (ENL), contrast-to-noise ratio (CNR), relative mean square error (RMSE) and the edge preservation factor (*η*) defined in *linear magnitude* images are employed for comparison and analysis. SNR describes the level of the desired OCT signal to the level of the speckle noise. ENL depicts the smoothness of images and reflects the despeckling effects of algorithms, since speckle noise appears grainy and it may add false edges to an originally smooth region. CNR is similar to SNR, and it removes the background mean in its signal term and adds the background noise in its noise term. RMSE represents the error of the recovered image with respect to the original noise-free image. The edge preservation factor *η* shows the edge preservation effects with respect to the original image. These metrics are commonly used [19, 37, 40, 41], and they are defined as follows:

where *μ*_{R} and *σ*_{R} are the mean and standard deviation of the ROI, respectively, and *μ*_{B} and *σ*_{B} are the mean and standard deviation of the background region, respectively. In Eqs. (14) and (15), *i*, *ô* and *o* represent the speckled image, the recovered image and the original noise-free image, respectively. ‖ · ‖ and ∇^{2} denote the *ℓ*_{2}-norm and the Laplace operator, respectively. The means are all averaged in a 3 × 3 neighborhood. The definitions in Eqs. (11)–(15) apply to the whole image as well as to a single ROI. Besides, we also provide the processing time *T* of the different despeckling algorithms to compare their computational complexity. The running environment is an Intel Core i3, 3.4 GHz CPU with 8 GB RAM. We optimize the time efficiency for the Lee, Frost and Kuan filters, and we implement the authors’ original codes for the 2D-NCDF, DPAD and Wavelet algorithms.
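For illustration, such ROI-based metrics can be computed as below (a sketch assuming Python with NumPy; these are common literature forms of SNR, ENL, CNR and RMSE and may differ in detail from the exact Eqs. (11)–(15)):

```python
import numpy as np

def snr_db(roi):
    """ROI signal-to-noise ratio in dB (one common definition)."""
    return 10.0 * np.log10(roi.mean() ** 2 / roi.var())

def enl(roi):
    """Equivalent number of looks: mu^2 / sigma^2 of a homogeneous ROI."""
    return roi.mean() ** 2 / roi.var()

def cnr_db(roi, bg):
    """Contrast-to-noise ratio of an ROI against a background region."""
    return 10.0 * np.log10(abs(roi.mean() - bg.mean())
                           / np.sqrt(roi.var() + bg.var()))

def rmse(o_hat, o):
    """Relative mean square error of the recovery o_hat w.r.t. clean o."""
    return np.sum((o_hat - o) ** 2) / np.sum(o ** 2)
```

Note that, as in the paper, all of these are evaluated on linear-magnitude intensities, not on log-compressed display images.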

The SNR, ENL, RMSE, *η* and *T* data for the images in Figs. 2–4 are listed in Tables 1–3, respectively. The best result in each column is marked in bold.

From the metrics presented in Tables 1–3, we can see that our algorithm achieves the best SNR and ENL results with delicate edge preservation, low recovery error and high time efficiency, while 2D-NCDF provides the best *η* results.

#### 3.2. Recovery of a real OCT image

Figure 1(b) is a clinical SD-OCT image (raw data) of the retina, which is heavily speckled (with negative SNR). It was taken by a Heidelberg Engineering SPECTRALIS® OCT device. The laser wavelength is 820 nm, and the superluminescent diode wavelength is 870 nm. The scan speed is 40,000 A-scans/s. The axial resolution is 3.9 *μ*m, and the transverse resolution is 14 *μ*m. The maximum FOV is 30° × 30°. The whole image and the two enlarged ROIs are presented for performance comparison of the different algorithms in Figs. 5–7, respectively. The blue boxes in Fig. 1(b) indicate the background noise regions corresponding to the respective ROIs. The parameters in our algorithm are set as *λ*_{0} = 4.8, *β* = 1, *δ* = 0.1, and the despeckling goes through 100 iterations.

In Figs. 5–7, the inter-layer edges and layer presence of the retina often carry physiological information (e.g., vessel diameter). For all recovered versions in these figures, the speckle noise is reduced and the edges are more prominent in comparison with the original version. The proposed algorithm recovers the whole image better (Fig. 5), and realizes delicate edge preservation and sufficient speckle noise suppression in the enlarged ROIs (Figs. 6 and 7).

In order to measure the recovery results quantitatively, the SNR, ENL, CNR, *η* and processing time *T* data for the images in Figs. 5–7 are listed in Tables 4–6, respectively. The best result in each column is marked in bold.

From the metrics presented in Tables 4–6, we can see that our algorithm achieves the overall best SNR, ENL and CNR with high time efficiency, while 2D-NCDF provides the best *η* results. The smaller *η* of our algorithm may result from the despeckling process itself, which reduces the speckle noise and thereby smooths the corresponding noisy edges. From Figs. 2–7 and Tables 1–6, we can see that our algorithm delivers better SNR, ENL and CNR metrics with low recovery error and high time efficiency, while showing good visual effects of edge preservation and speckle noise reduction in both simulated and real speckled images. In this way, it promotes the application of OCT imaging in clinical diagnosis and analysis.

## 4. Discussions

#### 4.1. Applicability of the proposed algorithm with regard to OCT in-device preprocessing

In OCT, there is in-device preprocessing of the speckled OCT raw data, as shown in the flow diagram in Fig. 8. As a result, we provide additional discussion on the applicability of the proposed algorithm with regard to OCT in-device preprocessing.

The proposed algorithm is based on the multiplicative noise model described in Eq. (1), and the variables *i*, *o* and *n* as well as the image quality metrics in Eqs. (11)–(15) are all defined on the linear magnitude. Therefore, the linearity of the OCT in-device preprocessing is the essential condition for the applicability of the proposed algorithm. The in-device preprocessing may include dynamic range compression, range clamping and even some other denoising; in these situations, the multiplicative noise model no longer holds. If the OCT in-device preprocessing is known and invertible (e.g., the logarithmic transformation *i′* = *c* · log(1 + *i*)), we can invert the preprocessing to recover the multiplicative noise model, and the proposed algorithm is applicable. Otherwise, if the preprocessing is either unknown or not invertible (e.g., the 2D-DCT in JPEG compression), our algorithm does not apply. On other occasions, the linearity of the preprocessed image pixels still holds, i.e., the preprocessed data *i′* remains linear or quasi-linear (e.g., spatial averaging on very few singular spots), and the proposed algorithm can perform despeckling. In brief, our algorithm is applicable to nonlinear preprocessing that is known and invertible, as well as to all linear (or quasi-linear) preprocessing. To ensure its applicability, it is highly recommended to work on the speckled OCT raw data *i*, especially when the OCT in-device preprocessing module is a “black box”.
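As a concrete illustration of the invertible case, the logarithmic transformation above can be undone exactly before despeckling and reapplied afterwards (a sketch; the compression constant `c` and the test image are arbitrary assumptions):

```python
import numpy as np

c = 20.0  # compression constant (device-specific; arbitrary here)

def compress(i):
    """In-device dynamic range compression: i' = c * log(1 + i)."""
    return c * np.log1p(i)

def decompress(i_prime):
    """Exact inverse, recovering the linear-magnitude image i."""
    return np.expm1(i_prime / c)

i = np.random.default_rng(1).exponential(100.0, size=(8, 8))
roundtrip = decompress(compress(i))
print(np.allclose(roundtrip, i))  # True
```

After `decompress`, the data are back on the linear magnitude where the multiplicative model of Eq. (1) holds, so despeckling can proceed; `compress` can then restore the device's display scaling.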

Averaging is also a main preprocessing approach for despeckling in OCT devices. It usually requires the acquisition of multiple images, which extends the total acquisition time to multiple acquisition periods. In theory, the longer time itself is not a major disadvantage. In clinical practice, however, it is difficult for some patients to hold still for such a long time. On these occasions, our algorithm can substitute for averaging in OCT devices, since it despeckles unaveraged OCT raw data and achieves speckle noise reduction in a time-efficient way. Therefore, it promotes the application of OCT imaging in clinical diagnosis and analysis.

#### 4.2. Parameter selection in the proposed algorithm

It is noticed that the compared algorithms have one or two parameters (e.g., window size, diffusion time, number of steps, step size). In our algorithm, however, we have four parameters to adjust, i.e., the intensity factor *λ*_{0}, the shaping factor *β*, the iteration step size *δ* and the number of iterations *k*. Therefore, we discuss the parameter selection to deal with this seemingly complicated issue in our algorithm.

The four parameters originate from two different sources, so they can be categorized into two pairs. The pair of *λ*_{0} and *β* stems from the construction model that we propose in [37], while the pair of *δ* and *k* comes from discretizing the derivative of the functional *𝒥* in Eq. (6). In this way, we discuss the parameter selection with respect to these two pairs, respectively.

The intensity factor *λ*_{0} essentially determines the intensity of the TV regularization: a larger *λ*_{0} leads to more speckle noise suppression, while a smaller *λ*_{0} preserves the image edges better. The shaping factor *β* is also related to the TV regularization intensity, and it basically controls the sensitivity to image region activity (gradient): a larger *β* results in more edge preservation, while a smaller *β* suppresses speckle noise better. The parameters *λ*_{0} and *β* counteract each other in image edge preservation and speckle noise suppression, so the choices of the two parameters need to be well balanced. The choices of *λ*_{0} and *β* in Section 3 provide a practical parameter initialization; based on this, one can increase or decrease the parameters depending on whether more speckle noise suppression or more edge preservation is desired. The adjustment is convenient.

For the choices of *δ* and *k*, practical implementation reveals that iterations with a smaller *δ* (*δ* ∼ 0.01) yield better despeckling results and image quality. However, this may require more iterations (*k* > 1000) and result in longer processing time. On the other hand, iterations with too large a *δ* (*δ* ∼ 1) lead to poor convergence and introduce visible image blur. As a result of the negative correlation of *δ* and *k*, if one wants to achieve both satisfactory image despeckling and high time efficiency, the practical choice of a larger *δ* (*δ* ∼ 0.1) with a smaller number of iterations *k* (*k* ∼ 100) is recommended. The aforementioned simulations demonstrate the feasible ranges of the four parameters, and other image despeckling experiments that we have conducted also verify the validity of the parameter selection.
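The negative correlation of *δ* and *k* suggests a simple rule of thumb (a heuristic sketch, not part of the algorithm; the baseline values are those used for the simulated image in Section 3):

```python
def suggest_iterations(delta, base_delta=0.1, base_iters=100):
    """Heuristic only: scale the iteration count inversely with the
    step size, reflecting the negative correlation of delta and k
    noted above (e.g. delta ~ 0.01 implies k on the order of 1000)."""
    return int(round(base_iters * base_delta / delta))
```

For example, `suggest_iterations(0.1)` returns the baseline 100 iterations, while `suggest_iterations(0.01)` returns 1000, consistent with the observed trade-off between image quality and processing time.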

## 5. Conclusion

In conclusion, this paper presents a speckle noise reduction algorithm with total variation regularization in optical coherence tomography. It makes use of the regularization parameter construction model and the tuning function. The speckle noise model and the derivation of the proposed algorithm are described in detail. Image recovery with the proposed algorithm exhibits its visual effects of speckle noise suppression and image edge preservation. In comparison with some classical and typical despeckling algorithms, the proposed algorithm achieves more agreeable visual effects and better SNR, ENL and CNR image quality metrics with high time efficiency, low recovery error and a reasonable *η*. The applicability of the proposed algorithm with regard to OCT in-device preprocessing is analyzed, and the strategy of its parameter selection is also discussed in detail. In this way, the proposed algorithm can effectively and efficiently reduce speckle noise and preserve edges, which promotes the application of OCT imaging in clinical diagnosis and analysis.

## Acknowledgments

This work was supported by the National Basic Research Program of China (Grant No. 2013CB329203), the National High Technology Research and Development Program of China (Grant No. 2013AA013601) and the SGCC Tech Program (Grant No. SG-HAZZ00FCJS1500238). We are very grateful to the authors of Refs. [23, 25, 28] for generously sharing their implementation codes. We also express our gratitude to Dr. Lve Li and Dr. Xiaochen Yu from Peking Union Medical College Hospital for the OCT image. G. Gong would like to thank Chancellor’s Professor A. Ozcan from UCLA for his introduction to the amazing world of optics and its fascinating applications, and Prof. B. Lujan from UC Berkeley and OCTMD.org for the helpful discussion and examples of preprocessing in some real OCT devices. The authors are more than grateful to the handling editor and the reviewers for their enlightening comments, valuable suggestions, helpful advice and careful corrections to this paper.

## References and links

**1. **D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science **254**(5035), 1178–1181 (1991).

**2. **A. F. Fercher, C. K. Hitzenberger, G. Kamp, and S. Y. El-Zaiat, “Measurement of intraocular distances by backscattering spectral interferometry,” Opt. Commun. **117**(1), 43–48 (1995).

**3. **P. Targowski, M. Iwanicka, L. Tymińska-Widmer, M. Sylwestrzak, and E. A. Kwiatkowska, “Structural examination of easel paintings with optical coherence tomography,” Acc. Chem. Res. **43**(6), 826–836 (2010).

**4. **M. Wojtkowski, “High-speed optical coherence tomography: basics and applications,” Appl. Opt. **49**(16), D30–D61 (2010).

**5. **A. Ozcan, M. J. F. Digonnet, and G. S. Kino, “Minimum-phase-function-based processing in frequency-domain optical coherence tomography systems,” J. Opt. Soc. Am. A **23**(7), 1669–1677 (2006).

**6. **J. W. Goodman, “Statistical properties of laser speckle patterns,” in *Laser Speckle and Related Phenomena*, J. C. Dainty, ed. (Springer, 1975), pp. 9–75.

**7. **F. T. Ulaby, F. Kouyate, B. Brisco, and T. H. L. Williams, “Textural information in SAR images,” IEEE Trans. Geosci. Remote Sensing **24**(2), 235–245 (1986).

**8. **G. Franceschetti, V. Pascazio, and G. Schirinzi, “Iterative homomorphic technique for speckle reduction in synthetic-aperture radar imaging,” J. Opt. Soc. Am. A **12**(4), 686–694 (1995).

**9. **A. A. Lindenmaier, L. Conroy, G. Farhat, R. S. DaCosta, C. Flueraru, and I. A. Vitkin, “Texture analysis of optical coherence tomography speckle for characterizing biological tissues in vivo,” Opt. Lett. **38**(8), 1280–1282 (2013).

**10. **M. Bashkansky and J. Reintjes, “Statistics and reduction of speckle in optical coherence tomography,” Opt. Lett. **25**(8), 545–547 (2000).

**11. **J. M. Schmitt, “Array detection for speckle reduction in optical coherence microscopy,” Phys. Med. Biol. **42**(7), 1427–1439 (1997).

**12. **A. E. Desjardins, B. J. Vakoc, W. Y. Oh, S. M. Motaghiannezam, G. J. Tearney, and B. E. Bouma, “Angle-resolved optical coherence tomography with sequential angular selectivity for speckle reduction,” Opt. Express **15**(10), 6200–6209 (2007).

**13. **N. Iftimia, B. E. Bouma, and G. J. Tearney, “Speckle reduction in optical coherence tomography by ‘path length encoded’ angular compounding,” J. Biomed. Opt. **8**(2), 260–263 (2003).

**14. **D. P. Popescu, M. D. Hewko, and M. G. Sowa, “Speckle noise attenuation in optical coherence tomography by compounding images acquired at different positions of the sample,” Opt. Commun. **269**(1), 247–251 (2007).

**15. **T. M. Jørgensen, L. Thrane, M. Mogensen, F. Pedersen, and P. E. Andersen, “Speckle reduction in optical coherence tomography images of human skin by a spatial diversity method,” in *European Conference on Biomedical Optics* (Optical Society of America, 2007), p. 66270P.

**16. **J. S. Lee, “Speckle suppression and analysis for synthetic aperture radar images,” Opt. Eng. **25**(5), 636–643 (1986).

**17. **V. S. Frost, J. A. Stiles, K. S. Shanmugan, and J. Holtzman, “A model for radar images and its application to adaptive digital filtering of multiplicative noise,” IEEE Trans. Pattern Anal. Mach. Intell. **4**(2), 157–166 (1982).

**18. **D. T. Kuan, A. Sawchuk, T. C. Strand, and P. Chavel, “Adaptive restoration of images with speckle,” IEEE Trans. Acoust., Speech, Signal Process. **35**(3), 373–383 (1987).

**19. **D. C. Adler, T. H. Ko, and J. G. Fujimoto, “Speckle reduction in optical coherence tomography images by use of a spatially adaptive wavelet filter,” Opt. Lett. **29**(24), 2878–2880 (2004).

**20. **M. A. Mayer, A. Borsdorf, M. Wagner, J. Hornegger, C. Y. Mardin, and R. P. Tornow, “Wavelet denoising of multiframe optical coherence tomography data,” Biomed. Opt. Express **3**(3), 572–589 (2012).

**21. **M. R. N. Avanaki, P. P. Laissue, T. J. Eom, A. G. Podoleanu, and A. Hojjatoleslami, “Speckle reduction using an artificial neural network algorithm,” Appl. Opt. **52**(21), 5050–5057 (2013).

**22. **M. Gargesha, M. W. Jenkins, A. M. Rollins, and D. L. Wilson, “Denoising and 4D visualization of OCT images,” Opt. Express **16**(16), 12313–12333 (2008).

**23. **A. Pižurica, W. Philips, I. Lemahieu, and M. Acheroy, “A versatile wavelet domain noise filtration technique for medical imaging,” IEEE Trans. Med. Imag. **22**(3), 323–331 (2003).

**24. **D. C. Fernández, H. M. Salinas, and C. A. Puliafito, “Automated detection of retinal layer structures on optical coherence tomography images,” Opt. Express **13**(25), 10200–10216 (2005).

**25. **R. Bernardes, C. Maduro, P. Serranho, A. Araújo, S. Barbeiro, and J. Cunha-Vaz, “Improved adaptive complex diffusion despeckling filter,” Opt. Express **18**(23), 24048–24059 (2010).

**26. **B. Heise, E. Leiss-Holzinger, M. Pircher, E. Götzinger, B. Baumann, C. K. Hitzenberger, and D. Stifter, “Advanced image processing of retardation scans for polarization-sensitive optical coherence tomography,” in *European Conference on Biomedical Optics* (Optical Society of America, 2009), p. 73720S.

**27. **Y. Yu and S. T. Acton, “Speckle reducing anisotropic diffusion,” IEEE Trans. Image Process. **11**(11), 1260–1270 (2002).

**28. **S. Aja-Fernández and C. Alberola-López, “On the estimation of the coefficient of variation for anisotropic diffusion speckle filtering,” IEEE Trans. Image Process. **15**(9), 2694–2701 (2006).

**29. **L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D **60**(1), 259–268 (1992).

**30. **E. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inform. Theory **52**(2), 489–509 (2006).

**31. **D. L. Donoho and I. M. Johnstone, “Adapting to unknown smoothness via wavelet shrinkage,” J. Amer. Statist. Assoc. **90**(432), 1200–1224 (1995).

**32. **S. Geman and D. Geman, “Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images,” IEEE Trans. Pattern Anal. Mach. Intell. **6**(6), 721–741 (1984).

**33. **Y. Meyer, *Oscillating Patterns in Image Processing and Nonlinear Evolution Equations: The Fifteenth Dean Jacqueline B. Lewis Memorial Lectures* (American Mathematical Society, 2001).

**34. **G. Aubert and P. Kornprobst, *Mathematical Problems in Image Processing: Partial Differential Equations and the Calculus of Variations* (Springer Science & Business Media, 2006).

**35. **L. Rudin, P. L. Lions, and S. Osher, “Multiplicative denoising and deblurring: theory and algorithms,” in *Geometric Level Set Methods in Imaging, Vision, and Graphics*, S. Osher and N. Paragios, eds. (Springer, 2003), pp. 103–119.

**36. **G. Aubert and J. F. Aujol, “A variational approach to removing multiplicative noise,” SIAM J. Appl. Math. **68**(4), 925–946 (2008).

**37. **G. Gong, H. Zhang, and M. Yao, “Construction model for total variation regularization parameter,” Opt. Express **22**(9), 10500–10508 (2014).

**38. **J. W. Goodman, *Statistical Optics*, 2nd ed. (John Wiley & Sons, 2015).

**39. **H. Tian, H. Cai, J. Lai, and X. Xu, “Effective image noise removal based on difference eigenvalue,” in *Proceedings of the 2011 IEEE International Conference on Image Processing (ICIP)* (Brussels, 2011).

**40. **A. Ozcan, A. Bilenca, A. E. Desjardins, B. E. Bouma, and G. J. Tearney, “Speckle reduction in optical coherence tomography images using digital filtering,” J. Opt. Soc. Am. A **24**(7), 1901–1910 (2007).

**41. **F. Sattar, L. Floreby, G. Salomonsson, and B. Lovstrom, “Image enhancement based on a nonlinear multiscale method,” IEEE Trans. Image Process. **6**(6), 888–895 (1997).