Abstract

High image quality is of great importance for precise diagnosis and treatment of eye disease in the clinic. A human retina optical coherence tomography angiography (OCTA) image can be extracted from multiple OCT B-scans to visualize the distribution of blood vessels. However, OCTA suffers from degraded image quality because the inherent Gaussian noise of the OCT system is carried into the extracted blood vessel signal. Reducing the noise in OCTA images is therefore conducive to the evaluation of abnormal and normal blood vessels in the human eye. To assist precise diagnosis and treatment in the clinic by reducing Gaussian noise in OCTA images, an OCTA denoising method is proposed based on the dual-tree complex wavelet transform (DTCWT) and a bivariate shrinkage Bayesian framework. First, OCTA images are extracted from the raw data with the optical microangiography (OMAG) algorithm. Then, each image is decomposed into the wavelet domain using the DTCWT. Signal and noise in the different wavelet scale layers are separated on the basis of the Bayesian posterior probability. Finally, the inverse wavelet transform is employed to reconstruct the denoised image. Through this noise reduction process, the PSNR and CNR of the OCTA image are increased by 49.15% and 47.91%, respectively. The results show that the wavelet transform can effectively separate the blood flow signal from noise in OCTA data, providing an effective image processing method for clinical evaluations that require high-quality OCTA images.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optical coherence tomography angiography (OCTA) [1,2] is a non-invasive technology for imaging the morphology and structure of blood vessels at the capillary level. It has been extensively adopted in various fields such as retina, skin and brain imaging. OCTA works by characterizing the relative motion of red blood cells against the surrounding tissue as an endogenous marker of blood flow. Blood flow information is extracted from the phase and amplitude characteristics of the Optical Coherence Tomography (OCT) [3] signal. Several methods can be used to extract the blood flow signal, such as the phase-based Phase Variance method [4], the amplitude-based SSADA method [5], and the OMAG method [6], which uses the complex signal of both phase and amplitude. However, these methods are inevitably influenced by the inherent Gaussian noise and speckle of the imaging system. Such undesired signals reduce the accuracy of blood flow signal extraction and decrease the contrast of the image. At the same time, the quality of OCTA images is of great significance for quantitatively assessing blood vessel density or calculating burned skin area [7-10]. Consequently, reducing the noise in OCTA imaging and improving the image quality will make the technique more applicable to actual pathological judgment.

In the conventional OCTA algorithm, the blood flow signal is obtained by extracting the variations of signals among multiple structural images. In this process, randomly generated noise is retained along with the changing blood flow signals, although most noise signals are weaker than the blood flow signals. The goal of improving OCTA image contrast is to eliminate this noise while keeping the intensity of the blood flow signals unchanged. M. Chlebiej et al. [11] selected an elliptical kernel as a median filter template to filter multiple B-frames in OCTA images, significantly improving the sharpness, contrast and vascular connectivity of the OCTA image. However, median filtering weakens some high-intensity signals and expands some low-intensity noise into vascular signals. Based on a Bayes classifier, R. Reif et al. [12] performed posterior selection on the blood flow signal of B-frame images extracted by the Optical Microangiography (OMAG) algorithm, with the aim of eliminating noise and reducing the bright lines caused by motion jitter. Nevertheless, this method requires expert calibration of the blood flow signals in the first few B-frames and training of the classifier, which imposes demanding requirements on actual application scenarios. Recently, with the rapid development of artificial intelligence, C. S. Lee et al. [13] proposed a neural network model without the requirement of expert calibration and applied it to the extraction of blood flow signals from B-frame structure images. However, the quality of the images generated by this method is worse than that of OCTA images generated by conventional blood flow extraction algorithms, and the signals of micro-vessels cannot be accurately extracted.

OCTA is developed on the basis of OCT, so techniques used to remove noise in OCT (such as multiple-scan averaging [14], anisotropic diffusion filtering [15], deep learning [16] and the wavelet transform [17]) may also be applicable to the noise problem in OCTA. A. Uji et al. [18] proved that averaging multiple OCTA images can reduce the noise impact and improve image quality. However, it also multiplies the image acquisition time compared with the normal process, and in practice this aggravates the image quality degradation caused by subjective factors such as eye motion and blinking. P. Li et al. [19] proposed a shape-based classification of "vesselness" with an anisotropic Gaussian filter to improve the contrast and connectivity of vascular images in OCTA. In addition, many studies based on deep learning methods [20,21] have been reported. These OCTA noise reduction methods possess their own advantages; however, employing wavelet transforms to analyze blood flow and noise information in OCTA data appears to remain unexplored. S. Chitchian et al. [22,23] reported the elimination of noise in OCT images using the dual-tree complex wavelet transform (DTCWT) combined with a shrinkage threshold method. Compared with OCT, however, OCTA requires repeated scanning of the same cross section several times, and the increased acquisition time also increases the noise interference caused by eye movements. Moreover, OCTA is a three-dimensional data set, so the factors that need to be considered to eliminate noise are more complicated than for OCT data. Consequently, using the wavelet transform to solve the noise problem in OCTA images remains an interesting challenge.

Based on the methods and problems mentioned above, we first extract the OCTA images and analyze the B-frame images of OCT from a multi-scale and directional perspective in combination with the DTCWT. After processing, the blood flow signal and the noise signal are distinguished, and detailed comparison experiments are performed. The rest of the paper is organized as follows. In Section 2, the theoretical OCTA algorithm and the mathematical model of the proposed noise reduction method are provided, followed by our experimental acquisition system and the specific scheme. In Section 3, the experimental details and selected parameters of the OCTA image noise reduction are presented, together with a detailed comparative analysis and discussion of the results. The conclusion is presented in Section 4.

2. Methods

2.1 OCT angiography setups and methods

To measure the actual effect of the dual-tree complex wavelet transform noise reduction algorithm on OCTA images, we built a spectral-domain OCT system [24] to obtain structural data of the human retina. The design of the system was similar to that described in the paper [25], as shown in Fig. 1. A light source (EXALOS, EXS210006-01) with a center wavelength of 840 nm and a bandwidth of 49 nm (-3 dB) was selected to emit the probe beam. The light was split into two beams by a fiber coupler with a 50:50 coupling ratio (Thorlabs, TW850R5A2). One beam passed through a collimator (Thorlabs, TC06FC-850) into the reference arm, and the other beam passed through a fiber collimator into the sample arm. X-Y scanning galvanometers (Thorlabs, GVS311/M) control the tomographic imaging position. The beam passed through a lens group consisting of an objective lens and an ocular lens, which focused it on the fundus imaging area. The returned light from the reference arm and the sample arm passed through the fiber coupler and entered the detection part. It first passed through a collimating lens (75 mm focal length) and then hit the grating (1800 l/mm, Wasatch). Subsequently, it passed through a focusing lens (150 mm focal length) and was concentrated on the photosensitive layer of the CMOS camera (E2V) for signal acquisition. The fastest line rate of the camera is 140 kHz and the imaging speed of the system is 100 kHz. The axial resolution of the system is approximately 5.8 µm (in air). The beam power at the pupil is about 2 mW.


Fig. 1. The schematic of the spectral-domain OCT system.


We used the volume scanning mode to collect three-dimensional structural images of the human retina. The collected data contain 960 B-scans, of which every 4 consecutive B-scans cover the same plane. Each B-scan is composed of 240 A-scans, and each A-scan consists of 1024 pixels, constituting a three-dimensional volume with a total size of $1024 \times 240 \times 240$. This sampling scheme provides suitable data for the denoising experiments. The detailed acquisition mode parameters are shown in Table 1.
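The grouping of repeated B-scans can be expressed as a simple reshape of the acquired stack. The sketch below uses scaled-down stand-in dimensions (the real B-scans are $1024 \times 240$) and random values in place of real spectrometer data.

```python
import numpy as np

# Sketch of grouping the acquired B-scan stack into repeated-scan planes.
# Depth and A-scan counts are scaled down for illustration; the real volume
# in the paper is 4 x 240 B-scans of 1024 x 240 pixels each.
n_repeats, n_positions = 4, 240
depth, n_ascans = 64, 32  # stand-ins for 1024 and 240
raw = np.random.rand(n_repeats * n_positions, depth, n_ascans)

# Axis 1 now indexes the 4 repeated scans of one slow-axis position.
volume = raw.reshape(n_positions, n_repeats, depth, n_ascans)
```

Because the B-scans are acquired consecutively, a plain C-order reshape suffices: the first four frames of `raw` become the four repeats of the first plane, and so on.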


Table 1. OCT angiography acquisition mode parameters.

To generate OCTA images, the B-scan data collected in the volume scanning mode were combined with the OMAG algorithm as the extraction method for the blood flow signal. OMAG is a blood flow extraction algorithm developed by R. K. Wang et al. [26] in 2007. It analyzes data sets based on the complex signals composed of amplitude and phase. From the difference between the OCT complex signals of adjacent B-scans, the algorithm extracts dynamic blood flow signals from static tissue signals and reconstructs microvascular distribution images. The blood flow extraction formula is presented in Eq. (1).

$$Flow_{OMAG}(x,z)=\frac{1}{N-1}\sum_{i=1}^{N-1}\left |C_{i+1}(x,z)- C_{i}(x,z) \right |$$
where $N$ represents the number of times a B-scan is repeated in the same plane; in the current experiment, we set it to four. $C_{i}(x,z)$ denotes the complex signal (including amplitude and phase) of the $i$-th B-scan at lateral location $x$ and depth position $z$, and $Flow_{OMAG}$ signifies the blood flow signal at lateral location $x$ and depth position $z$.
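A minimal numpy sketch of Eq. (1) is shown below. The input arrays are synthetic stand-ins for real complex OCT B-scans; the point is only to show that identical repeats (static tissue) cancel in the difference while phase changes between repeats (moving scatterers) survive.

```python
import numpy as np

# Sketch of Eq. (1): OMAG flow as the mean absolute difference of adjacent
# repeated complex B-scans. `bscans` has shape (N, Z, X).
def omag_flow(bscans):
    n = bscans.shape[0]
    return np.abs(np.diff(bscans, axis=0)).sum(axis=0) / (n - 1)

# Synthetic example: static tissue gives identical repeats (zero flow);
# a moving scatterer changes phase between repeats and survives.
static = np.ones((4, 8, 8), dtype=complex)
moving = static.copy()
moving[:, 4, 4] = np.exp(1j * np.arange(4))
```

`np.diff` along the repeat axis yields the $N-1$ adjacent differences directly, so the sum divided by $N-1$ is exactly the mean absolute difference of the formula.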

2.2 Bivariate shrinkage and dual-tree complex wavelet transform

During SD-OCT imaging, the Gaussian noise introduced by the CMOS camera and the signal changes caused by optical speckle are the main reasons for the degradation of OCT image quality. In OCTA data, because the background noise is random, the OMAG algorithm, which implements a difference operation, still cannot handle the reduction of image contrast caused by noise. Similarly, this kind of background noise is also inevitable in other OCTA algorithms based on differencing or cross-correlating adjacent B-scans. Consequently, we adopted a DTCWT-based denoising model to reduce the image quality degradation of OCTA images caused by background noise. In the constructed model, the OCT image is considered to be contaminated by Gaussian noise with a variance of $\sigma _{n}^{2}$. The mathematical formula can be described as Eq. (2):

$$m_{s}=a_{s}+n_{s}$$
where $s$ indicates the spatial position coordinate of the image pixel, $m$ is the actual observed signal value of the image, $a$ is the signal value obtained by theoretical analysis, and $n$ denotes the estimated value of noise.

The observed image $m$ is transformed into the wavelet domain using the dual-tree complex wavelet transform $W$, yielding the wavelet representation $w$ with six high-frequency sub-bands and two low-frequency sub-bands. The six high-frequency sub-bands contain the details of the image in the directions of 15$^{\circ }$, 45$^{\circ }$, 75$^{\circ }$, -75$^{\circ }$, -45$^{\circ }$, and -15$^{\circ }$. The wavelet coefficients generated by noise, in contrast, are uncorrelated across directions, which provides a basis for filtering Gaussian noise. The two low-frequency sub-bands contain the low-frequency information of the image. We set the number of decomposition layers $J$ and apply the DTCWT to the two low-frequency sub-bands to form $J$ layers of scale data. It should be emphasized that each application of the DTCWT reduces the sub-band size by a factor of four. The transformation follows the principle of complete reversibility, as shown in Eq. (3) and Eq. (4):

$$w=Wm$$
$$m=W^{T}w$$
where $W$ and $W^{T}$ denote the wavelet transform and inverse wavelet transform, respectively. Gaussian noise is filtered by selecting appropriate thresholds for the six high-frequency sub-bands in each layer. We refer to the bivariate shrinkage Bayesian threshold model [27], whose threshold formula is given in Eq. (5):
$$\widehat{w}_{s}^{j}=\frac{(\sqrt{{w_{s}^{j}}^{2}+{w_{s}^{j+1}}^{2}}-\frac{\sqrt{3}\sigma _{n}^{2}}{\sigma })_{+}}{\sqrt{{w_{s}^{j}}^{2}+{w_{s}^{j+1}}^{2}}}\bullet w_{s}^{j}$$
where $w_{s}^{j}$ signifies the wavelet coefficient at position $s$ on the $j$-th wavelet scale layer and $w_{s}^{j+1}$ is the co-located parent coefficient one scale coarser. $\widehat {w}$ denotes the wavelet coefficient after thresholding, $\sigma _{n}$ is the estimated noise value, and $\sigma$ is the signal estimate in the neighborhood $N(k)$ of position $s$; the specific positional relationship is shown in Fig. 2. $(g)_{+}$ denotes a contraction function, defined as Eq. (6):
$$(g)_{+}=\left\{\begin{matrix} 0, & \textrm{if } g<0\\ g, & \textrm{otherwise}\end{matrix}\right.$$
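The shrinkage rule of Eq. (5) and the clipping function of Eq. (6) can be sketched directly in numpy. This is a simplified real-valued version (the DTCWT coefficients are complex in the actual method), and `sigma_n` and `sigma` are passed in rather than estimated as in Eq. (8) and Eq. (10).

```python
import numpy as np

# Sketch of the bivariate shrinkage rule of Eq. (5): w_j is a sub-band at
# scale j, w_parent the co-located coefficients at scale j+1.
def bivariate_shrink(w_j, w_parent, sigma_n, sigma, eps=1e-12):
    mag = np.sqrt(w_j ** 2 + w_parent ** 2)
    # (g)_+ of Eq. (6): negative gains are clipped to zero
    gain = np.maximum(mag - np.sqrt(3.0) * sigma_n ** 2 / sigma, 0.0)
    return gain / np.maximum(mag, eps) * w_j

# A strong coefficient survives nearly intact; a weak one is zeroed.
big = bivariate_shrink(np.array([10.0]), np.array([10.0]), 1.0, 1.0)
small = bivariate_shrink(np.array([0.5]), np.array([0.5]), 1.0, 1.0)
```

Because the threshold depends on the joint magnitude of a coefficient and its parent, a weak coefficient with a strong parent (e.g. the continuation of a vessel across scales) is preserved better than by a scale-independent threshold.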


Fig. 2. Illustration of neighborhood $N(k)$.


According to Eq. (5), to obtain the denoised wavelet coefficient $\widehat {w}$, $\sigma _{n}$ and $\sigma$ need to be calculated. The traditional estimation of $\sigma _{n}$ is proposed in the paper [28]: the median absolute value of all wavelet coefficients of the current sub-band is divided by a robustness constant, as shown in Eq. (7).

$$\widehat{\sigma }_{n}=median(\left | w_{s} \right |)/0.6745$$

Although this noise estimation method is effective for most images, its actual effect is not ideal for OCTA. In OCTA data, part of the blood flow signal is erroneously estimated as noise, causing the detected blood vessels to deviate from the actual situation. To handle this problem, we propose an adaptive noise variance estimation method, as shown in Eq. (8). It contains three terms: $w_{s}$, $\lambda$ and $T$.

$$\widehat{\sigma }_{n}=\sqrt{median(\left | w_{s} \right |)*T*\lambda }$$
$$\lambda =\sqrt{2*log_{2}N}$$
where $w_{s}$ represents the wavelet coefficients of the current sub-band, and $T$ is the adaptive parameter, with a value between [0,1]. $\lambda$ represents the stability threshold of the current sub-band, as shown in Eq. (9), and $N$ denotes the total number of wavelet coefficients in the sub-band. Compared with Eq. (7), Eq. (8) can yield a more appropriate $\sigma _{n}$ by adjusting $T$. It should be pointed out that the value of $T$ is an empirical value.
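Both noise estimators can be sketched in a few lines. On a pure-noise sub-band the classical rule of Eq. (7) recovers the true standard deviation; the adaptive variant of Eq. (8)-(9) then scales with the empirical parameter $T$. The random data below are stand-ins for real sub-band coefficients.

```python
import numpy as np

# Sketch of the two noise estimators: the classical robust median rule of
# Eq. (7) and the adaptive variant of Eq. (8)-(9), where T is the empirical
# per-sub-band parameter in [0, 1].
def sigma_n_classical(w):
    return np.median(np.abs(w)) / 0.6745

def sigma_n_adaptive(w, T):
    lam = np.sqrt(2.0 * np.log2(w.size))  # Eq. (9)
    return np.sqrt(np.median(np.abs(w)) * T * lam)

rng = np.random.default_rng(1)
noise_band = rng.normal(0.0, 2.0, size=10_000)  # pure-noise sub-band, sigma = 2
```

For a zero-mean Gaussian, the median of the absolute coefficients is about $0.6745\sigma$, which is why the classical estimator divides by that constant; the adaptive form instead lets $T$ tune the estimate per scale and direction.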

For the estimation of $\sigma$, the mean in the neighborhood is introduced for the calculation. Since the model is built as in Eq. (2), $\sigma$ can be obtained from Eq. (10):

$$\widehat{\sigma }=\sqrt{({\widehat{\sigma} _{y}^{2}}-{\widehat{\sigma} _{n}^{2}})_{+}}$$
$$\widehat{\sigma }_{y}^{2}=\frac{1}{M}\sum_{y_{i}\in N(k)}^{}y_{i}^{2}$$
where $\widehat {\sigma }_{y}^{2}$ denotes the variance of the wavelet coefficients in the neighborhood, which can be calculated by Eq. (11), and $M$ represents the number of coefficients in the current neighborhood $N(k)$.
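Eq. (10)-(11) amount to a sliding mean of squared coefficients followed by noise subtraction. A sketch using `scipy.ndimage.uniform_filter` for the window mean (the window shape is the $9\times 5$ value used later in the paper; the input array here is synthetic):

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Sketch of Eq. (10)-(11): the local signal deviation is estimated from the
# mean of squared coefficients over a neighborhood window N(k), with the
# noise variance subtracted and negative results clipped to zero.
def local_sigma(w, sigma_n, window=(9, 5)):
    sigma_y2 = uniform_filter(w ** 2, size=window)          # Eq. (11)
    return np.sqrt(np.maximum(sigma_y2 - sigma_n ** 2, 0))  # Eq. (10)

coeffs = np.zeros((32, 32))
coeffs[16, 16] = 10.0  # one isolated strong coefficient
sig = local_sigma(coeffs, sigma_n=0.5)
```

The clipping mirrors the $(\cdot)_{+}$ in Eq. (10): wherever the local energy falls below the estimated noise floor, the signal estimate is set to zero.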

The algorithm can be summarized as follows:

  • 1) Wavelet decomposition of input data according to the preset decomposition scale $J$.
  • 2) For each wavelet coefficient $w_{s}^{j}$:
    • (a) Through Eq. (8), the noise estimation value $\sigma _{n}$ of the sub-bands where the current wavelet coefficient locates is calculated.
    • (b) Based on Eq. (10), the signal estimated value $\sigma$ of the current wavelet coefficient is calculated.
    • (c) On the basis of the values of $\sigma _{n}$ and $\sigma$, the wavelet coefficient $\widehat {w}_{s}^{j}$ is calculated by Eq. (5).
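The steps above can be sketched end-to-end. Since a full DTCWT implementation is lengthy, a single-level 2-D Haar transform stands in for the $J$-level DTCWT here, and a plain hard threshold stands in for the bivariate shrinkage of Eq. (5); both substitutions are simplifications, not the paper's actual method.

```python
import numpy as np

# End-to-end sketch: Haar stands in for the DTCWT, hard thresholding for
# the bivariate shrinkage.
def haar2_forward(img):
    p00, p10 = img[0::2, 0::2], img[1::2, 0::2]
    p01, p11 = img[0::2, 1::2], img[1::2, 1::2]
    a = (p00 + p10 + p01 + p11) / 2   # approximation
    h = (p00 - p10 + p01 - p11) / 2   # horizontal detail
    v = (p00 + p10 - p01 - p11) / 2   # vertical detail
    d = (p00 - p10 - p01 + p11) / 2   # diagonal detail
    return a, h, v, d

def haar2_inverse(a, h, v, d):
    img = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    img[0::2, 0::2] = (a + h + v + d) / 2
    img[1::2, 0::2] = (a - h + v - d) / 2
    img[0::2, 1::2] = (a + h - v - d) / 2
    img[1::2, 1::2] = (a - h - v + d) / 2
    return img

rng = np.random.default_rng(2)
clean = np.zeros((64, 64))
clean[20:40, 30:34] = 8.0                      # a synthetic "vessel"
noisy = clean + rng.normal(0.0, 0.5, clean.shape)

a, h, v, d = haar2_forward(noisy)
thr = 3 * 0.5                                   # 3x the noise level in the detail bands
h, v, d = [np.where(np.abs(b) > thr, b, 0.0) for b in (h, v, d)]
denoised = haar2_inverse(a, h, v, d)
```

The transform is orthonormal, so reconstruction without thresholding is exact (the reversibility of Eq. (3)-(4)), while thresholding the detail bands removes most of the noise energy without touching the smooth vessel region.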

2.3 Implementation of the proposed method in OCT angiography

The current research involved human subjects. Informed consent was obtained from the participants after explanation of the research procedure and possible consequences of their participation. The present research was approved by the institutional review board of the Foshan University.

Our experimental data are blood vessel distribution images of the volunteers' retinas, collected with volume scanning (three-dimensional data: $1024 \times 240 \times 240$, en-face image resolution: $1024 \times 240$). We combined our proposed algorithm with the OMAG algorithm to extract the retinal blood flow image from the collected data.

Figure 3 presents the specific experimental procedure. The data collected by the OCT system in volume scanning mode are used to extract the dynamic signals in preparation for the retinal blood vessel distribution image. Since the OMAG algorithm does not solve the problem of background noise, the contrast of the final blood vessel distribution image is reduced. We therefore apply the method proposed in Section 2.2 to filter the motion signal extracted by the OMAG algorithm.


Fig. 3. Flow Diagram of Experiment.


The specific algorithm implementation is shown in Fig. 4. The DTCWT is used to convert the blood flow data to the wavelet domain. The decomposition scale is $J$=4, which produces 4 multi-resolution scale layers (each containing six wavelet coefficient sub-bands for the six decomposition directions) and two low-frequency wavelet coefficient sub-bands. The principle of the DTCWT is described in the paper [29]. We perform bivariate shrinkage Bayesian filtering on the wavelet coefficient sub-bands of the multi-resolution scale layers. An adaptive $T$ value is selected for sub-bands of different scale layers and different decomposition directions when calculating the noise estimate $\sigma _{n}$ according to Eq. (8). When calculating the signal estimate $\sigma$ based on Eq. (10), we set the neighborhood window $N(k)$ of the first two scale layers to $9\times 5$ and that of the last two scale layers to $7\times 3$. After determining the values of $\sigma _{n}$ and $\sigma$, they are substituted into Eq. (5) to obtain the filtered wavelet coefficient sub-bands. Finally, all wavelet coefficients are converted back into a time-domain image through the inverse wavelet transform.


Fig. 4. Flow chart of the proposed algorithm.


We show part of the process of performing the DTCWT and bivariate shrinkage thresholding on a B-frame image. As shown in Fig. 5, (a) is an image of blood flow information obtained from a B-scan, with a resolution of $1024 \times 240$. We crop the image between the two red lines in the figure for the subsequent analysis. Figure 5(b), (c) and (d) are the representations of (a) in the wavelet domain at different resolutions. It should be noted that these three images only show the signal decomposed into the -75$^{\circ }$ direction; the other five directions are not presented. As the scale increases, the amount of data in the sub-band decreases, and the difference between the blood flow signal and the noise emerges. Figure 5(e), (f) and (g) show the three corresponding images after noise reduction by our algorithm. Below each image, we indicate the value of the noise estimation parameter $T$. In this qualitative comparison, noise and signal are clearly separated by adjusting the value of $T$ in the wavelet domain.


Fig. 5. B-frame blood flow and denoised images in the wavelet domain. (a) B-frame blood flow image. (b)-(d) are wavelet sub-bands at the first to third wavelet scales with the decomposition direction of -75$^{\circ }$, respectively. (e)-(g) are the corresponding denoised wavelet sub-bands at the first to third wavelet scales with the decomposition direction of -75$^{\circ }$. The corresponding adaptive parameters $T$ are 0.42, 0.47 and 0.61, respectively.


To solve the noise problem in the three-dimensional OCTA data set, we propose a method that differs from time-domain filtering. Its fundamental principle is to distinguish the noise and blood flow signals decomposed into multiple directions in the wavelet domain and to apply the bivariate shrinkage threshold to separate signal from noise, with the noise estimate determined by Eq. (8). The proposed algorithm and the comparison methods were implemented in Python (version 3.6). Following the principle of controlling a single variable, the tests were run on a computer equipped with an Intel i5-4570 CPU (3.3 GHz) and 8 GB of memory.

2.4 Image evaluation metrics

To quantitatively compare the denoising performance of the algorithms, we choose three evaluation indicators: peak-signal-to-noise ratio (PSNR), contrast-to-noise ratio (CNR), and the structural similarity index (SSIM). For the calculation of PSNR and SSIM, we use the whole image. To estimate CNR, we select a few regions from the image for the analysis. The evaluation indicators are the same as those used in the papers [22,30].

$$PSNR=10\log_{10}\left(\frac{max(I)^{2}}{\sigma ^{2}}\right)$$
$$CNR=\frac{1}{H}\sum_{h=1}^{H}\left [ 10\log\left ( \frac{\mu _{h}-\mu _{b}}{\sqrt{\sigma _{h}^{2}+\sigma _{b}^{2}}} \right ) \right ]$$
$$SSIM(x,\widehat{x})=\frac{(2\mu _{x}\mu _{\widehat{x}}+C_{1})(2\sigma _{x\widehat{x}}+C_{2})}{(\mu _{x}^{2}+\mu _{\widehat{x}}^{2}+C_{1})(\sigma _{x}^{2}+\sigma _{\widehat{x}}^{2}+C_{2})}$$

PSNR is the ratio of the peak intensity of the image signal to the noise; the larger the value, the less the image is affected by noise. In Eq. (12), $max(I)$ is the maximum gray value in the image $I$ and $\sigma ^{2}$ represents the signal variance of the selected background area. CNR is an indicator of the contrast between a signal area and the background area; in general, the higher the contrast, the better the image quality. In Eq. (13), $h$ indexes the selected ROIs, $b$ denotes the selected background area, $\mu$ indicates the mean of the selected area and $\sigma$ represents the standard deviation of the selected area. SSIM reflects the structural similarity between the unprocessed image and the denoised image, with a value between [0,1]; the closer the SSIM value is to 1, the higher the structural similarity of the filtered image. In Eq. (14), $x$ and $\widehat {x}$ represent the unprocessed and filtered images, and $C_{1}$ and $C_{2}$ are constants used to avoid instability.
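The three metrics can be sketched as follows. Region selection is simplified: `background` and each ROI are passed in as arrays, and SSIM is computed globally over the whole image rather than with a sliding window, which is a common simplification of the full SSIM definition.

```python
import numpy as np

# Sketches of Eq. (12)-(14).
def psnr(img, background):
    return 10 * np.log10(img.max() ** 2 / np.var(background))

def cnr(rois, background):
    # assumes each ROI mean exceeds the background mean, as in Eq. (13)
    mu_b, sd_b = background.mean(), background.std()
    return np.mean([10 * np.log10((r.mean() - mu_b)
                                  / np.sqrt(r.var() + sd_b ** 2)) for r in rois])

def ssim_global(x, x_hat, c1=1e-4, c2=9e-4):
    mx, my = x.mean(), x_hat.mean()
    cov = ((x - mx) * (x_hat - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) \
        / ((mx ** 2 + my ** 2 + c1) * (x.var() + x_hat.var() + c2))
```

As a sanity check, an image compared against itself gives an SSIM of exactly 1, and a background region with unit variance and a peak value of 10 gives a PSNR of 20 dB.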

3. Results and discussion

3.1 OCT angiography image results

We conduct experiments according to the procedure in Section 2.3. At first, to obtain the experimental data, the blood flow signal was extracted through the OMAG algorithm. Then, the noise in the blood flow signal was suppressed using the proposed algorithm.

We took a frame of data from the denoised volume for comparative analysis and used a segmentation algorithm to delineate the ILM, OPL, and ONL layers. As presented in Fig. 6, (a) and (b) respectively show the original blood flow signal and the blood flow signal after noise reduction with our algorithm. The three ROIs in the figure are examined separately. For ROI-1, we selected the background area of the image; to illustrate the effectiveness of our algorithm, the total signal intensity in this area was amplified fivefold. For ROI-2, an intermediate layer between the ILM and OPL containing real blood flow signals was selected. For ROI-3, the area between the OPL and ONL layers, which carries nearly no blood flow signal in clinical analysis, was selected. Comparing Fig. 6(a)-1 and (b)-1, the background noise is clearly suppressed in the denoised image, indicating that the proposed noise reduction model is valid for blood flow data. Comparing (a)-2 and (b)-2, the blood flow signal before and after noise reduction remains essentially consistent. The area indicated by the yellow arrow is the blood flow signal, and the noise in the upper left corner of the image is filtered out, showing that the algorithm can retain the blood flow signal while filtering Gaussian noise. Theoretically [31], the area between the OPL and ONL layers has no blood flow. Due to Gaussian and speckle noise and the tailing phenomenon in OCT imaging, the OMAG algorithm extracts spurious signals in this layer, which appear as salt-and-pepper granules in Fig. 6(a)-3. These signals generate visible noise that degrades the quality of the en-face vascular map. As shown in Fig. 6(b)-3, the noise in the same area is significantly reduced, although the reduction is less apparent than in the background area of (b)-1. We attribute the weaker filtering effect in (b)-3 to other noise sources such as speckle in the area between the OPL and ONL layers, which were not considered in our model. This will be the goal of the next stage of our research.


Fig. 6. B-scan image denoise comparison. (a) the retinal blood flow signal was collected by the experimental system and processed by OMAG algorithm. (b) image of retinal blood flow signal after noise reduction.


To evaluate the merits of the proposed method, we compare it against four similar noise reduction methods: median filtering, anisotropic diffusion (A-D) filtering, the K-SVD method [32], and plain DTCWT filtering. We use the same set of images for the comparative experiments. The A-D method is run for 20 iterations with a diffusion coefficient of 25 and a smoothing coefficient of 0.25. The window size of the median filter is $5\times 5$. In the K-SVD method, the block size is $14\times 14$, the redundancy is set to 4, and the noise estimate is 20. These parameters were verified to be optimal for the current experiment.

For each B-frame image, the maximum signal value of each A-scan between the two segmented lines is taken to generate an en-face image of the blood vessels. As shown in Fig. 7, our algorithm is compared against the traditional algorithms. The image is taken from the blood vessels between the ILM and OPL layers of the OCTA data. For better visual comparison, three boundary areas (red rectangles #1-3) are selected and enlarged in Fig. 7. Figure 7(a) presents the en-face image generated from the original OCTA data. Obvious noise interference can be observed in the background region (the area pointed to by the yellow arrow). Due to the incomplete removal of background noise by the OMAG algorithm, signals from tissue other than the blood vessels are introduced, decreasing the contrast of the vascular distribution image and potentially affecting the resolution of tiny blood vessels. Figure 7(b) shows the en-face image generated by median filtering of the OCTA data. The image reveals that the square template is not suitable for filtering OCTA data, as much blood vessel information is eliminated by mistake. The elliptical directional template used in the literature [11] provides a more efficient solution; however, the decrease in signal strength caused by median filtering still requires an additional sharpening step. Figure 7(c) shows the en-face image generated by anisotropic diffusion filtering of the OCTA data. Compared with Fig. 7(a), the noise signal in Fig. 7(c) is significantly suppressed and the blood vessel information is more prominent, but the blood vessels near the foveal area of the human eye are also suppressed and real blood vessel information is lost. Figure 7(d) shows the en-face image generated by K-SVD filtering of the OCTA data. This algorithm divides the image into multiple blocks for dictionary learning and acquires a sparse matrix for noise reduction. However, the results show that it is not an effective method for OCTA 3D data. The difference between the results of Fig. 7(e) and Fig. 7(f) arises because the noise estimation value $\sigma _{n}$ of the proposed algorithm is taken adaptively in the wavelet domain according to the decomposition scale and direction. As shown in Fig. 7, the proposed algorithm best reduces the influence of noise while retaining blood flow information.


Fig. 7. Comparison of denoised en-face images of OCTA data. (a) Original image. The yellow box will be used to calculate the CNR index, and the yellow arrow points to the noise signal area. (b) Image after filtering with the anisotropic diffusion method. (c) Image after filtering with the median method. (d) Image after filtering with the K-SVD method. (e) Image after filtering with the dual-tree complex wavelet transform method. (f) Image after filtering with the proposed method.


Additionally, we selected the red and blue lines of the three ROI regions in Fig. 7 for signal intensity analysis. As shown in Fig. 8, the horizontal axis represents the pixel coordinates, and the vertical axis represents the normalized intensity value of the pixels. In Fig. 8(a), the red dotted line of our algorithm coincides most closely with the original line in the high-intensity area. We enlarge areas ① and ②; the result shows that the line of our algorithm in box ① remains essentially the same as the original line, indicating that the blood vessel signal in this area is not disturbed by the algorithm and the peak value of the signal is kept intact. The lines of all filtering algorithms drop in box ②, suggesting that this area is suppressed as noise. Based on the results in Fig. 8(a), our algorithm has almost the same noise filtering ability as the comparison algorithms, but protects high-intensity signals better. Figure 8(b) and (c) are the comparison graphs of ROI-2 and ROI-3, respectively, and reflect a situation similar to Fig. 8(a).


Fig. 8. Qualitative comparison of denoising en-face image of OCTA.


For further quantitative evaluation, we calculated the evaluation indices defined in Section 2.4, as shown in Table 2. We use the ROI-0 area of the image in Fig. 7 as the background to calculate PSNR and CNR. CNR is a measure of the contrast between a feature in the ROI and the noisy background; the yellow framed area in Fig. 7(a) is employed as the ROI signal part to calculate CNR. SSIM is obtained by comparing the original and denoised images. As seen in Table 2, the proposed method obtains much higher PSNR and CNR than the other methods. Analyzing the sub-optimal SSIM, it is possible that some signals similar to noise are misjudged by our threshold algorithm, and their removal disrupts the image structure more than time-domain methods do. However, in the wavelet domain, we can preferentially retain high-energy signals. This feature provides an effective reference for handling the noise problem in OCTA images while protecting high-intensity blood flow signals.


Table 2. Quantitative evaluation (PSNR, CNR, SSIM) of the compared denoising methods.
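The PSNR and CNR computation described above can be sketched as below. These are common formulations for background-referenced OCTA metrics; the paper's exact definitions in Section 2.4 may differ in detail, and the synthetic image here merely stands in for the signal ROI (yellow frame) and the ROI-0 background of Fig. 7.

```python
import numpy as np

def psnr(image, background_roi):
    """PSNR referenced to background noise: 20*log10(max / sigma_bg).
    A common OCT/OCTA formulation; assumed, not taken from the paper."""
    return 20.0 * np.log10(image.max() / background_roi.std())

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio between a vessel ROI and the background."""
    return (signal_roi.mean() - background_roi.mean()) / np.sqrt(
        signal_roi.var() + background_roi.var())

rng = np.random.default_rng(0)
img = rng.normal(0.1, 0.02, (64, 64))   # synthetic noisy background
img[20:30, 20:30] += 0.8                # bright "vessel" patch
print(round(cnr(img[20:30, 20:30], img[:10, :10]), 2))
```

Lowering the background standard deviation while preserving the vessel ROI mean raises both metrics, which is why an algorithm that protects high-intensity signal while suppressing background noise improves PSNR and CNR simultaneously.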

With the application of segmentation algorithms, the OCTA technique can concentrate the analysis on tissues within a certain depth range. The specific segmentation lines are shown in Fig. 9(a), (d) and (g), with the ZERO layer being the topmost pixel line of the image and the remaining layers labeled following Ref. [33]. The en-face blood vessel distribution images before and after noise reduction are compared for the data between the NFL-OPL, IPL-OPL and ZERO-OPL segmentation lines. Figures 9(b) and 9(c) present the NFL-OPL slab before and after noise reduction. The blood vessels in this layer are generally selected to analyze the pathology of the whole human retina, and a high-contrast image is required to provide better diagnostic results. Figures 9(e) and 9(f) compare the IPL-OPL slab before and after noise reduction; the blood vessel density in this layer is an essential indicator for measuring disease [34]. After noise reduction with our algorithm, the noise in the foveal area is significantly suppressed. Figures 9(h) and 9(i) present the ZERO-OPL slab before and after noise reduction. The en-face image contains more noise when only one segmentation line is available, yet our algorithm accurately filters out the background noise: the complete retinal blood vessel network is preserved in the result, and the PSNR index is significantly improved.
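The en-face images discussed above are obtained by projecting the OCTA volume between two segmentation surfaces. The following is a minimal sketch of that step under assumptions: `top` and `bottom` are hypothetical per-A-scan boundary depth maps (e.g. NFL and OPL), and maximum projection is assumed as the projection rule, a common choice for OCTA vessel maps, though the paper does not state its rule.

```python
import numpy as np

def enface_projection(volume, top, bottom):
    """Max-project each A-scan between its two boundary depths.

    volume: (z, x, y) OCTA flow volume
    top, bottom: (x, y) integer depth maps with top < bottom
    """
    z, x, y = volume.shape
    out = np.zeros((x, y), volume.dtype)
    for i in range(x):
        for j in range(y):
            out[i, j] = volume[top[i, j]:bottom[i, j], i, j].max()
    return out

vol = np.zeros((8, 2, 2))
vol[3, 0, 0] = 1.0                       # flow signal inside the slab
top = np.full((2, 2), 2)
bottom = np.full((2, 2), 6)
print(enface_projection(vol, top, bottom)[0, 0])   # -> 1.0
```

Restricting the projection to a slab such as NFL-OPL excludes noise from the layers outside it, which is why the ZERO-OPL case (a single segmentation line, hence a deeper slab) admits more noise into the en-face image.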

Fig. 9. (a), (d) and (g) Segmented layers of the human retina. (b) and (c) The blood vessel distribution image extracted from the NFL-OPL segmentation layer of the retina before and after noise reduction. (e) and (f) The same for the IPL-OPL segmentation layer. (h) and (i) The same for the ZERO-OPL segmentation layer.

3.2 Results discussion

In the present study, we proposed a method based on the dual-tree complex wavelet transform to reduce OCTA image noise and improve image contrast. We employed an SD-OCT system and the OMAG algorithm to obtain the OCTA signal from the three-dimensional data. By filtering multiple B-scans, the effect of noise on the OCTA image is reduced. In the comparative experiments, the proposed algorithm shows a significant advantage in improving the image PSNR value. It can accurately filter out the background Gaussian noise distributed in the OCTA data, which provides an effective pre-processing step for experimental settings that lack precise segmentation lines.

Different from approaches that superimpose multiple sets of OCTA data for noise reduction, our method uses only one set of OCTA data, which avoids the influence of eye movement caused by long acquisition times. The proposed method also differs from methods that require gold-standard OCTA data for classifier training: it only needs an adaptive parameter T to achieve noise filtering, providing a reliable solution for experiments that lack a large gold-standard sample set. Compared with median filtering methods, which can reduce blood vessel signal intensity, our approach mitigates intensity decay by conserving the primary blood flow signal in the wavelet domain. More importantly, it significantly enhances image quality.
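The mechanism behind the adaptive threshold can be illustrated in miniature. This sketch is not the paper's bilateral shrinking Bayes threshold of Eq. (8); it shows only the general principle of adaptive wavelet-domain shrinkage: estimate the noise level robustly from the coefficients, then soft-threshold so that small (noise-dominated) coefficients vanish while large (flow-dominated) ones survive. The coefficient array and the fixed threshold here are illustrative assumptions.

```python
import numpy as np

def mad_sigma(coeffs):
    """Robust noise estimate: median absolute deviation / 0.6745
    (a standard estimator for Gaussian noise in wavelet subbands)."""
    return np.median(np.abs(coeffs - np.median(coeffs))) / 0.6745

def soft_threshold(coeffs, T):
    """Shrink coefficients toward zero by T (soft thresholding)."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - T, 0.0)

# Hypothetical wavelet coefficients: two strong flow signals plus noise.
c = np.array([5.0, -0.2, 0.1, -4.0, 0.05, 0.3])
T = 0.5                                  # illustrative fixed threshold
print(soft_threshold(c, T))              # large coeffs survive, small -> 0
```

Because shrinkage acts per coefficient, high-energy blood flow signal is preferentially retained while background Gaussian noise is removed, which is the behavior the comparison with median filtering refers to.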

However, the algorithm of the present study still has some shortcomings. (1) It does not model non-Gaussian noise in detail, such as speckle noise and pseudo-blood-flow signals from soft tissue. (2) The T value in Eq. (8) is only an empirical coefficient, and we have not yet determined evaluation criteria for choosing a specific T. (3) The current work only tested OCTA data extracted with the OMAG algorithm and did not compare noise-reduction results across other OCTA algorithms. Exploring and analyzing these issues will be part of our subsequent work.

4. Summary

To conclude, the present study proposed a method to improve the quality of OCTA images by reducing Gaussian noise in the OCTA signals of multiple B-scans. In our algorithm, the dual-tree complex wavelet transform is applied to the OCTA data; in the wavelet domain, Gaussian noise is reduced by the proposed adaptive noise estimation method and bilateral shrinking Bayes threshold. Quantitative and qualitative experiments with human retina OCTA images obtained with the OMAG algorithm show that the wavelet transform can effectively separate the real blood flow signal from noise in the OCTA signal. The proposed algorithm reduces the blood-flow-signal extraction errors caused by Gaussian noise, significantly improving the PSNR and CNR indicators of the image. Moreover, it will provide more reliable and accurate OCTA image data for studies in which vessel density is required for evaluation.

Funding

National Natural Science Foundation of China (81771883, 81801746).

Acknowledgments

The authors would like to acknowledge the guidance and discussions with Drs. Yanping Huang, Gongpu Lan and Jingjiang Xu.

Disclosures

The authors declare no conflicts of interest.

References

1. T. E. de Carlo, A. Romano, N. K. Waheed, and J. S. Duker, “A review of optical coherence tomography angiography (octa),” Int. J. Retin Vitr. 1(1), 5 (2015). [CrossRef]  

2. C.-L. Chen and R. K. Wang, “Optical coherence tomography based angiography,” Biomed. Opt. Express 8(2), 1056–1082 (2017). [CrossRef]  

3. D. Huang, E. Swanson, C. Lin, J. Schuman, W. Stinson, W. Chang, M. Hee, T. Flotte, K. Gregory, and C. Puliafito, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991). [CrossRef]  

4. J. Fingler, R. J. Zawadzki, J. S. Werner, D. Schwartz, and S. E. Fraser, “Volumetric microvascular imaging of human retina using optical coherence tomography with a novel motion contrast technique,” Opt. Express 17(24), 22190–22200 (2009). [CrossRef]  

5. Y. Jia, O. Tan, J. Tokayer, B. Potsaid, Y. Wang, J. J. Liu, M. F. Kraus, H. Subhash, J. G. Fujimoto, J. Hornegger, and D. Huang, “Split-spectrum amplitude-decorrelation angiography with optical coherence tomography,” Opt. Express 20(4), 4710–4725 (2012). [CrossRef]  

6. L. An and R. K. Wang, “In vivo volumetric imaging of vascular perfusion within human retina and choroids with optical micro-angiography,” Opt. Express 16(15), 11438–11452 (2008). [CrossRef]  

7. P. Gong, S. Es’haghian, F. M. Wood, D. D. Sampson, and R. A. McLaughlin, “Optical coherence tomography angiography for longitudinal monitoring of vascular changes in human cutaneous burns,” Exp. Dermatol. 25(9), 722–724 (2016). [CrossRef]  

8. M. Battaglia Parodi, A. Rabiolo, M. V. Cicinelli, P. Iacono, F. Romano, and F. Bandello, “Quantitative analysis of optical coherence tomography angiography in adult-onset foveomacular vitelliform dystrophy,” Retina 38(2), 237–244 (2018). [CrossRef]  

9. A. Y. Kim, D. C. Rodger, A. Shahidzadeh, Z. Chu, N. Koulisis, B. Burkemper, X. Jiang, K. L. Pepple, R. K. Wang, C. A. Puliafito, N. A. Rao, and A. H. Kashani, “Quantifying retinal microvascular changes in uveitis using spectral-domain optical coherence tomography angiography,” Am. J. Ophthalmol. 171, 101–112 (2016). [CrossRef]  

10. A. J. Deegan, S. P. Mandell, and R. K. Wang, “Optical coherence tomography correlates multiple measures of tissue damage following acute burn injury,” Quant. Imaging. Med. Surg. 9(5), 731–741 (2019). [CrossRef]  

11. M. Chlebiej, I. Gorczynska, A. Rutkowski, J. Kluczewski, T. Grzona, E. Pijewska, B. L. Sikorski, A. Szkulmowska, and M. Szkulmowski, “Quality improvement of oct angiograms with elliptical directional filtering,” Biomed. Opt. Express 10(2), 1013–1031 (2019). [CrossRef]  

12. R. Reif, U. Baran, and R. K. Wang, “Motion artifact and background noise suppression on optical microangiography frames using a naive bayes mask,” Appl. Opt. 53(19), 4164–4171 (2014). [CrossRef]  

13. C. S. Lee, A. J. Tyring, Y. Wu, S. Xiao, A. S. Rokem, N. P. DeRuyter, Q. Zhang, A. Tufail, R. K. Wang, and A. Y. Lee, “Generating retinal flow maps from structural optical coherence tomography with artificial intelligence,” Sci. Rep. 9(1), 5694 (2019). [CrossRef]  

14. B. Sander, M. Larsen, L. Thrane, J. L. Hougaard, and T. M. Jørgensen, “Enhanced optical coherence tomography imaging by multiple scan averaging,” Br. J. Ophthalmol. 89(2), 207–212 (2005). [CrossRef]  

15. R. Bernardes, C. Maduro, P. Serranho, A. Araújo, S. Barbeiro, and J. Cunha-Vaz, “Improved adaptive complex diffusion despeckling filter,” Opt. Express 18(23), 24048–24059 (2010). [CrossRef]  

16. Y. Huang, Z. Lu, Z. Shao, M. Ran, J. Zhou, L. Fang, and Y. Zhang, “Simultaneous denoising and super-resolution of optical coherence tomography images based on generative adversarial network,” Opt. Express 27(9), 12289–12307 (2019). [CrossRef]  

17. M. A. Mayer, A. Borsdorf, M. Wagner, J. Hornegger, C. Y. Mardin, and R. P. Tornow, “Wavelet denoising of multiframe optical coherence tomography data,” Biomed. Opt. Express 3(3), 572–589 (2012). [CrossRef]  

18. A. Uji, S. Balasubramanian, J. Lei, E. Baghdasaryan, M. Al-Sheikh, E. Borrelli, and S. R. Sadda, “Multiple enface image averaging for enhanced optical coherence tomography angiography imaging,” Acta Ophthalmol. 96(7), e820–e827 (2018). [CrossRef]  

19. P. Li, Z. Huang, S. Yang, X. Liu, Q. Ren, and P. Li, “Adaptive classifier allows enhanced flow contrast in oct angiography using a histogram-based motion threshold and 3d hessian analysis-based shape filtering,” Opt. Lett. 42(23), 4816–4819 (2017). [CrossRef]  

20. J. Hossbach, L. Husvogt, M. F. Kraus, J. G. Fujimoto, and A. K. Maier, “Deep oct angiography image generation for motion artifact suppression,” in Bildverarbeitung fur die Medizin 2020, T. Tolxdorff, T. M. Deserno, H. Handels, A. Maier, K. H. Maier-Hein, and C. Palm, eds. (Springer, 2020), pp. 248–253.

21. X. Liu, Z. Huang, Z. Wang, C. Wen, Z. Jiang, Z. Yu, J. Liu, G. Liu, X. Huang, A. Maier, Q. Ren, and Y. Lu, “A deep learning based pipeline for optical coherence tomography angiography,” J. Biophotonics 12(10), e201900008 (2019). [CrossRef]  

22. S. Chitchian, M. A. Mayer, A. Boretsky, F. J. van Kuijk, and M. Motamedi, “Retinal optical coherence tomography image enhancement via shrinkage denoising using double-density dual-tree complex wavelet transform,” J. Biomed. Opt. 17(11), 116009 (2012). [CrossRef]  

23. S. Chitchian, M. A. Fiddy, and N. M. Fried, “Denoising during optical coherence tomography of the prostate nerves via wavelet shrinkage using dual-tree complex wavelet transform,” J. Biomed. Opt. 14(1), 014031 (2009). [CrossRef]  

24. J. F. de Boer, B. Cense, B. H. Park, M. C. Pierce, G. J. Tearney, and B. E. Bouma, “Improved signal-to-noise ratio in spectral-domain compared with time-domain optical coherence tomography,” Opt. Lett. 28(21), 2067–2069 (2003). [CrossRef]  

25. L. An, H. M. Subhush, D. J. Wilson, and R. K. Wang, “High-resolution wide-field imaging of retinal and choroidal blood perfusion with optical microangiography,” J. Biomed. Opt. 15(2), 026011 (2010). [CrossRef]  

26. R. K. Wang, S. L. Jacques, Z. Ma, S. Hurst, S. R. Hanson, and A. Gruber, “Three dimensional optical angiography,” Opt. Express 15(7), 4083–4097 (2007). [CrossRef]  

27. L. Sendur and I. W. Selesnick, “Bivariate shrinkage with local variance estimation,” IEEE Signal Process Lett. 9(12), 438–441 (2002). [CrossRef]  

28. D. L. Donoho and I. M. Johnstone, “Ideal spatial adaptation by wavelet shrinkage,” Biometrika 81(3), 425–455 (1994). [CrossRef]  

29. I. W. Selesnick, R. G. Baraniuk, and N. C. Kingsbury, “The dual-tree complex wavelet transform,” IEEE Signal Process Mag. 22(6), 123–151 (2005). [CrossRef]  

30. J. Yang, Y. Hu, L. Fang, J. Cheng, and J. Liu, “Universal digital filtering for denoising volumetric retinal oct and oct angiography in 3d shearlet domain,” Opt. Lett. 45(3), 694–697 (2020). [CrossRef]  

31. A. Petzold, Optical Coherence Tomography to Assess Neurodegeneration in Multiple Sclerosis (Springer, 2016), pp. 131–141.

32. M. Aharon, M. Elad, and A. Bruckstein, “K-svd: An algorithm for designing overcomplete dictionaries for sparse representation,” IEEE Trans. Signal Process. 54(11), 4311–4322 (2006). [CrossRef]  

33. T. B. Dubose, D. Cunefare, E. Cole, P. Milanfar, J. A. Izatt, and S. Farsiu, “Statistical models of signal and noise and fundamental limits of segmentation accuracy in retinal optical coherence tomography,” IEEE Trans. Med. Imaging 37(9), 1978–1988 (2018). [CrossRef]  

[Crossref]

Gorczynska, I.

Gregory, K.

D. Huang, E. Swanson, C. Lin, J. Schuman, W. Stinson, W. Chang, M. Hee, T. Flotte, K. Gregory, and C. Puliafito, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991).
[Crossref]

Gruber, A.

Grzona, T.

Hanson, S. R.

Hee, M.

D. Huang, E. Swanson, C. Lin, J. Schuman, W. Stinson, W. Chang, M. Hee, T. Flotte, K. Gregory, and C. Puliafito, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991).
[Crossref]

Hornegger, J.

Hossbach, J.

J. Hossbach, L. Husvogt, M. F. Kraus, J. G. Fujimoto, and A. K. Maier, “Deep oct angiography image generation for motion artifact suppression,” in Bildverarbeitung fur die Medizin 2020, T. Tolxdorff, T. M. Deserno, H. Handels, A. Maier, K. H. Maier-Hein, and C. Palm, eds. (Springer, 2020), pp. 248–253.

Hougaard, J. L.

B. Sander, M. Larsen, L. Thrane, J. L. Hougaard, and T. M. Jørgensen, “Enhanced optical coherence tomography imaging by multiple scan averaging,” Br. J. Ophthalmol. 89(2), 207–212 (2005).
[Crossref]

Hu, Y.

Huang, D.

Y. Jia, O. Tan, J. Tokayer, B. Potsaid, Y. Wang, J. J. Liu, M. F. Kraus, H. Subhash, J. G. Fujimoto, J. Hornegger, and D. Huang, “Split-spectrum amplitude-decorrelation angiography with optical coherence tomography,” Opt. Express 20(4), 4710–4725 (2012).
[Crossref]

D. Huang, E. Swanson, C. Lin, J. Schuman, W. Stinson, W. Chang, M. Hee, T. Flotte, K. Gregory, and C. Puliafito, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991).
[Crossref]

Huang, X.

X. Liu, Z. Huang, Z. Wang, C. Wen, Z. Jiang, Z. Yu, J. Liu, G. Liu, X. Huang, A. Maier, Q. Ren, and Y. Lu, “A deep learning based pipeline for optical coherence tomography angiography,” J. Biophotonics 12(10), e201900008 (2019).
[Crossref]

Huang, Y.

Huang, Z.

X. Liu, Z. Huang, Z. Wang, C. Wen, Z. Jiang, Z. Yu, J. Liu, G. Liu, X. Huang, A. Maier, Q. Ren, and Y. Lu, “A deep learning based pipeline for optical coherence tomography angiography,” J. Biophotonics 12(10), e201900008 (2019).
[Crossref]

P. Li, Z. Huang, S. Yang, X. Liu, Q. Ren, and P. Li, “Adaptive classifier allows enhanced flow contrast in oct angiography using a histogram-based motion threshold and 3d hessian analysis-based shape filtering,” Opt. Lett. 42(23), 4816–4819 (2017).
[Crossref]

Hurst, S.

Husvogt, L.

J. Hossbach, L. Husvogt, M. F. Kraus, J. G. Fujimoto, and A. K. Maier, “Deep oct angiography image generation for motion artifact suppression,” in Bildverarbeitung fur die Medizin 2020, T. Tolxdorff, T. M. Deserno, H. Handels, A. Maier, K. H. Maier-Hein, and C. Palm, eds. (Springer, 2020), pp. 248–253.

Iacono, P.

M. Battaglia Parodi, A. Rabiolo, M. V. Cicinelli, P. Iacono, F. Romano, and F. Bandello, “Quantitative analysis of optical coherence tomography angiography in adult-onset foveomacular vitelliform dystrophy,” Retina 38(2), 237–244 (2018).
[Crossref]

Izatt, J. A.

T. B. Dubose, D. Cunefare, E. Cole, P. Milanfar, J. A. Izatt, and S. Farsiu, “Statistical models of signal and noise and fundamental limits of segmentation accuracy in retinal optical coherence tomography,” IEEE Trans. Med. Imaging 37(9), 1978–1988 (2018).
[Crossref]

Jacques, S. L.

Jia, Y.

Jiang, X.

A. Y. Kim, D. C. Rodger, A. Shahidzadeh, Z. Chu, N. Koulisis, B. Burkemper, X. Jiang, K. L. Pepple, R. K. Wang, C. A. Puliafito, N. A. Rao, and A. H. Kashani, “Quantifying retinal microvascular changes in uveitis using spectral-domain optical coherence tomography angiography,” Am. J. Ophthalmol. 171, 101–112 (2016).
[Crossref]

Jiang, Z.

X. Liu, Z. Huang, Z. Wang, C. Wen, Z. Jiang, Z. Yu, J. Liu, G. Liu, X. Huang, A. Maier, Q. Ren, and Y. Lu, “A deep learning based pipeline for optical coherence tomography angiography,” J. Biophotonics 12(10), e201900008 (2019).
[Crossref]

Johnstone, I. M.

D. L. Donoho and I. M. Johnstone, “Ideal spatial adaptation by wavelet shrinkage,” Biometrika 81(3), 425–455 (1994).
[Crossref]

Jørgensen, T. M.

B. Sander, M. Larsen, L. Thrane, J. L. Hougaard, and T. M. Jørgensen, “Enhanced optical coherence tomography imaging by multiple scan averaging,” Br. J. Ophthalmol. 89(2), 207–212 (2005).
[Crossref]

Kashani, A. H.

A. Y. Kim, D. C. Rodger, A. Shahidzadeh, Z. Chu, N. Koulisis, B. Burkemper, X. Jiang, K. L. Pepple, R. K. Wang, C. A. Puliafito, N. A. Rao, and A. H. Kashani, “Quantifying retinal microvascular changes in uveitis using spectral-domain optical coherence tomography angiography,” Am. J. Ophthalmol. 171, 101–112 (2016).
[Crossref]

Kim, A. Y.

A. Y. Kim, D. C. Rodger, A. Shahidzadeh, Z. Chu, N. Koulisis, B. Burkemper, X. Jiang, K. L. Pepple, R. K. Wang, C. A. Puliafito, N. A. Rao, and A. H. Kashani, “Quantifying retinal microvascular changes in uveitis using spectral-domain optical coherence tomography angiography,” Am. J. Ophthalmol. 171, 101–112 (2016).
[Crossref]

Kingsbury, N. C.

I. W. Selesnick, R. G. Baraniuk, and N. C. Kingsbury, “The dual-tree complex wavelet transform,” IEEE Signal Process Mag. 22(6), 123–151 (2005).
[Crossref]

Kluczewski, J.

Koulisis, N.

A. Y. Kim, D. C. Rodger, A. Shahidzadeh, Z. Chu, N. Koulisis, B. Burkemper, X. Jiang, K. L. Pepple, R. K. Wang, C. A. Puliafito, N. A. Rao, and A. H. Kashani, “Quantifying retinal microvascular changes in uveitis using spectral-domain optical coherence tomography angiography,” Am. J. Ophthalmol. 171, 101–112 (2016).
[Crossref]

Kraus, M. F.

Y. Jia, O. Tan, J. Tokayer, B. Potsaid, Y. Wang, J. J. Liu, M. F. Kraus, H. Subhash, J. G. Fujimoto, J. Hornegger, and D. Huang, “Split-spectrum amplitude-decorrelation angiography with optical coherence tomography,” Opt. Express 20(4), 4710–4725 (2012).
[Crossref]

J. Hossbach, L. Husvogt, M. F. Kraus, J. G. Fujimoto, and A. K. Maier, “Deep oct angiography image generation for motion artifact suppression,” in Bildverarbeitung fur die Medizin 2020, T. Tolxdorff, T. M. Deserno, H. Handels, A. Maier, K. H. Maier-Hein, and C. Palm, eds. (Springer, 2020), pp. 248–253.

Larsen, M.

B. Sander, M. Larsen, L. Thrane, J. L. Hougaard, and T. M. Jørgensen, “Enhanced optical coherence tomography imaging by multiple scan averaging,” Br. J. Ophthalmol. 89(2), 207–212 (2005).
[Crossref]

Lee, A. Y.

C. S. Lee, A. J. Tyring, Y. Wu, S. Xiao, A. S. Rokem, N. P. DeRuyter, Q. Zhang, A. Tufail, R. K. Wang, and A. Y. Lee, “Generating retinal flow maps from structural optical coherence tomography with artificial intelligence,” Sci. Rep. 9(1), 5694 (2019).
[Crossref]

Lee, C. S.

C. S. Lee, A. J. Tyring, Y. Wu, S. Xiao, A. S. Rokem, N. P. DeRuyter, Q. Zhang, A. Tufail, R. K. Wang, and A. Y. Lee, “Generating retinal flow maps from structural optical coherence tomography with artificial intelligence,” Sci. Rep. 9(1), 5694 (2019).
[Crossref]

Lei, J.

A. Uji, S. Balasubramanian, J. Lei, E. Baghdasaryan, M. Al-Sheikh, E. Borrelli, and S. R. Sadda, “Multiple enface image averaging for enhanced optical coherence tomography angiography imaging,” Acta Ophthalmol. 96(7), e820–e827 (2018).
[Crossref]

Li, P.

Lin, C.

D. Huang, E. Swanson, C. Lin, J. Schuman, W. Stinson, W. Chang, M. Hee, T. Flotte, K. Gregory, and C. Puliafito, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991).
[Crossref]

Liu, G.

X. Liu, Z. Huang, Z. Wang, C. Wen, Z. Jiang, Z. Yu, J. Liu, G. Liu, X. Huang, A. Maier, Q. Ren, and Y. Lu, “A deep learning based pipeline for optical coherence tomography angiography,” J. Biophotonics 12(10), e201900008 (2019).
[Crossref]

Liu, J.

J. Yang, Y. Hu, L. Fang, J. Cheng, and J. Liu, “Universal digital filtering for denoising volumetric retinal oct and oct angiography in 3d shearlet domain,” Opt. Lett. 45(3), 694–697 (2020).
[Crossref]

X. Liu, Z. Huang, Z. Wang, C. Wen, Z. Jiang, Z. Yu, J. Liu, G. Liu, X. Huang, A. Maier, Q. Ren, and Y. Lu, “A deep learning based pipeline for optical coherence tomography angiography,” J. Biophotonics 12(10), e201900008 (2019).
[Crossref]

Liu, J. J.

Liu, X.

X. Liu, Z. Huang, Z. Wang, C. Wen, Z. Jiang, Z. Yu, J. Liu, G. Liu, X. Huang, A. Maier, Q. Ren, and Y. Lu, “A deep learning based pipeline for optical coherence tomography angiography,” J. Biophotonics 12(10), e201900008 (2019).
[Crossref]

P. Li, Z. Huang, S. Yang, X. Liu, Q. Ren, and P. Li, “Adaptive classifier allows enhanced flow contrast in oct angiography using a histogram-based motion threshold and 3d hessian analysis-based shape filtering,” Opt. Lett. 42(23), 4816–4819 (2017).
[Crossref]

Lu, Y.

X. Liu, Z. Huang, Z. Wang, C. Wen, Z. Jiang, Z. Yu, J. Liu, G. Liu, X. Huang, A. Maier, Q. Ren, and Y. Lu, “A deep learning based pipeline for optical coherence tomography angiography,” J. Biophotonics 12(10), e201900008 (2019).
[Crossref]

Lu, Z.

Ma, Z.

Maduro, C.

Maier, A.

X. Liu, Z. Huang, Z. Wang, C. Wen, Z. Jiang, Z. Yu, J. Liu, G. Liu, X. Huang, A. Maier, Q. Ren, and Y. Lu, “A deep learning based pipeline for optical coherence tomography angiography,” J. Biophotonics 12(10), e201900008 (2019).
[Crossref]

Maier, A. K.

J. Hossbach, L. Husvogt, M. F. Kraus, J. G. Fujimoto, and A. K. Maier, “Deep oct angiography image generation for motion artifact suppression,” in Bildverarbeitung fur die Medizin 2020, T. Tolxdorff, T. M. Deserno, H. Handels, A. Maier, K. H. Maier-Hein, and C. Palm, eds. (Springer, 2020), pp. 248–253.

Mandell, S. P.

A. J. Deegan, S. P. Mandell, and R. K. Wang, “Optical coherence tomography correlates multiple measures of tissue damage following acute burn injury,” Quant. Imaging. Med. Surg. 9(5), 731–741 (2019).
[Crossref]

Mardin, C. Y.

Mayer, M. A.

M. A. Mayer, A. Borsdorf, M. Wagner, J. Hornegger, C. Y. Mardin, and R. P. Tornow, “Wavelet denoising of multiframe optical coherence tomography data,” Biomed. Opt. Express 3(3), 572–589 (2012).
[Crossref]

S. Chitchian, M. A. Mayer, A. Boretsky, F. J. van Kuijk, and M. Motamedi, “Retinal optical coherence tomography image enhancement via shrinkage denoising using double-density dual-tree complex wavelet transform,” J. Biomed. Opt. 17(11), 116009 (2012).
[Crossref]

McLaughlin, R. A.

P. Gong, S. Es’haghian, F. M. Wood, D. D. Sampson, and R. A. McLaughlin, “Optical coherence tomography angiography for longitudinal monitoring of vascular changes in human cutaneous burns,” Exp. Dermatol. 25(9), 722–724 (2016).
[Crossref]

Milanfar, P.

T. B. Dubose, D. Cunefare, E. Cole, P. Milanfar, J. A. Izatt, and S. Farsiu, “Statistical models of signal and noise and fundamental limits of segmentation accuracy in retinal optical coherence tomography,” IEEE Trans. Med. Imaging 37(9), 1978–1988 (2018).
[Crossref]

Motamedi, M.

S. Chitchian, M. A. Mayer, A. Boretsky, F. J. van Kuijk, and M. Motamedi, “Retinal optical coherence tomography image enhancement via shrinkage denoising using double-density dual-tree complex wavelet transform,” J. Biomed. Opt. 17(11), 116009 (2012).
[Crossref]

Park, B. H.

Pepple, K. L.

A. Y. Kim, D. C. Rodger, A. Shahidzadeh, Z. Chu, N. Koulisis, B. Burkemper, X. Jiang, K. L. Pepple, R. K. Wang, C. A. Puliafito, N. A. Rao, and A. H. Kashani, “Quantifying retinal microvascular changes in uveitis using spectral-domain optical coherence tomography angiography,” Am. J. Ophthalmol. 171, 101–112 (2016).
[Crossref]

Petzold, A.

A. Petzold, Optical Coherence Tomography to Assess Neurodegeneration in Multiple Sclerosis (Springer, 2016), pp. 131–141.

Pierce, M. C.

Pijewska, E.

Potsaid, B.

Puliafito, C.

D. Huang, E. Swanson, C. Lin, J. Schuman, W. Stinson, W. Chang, M. Hee, T. Flotte, K. Gregory, and C. Puliafito, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991).
[Crossref]

Puliafito, C. A.

A. Y. Kim, D. C. Rodger, A. Shahidzadeh, Z. Chu, N. Koulisis, B. Burkemper, X. Jiang, K. L. Pepple, R. K. Wang, C. A. Puliafito, N. A. Rao, and A. H. Kashani, “Quantifying retinal microvascular changes in uveitis using spectral-domain optical coherence tomography angiography,” Am. J. Ophthalmol. 171, 101–112 (2016).
[Crossref]

Rabiolo, A.

M. Battaglia Parodi, A. Rabiolo, M. V. Cicinelli, P. Iacono, F. Romano, and F. Bandello, “Quantitative analysis of optical coherence tomography angiography in adult-onset foveomacular vitelliform dystrophy,” Retina 38(2), 237–244 (2018).
[Crossref]

Ran, M.

Rao, N. A.

A. Y. Kim, D. C. Rodger, A. Shahidzadeh, Z. Chu, N. Koulisis, B. Burkemper, X. Jiang, K. L. Pepple, R. K. Wang, C. A. Puliafito, N. A. Rao, and A. H. Kashani, “Quantifying retinal microvascular changes in uveitis using spectral-domain optical coherence tomography angiography,” Am. J. Ophthalmol. 171, 101–112 (2016).
[Crossref]

Reif, R.

Ren, Q.

X. Liu, Z. Huang, Z. Wang, C. Wen, Z. Jiang, Z. Yu, J. Liu, G. Liu, X. Huang, A. Maier, Q. Ren, and Y. Lu, “A deep learning based pipeline for optical coherence tomography angiography,” J. Biophotonics 12(10), e201900008 (2019).
[Crossref]

P. Li, Z. Huang, S. Yang, X. Liu, Q. Ren, and P. Li, “Adaptive classifier allows enhanced flow contrast in oct angiography using a histogram-based motion threshold and 3d hessian analysis-based shape filtering,” Opt. Lett. 42(23), 4816–4819 (2017).
[Crossref]

Rodger, D. C.

A. Y. Kim, D. C. Rodger, A. Shahidzadeh, Z. Chu, N. Koulisis, B. Burkemper, X. Jiang, K. L. Pepple, R. K. Wang, C. A. Puliafito, N. A. Rao, and A. H. Kashani, “Quantifying retinal microvascular changes in uveitis using spectral-domain optical coherence tomography angiography,” Am. J. Ophthalmol. 171, 101–112 (2016).
[Crossref]

Rokem, A. S.

C. S. Lee, A. J. Tyring, Y. Wu, S. Xiao, A. S. Rokem, N. P. DeRuyter, Q. Zhang, A. Tufail, R. K. Wang, and A. Y. Lee, “Generating retinal flow maps from structural optical coherence tomography with artificial intelligence,” Sci. Rep. 9(1), 5694 (2019).
[Crossref]

Romano, A.

T. E. de Carlo, A. Romano, N. K. Waheed, and J. S. Duker, “A review of optical coherence tomography angiography (octa),” Int. J. Retin Vitr. 1(1), 5 (2015).
[Crossref]

Romano, F.

M. Battaglia Parodi, A. Rabiolo, M. V. Cicinelli, P. Iacono, F. Romano, and F. Bandello, “Quantitative analysis of optical coherence tomography angiography in adult-onset foveomacular vitelliform dystrophy,” Retina 38(2), 237–244 (2018).
[Crossref]

Rutkowski, A.

Sadda, S. R.

A. Uji, S. Balasubramanian, J. Lei, E. Baghdasaryan, M. Al-Sheikh, E. Borrelli, and S. R. Sadda, “Multiple enface image averaging for enhanced optical coherence tomography angiography imaging,” Acta Ophthalmol. 96(7), e820–e827 (2018).
[Crossref]

Sampson, D. D.

P. Gong, S. Es’haghian, F. M. Wood, D. D. Sampson, and R. A. McLaughlin, “Optical coherence tomography angiography for longitudinal monitoring of vascular changes in human cutaneous burns,” Exp. Dermatol. 25(9), 722–724 (2016).
[Crossref]

Sander, B.

B. Sander, M. Larsen, L. Thrane, J. L. Hougaard, and T. M. Jørgensen, “Enhanced optical coherence tomography imaging by multiple scan averaging,” Br. J. Ophthalmol. 89(2), 207–212 (2005).
[Crossref]

Schuman, J.

D. Huang, E. Swanson, C. Lin, J. Schuman, W. Stinson, W. Chang, M. Hee, T. Flotte, K. Gregory, and C. Puliafito, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991).
[Crossref]

Schwartz, D.

Selesnick, I. W.

I. W. Selesnick, R. G. Baraniuk, and N. C. Kingsbury, “The dual-tree complex wavelet transform,” IEEE Signal Process Mag. 22(6), 123–151 (2005).
[Crossref]

L. Sendur and I. W. Selesnick, “Bivariate shrinkage with local variance estimation,” IEEE Signal Process Lett. 9(12), 438–441 (2002).
[Crossref]

Sendur, L.

L. Sendur and I. W. Selesnick, “Bivariate shrinkage with local variance estimation,” IEEE Signal Process Lett. 9(12), 438–441 (2002).
[Crossref]

Serranho, P.

Shahidzadeh, A.

A. Y. Kim, D. C. Rodger, A. Shahidzadeh, Z. Chu, N. Koulisis, B. Burkemper, X. Jiang, K. L. Pepple, R. K. Wang, C. A. Puliafito, N. A. Rao, and A. H. Kashani, “Quantifying retinal microvascular changes in uveitis using spectral-domain optical coherence tomography angiography,” Am. J. Ophthalmol. 171, 101–112 (2016).
[Crossref]

Shao, Z.

Sikorski, B. L.

Stinson, W.

D. Huang, E. Swanson, C. Lin, J. Schuman, W. Stinson, W. Chang, M. Hee, T. Flotte, K. Gregory, and C. Puliafito, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991).
[Crossref]

Subhash, H.

Subhush, H. M.

L. An, H. M. Subhush, D. J. Wilson, and R. K. Wang, “High-resolution wide-field imaging of retinal and choroidal blood perfusion with optical microangiography,” J. Biomed. Opt. 15(2), 026011 (2010).
[Crossref]

Swanson, E.

D. Huang, E. Swanson, C. Lin, J. Schuman, W. Stinson, W. Chang, M. Hee, T. Flotte, K. Gregory, and C. Puliafito, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991).
[Crossref]

Szkulmowska, A.

Szkulmowski, M.

Tan, O.

Tearney, G. J.

Thrane, L.

B. Sander, M. Larsen, L. Thrane, J. L. Hougaard, and T. M. Jørgensen, “Enhanced optical coherence tomography imaging by multiple scan averaging,” Br. J. Ophthalmol. 89(2), 207–212 (2005).
[Crossref]

Tokayer, J.

Tornow, R. P.

Tufail, A.

C. S. Lee, A. J. Tyring, Y. Wu, S. Xiao, A. S. Rokem, N. P. DeRuyter, Q. Zhang, A. Tufail, R. K. Wang, and A. Y. Lee, “Generating retinal flow maps from structural optical coherence tomography with artificial intelligence,” Sci. Rep. 9(1), 5694 (2019).
[Crossref]

Tyring, A. J.

C. S. Lee, A. J. Tyring, Y. Wu, S. Xiao, A. S. Rokem, N. P. DeRuyter, Q. Zhang, A. Tufail, R. K. Wang, and A. Y. Lee, “Generating retinal flow maps from structural optical coherence tomography with artificial intelligence,” Sci. Rep. 9(1), 5694 (2019).
[Crossref]

Uji, A.

A. Uji, S. Balasubramanian, J. Lei, E. Baghdasaryan, M. Al-Sheikh, E. Borrelli, and S. R. Sadda, “Multiple enface image averaging for enhanced optical coherence tomography angiography imaging,” Acta Ophthalmol. 96(7), e820–e827 (2018).
[Crossref]

van Kuijk, F. J.

S. Chitchian, M. A. Mayer, A. Boretsky, F. J. van Kuijk, and M. Motamedi, “Retinal optical coherence tomography image enhancement via shrinkage denoising using double-density dual-tree complex wavelet transform,” J. Biomed. Opt. 17(11), 116009 (2012).
[Crossref]

Wagner, M.

Waheed, N. K.

T. E. de Carlo, A. Romano, N. K. Waheed, and J. S. Duker, “A review of optical coherence tomography angiography (octa),” Int. J. Retin Vitr. 1(1), 5 (2015).
[Crossref]

Wang, R. K.

A. J. Deegan, S. P. Mandell, and R. K. Wang, “Optical coherence tomography correlates multiple measures of tissue damage following acute burn injury,” Quant. Imaging. Med. Surg. 9(5), 731–741 (2019).
[Crossref]

C. S. Lee, A. J. Tyring, Y. Wu, S. Xiao, A. S. Rokem, N. P. DeRuyter, Q. Zhang, A. Tufail, R. K. Wang, and A. Y. Lee, “Generating retinal flow maps from structural optical coherence tomography with artificial intelligence,” Sci. Rep. 9(1), 5694 (2019).
[Crossref]

C.-L. Chen and R. K. Wang, “Optical coherence tomography based angiography,” Biomed. Opt. Express 8(2), 1056–1082 (2017).
[Crossref]

A. Y. Kim, D. C. Rodger, A. Shahidzadeh, Z. Chu, N. Koulisis, B. Burkemper, X. Jiang, K. L. Pepple, R. K. Wang, C. A. Puliafito, N. A. Rao, and A. H. Kashani, “Quantifying retinal microvascular changes in uveitis using spectral-domain optical coherence tomography angiography,” Am. J. Ophthalmol. 171, 101–112 (2016).
[Crossref]

R. Reif, U. Baran, and R. K. Wang, “Motion artifact and background noise suppression on optical microangiography frames using a naive bayes mask,” Appl. Opt. 53(19), 4164–4171 (2014).
[Crossref]

L. An, H. M. Subhush, D. J. Wilson, and R. K. Wang, “High-resolution wide-field imaging of retinal and choroidal blood perfusion with optical microangiography,” J. Biomed. Opt. 15(2), 026011 (2010).
[Crossref]

L. An and R. K. Wang, “In vivo volumetric imaging of vascular perfusion within human retina and choroids with optical micro-angiography,” Opt. Express 16(15), 11438–11452 (2008).
[Crossref]

R. K. Wang, S. L. Jacques, Z. Ma, S. Hurst, S. R. Hanson, and A. Gruber, “Three dimensional optical angiography,” Opt. Express 15(7), 4083–4097 (2007).
[Crossref]

Wang, Y.

Wang, Z.

X. Liu, Z. Huang, Z. Wang, C. Wen, Z. Jiang, Z. Yu, J. Liu, G. Liu, X. Huang, A. Maier, Q. Ren, and Y. Lu, “A deep learning based pipeline for optical coherence tomography angiography,” J. Biophotonics 12(10), e201900008 (2019).
[Crossref]

Wen, C.

X. Liu, Z. Huang, Z. Wang, C. Wen, Z. Jiang, Z. Yu, J. Liu, G. Liu, X. Huang, A. Maier, Q. Ren, and Y. Lu, “A deep learning based pipeline for optical coherence tomography angiography,” J. Biophotonics 12(10), e201900008 (2019).
[Crossref]

Werner, J. S.

Wilson, D. J.

L. An, H. M. Subhush, D. J. Wilson, and R. K. Wang, “High-resolution wide-field imaging of retinal and choroidal blood perfusion with optical microangiography,” J. Biomed. Opt. 15(2), 026011 (2010).
[Crossref]

Wood, F. M.

P. Gong, S. Es’haghian, F. M. Wood, D. D. Sampson, and R. A. McLaughlin, “Optical coherence tomography angiography for longitudinal monitoring of vascular changes in human cutaneous burns,” Exp. Dermatol. 25(9), 722–724 (2016).
[Crossref]

Wu, Y.

C. S. Lee, A. J. Tyring, Y. Wu, S. Xiao, A. S. Rokem, N. P. DeRuyter, Q. Zhang, A. Tufail, R. K. Wang, and A. Y. Lee, “Generating retinal flow maps from structural optical coherence tomography with artificial intelligence,” Sci. Rep. 9(1), 5694 (2019).
[Crossref]

Xiao, S.

C. S. Lee, A. J. Tyring, Y. Wu, S. Xiao, A. S. Rokem, N. P. DeRuyter, Q. Zhang, A. Tufail, R. K. Wang, and A. Y. Lee, “Generating retinal flow maps from structural optical coherence tomography with artificial intelligence,” Sci. Rep. 9(1), 5694 (2019).
[Crossref]

Yang, J.

Yang, S.

Yu, Z.

X. Liu, Z. Huang, Z. Wang, C. Wen, Z. Jiang, Z. Yu, J. Liu, G. Liu, X. Huang, A. Maier, Q. Ren, and Y. Lu, “A deep learning based pipeline for optical coherence tomography angiography,” J. Biophotonics 12(10), e201900008 (2019).
[Crossref]

Zawadzki, R. J.

Zhang, Q.

C. S. Lee, A. J. Tyring, Y. Wu, S. Xiao, A. S. Rokem, N. P. DeRuyter, Q. Zhang, A. Tufail, R. K. Wang, and A. Y. Lee, “Generating retinal flow maps from structural optical coherence tomography with artificial intelligence,” Sci. Rep. 9(1), 5694 (2019).
[Crossref]

Zhang, Y.

Zhou, J.

Acta Ophthalmol. (1)

A. Uji, S. Balasubramanian, J. Lei, E. Baghdasaryan, M. Al-Sheikh, E. Borrelli, and S. R. Sadda, “Multiple enface image averaging for enhanced optical coherence tomography angiography imaging,” Acta Ophthalmol. 96(7), e820–e827 (2018).
[Crossref]

Am. J. Ophthalmol. (1)

A. Y. Kim, D. C. Rodger, A. Shahidzadeh, Z. Chu, N. Koulisis, B. Burkemper, X. Jiang, K. L. Pepple, R. K. Wang, C. A. Puliafito, N. A. Rao, and A. H. Kashani, “Quantifying retinal microvascular changes in uveitis using spectral-domain optical coherence tomography angiography,” Am. J. Ophthalmol. 171, 101–112 (2016).
[Crossref]

Appl. Opt. (1)

Biomed. Opt. Express (3)

Biometrika (1)

D. L. Donoho and I. M. Johnstone, “Ideal spatial adaptation by wavelet shrinkage,” Biometrika 81(3), 425–455 (1994).
[Crossref]

Br. J. Ophthalmol. (1)

B. Sander, M. Larsen, L. Thrane, J. L. Hougaard, and T. M. Jørgensen, “Enhanced optical coherence tomography imaging by multiple scan averaging,” Br. J. Ophthalmol. 89(2), 207–212 (2005).
[Crossref]

Exp. Dermatol. (1)

P. Gong, S. Es’haghian, F. M. Wood, D. D. Sampson, and R. A. McLaughlin, “Optical coherence tomography angiography for longitudinal monitoring of vascular changes in human cutaneous burns,” Exp. Dermatol. 25(9), 722–724 (2016).
[Crossref]

IEEE Signal Process Lett. (1)

L. Sendur and I. W. Selesnick, “Bivariate shrinkage with local variance estimation,” IEEE Signal Process Lett. 9(12), 438–441 (2002).
[Crossref]

IEEE Signal Process Mag. (1)

I. W. Selesnick, R. G. Baraniuk, and N. C. Kingsbury, “The dual-tree complex wavelet transform,” IEEE Signal Process Mag. 22(6), 123–151 (2005).
[Crossref]

IEEE Trans. Med. Imaging (1)

T. B. Dubose, D. Cunefare, E. Cole, P. Milanfar, J. A. Izatt, and S. Farsiu, “Statistical models of signal and noise and fundamental limits of segmentation accuracy in retinal optical coherence tomography,” IEEE Trans. Med. Imaging 37(9), 1978–1988 (2018).
[Crossref]

IEEE Trans. Signal Process. (1)

M. Aharon, M. Elad, and A. Bruckstein, “K-svd: An algorithm for designing overcomplete dictionaries for sparse representation,” IEEE Trans. Signal Process. 54(11), 4311–4322 (2006).
[Crossref]

Int. J. Retin Vitr. (1)

T. E. de Carlo, A. Romano, N. K. Waheed, and J. S. Duker, “A review of optical coherence tomography angiography (octa),” Int. J. Retin Vitr. 1(1), 5 (2015).
[Crossref]

J. Biomed. Opt. (3)

L. An, H. M. Subhush, D. J. Wilson, and R. K. Wang, “High-resolution wide-field imaging of retinal and choroidal blood perfusion with optical microangiography,” J. Biomed. Opt. 15(2), 026011 (2010).
[Crossref]

S. Chitchian, M. A. Mayer, A. Boretsky, F. J. van Kuijk, and M. Motamedi, “Retinal optical coherence tomography image enhancement via shrinkage denoising using double-density dual-tree complex wavelet transform,” J. Biomed. Opt. 17(11), 116009 (2012).
[Crossref]

S. Chitchian, M. A. Fiddy, and N. M. Fried, “Denoising during optical coherence tomography of the prostate nerves via wavelet shrinkage using dual-tree complex wavelet transform,” J. Biomed. Opt. 14(1), 014031 (2009).
[Crossref]

J. Biophotonics (1)

X. Liu, Z. Huang, Z. Wang, C. Wen, Z. Jiang, Z. Yu, J. Liu, G. Liu, X. Huang, A. Maier, Q. Ren, and Y. Lu, “A deep learning based pipeline for optical coherence tomography angiography,” J. Biophotonics 12(10), e201900008 (2019).
[Crossref]

Opt. Express (6)

Opt. Lett. (3)

Quant. Imaging. Med. Surg. (1)

A. J. Deegan, S. P. Mandell, and R. K. Wang, “Optical coherence tomography correlates multiple measures of tissue damage following acute burn injury,” Quant. Imaging. Med. Surg. 9(5), 731–741 (2019).
[Crossref]

Retina (1)

M. Battaglia Parodi, A. Rabiolo, M. V. Cicinelli, P. Iacono, F. Romano, and F. Bandello, “Quantitative analysis of optical coherence tomography angiography in adult-onset foveomacular vitelliform dystrophy,” Retina 38(2), 237–244 (2018).
[Crossref]

Sci. Rep. (1)

C. S. Lee, A. J. Tyring, Y. Wu, S. Xiao, A. S. Rokem, N. P. DeRuyter, Q. Zhang, A. Tufail, R. K. Wang, and A. Y. Lee, “Generating retinal flow maps from structural optical coherence tomography with artificial intelligence,” Sci. Rep. 9(1), 5694 (2019).
[Crossref]

Science (1)

D. Huang, E. Swanson, C. Lin, J. Schuman, W. Stinson, W. Chang, M. Hee, T. Flotte, K. Gregory, and C. Puliafito, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991).
[Crossref]

Other (2)

J. Hossbach, L. Husvogt, M. F. Kraus, J. G. Fujimoto, and A. K. Maier, “Deep oct angiography image generation for motion artifact suppression,” in Bildverarbeitung fur die Medizin 2020, T. Tolxdorff, T. M. Deserno, H. Handels, A. Maier, K. H. Maier-Hein, and C. Palm, eds. (Springer, 2020), pp. 248–253.

A. Petzold, Optical Coherence Tomography to Assess Neurodegeneration in Multiple Sclerosis (Springer, 2016), pp. 131–141.



Figures (9)

Fig. 1. The schematic of the spectral-domain OCT system.
Fig. 2. Illustration of the neighborhood N(k).
Fig. 3. Flow diagram of the experiment.
Fig. 4. Flow chart of the proposed algorithm.
Fig. 5. B-frame blood flow and denoised images in the wavelet domain. (a) B-frame blood flow image. (b)–(d) Wavelet sub-bands at the first to third wavelet scales with a decomposition direction of −75°. (e)–(g) Wavelet sub-bands at the first to third wavelet scales with a decomposition direction of −75°. The corresponding adaptive parameters T are 0.42, 0.47, and 0.61, respectively.
Fig. 6. B-scan image denoising comparison. (a) Retinal blood flow signal collected by the experimental system and processed with the OMAG algorithm. (b) Retinal blood flow image after noise reduction.
Fig. 7. Comparison of denoised en-face images of OCTA data. (a) Original image; the yellow box marks the region used to calculate the CNR index, and the yellow arrow points to a noise-signal area. (b) Image filtered with the anisotropic diffusion method. (c) Image filtered with the median method. (d) Image filtered with the K-SVD method. (e) Image filtered with the dual-tree complex wavelet transform method. (f) Image filtered with the proposed method.
Fig. 8. Qualitative comparison of denoised en-face OCTA images.
Fig. 9. (a), (d), and (g) Segmented layers of the human retina. (b) and (c) Blood vessel distribution images extracted from the NFL–OPL segmentation layer of the retina before and after noise reduction. (e) and (f) Blood vessel distribution images extracted from the IPL–OPL segmentation layer before and after noise reduction. (h) and (i) Blood vessel distribution images extracted from the ZERO–OPL segmentation layer before and after noise reduction.

Tables (2)

Table 1. OCT angiography acquisition mode parameters.
Table 2. OCT angiography acquisition mode parameters.

Equations (14)

$$\mathrm{Flow}_{\mathrm{OMAG}}(x,z) = \frac{1}{N-1}\sum_{i=0}^{N-1}\left| C_{i+1}(x,z) - C_i(x,z) \right|$$
$$m_s = a_s + n_s$$
$$w = Wm$$
$$m = W^{T}w$$
$$\hat{w}_{s_j} = \frac{\left(\sqrt{w_{s_j}^2 + w_{s_{j+1}}^2} - \dfrac{\sqrt{3}\,\sigma_n^2}{\sigma}\right)_{+}}{\sqrt{w_{s_j}^2 + w_{s_{j+1}}^2}}\, w_{s_j}$$
$$(g)_{+} = \begin{cases} 0, & \text{if } g < 0 \\ g, & \text{otherwise} \end{cases}$$
$$\hat{\sigma}_n = \mathrm{median}(|w_s|)/0.6745$$
$$T = \lambda\,\hat{\sigma}_n$$
$$\lambda = \sqrt{2\log_2 N}$$
$$\hat{\sigma} = \sqrt{\left(\hat{\sigma}_y^2 - \hat{\sigma}_n^2\right)_{+}}$$
$$\hat{\sigma}_y^2 = \frac{1}{M}\sum_{y_i \in N(k)} y_i^2$$
$$\mathrm{PSNR} = 10\log_{10}\!\left(\frac{\max(I)^2}{\sigma^2}\right)$$
$$\mathrm{CNR} = \frac{1}{H}\sum_{h=1}^{H}\left[10\log\!\left(\frac{\mu_h-\mu_b}{\sqrt{\sigma_h^2+\sigma_b^2}}\right)\right]$$
$$\mathrm{SSIM}(x,\hat{x}) = \frac{(2\mu_x\mu_{\hat{x}}+C_1)(2\sigma_{x\hat{x}}+C_2)}{(\mu_x^2+\mu_{\hat{x}}^2+C_1)(\sigma_x^2+\sigma_{\hat{x}}^2+C_2)}$$
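The shrinkage-related formulas above (the bivariate shrinkage rule, the (g)+ operator, the median-based noise estimate, and the local variance over the neighborhood N(k)) can be sketched in plain NumPy. This is an illustrative re-implementation, not the authors' code; the neighborhood size and the edge padding are assumptions:

```python
import numpy as np

def box_mean(a, size):
    """Sliding-window mean with edge padding (the neighborhood N(k))."""
    pad = size // 2
    ap = np.pad(a, pad, mode="edge")
    out = np.zeros_like(a, dtype=float)
    for dy in range(size):
        for dx in range(size):
            out += ap[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / size**2

def estimate_noise_sigma(w_finest):
    """Robust noise estimate from finest-scale coefficients: median(|w|)/0.6745."""
    return np.median(np.abs(w_finest)) / 0.6745

def bivariate_shrink(w_child, w_parent, sigma_n, window=7):
    """Bivariate shrinkage with local variance estimation.

    w_child and w_parent are same-shaped real arrays: a wavelet subband
    at scale j and its parent subband at scale j+1 (upsampled to match).
    """
    eps = 1e-12
    # Local marginal variance sigma_y^2 over the neighborhood N(k).
    sigma_y2 = box_mean(w_child**2, window)
    # Signal std: sigma = sqrt((sigma_y^2 - sigma_n^2)_+).
    sigma = np.sqrt(np.maximum(sigma_y2 - sigma_n**2, 0.0)) + eps
    # Gain = (sqrt(w1^2 + w2^2) - sqrt(3) sigma_n^2 / sigma)_+ / sqrt(w1^2 + w2^2).
    r = np.sqrt(w_child**2 + w_parent**2)
    gain = np.maximum(r - np.sqrt(3.0) * sigma_n**2 / sigma, 0.0) / (r + eps)
    return gain * w_child
```

In the full pipeline, the real and imaginary parts of each directional sub-band of the dual-tree complex wavelet transform would be shrunk this way before the inverse transform reconstructs the denoised image.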

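The three evaluation metrics used in the paper (PSNR, CNR, and SSIM) are straightforward to compute. A hedged sketch using global statistics on single-channel images; the stabilizing constants C1 and C2 and the ROI handling are assumptions, not values from the paper:

```python
import numpy as np

def psnr(img, sigma):
    """Peak signal-to-noise ratio against a residual-noise std sigma."""
    return 10.0 * np.log10(img.max()**2 / sigma**2)

def cnr(rois, background):
    """Mean contrast-to-noise ratio over H regions of interest."""
    mu_b, var_b = background.mean(), background.var()
    terms = [10.0 * np.log10((r.mean() - mu_b) / np.sqrt(r.var() + var_b))
             for r in rois]
    return float(np.mean(terms))

def ssim(x, y, c1=1e-4, c2=9e-4):
    """Global structural similarity index between images x and y."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))
```

Note that the CNR term is only defined when each ROI is brighter than the background (mu_h > mu_b), which holds for vessel regions against a dark background in an OCTA en-face image.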