Fast OCT image enhancement method based on the sigmoid-energy conservation equation

Open Access

Abstract

Optical coherence tomography (OCT) is an important medical diagnostic technology, but OCT images are inevitably degraded by speckle noise and other factors, which greatly reduce image quality. To improve OCT image quality quickly, a fast OCT image enhancement method based on an image fusion equation is proposed. The proposed method consists of three parts: edge detection, noise suppression, and image fusion. An improved wave algorithm is used to detect the image edges and their fine features, and an averaging-of-uncorrelated-images method is used to suppress speckle noise and improve image contrast. To sharpen image edges while suppressing speckle noise, a sigmoid-energy conservation equation (SE equation) is designed to fuse the edge detection image and the noise suppression image. The proposed method was tested on two publicly available datasets. Results show that it effectively improves image contrast and sharpens image edges while suppressing speckle noise. Compared with other state-of-the-art methods, the proposed method achieves a better enhancement effect at higher speed: under the same or better enhancement effect, it is 2∼34 times faster than the other methods.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optical coherence tomography (OCT) is a non-invasive, high-resolution biological tissue imaging technology that has been widely used in ophthalmology and clinical diagnosis. However, OCT image quality is degraded by many factors that are inherent to all OCT imaging methodologies [1]. In particular, speckle noise seriously reduces the quality of OCT images, and absorption and scattering in highly scattering tissue reduce image contrast [2]. Low-quality images make it particularly challenging for doctors to identify the fine features of biological tissue during clinical examination. Since OCT images are an important basis for medical diagnosis [3,4], their quality cannot be ignored, and OCT image enhancement has therefore long been a research focus in medical image processing.

In recent years, OCT image enhancement methods have fallen into two categories: hardware-based and software-based. Hardware-based methods require modification of the existing imaging system and are therefore not easily adapted to standard medical OCT equipment. Software-based methods are efficient, economical, and fast. Among the many software-based OCT enhancement algorithms, most studies improve image quality by suppressing speckle noise. In [5], a total variation method was used to suppress speckle noise in OCT images; it preserves edge details but may introduce a staircase effect. Wavelet transforms [6] can also remove speckle noise well, but choosing an appropriate threshold is difficult [7], and they may introduce unwanted visual artifacts. The nonlocal means method [8] suppresses speckle noise while preserving edge details, but its computational complexity is high: increasing the window size improves the denoising effect but causes the computation to surge. Sparse representation methods [9,10] achieve good speckle noise reduction but depend heavily on the dictionary; structures not represented in the training data cannot be well reconstructed. Methods based on low-rank representation [11,12] can remove higher-rank speckle noise from OCT images, but they are slow and cannot meet real-time requirements. Deep learning methods [13] denoise well, but their performance depends on the quality and quantity of training data, and their generalization ability is poor.

Compared with ordinary image enhancement, medical image enhancement has stricter requirements: speckle noise should be removed and image contrast improved as much as possible while the fine features of the image are preserved (these fine features may be early pathological evidence of human disease [4]). Improving the quality of medical images by speckle noise suppression alone is therefore insufficient; most denoising methods reduce image contrast and blur edges while suppressing speckle noise, and removing speckle noise while preserving fine features remains a challenging task. The demand for real-time processing in modern medicine is also increasingly urgent [3], yet most image enhancement algorithms are slow and may need tens or even hundreds of seconds to process one image. Improving the speed of image enhancement algorithms is therefore an equally important task.

To address these problems, this paper proposes a new OCT image enhancement method. An improved wave algorithm is used to detect image edges, the averaging-of-uncorrelated-images method is used to suppress speckle noise, and the edge detection and noise suppression results are then fused through the SE equation. Experimental results show that the proposed method improves the contrast of OCT images and preserves fine features while suppressing speckle noise. Both quantitative evaluation and visual quality show that, with an enhancement effect equal to or better than that of other methods, the proposed method has a clear speed advantage, running about 2∼34 times faster than other state-of-the-art methods.

2. Method

This paper proposes an OCT image enhancement method based on image fusion; its outline is shown in Fig. 1. The method consists of three parts: edge detection, noise suppression, and image fusion. Edge detection segments the biological tissue boundaries and fine features from the OCT image. Noise suppression removes speckle noise and improves the contrast between the retinal area and the background. Finally, the results of edge detection and noise suppression are fused by the SE equation. The proposed method improves the visibility of edge information while sufficiently suppressing speckle noise, achieving both a better enhancement effect and higher processing speed.

Fig. 1. The outline of the proposed method.

2.1 Edge detection

In this paper, an improved wave algorithm is used to detect the texture information of the OCT images. The wave algorithm is an image segmentation method recently proposed by our research group and has been used for retinal layer segmentation [3]. It regards the change of gray level in the image as the change of potential energy in an ideal fluid, thereby transforming the image segmentation problem into one of solving for potential energy. To improve the noise immunity of the algorithm, the solving process is also constrained by prior knowledge of the OCT image. The wave algorithm is composed of two parts: the wave potential energy equation and the potential energy correction equation.

The wave potential energy equation (the blue rectangles in Fig. 2) is designed according to the fluid potential energy equation in fluid mechanics, in which gh is the gravitational potential energy, v²/2 is the kinetic energy, and P/ρ is the pressure potential energy. In the wave algorithm, the gray level of the OCT image is taken as the fluid height (h), the gravitational acceleration is set to g=1, and the pressure potential energy P/ρ=0. With these definitions, the gravitational potential energy (gh) is replaced by the normalized gray value (φg), and the fluid velocity (V) by the normalized gray difference values v and vq, where v is the fluid velocity inside the template and vq the fluid velocity in front of the template (vq controls the magnitude of the fluid kinetic energy, so that interference areas are effectively removed [3]); σ is the speed regulation factor. After the OCT image is processed by the wave potential energy equation, a set of points containing the boundary lines (the green and blue areas in Fig. 2) is obtained. To extract the real boundary line from this point set, the potential energy correction equation (the red rectangles in Fig. 2) is needed.
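The full form of the potential energy computation is given in Fig. 2 and Ref. [3]. As a rough illustration only, the sketch below assumes that the energy at a pixel combines the normalized gray value with the two kinetic terms as φg + (v² − vq²)/(2σ), and that candidate boundary points are the pixels whose energy exceeds the column mean; the combination, the template offset q, and the selection rule are assumptions, not the published equation.

```python
import numpy as np

def wave_potential_energy(column, sigma=2.0, q=3):
    """Hypothetical sketch of the wave potential energy step for one A-scan.

    Assumed form: E = phi_g + (v**2 - v_q**2) / (2 * sigma), with g = 1 and
    P/rho = 0 as stated in the text; the exact equation is in Fig. 2 / Ref. [3].
    """
    col = column.astype(float) / max(float(column.max()), 1.0)  # normalized gray (phi_g)
    v = np.gradient(col)                  # fluid velocity inside the template
    v_q = np.roll(v, -q)                  # fluid velocity in front of the template
    energy = col + (v ** 2 - v_q ** 2) / (2.0 * sigma)
    candidates = np.where(energy > energy.mean())[0]   # candidate boundary points
    return energy, candidates
```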

Fig. 2. The outline of the improved wave algorithm. In the retina image on the right, the blue area and the green area are the boundary area, the red curve is the real boundary line extracted from the blue area and the green area.

The potential energy correction equation is composed of a geometric discriminant (K) and a gray discriminant function (C). K is the ratio of the slope of the area behind the boundary point to the slope of the area in front of it, and C is the ratio of the average gray level in front of the boundary point to the average gray level behind it. Only when K and C are both greater than 1 is the detection point considered a boundary point. After the OCT image is processed by the wave potential energy equation and the potential energy correction equation, the edge detection result (the red curve in Fig. 2) is obtained.
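As an illustration, the check below evaluates K and C for one candidate point along an A-scan; the window width w used for the slopes and means is an assumption, since the paper does not specify it here.

```python
import numpy as np

def is_boundary_point(column, idx, w=5):
    """Potential energy correction: geometric (K) and gray (C) discriminants.

    K = slope of the area behind the point / slope of the area in front of it,
    C = mean gray in front of the point / mean gray behind it; the point is a
    boundary only if both exceed 1.  The window width w is an assumption.
    """
    col = column.astype(float)
    if idx - w < 0 or idx + 1 + w > len(col):
        return False
    back = col[idx - w: idx]               # area behind the candidate point
    front = col[idx + 1: idx + 1 + w]      # area in front of the candidate point
    slope_back = np.polyfit(np.arange(w), back, 1)[0]
    slope_front = np.polyfit(np.arange(w), front, 1)[0]
    K = slope_back / (slope_front + 1e-12)    # geometric discriminant
    C = front.mean() / (back.mean() + 1e-12)  # gray discriminant
    return K > 1 and C > 1
```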

Although the wave algorithm performs better than other methods, its robustness is limited because the gray levels of images acquired by different OCT devices differ. To address this, an image preprocessing step is proposed to improve the robustness of the wave algorithm. First, the original image is filtered in the frequency domain. Then the average gray value (mean) of the image is calculated. Finally, the preprocessed image (Fig. 3(b)) is obtained according to the preprocessing formula (the gray rectangles in Fig. 2), in which Yi is the original gray value, yi is the gray value after preprocessing, and MAX is the maximum gray value in the original image. The number of speckle noise pixels in an OCT image is far larger than the number of useful signal pixels, and the gray values of speckle noise are mostly concentrated below the average gray value of the image; using mean as a threshold therefore removes a large amount of speckle noise while retaining most of the useful signal. Adding this preprocessing makes the wave algorithm applicable to images taken by various OCT devices and improves its segmentation accuracy and robustness. The improved wave algorithm is thus composed of three parts: image preprocessing, the wave potential energy equation, and the potential energy correction equation; its outline is shown in Fig. 2. After the original image is preprocessed, the image shown in Fig. 3(b) is obtained; processing it with the wave potential energy equation and the potential energy correction equation yields the edge detection result shown in Fig. 3(c).
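A minimal sketch of this preprocessing is shown below. The exact preprocessing formula appears only in Fig. 2, so the mapping used here (zeroing pixels below the mean and stretching the rest toward MAX) and the Gaussian stand-in for the frequency-domain filter are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess(image):
    """Assumed preprocessing: low-pass filter, then threshold at the mean gray.

    Pixels at or below the mean (mostly speckle) are set to zero; the rest are
    stretched toward MAX.  The true formula is in the gray rectangles of Fig. 2.
    """
    img = gaussian_filter(image.astype(float), sigma=1.0)  # stand-in for the frequency-domain filter
    mean, MAX = img.mean(), img.max()
    out = np.zeros_like(img)
    mask = img > mean
    out[mask] = (img[mask] - mean) / (MAX - mean + 1e-12) * MAX  # assumed stretch
    return out
```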

Fig. 3. Schematic diagram of algorithm results. (a) Original image. (b) Image after preprocessing. (c) Image of edge detection results.

2.2 Noise suppression

An OCT image differs from an ordinary camera image: a camera image is formed by a CCD or other photosensitive chip, whereas OCT is laser scanning imaging. Because of this imaging principle, the lines of an OCT image are independent of each other, i.e., the gray values of one A-scan are not affected by the surrounding A-scans. Two new OCT images with low-correlation speckle patterns (the down-sampling of odd rows in Fig. 4(B0) and of even rows in Fig. 4(C0)) can therefore be obtained by interlaced down-sampling of the original OCT image (Fig. 4(a)). The two new images are filtered by a simple frequency-domain filter, and the filtered images (Fig. 4(B1) and Fig. 4(C1)) are then interlaced up-sampled, with the classic bilinear interpolation algorithm used to compute the gray values of the new pixels. Figure 4(b) and Fig. 4(c) show the two OCT images obtained after interlaced up-sampling; their physical size and imaging area are consistent with the original OCT image, but their speckle patterns have low correlation. Since the signal-to-noise ratio can be improved by averaging multiple uncorrelated signals at the same sample position [14,15], averaging the two up-sampled images (Fig. 4(b) and Fig. 4(c)) improves the quality of the OCT image.
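The sketch below follows this split-filter-upsample-average procedure; the Gaussian smoothing stands in for the paper's unspecified frequency-domain filter, and the filter width is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def average_uncorrelated(image):
    """Build two low-correlation images from one B-scan and average them."""
    odd = image[0::2, :].astype(float)     # odd rows (Fig. 4(B0))
    even = image[1::2, :].astype(float)    # even rows (Fig. 4(C0))
    odd_f = gaussian_filter(odd, sigma=1.0)    # stand-in for the frequency-domain filter
    even_f = gaussian_filter(even, sigma=1.0)
    h = image.shape[0]
    # bilinear (order=1) up-sampling back to the original image size
    odd_up = zoom(odd_f, (h / odd_f.shape[0], 1.0), order=1)
    even_up = zoom(even_f, (h / even_f.shape[0], 1.0), order=1)
    return (odd_up[:h] + even_up[:h]) / 2.0    # averaged image
```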

Fig. 4. The flow diagram of image noise suppression. (a) Original image. (b) B1's up-sampling image. (c) C1's up-sampling image. (d) Image after preprocessing. (e) Image of noise suppression results. (B0) The down-sampling image of original image's odd rows. (B1) The filtered image of B0. (C0) The down-sampling image of original image's even rows. (C1) The filtered image of C0.

In order to further improve the image quality, the averaged image and the preprocessed image (Fig. 4(d)) are fused by the following formula:

$$e_i = \begin{cases} (d_i + cb_i)/2, & (d_i > D_{mean})\ \text{and}\ (cb_i > B_{mean}) \\ cb_i, & \text{otherwise} \end{cases}$$

where di is the gray value of the preprocessed image, cbi is the gray value of the averaged image, Dmean is the average gray value of the background area in the preprocessed image, Bmean is the average gray value of the background area in the averaged image, and ei is the gray value of the fused image. The fused image is smoothed with a Wiener filter to obtain the final noise suppression image. This procedure effectively improves image contrast and suppresses speckle noise. Although some boundary blur is inevitably introduced while suppressing speckle noise, this is remedied by the SE equation.
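A direct sketch of Eq. (1) followed by the Wiener smoothing is given below; bg_mask, the mask marking the background region used to estimate Dmean and Bmean, and the Wiener window size are assumptions.

```python
import numpy as np
from scipy.signal import wiener

def fuse_and_smooth(pre, avg, bg_mask, wiener_size=3):
    """Fuse the preprocessed image (d) and the averaged image (cb) by Eq. (1),
    then smooth with a Wiener filter to get the noise suppression image."""
    d = pre.astype(float)
    cb = avg.astype(float)
    D_mean = d[bg_mask].mean()     # mean background gray of the preprocessed image
    B_mean = cb[bg_mask].mean()    # mean background gray of the averaged image
    e = np.where((d > D_mean) & (cb > B_mean), (d + cb) / 2.0, cb)
    return wiener(e, mysize=wiener_size)
```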

2.3 Image fusion

To solve the problem that edge detection and noise suppression cannot be effectively fused, this paper proposes a new image fusion equation: the sigmoid-energy conservation (SE) equation. By fusing the noise suppression image and the edge detection image, the SE equation achieves an enhancement that preserves edges while smoothing noise. It consists of a sigmoid function and an energy conservation constraint:

$$\left\{\begin{array}{l} S_i = \dfrac{1}{1 + e^{-\beta x_i}} \times NI_i \\[4pt] \left| \sum S_i - \sum NI_i \right| \le \varepsilon \end{array}\right.$$

where i ∈ {r−(R−1)/2, …, r+(R−1)/2} and xi ∈ {−(R−1)/2, …, (R−1)/2}; R is the mapping range (R must be an odd number greater than 1, typically 7), r is the pixel coordinate in the image, NIi is the gray value of the noise suppression image, Si is the gray value of the fused image, β is the control parameter of the sigmoid function (its initial value can be set to 1), and ε is a small tolerance. The result of edge detection is a binary image in which boundary pixels have gray value 255 and the background has gray value 0. The SE equation adjusts the gray values of the noise suppression image only around boundary points (pixels with a gray value of 255 in the binary image); other pixels are left unchanged. Suppose an A-scan of the OCT image is as shown in Fig. 5, where the blue pixel is the boundary and the gray values near the boundary change linearly (the boundary area is blurred). With a mapping range of R=7, the SE equation changes the gray-value trend near the boundary from a linear function to a step function (the boundary area becomes sharp). To ensure that the sigmoid function does not change the topological structure of the OCT image while sharpening the boundary area, an energy conservation constraint is added to the SE equation: the sum of the gray values after the change must remain essentially equal to the sum before the change. Increasing the control parameter β drives the sigmoid function toward a step function, while decreasing β drives it toward a linear function. The energy conservation constraint adjusts the degree of gray-value change by adaptively varying β; in other words, enforcing the constraint is a process of finding the best β.
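A minimal sketch of the SE equation along one A-scan is given below. The search over a fixed grid of β values is our own interpretation of "finding the best β", and the grid and the handling of boundary points near the image border are assumptions.

```python
import numpy as np

def se_fuse_column(ni_col, edge_col, R=7, betas=np.linspace(0.1, 5.0, 50)):
    """Apply the SE equation along one A-scan (sketch).

    ni_col  : gray values of the noise suppression image (1-D array)
    edge_col: binary edge detection result (255 at boundary points, 0 elsewhere)
    R       : mapping range (odd, typically 7)
    For each boundary point, the R surrounding pixels are re-weighted by a
    sigmoid; beta is chosen so that the gray sum after the change stays as
    close as possible to the sum before it (energy conservation constraint).
    """
    half = (R - 1) // 2
    x = np.arange(-half, half + 1)
    out = ni_col.astype(float).copy()
    for r in np.where(edge_col == 255)[0]:
        if r - half < 0 or r + half >= len(ni_col):
            continue                              # skip points too close to the border
        ni = out[r - half: r + half + 1]
        best_s, best_err = ni, np.inf
        for beta in betas:                        # adaptive search for beta
            s = ni / (1.0 + np.exp(-beta * x))    # S_i = sigmoid(beta * x_i) * NI_i
            err = abs(s.sum() - ni.sum())         # energy conservation term
            if err < best_err:
                best_s, best_err = s, err
        out[r - half: r + half + 1] = best_s
    return out
```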

Fig. 5. The flow diagram of image fusion. The red line is the gray value change curve. The blue box is the real boundary. (a) The edge detection image. (b) The noise suppression image. (c) The fused image.

Thanks to the energy conservation constraint, the SE equation has a high degree of fault tolerance. When there is an error in edge detection (e.g., background is falsely detected as a boundary), the constraint makes the β value of the SE equation very small; a small β makes the sigmoid function approach a linear function, so the gray values near the false boundary change little or not at all. As the locally enlarged images in Fig. 5 show, fusing the edge detection image (Fig. 5(a)) and the noise suppression image (Fig. 5(b)) with the SE equation produces an enhanced image (Fig. 5(c)) with sharp boundaries. The SE equation removes speckle noise at edges without changing the topological structure of the image and does not alter the gray distribution of other areas. It is not limited to the image fusion used in this method: it can also combine a variety of edge detection methods with a variety of speckle noise suppression methods, with strong robustness and wide applicability.

3. Experiments and results

3.1 Data

The following experiments use the public human eye OCT dataset [9] (Dataset 1) and the public pig eye OCT dataset [16] (Dataset 2). Dataset 1 was acquired by SDOCT imaging systems from Bioptigen, Inc. (Research Triangle Park, NC). Dataset 2 was acquired with a Spectralis HRA & OCT (Heidelberg Engineering) and includes 35 sets at different eye positions, each containing 13 frames (768 × 496 pixels). More information about the two datasets can be found in the related papers [9,16]. Both datasets provide original images and standard reference images (Fig. 6). As shown in Fig. 6, the image quality in Dataset 1 is clearly better than that in Dataset 2, and testing on both datasets helps to compare the robustness of the different methods.

Fig. 6. Images from Dataset 1 and Dataset 2. (a) Human retinal original image. (b) Pig retinal original image. (c) Human retinal reference image. (d) Pig retinal reference image.

3.2 Image quality metrics

In this paper, cross-correlation (XCOR), peak signal-to-noise ratio (PSNR), contrast-to-noise ratio (CNR), and equivalent number of looks (ENL) are used to evaluate the enhancement effect of the different methods objectively. PSNR evaluates the level of image noise; a high-quality denoised image has a higher PSNR. CNR evaluates the contrast between foreground regions (the blue rectangles in Fig. 7(a) and Fig. 8(a)) and the background region (the red rectangles in Fig. 7(a) and Fig. 8(a)). ENL evaluates the smoothness of homogeneous areas (the green rectangles in Fig. 7(a) and Fig. 8(a)). XCOR measures the similarity between the enhanced image and the reference image; a larger XCOR means the enhanced image is more similar to the reference. These metrics are defined as follows:

$$PSNR = 10 \times {\log _{10}}\left( {\frac{{MA{X^2}}}{{\frac{1}{N}\sum\nolimits_{i = 1}^N {{{({I_i} - {{\hat{I}}_i})}^2}} }}} \right)$$
$$XCOR = \frac{{\sum\nolimits_{i = 1}^N {{I_i} \times {{\hat{I}}_i}} }}{{\sqrt {\left[ {\sum\nolimits_{i = 1}^N {{I_i}^2} } \right] \times \left[ {\sum\nolimits_{i = 1}^N {{{\hat{I}}_i}^2} } \right]} }}$$
$$CNR = \frac{1}{n}\left( {\sum\limits_{i = 1}^n {\frac{{{\mu_i} - {\mu_b}}}{{\sqrt {\sigma_i^2 + \sigma_b^2} }}} } \right)$$
$$ENL = \frac{1}{n}\left( \sum\limits_{i = 1}^n \frac{\mu_i^2}{\sigma_i^2} \right)$$

where μi and σi are the mean and standard deviation of the foreground areas, μb and σb are the mean and standard deviation of the background area, n is the number of ROIs (regions of interest), MAX is the maximum gray value of the image, N is the total number of pixels, I is the image processed by the algorithm, and $\hat{I}$ is the reference image.
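For reference, the four metrics can be computed as below; the ROI masks (fg_rois, bg_roi, rois) correspond to the colored rectangles in Fig. 7(a) and Fig. 8(a) and must be supplied by the user, and the default MAX of 255 assumes 8-bit images.

```python
import numpy as np

def psnr(I, I_ref, MAX=255.0):
    mse = np.mean((I.astype(float) - I_ref.astype(float)) ** 2)
    return 10.0 * np.log10(MAX ** 2 / mse)

def xcor(I, I_ref):
    I, I_ref = I.astype(float).ravel(), I_ref.astype(float).ravel()
    return np.sum(I * I_ref) / np.sqrt(np.sum(I ** 2) * np.sum(I_ref ** 2))

def cnr(image, fg_rois, bg_roi):
    """fg_rois: list of boolean masks for foreground ROIs; bg_roi: background mask."""
    mu_b, sd_b = image[bg_roi].mean(), image[bg_roi].std()
    vals = [(image[m].mean() - mu_b) / np.sqrt(image[m].std() ** 2 + sd_b ** 2)
            for m in fg_rois]
    return float(np.mean(vals))

def enl(image, rois):
    """rois: list of boolean masks for homogeneous regions."""
    return float(np.mean([image[m].mean() ** 2 / image[m].var() for m in rois]))
```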

Fig. 7. Results of different methods on Dataset 1. (a) Noisy image. (b)-(f) Results of the Noise2Void method, BM3D method, PNLM method, WGLRR method, and the proposed method.

Fig. 8. Results of different methods on Dataset 2. (a) Noisy image (five frames). (b)-(f) Results of the Noise2Void method, BM3D method, PNLM method, WGLRR method, and the proposed method.

3.3 Comparison of results

To objectively evaluate the enhancement effect of the proposed method, we tested it on both datasets and compared it against four state-of-the-art methods: the probability-based non-local means filter (PNLM) [8], Noise2Void (N2V) [17], the weighted group low-rank representation model (WGLRR) [18], and BM3D [19]. To ensure the accuracy of each method, we downloaded the source code of PNLM, WGLRR, and BM3D [8,18,19], and all processing was run on the same computer (a desktop PC with an Intel Core i5-7500 CPU at 3.4 GHz and 16 GB of RAM). In Dataset 1, we take a single frame (10_Raw Image.tif) as input; in Dataset 2, we take the average of 5 frames (from “17_ 5.tif” to “17_ 9.tif”) as input. N2V was trained and tested in Google Colab with training data drawn from Dataset 1 and Dataset 2: in Dataset 1 the 10th set was used for testing and the other sets for training, and in Dataset 2 the 17th set was used for testing and the other sets for training. To compare the robustness of the methods objectively, the parameter settings of each method were kept the same for Dataset 1 and Dataset 2.

The test results of each method on Dataset 1 and Dataset 2 are shown in Fig. 7 and Fig. 8. As these figures show, whether on a human retina image with better image quality or a pig retina image with poor image quality, the proposed method effectively removes speckle noise, improves image contrast, and preserves delicate image edges. To compare the enhancement effect of the methods more closely, we enlarged the white rectangular region (Fig. 8(a)) in the pig retina image, which is more severely polluted by speckle noise. As can be seen from Fig. 9, the suppression of speckle noise by PNLM, N2V, and BM3D is clearly insufficient. WGLRR over-smooths the image and also introduces unwanted visual artifacts (the red dotted-line area in Fig. 9). In contrast, the proposed method does not change the geometric structure of the image; sufficient speckle noise is removed and more edge details are preserved.

Fig. 9. Closeups of the white rectangle region in Fig. 8(a). (a) Noisy image. (b)-(f) Results of the Noise2Void method, BM3D method, PNLM method, WGLRR method, and the proposed method.

Table 1 and Table 2 give the quantitative results of the different methods on Dataset 1 and Dataset 2. Among these methods, the proposed method and WGLRR perform better on these metrics, while PNLM, N2V, and BM3D are clearly insufficient. The ENL of the proposed method is smaller than that of WGLRR because WGLRR over-smooths the image; although over-smoothing improves the quantitative results, it alters edge details and fine features of the OCT image, which should be avoided in medical image processing. In terms of robustness, the proposed method enhances images of different quality well. It does not require algorithm parameters to be set manually for different images (except R, whose value is generally between 3 and 11); the important parameters are obtained adaptively. By contrast, the enhancement effects of WGLRR, PNLM, N2V, and BM3D differ noticeably between images, and these methods reach their best enhancement only by setting different optimal parameters for different images. N2V suppresses noise well on OCT images with little speckle noise pollution, but when the noise pollution is severe its suppression effect deteriorates, especially when there are bright, isolated pixels in the OCT image. N2V requires neither noisy image pairs nor clean target images [17], which is very suitable for biomedical image data (clean ground truth is difficult to obtain from medical imaging equipment), but the noise suppression of such methods still needs improvement. In terms of speed, the proposed method is the fastest of all methods except N2V. It is about 29∼34 times faster than WGLRR: the main computational cost of WGLRR is the SVD (singular value decomposition) operator, whereas the proposed method is mainly based on differencing operations. Compared with PNLM and BM3D, whose enhancement effect is not as good, the proposed method is about 2∼7 times faster. N2V uses GPU acceleration (the other methods run on the CPU), so its data processing is faster, but this increases hardware cost. In summary, the proposed method performs better on OCT images than the other methods.

Table 1. Comparisons of various methods on Dataset 1

Table 2. Comparisons of various methods on Dataset 2

3.4 Discussion on limitations

The edge detection and noise suppression of the proposed method are two independent parts that are fused together by the SE equation. This structure allows various noise suppression methods and edge detection methods to be freely combined to achieve different enhancement effects. However, if the edge detection is poor, the image edges will be insufficiently sharpened or false edges will be produced; even though the SE equation has good fault tolerance for edge detection, it cannot completely solve these problems. Likewise, if the noise suppression is insufficient, image quality cannot be greatly improved no matter how good the edge detection is. To address these problems, we plan to make edge detection and noise suppression complement each other so that edge information and noise information are no longer treated in isolation. To encourage research in this field, we publish the algorithm code of this paper (Code 1 [20]) for reference.

4. Conclusion

To solve the problem that noise suppression and edge sharpening cannot be achieved simultaneously in image enhancement, this paper proposes a fast OCT image enhancement method based on image fusion. In this method, the improved wave algorithm is used for edge detection, the averaging-of-uncorrelated-images method is used to suppress noise, and the SE equation is designed to fuse the edge detection image and the noise suppression image. The proposed method was tested on two publicly available datasets and compared with state-of-the-art methods in terms of quantitative evaluation and visual quality. The results show that it achieves a better image enhancement effect and, for the same or better enhancement, has a clear speed advantage, which is beneficial to real-time imaging in OCT systems. The proposed method is thus a cost-effective image enhancement method that performs well on both human and pig retinal images. In addition, the SE equation can be used to fuse a variety of edge detection methods and speckle noise suppression methods and can be applied to multiple types of images, with strong robustness and wide applicability.

Funding

National Key Research and Development Program of China (2017YFC0109901).

Acknowledgments

Authors are grateful to S. Farsiu and M. A. Mayer for the release of the OCT dataset.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. O. Liba, M. D. Lew, E. D. Sorelle, R. Dutta, D. Sen, D. M. Moshfeghi, S. Chu, and A. de la Zerda, “Speckle-modulating optical coherence tomography in living mice and humans,” Nat. Commun. 8(1), 15845 (2017). [CrossRef]  

2. Y. B. Hu, C. Tang, M. Xu, and Z. K. Lei, “Selective retinex enhancement based on the clustering algorithm and block-matching 3D for optical coherence tomography images,” Appl. Opt. 58(36), 9861–9869 (2019). [CrossRef]  

3. S. L. Lou, X. D. Chen, X. Y. Han, J. Liu, Y. Wang, and H. Y. Cai, “Fast Retinal Segmentation Based on the Wave Algorithm,” IEEE Access 8, 53678–53686 (2020). [CrossRef]  

4. J. Wu, S. L. Lou, Z. T. Xiao, L. Geng, F. Zhang, W. Wang, and M. J. Liu, “Design of optical system for binocular fundus camera,” Comput Assist Surg 22(sup1), 61–69 (2017). [CrossRef]  

5. J. Duan, W. Lu, C. Tench, I. Gottlob, F. Proudlock, N. N. Samani, and L. Bai, “Denoising optical coherence tomography using second order total generalized variation decomposition,” Biomed. Signal Proces. 24, 120–127 (2016). [CrossRef]  

6. W. Habib, T. Sarwar, A. M. Siddiqui, and I. Touqir, “Wavelet denoising of multiframe optical coherence tomography data using similarity measures,” IET Image Proces. 11(1), 64–79 (2017). [CrossRef]  

7. F. Zaki, Y. Wang, and H. Su, “Noise adaptive wavelet thresholding for speckle noise removal in optical coherence tomography,” Biomed. Opt. Express 8(5), 2720–2731 (2017). [CrossRef]  

8. H. Yu, J. Gao, and A. Li, “Probability-based non-local means filter for speckle noise suppression in optical coherence tomography images,” Opt. Lett. 41(5), 994–997 (2016). [CrossRef]  

9. L. Fang, S. Li, Q. Nie, J. A. Izatt, C. A. Toth, and S. Farsiu, “Sparsity based denoising of spectral domain optical coherence tomography images,” Biomed. Opt. Express 3(5), 927–942 (2012). [CrossRef]  

10. A. Abbasi, A. Monadjemi, L. Fang, and H. Rabbani, “Optical coherence tomography retinal image reconstruction via nonlocal weighted sparse representation,” J. Biomed. Opt. 23(03), 1–11 (2018). [CrossRef]  

11. C. D. Tao, Y. Quan, D. W. K. Wong, G. C. M. Cheung, M. Akiba, and J. Liu, “Speckle reduction in 3d optical coherence tomography of retina by a-scan reconstruction,” IEEE Trans. Med. Imaging 35(10), 2270–2279 (2016). [CrossRef]  

12. S. Gu, Q. Xie, D. Meng, W. Zuo, X. Feng, and L. Zhang, “Weighted Nuclear Norm Minimization and Its Applications to Low Level Vision,” Int. J. Comput. Vis. 121(2), 183–208 (2017). [CrossRef]  

13. Y. Ma, X. Chen, W. Zhu, X. Cheng, D. Xiang, and F. Shi, “Speckle noise reduction in optical coherence tomography images based on edge-sensitive cGAN,” Biomed. Opt. Express 9(11), 5129–5146 (2018). [CrossRef]  

14. M. Bashkansky and J. Reintjes, “Statistics and reduction of speckle in optical coherence tomography,” Opt. Lett. 25(8), 545–547 (2000). [CrossRef]  

15. B. Baumann, C. W. Merkle, R. A. Leitgeb, M. Augustin, A. Wartak, M. Pircher, and C. K. Hitzenberger, “Signal averaging improves signal-to-noise in OCT images: But which approach works best, and when?” Biomed. Opt. Express 10(11), 5755–5775 (2019). [CrossRef]  

16. M. A. Mayer, A. Borsdorf, M. Wagner, J. Hornegger, C. Y. Mardin, and P. R. Tornow, “Wavelet denoising of multiframe optical coherence tomography data,” Biomed. Opt. Express 3(3), 572–589 (2012). [CrossRef]  

17. A. Krull, T.-O. Buchholz, and F. Jug, “Noise2void-learning denoising from single noisy images,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2019), pp. 2129–2137.

18. C. Tang, L. Cao, J. Chen, and X. Zheng, “Speckle noise reduction for optical coherence tomography images via non-local weighted group low-rank representation,” Laser Phys. Lett. 14(5), 056002 (2017). [CrossRef]  

19. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Trans. Image Process. 16(8), 2080–2095 (2007). [CrossRef]  

20. S. L. Lou and X. D. Chen, “OCT-image-SE-method,” Github (2020). https://github.com/loushiliang/OCT-image-SE-method.git.

Supplementary Material (1)

Code 1: Algorithm code in the paper


