Optica Publishing Group

Development and validation of a deep learning algorithm for distinguishing the nonperfusion area from signal reduction artifacts on OCT angiography

Open Access

Abstract

The capillary nonperfusion area (NPA) is a key quantifiable biomarker in the evaluation of diabetic retinopathy (DR) using optical coherence tomography angiography (OCTA). However, signal reduction artifacts caused by vitreous floaters, pupil vignetting, or defocus present significant obstacles to accurate quantification. We have developed a convolutional neural network, MEDnet-V2, to distinguish NPA from signal reduction artifacts in 6×6 mm2 OCTA. The network achieves strong specificity and sensitivity for NPA detection across a wide range of DR severity and scan quality.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Capillary damage is a key biomarker in the evaluation of diabetic retinopathy (DR). With the availability of commercial optical coherence tomography angiography (OCTA) technology, ophthalmologists can easily acquire high-quality, three-dimensional images of the retinal and choroidal circulation with capillary-level detail, allowing quantification of vascular metrics that correlate with clinical severity and disease progression.

In recent years, many methods have been proposed to quantify capillary dropout on OCTA. These methods can be divided into two categories: vascular density-based methods and intercapillary size-based methods. Although vascular density-based methods [1] have been commercially adopted and widely used, they are less sensitive than intercapillary size-based methods in measuring nonperfusion area (NPA) [2,3]. Our group initiated automated quantification of NPA on 3 × 3 mm2 macular OCTA using an intercapillary size-based method [4,5], and demonstrated that NPA of the superficial vascular complex (SVC) in the retina is an important indicator of DR stage and progression [4–7]. To measure NPA over a wider field of view, we developed a deep learning-based solution, named MEDnet [8], for 6 × 6 mm2 OCTA images. MEDnet is a pixel-wise semantic segmentation network with a U-Net-like [9] architecture.

Many network structures with excellent feature expression ability have been proposed (e.g., U-Net, ResNet [10], Inception [11], DenseNet [12]). By referencing these state-of-the-art structures, researchers can readily build networks that meet their needs (e.g., R2U-Net [13], M2U-Net [14]). Recent advances in semantic segmentation using deep convolutional neural networks (CNNs) have greatly promoted the application of neural networks to medical image processing [15,16]. In ophthalmology, CNNs can detect retinopathies on OCT images, such as diabetic macular edema [17,18], age-related macular degeneration (AMD) [19,20], and drusen [21]. Researchers have also applied CNNs to segment retinal layers on OCT images [22–26], and used the powerful feature extraction capabilities of CNNs to generate retinal flow maps from structural OCT images [27].

In our previous work, MEDnet detected NPA well on 6 × 6 mm2 OCTA images, but it was susceptible to severe signal reduction artifacts caused by opacities anterior to the retina, pupil vignetting, or defocus. Signal reduction due to shadow and defocus affects OCTA with a wider field of view more profoundly, decreasing the specificity of NPA detection. In this study, we evaluate a new algorithm that can distinguish NPA from signal reduction artifacts.

2. Methods

2.1 Data acquisition

All OCTA scans were acquired over a 6 × 6 mm2 region using a 70-kHz commercial AngioVue OCT system (RTVue-XR; Optovue, Inc.) centered at 840 nm with a full-width half-maximum bandwidth of 45 nm. Two repeated B-scans were taken at each of 304 raster positions, and each B-scan consisted of 304 A-lines. The OCTA data were computed using the split-spectrum amplitude-decorrelation angiography (SSADA) algorithm [28]. Retinal layer boundaries [Fig. 1(A)] were segmented using a guided bidirectional graph search (GB-GS) algorithm [29]. Angiograms of the superficial vascular complex (SVC) [Fig. 1(B-C)] and reflectance images of the inner retina [Fig. 1(D-E)] were generated by projecting OCTA/OCT data within the slab of interest [7]. Thickness maps of the inner retina [Fig. 1(F)] were generated by projecting the distances between the inner limiting membrane (upper boundary) and the outer plexiform layer (lower boundary), excluding the contribution from retinal fluid.


Fig. 1 Data acquisition for MEDnet-V2. (A) Segmentation results of the retinal layer boundaries on a B-scan. (B) Definition of the superficial vascular complex (SVC) slab. (C) SVC angiogram produced by maximum projection of the OCTA data within the SVC slab. (D) Definition of the inner retina slab. (E) Reflectance image of the inner retina produced by mean projection of OCT data within the inner retina slab. (F) Thickness map of the inner retina. ILM — inner limiting membrane. NFL — nerve fiber layer. GCL — ganglion cell layer. IPL — inner plexiform layer. INL — inner nuclear layer. OPL — outer plexiform layer. ONL — outer nuclear layer. EZ — ellipsoid zone. RPE — retinal pigment epithelium. BM — Bruch’s membrane. SVC — Superficial vascular complex. DVC – Deep vascular complex. B – Boundary.


2.2 Network architecture

In our previous work, we treated NPA detection as a two-category segmentation task separating perfusion from nonperfusion. Although MEDnet limits the interference of signal reduction by including the OCT reflectance image of the SVC slab as a network input, the two-category constraint prevents the network from distinguishing capillary dropout from severe signal reduction artifacts, especially in scans with low signal strength (<55). The new approach proposed in this study assigns three possible categories: perfusion, nonperfusion, and signal reduction.

Figure 2 illustrates the architecture of MEDnet-V2. The input to the network consists of three parts [Fig. 2(B-D)]. Before feeding the en face image of the inner retinal tissue reflectance [Fig. 2(A)] into the network, we applied a multi-Gaussian filter (Eq. (1)) to produce a reflectance intensity map [Fig. 2(B)] and remove artifacts (e.g., due to large vessels) and noise:


Fig. 2 The brief network architecture of MEDnet-V2. (A) OCT reflectance image of the inner retina. (B) Gaussian-filtered reflectance intensity map of the inner retina. (C) Inner retinal thickness map. (D) The en face angiogram of the superficial vascular complex. (E1-E3) Three convolution networks with the same structure. (F) Detection result with probability maps for perfusion loss (blue) and signal reduction artifacts (yellow).


$$M=\varphi\left(\frac{1}{N}\sum_{i=1}^{N}G(h,\sigma_i)*\left(\frac{I}{\bar{I}}+\frac{1}{N}\right)\right).\tag{1}$$

Here, φ(·) is a rectifier that clamps any matrix element greater than 1 to 1. N is the number of Gaussian filters. G(h, σi) is a Gaussian filter of size h × h (h = 9, an empirically determined value) with standard deviation σi (σ = [9, 15, 21], again empirical values). * is the convolution operator, I is the image matrix, and Ī is the mean value of the image.
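As a concrete illustration (a sketch, not the authors' code), Eq. (1) can be implemented with NumPy/SciPy. The function name `reflectance_map` and the clamping implementation of φ are our assumptions; h = 9 and σ = [9, 15, 21] follow the text:

```python
import numpy as np
from scipy.ndimage import convolve

def reflectance_map(I, h=9, sigmas=(9, 15, 21)):
    """Multi-Gaussian reflectance intensity map, Eq. (1) (illustrative)."""
    N = len(sigmas)
    base = I / I.mean() + 1.0 / N          # I / I-bar + 1/N
    acc = np.zeros_like(base, dtype=float)
    for s in sigmas:
        # build an h-by-h Gaussian kernel with standard deviation s
        ax = np.arange(h) - (h - 1) / 2.0
        xx, yy = np.meshgrid(ax, ax)
        g = np.exp(-(xx ** 2 + yy ** 2) / (2 * s ** 2))
        g /= g.sum()
        acc += convolve(base, g, mode="reflect")   # convolution term
    M = acc / N                             # average over the N filters
    return np.minimum(M, 1.0)               # rectifier phi: clamp values above 1
```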

In the reflectance intensity map, both the fovea and shadow-affected areas show low reflectance. To distinguish them, we fed the thickness map of the inner retina [Fig. 2(C)], which shows low values around the fovea, to a subnetwork to remove its impact on the detection of signal reduction artifacts. After passing through two convolutional neural networks [Fig. 2(E1-E2)], the features from the reflectance intensity map and the inner retinal thickness map were added. The decision to add the features from these maps was made empirically, after determining that the network performed better with this operation than with alternatives (e.g., concatenation). The en face angiogram of the SVC [Fig. 2(D)] was then concatenated with the features from the previous networks and fed to the last network [Fig. 2(E3)]. Figure 2(F) shows the detection result, with probability maps for perfusion loss (blue) and signal reduction artifacts (yellow) overlaid on the SVC angiogram.
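The fusion strategy described above can be sketched with NumPy arrays standing in for feature tensors; the spatial and channel sizes here are hypothetical, not the network's actual dimensions:

```python
import numpy as np

# Hypothetical feature tensors from the two subnets (batch, H, W, channels)
feat_reflectance = np.random.rand(1, 76, 76, 32)   # from subnet E1 (reflectance map)
feat_thickness = np.random.rand(1, 76, 76, 32)     # from subnet E2 (thickness map)

# Element-wise addition: the fusion the authors found to work best empirically
fused = feat_reflectance + feat_thickness

# The SVC angiogram is then concatenated channel-wise before the final subnet E3
angiogram = np.random.rand(1, 76, 76, 1)
e3_input = np.concatenate([fused, angiogram], axis=-1)
```

Addition keeps the channel count fixed, while the final concatenation lets E3 see the angiogram alongside the fused artifact features.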

MEDnet-V2 comprises three sub-convolution networks, each with identical structure [Fig. 3(A)]. Each subnetwork uses a U-Net-like architecture. We modified the multi-scale module [Fig. 3(B)] and encoder-decoder modules of the previous version of MEDnet to enhance the network's feature representation capabilities by making it deeper. In each encoder and decoder block, we replaced plain connection blocks with residual blocks [Fig. 3(C-D)] from ResNet [10]. After each convolution layer, we added a batch normalization layer to accelerate training and reduce overfitting.


Fig. 3 (A) Network architecture of subnetworks in MEDnet-V2. (B) Multi-scale convolutional block. (C-D) Residual blocks from ResNet.


2.3 Training

2.3.1 Generation of ground truth and training data

To obtain accurate ground truth maps for training, three certified graders (trained technicians) manually delineated NPA and signal reduction artifacts using in-house graphical user interface software [Fig. 4(A)]. The software allowed graders to delineate NPA and signal reduction artifacts simultaneously on the SVC angiogram, using the reflectance intensity map as a reference for the signal reduction artifacts. To generate the final ground truth map from the three manual delineations, a voting method was employed: the category receiving a majority of votes (≥2/3) decided each pixel's identity (background (perfusion area), NPA (green), or signal reduction artifact (yellow)) [Fig. 4(B-C)]. For undecidable pixels, where each grader assigned a different category, the final ground truth was determined by discussion among the three graders.
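The voting rule can be sketched as follows; the `majority_vote` helper and the integer category codes are our own conventions for illustration:

```python
import numpy as np

def majority_vote(grader_maps, n_categories=3):
    """Pixel-wise majority vote over grader label maps (illustrative sketch).
    Category codes: 0 = perfusion (background), 1 = NPA, 2 = signal reduction."""
    stack = np.stack(grader_maps)                                 # (graders, H, W)
    # per-category vote counts at each pixel -> (n_categories, H, W)
    votes = np.stack([(stack == c).sum(axis=0) for c in range(n_categories)])
    winner = votes.argmax(axis=0)          # category with the most votes
    undecided = votes.max(axis=0) < 2      # no >= 2/3 majority reached
    return winner, undecided               # undecided pixels go to grader discussion
```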


Fig. 4 Manual delineation of ground truth for training. (A) The in-house graphical user interface software. (B) Three experts delineated ground truth maps for nonperfusion area (green) and signal reduction artifacts (yellow) overlaid on the superficial vascular complex angiograms. (C) The final ground truth map overlaid on the superficial vascular complex angiogram.


The input data set consisted of en face angiograms of the SVC [Fig. 5(A)], the inner retinal thickness map [Fig. 5(B)], the OCT reflectance image of the inner retina [Fig. 5(C)], and the corresponding manually delineated NPA and the regions affected by signal reduction artifacts [Fig. 5(D)].


Fig. 5 Representative input data set. (A) En face angiogram of superficial vascular complex from a patient with diabetic retinopathy. (B) Inner retinal thickness map. (C) Reflectance image acquired by projecting the reflectance OCT data within the inner retina. (D) The ground truth map of the nonperfusion area (green) and signal reduction artifact (yellow) overlaid on the superficial vascular complex angiogram.


The data set was collected from 180 participants in a clinical diabetic retinopathy (DR) study (76 healthy controls, 34 participants with diabetes without retinopathy, 31 participants with mild or moderate non-proliferative DR (NPDR), and 39 participants with severe DR). Two repeat volume scans were acquired from the same eye of each participant. OCTA scans were also acquired from 13 healthy volunteers; 6 repeat volume scans (one reference scan, two scans with artificially induced shadows, and three defocused scans at different diopters) were acquired from each volunteer (Table 1). To increase the number of training samples, we applied several data augmentation operations, including addition of Gaussian noise (mean = 0, sigma = 0.5), salt-and-pepper noise (salt = 0.001, pepper = 0.001), horizontal flipping, and vertical flipping.
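The listed augmentations can be sketched in NumPy. The `augment` helper and the assumption of a float-valued image are ours; the noise parameters follow the text:

```python
import numpy as np

def augment(img, rng=None):
    """Return the original image plus the four augmentations named in the text
    (illustrative sketch; assumes a float image)."""
    if rng is None:
        rng = np.random.default_rng(0)
    noisy = img + rng.normal(0.0, 0.5, img.shape)   # Gaussian noise, mean 0, sigma 0.5
    sp = img.copy()
    u = rng.random(img.shape)
    sp[u < 0.001] = img.min()                       # pepper, probability 0.001
    sp[u > 1 - 0.001] = img.max()                   # salt, probability 0.001
    return [img, np.fliplr(img), np.flipud(img), noisy, sp]
```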


Table 1. Data set used in MEDnet-V2

2.3.2 Loss function and optimizer

In healthy eyes, NPA is limited to the macular area, accounting for a small proportion of the overall angiogram. Even in eyes with DR, NPA constitutes a minority of the angiogram. However, signal strength reduction can affect en face angiograms at any location. This creates a serious category imbalance in this segmentation task. To address it, we designed a weighted Jaccard coefficient loss function (Eq. (2)). This loss function L imposes a different weight on each category to adjust the category balance:

$$L=\sum_{i=1}^{N}J_i\times w_i,\qquad \sum_{i=1}^{N}w_i=1,\tag{2}$$
$$J=\left(1-\frac{\sum_{x}y(x)\times\hat{y}(x)+\alpha}{\sum_{x}\left(y(x)+\hat{y}(x)\right)-\sum_{x}y(x)\times\hat{y}(x)+\alpha}\right)\times\alpha.$$

Here, N is the number of categories, and wi is the weight of the i-th category, associated with Jaccard coefficient Ji. In this task, we set the weights of the three categories (perfusion area, NPA, and signal reduction artifacts) to w = (0.25, 0.5, 0.25). x denotes the position of each pixel, y(x) is the ground truth, ŷ(x) is the output of the network, and α is a smoothing factor set to 100.
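A NumPy sketch of Eq. (2) follows; the paper trains with a Keras implementation, so this standalone version (and the `weighted_jaccard_loss` name) is only for illustration:

```python
import numpy as np

def weighted_jaccard_loss(y_true, y_pred, weights=(0.25, 0.5, 0.25), alpha=100.0):
    """Weighted Jaccard coefficient loss, Eq. (2) (illustrative sketch).
    y_true, y_pred: (H, W, C) one-hot ground truth and softmax output, C = 3."""
    loss = 0.0
    for i, w in enumerate(weights):
        y, yh = y_true[..., i], y_pred[..., i]
        inter = np.sum(y * yh)                        # sum_x y(x) * y_hat(x)
        union = np.sum(y + yh) - inter                # sum_x (y + y_hat) - intersection
        J = (1.0 - (inter + alpha) / (union + alpha)) * alpha
        loss += w * J                                 # weighted sum over categories
    return loss
```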

We used the Adam algorithm [30], a stochastic gradient-based optimizer, with an initial learning rate of 0.001 to train the network by minimizing the weighted Jaccard coefficient loss. A global learning rate decay strategy was also employed: the learning rate l was reduced to l × 0.9 whenever the loss showed no decrease over 10 epochs, and decay stopped once l fell below 1 × 10−6. Training stopped when both the learning rate and the loss stopped declining. Convolution kernels were initialized with He normal initialization [31].
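The decay rule can be sketched as a small scheduler class (the `PlateauDecay` name is hypothetical; in Keras, the built-in ReduceLROnPlateau callback implements a similar policy):

```python
class PlateauDecay:
    """Reduce lr to lr * 0.9 when the loss has not decreased for `patience`
    epochs; never decay below min_lr. Sketch of the paper's decay rule."""

    def __init__(self, lr=1e-3, patience=10, factor=0.9, min_lr=1e-6):
        self.lr, self.patience = lr, patience
        self.factor, self.min_lr = factor, min_lr
        self.best = float("inf")   # best (lowest) loss seen so far
        self.wait = 0              # epochs since the last improvement

    def step(self, loss):
        """Call once per epoch with the epoch's loss; returns the current lr."""
        if loss < self.best:
            self.best, self.wait = loss, 0
        else:
            self.wait += 1
            if self.wait >= self.patience and self.lr * self.factor >= self.min_lr:
                self.lr *= self.factor
                self.wait = 0
        return self.lr
```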

We implemented MEDnet-V2 in Python 3.6 with Keras (TensorFlow backend) on a PC with an Intel i7 CPU, an NVIDIA GeForce GTX 1080Ti graphics card, and 32 GB of RAM.

3. Results

3.1 Performance evaluation

We applied six-fold cross-validation to evaluate the performance of MEDnet-V2 on the entire data set. The data set was split into six subsets, with the training and test sets drawn from different eyes. Six networks were trained, each on five of the six subsets, and validated on the remaining one. Network performance might be affected by several factors, principally disease severity and a low OCT signal strength index (SSI). We therefore separated the test set into two groupings, one by disease severity and one by SSI. Within each grouping, we divided the scans into 4 sub-groups along a gradient of disease severity or SSI, and calculated 4 measures (accuracy, specificity, sensitivity, and dice coefficient (Eq. (3))) plus NPA for each sub-group (Table 2). Our metrics are defined as


Table 2. Agreement (in pixels) between automated detection and manual delineation of nonperfusion area (mean ± standard deviation)

$$\mathrm{Accuracy}=\frac{TP+TN}{TP+FP+TN+FN},\qquad \mathrm{Specificity}=\frac{TN}{TN+FP},$$
$$\mathrm{Sensitivity}=\frac{TP}{TP+FN},\qquad \mathrm{Dice}=\frac{2\times TP}{2\times TP+FP+FN}.\tag{3}$$

Here, TP is true positives (correctly predicted NPA pixels), TN is true negatives (perfusion area and signal reduction artifacts were both considered negatives), FP is false positives (perfusion or signal-reduced area segmented as NPA), and FN is false negatives (NPA segmented as either perfusion or artifact). Specificity approached unity across disease state and SSI, indicating nearly perfect segmentation of healthy tissue. Sensitivity and dice coefficient deteriorated in more severe cases, because cumulative error increases with the complexity and size of the NPA. In the SSI grouping, sensitivity and dice coefficient did not show an obvious decline as SSI decreased, indicating that the network was robust to low-quality images and avoided introducing an artificial trend into the NPA measurements. In fact, sensitivity rose slightly with decreasing SSI. This can be explained as an artifact of the data: we found that lower SSI correlates with larger NPA, and according to Eq. (3), a smaller NPA makes sensitivity more susceptible to segmentation error since the number of true positives (TP) is low.
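The four metrics of Eq. (3) can be computed from boolean masks as follows (`npa_metrics` is a hypothetical helper, not the authors' evaluation code):

```python
import numpy as np

def npa_metrics(pred, truth):
    """Eq. (3) metrics from boolean NPA masks. NPA pixels are positives;
    perfusion and artifact pixels both count as negatives."""
    TP = np.sum(pred & truth)      # NPA correctly detected
    TN = np.sum(~pred & ~truth)    # non-NPA correctly rejected
    FP = np.sum(pred & ~truth)     # non-NPA segmented as NPA
    FN = np.sum(~pred & truth)     # NPA missed
    return {
        "accuracy": (TP + TN) / (TP + FP + TN + FN),
        "specificity": TN / (TN + FP),
        "sensitivity": TP / (TP + FN),
        "dice": 2 * TP / (2 * TP + FP + FN),
    }
```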

Signal reduction artifacts originate in a variety of ways. As a supplement to the data set, several typical signal reduction artifacts were simulated on healthy controls [Fig. 6]. For each healthy control, a scan under normal conditions was acquired as a reference [Fig. 6(A1-D1)]. Then, we acquired a scan with simulated pupil vignetting [Fig. 6(A2-D2)], a scan with simulated floater shadows [Fig. 6(A3-D3)], and three defocus scans with diopters ranging from 1 to 3 [Fig. 6(A4-D4, A5-D5, A6-D6)] to simulate clinical signal reduction artifacts. The results of MEDnet-V2 show that the signal reduction artifacts can be well distinguished from NPA [Fig. 6(D1-D6)].


Fig. 6 Results of simulated signal reduction artifacts on healthy controls by MEDnet-V2. (A1-D1) Reference scan under normal conditions. (A2-D2) Scan with simulated pupil vignetting. (A3-D3) Scan with simulated floater shadows. (A4-D4) Scan with 1 diopter defocus. (A5-D5) Scan with 2 diopters defocus. (A6-D6) Scan with 3 diopters defocus. First row (A), en face inner retinal reflectance images. Second row (B), en face angiograms of the superficial vascular complex. Third row (C), ground truth (green) of the nonperfusion areas, overlaid on the en face angiograms. Last row (D), the predicted results of nonperfusion areas (blue) and signal reduction artifacts (yellow) by MEDnet-V2 overlaid on the en face angiograms.


In clinical cases, signal reduction artifacts are considerably more complex than simulated ones. Shadows on en face angiograms may connect to the center of the macula [Fig. 7(A1-D1)], and several kinds of signal reduction artifacts can overlap [Fig. 7(A2-D2)]. Furthermore, since NPA and signal reduction artifacts can occur anywhere in OCTA scans of eyes with DR, the two may co-occur [Fig. 7(D3-D5)]. Even when signal reduction artifacts overlapped with NPA [Fig. 7(D4-D5)], our network still produced accurate predictions.


Fig. 7 Results of nonperfusion area detection on clinical cases. (A1-D1) signal reduction artifacts connected to the macular area on a healthy control. (A2-D2) A healthy case with signal reduction artifacts caused by floater shadows and vignetting. (A3-D3) Mild to moderate DR case with signal reduction artifacts. (A4-D4) A severe diabetic retinopathy (DR) case with no signal reduction artifacts. (A5-D5) Severe DR case with strong signal reduction artifacts. First row (A), inner retinal reflectance en face images. Second row (B), en face superficial vascular complex angiograms. Third row (C), the ground truth (green) of the nonperfusion areas, overlaid on the en face angiograms. Last row (D), the predicted results of nonperfusion areas (blue) and signal reduction artifacts (yellow) by MEDnet-V2 overlaid on the en face angiograms.


3.2 Repeatability

We measured repeatability using the pooled standard deviation (Eq. (4)) and the coefficient of variation (Eq. (5)) in healthy controls and DR cases with two intra-visit repeated scans (Table 3), and compared it to manual delineation by retinal experts.


Table 3. Intra-visit repeatability of MEDnet-V2 and manual delineation on NPA detection (SSI≥55)

$$P=\sqrt{\frac{1}{N}\sum_{i=1}^{N}s_i^2}.\tag{4}$$
$$C=\frac{P}{\frac{1}{N}\sum_{i=1}^{N}\mu_i}.\tag{5}$$

Here, P is the pooled standard deviation, C is the coefficient of variation, N is the number of eyes, si is the NPA standard deviation of the two repeat scans within the same visit from the same eye, and μi is the corresponding mean NPA. Our method shows high repeatability, with a smaller coefficient of variation than grading by human experts. In the DR group, the mean and standard deviation of NPA are larger than in healthy controls, as expected given the nature of the pathology. The pooled standard deviation and coefficient of variation also deteriorated within the DR group, because the cumulative detection error increases as NPA grows, although our method still outperformed human experts.
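Eqs. (4) and (5) can be sketched as follows (a hypothetical `repeatability` helper, assuming two repeat NPA measurements per eye):

```python
import numpy as np

def repeatability(npa_scan1, npa_scan2):
    """Pooled standard deviation (Eq. 4) and coefficient of variation (Eq. 5)
    over N eyes, each with two intra-visit repeat NPA measurements (sketch)."""
    a = np.asarray(npa_scan1, dtype=float)
    b = np.asarray(npa_scan2, dtype=float)
    s = np.std([a, b], axis=0, ddof=1)    # per-eye sample SD of the two repeats
    mu = (a + b) / 2.0                    # per-eye mean NPA
    P = np.sqrt(np.mean(s ** 2))          # pooled standard deviation
    C = P / np.mean(mu)                   # coefficient of variation
    return P, C
```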

3.3 Defocus effect

Defocus can cause signal loss, which may affect the detection of NPA in en face angiograms. To test the robustness of our method to defocus, we scanned 13 healthy eyes, acquiring four repeat scans per eye at defocus levels from 0 to 3 diopters (Fig. 8). To evaluate the change in NPA with defocus, linear regression was performed on the NPA changes from baseline (normal condition without defocus) for the same eye. The low correlation (r = 0.16) and lack of statistical significance (p = 0.26) indicate that NPA detection by MEDnet-V2 is not significantly affected by defocus.
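The regression described above can be sketched with `scipy.stats.linregress`; the NPA values below are made up for illustration and do not reproduce the reported r and p:

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical NPA change from baseline (mm^2) for 4 eyes at each defocus
# level of 1, 2, and 3 diopters (values are illustrative only).
diopters = np.repeat([1, 2, 3], 4)
npa_change = np.array([0.02, -0.01, 0.03, 0.00,
                       -0.02, 0.01, 0.00, 0.02,
                       0.01, -0.03, 0.02, -0.01])

# Fit NPA change against defocus; a flat fit (low |r|, high p) would
# suggest no systematic defocus effect on the NPA measurement.
res = linregress(diopters, npa_change)
print(f"r = {res.rvalue:.2f}, p = {res.pvalue:.2f}")
```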


Fig. 8 The effect of defocus on nonperfusion area detection by MEDnet-V2.


4. Discussion

In this paper, we have described MEDnet-V2, a refined method for NPA detection. NPA is a key biomarker for DR [4–7], but its use has been hindered by the inability of automated algorithms to correctly recognize artifacts. OCTA is an extremely powerful technology in terms of the vascular detail it can capture (i.e., high-resolution, volumetric data), but even in the best circumstances it is prone to imaging artifacts [32]. This drawback is exacerbated in scans of DR eyes, which are often rich in disease features but inconsistent in quality. NPA, then, though an excellent indicator of DR progression, may remain under-utilized unless algorithms can distinguish true NPA from flow signal loss due to signal reduction artifacts. The problem is relatively intractable for traditional image analysis techniques: previously published results that relied on such approaches have either ignored complications from signal reduction artifacts, as with Agemy et al. [1], Schottenhamml et al. [2], and Nesper et al. [33], or required manual correction, as in Alibhai et al. [3]. Similarly, our previous algorithm, also based on deep learning, failed to distinguish severe signal reduction artifacts from true NPA, though recent work by our group has demonstrated that shadow artifacts in OCTA can be detected [34].

MEDnet-V2 represents a significant improvement in that it can accurately distinguish between true NPA and signal reduction artifacts without appealing to user input. The ability to discriminate signal reduction artifacts from NPA means that we gain access to data from lower quality scans that may otherwise have been unusable.

MEDnet-V2 achieves these results through several architectural decisions. We used a U-Net-like architecture that gave MEDnet-V2 a stable training process while achieving high resolution in the output. In our previous work, MEDnet showed excellent feature extraction capability; as we expanded the network by embedding structures from state-of-the-art networks, MEDnet-V2 correspondingly acquired a stronger ability to extract an expanded set of features. To obtain accurate ground truth for NPA and signal reduction artifacts, we developed in-house graphical user interface software to help certified graders delineate ground truth maps. To suppress delineation errors caused by individual subjectivity, we counted each grader's classification as a vote and took the majority opinion as the final ground truth for training. Our experimental results indicate that MEDnet-V2 performs well (dice coefficient > 0.87) across different disease severities and defocus levels.

Although MEDnet-V2 achieves good performance on most OCTA scans, some factors may cause segmentation to fail. The inner retinal thickness map, used to help the network distinguish the foveal avascular zone (FAZ) from low-reflectance areas, is vulnerable to inaccuracy in retinal layer segmentation, for instance in the presence of structural abnormalities like edema. Although we automatically excluded the edema area when calculating the thickness map, this process may still need minor manual adjustment in complex cases. Similarly, the layer segmentation algorithm [29] can fail in eyes with severe anatomic abnormalities. Finally, when NPA and signal reduction artifacts coincide, the algorithm can generate false negatives by segmenting affected areas as artifact even though NPA is present.

In future work we can attempt to resolve these issues. There are also several possibilities for broadening MEDnet-V2's functionality. MEDnet-V2 was trained and tested on 6 × 6 mm2 macular angiograms of the SVC, but it is known that NPA in DR can manifest first in the periphery [35]. As OCT systems continue to develop, larger fields of view are becoming available. Semi-automated NPA detection has already been demonstrated in wide-field OCTA [3,25], but these approaches still require substantial human intervention. Extending MEDnet-V2 to a fully automated wide-field NPA detection algorithm could be particularly useful. Finally, MEDnet-V2 currently functions only on SVC scans, but DR causes NPA in other plexuses as well. Integrating the network with projection-resolved OCTA [36,37] could extend NPA detection to the deeper retinal plexuses.

5. Conclusions

In summary, we proposed a deep learning-based solution, which we named MEDnet-V2, to address the problem of signal reduction artifacts in the detection and quantification of capillary dropout in the retina using OCTA. The network takes three input images and outputs a map of nonperfusion area and signal reduction artifacts. Features of signal reduction artifacts and NPA are extracted separately before being fused, which is the key to the network's favorable performance.

Funding

National Institutes of Health (R01 EY027833, R01 EY024544, DP3 DK104397, P30 EY010572); Unrestricted Departmental Funding Grant and William & Mary Greve Special Scholar Award from Research to Prevent Blindness (New York, NY).

Disclosures

Oregon Health & Science University (OHSU), Acner Camino, David Huang and Yali Jia have a significant financial interest in Optovue, Inc. These potential conflicts of interest have been reviewed and managed by OHSU.

References

1. S. A. Agemy, N. K. Scripsema, C. M. Shah, T. Chui, P. M. Garcia, J. G. Lee, R. C. Gentile, Y.-S. Hsiao, Q. Zhou, T. Ko, and R. B. Rosen, “Retinal vascular perfusion density mapping using optical coherence tomography angiography in normals and diabetic retinopathy patients,” Retina 35(11), 2353–2363 (2015).

2. J. Schottenhamml, E. M. Moult, S. Ploner, B. Lee, E. A. Novais, E. Cole, S. Dang, C. D. Lu, L. Husvogt, N. K. Waheed, J. S. Duker, J. Hornegger, and J. G. Fujimoto, “An automatic, intercapillary area-based algorithm for quantifying diabetes-related capillary dropout using optical coherence tomography angiography,” Retina 36(Suppl 1), S93–S101 (2016).

3. A. Y. Alibhai, L. R. De Pretto, E. M. Moult, C. Or, M. Arya, M. McGowan, O. Carrasco-Zevallos, B. Lee, S. Chen, C. R. Baumal, A. J. Witkin, E. Reichel, A. Z. de Freitas, J. S. Duker, J. G. Fujimoto, and N. K. Waheed, “Quantification of retinal capillary nonperfusion in diabetics using wide-field optical coherence tomography angiography,” Retina, epub ahead of print (2018).

4. T. S. Hwang, A. M. Hagag, J. Wang, M. Zhang, A. Smith, D. J. Wilson, D. Huang, and Y. Jia, “Automated quantification of nonperfusion areas in 3 vascular plexuses with optical coherence tomography angiography in eyes of patients with diabetes,” JAMA Ophthalmol. 136(8), 929–936 (2018).

5. M. Zhang, T. S. Hwang, C. Dongye, D. J. Wilson, D. Huang, and Y. Jia, “Automated quantification of nonperfusion in three retinal plexuses using projection-resolved optical coherence tomography angiography in diabetic retinopathy,” Invest. Ophthalmol. Vis. Sci. 57(13), 5101–5106 (2016).

6. T. S. Hwang, Y. Jia, S. S. Gao, S. T. Bailey, A. K. Lauer, C. J. Flaxel, D. J. Wilson, and D. Huang, “Optical coherence tomography angiography features of diabetic retinopathy,” Retina 35(11), 2371–2376 (2015).

7. T. S. Hwang, M. Zhang, K. Bhavsar, X. Zhang, J. P. Campbell, P. Lin, S. T. Bailey, C. J. Flaxel, A. K. Lauer, D. J. Wilson, D. Huang, and Y. Jia, “Visualization of 3 distinct retinal plexuses by projection-resolved optical coherence tomography angiography in diabetic retinopathy,” JAMA Ophthalmol. 134(12), 1411–1419 (2016).

8. Y. Guo, A. Camino, J. Wang, D. Huang, T. S. Hwang, and Y. Jia, “MEDnet, a neural network for automated detection of avascular area in OCT angiography,” Biomed. Opt. Express 9(11), 5147–5158 (2018).

9. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (2015), pp. 234–241.

10. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 770–778.

11. C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi, “Inception-v4, inception-resnet and the impact of residual connections on learning,” in AAAI (2017).

12. G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 4700–4708.

13. M. Z. Alom, M. Hasan, C. Yakopcic, T. M. Taha, and V. K. Asari, “Recurrent residual convolutional neural network based on u-net (r2u-net) for medical image segmentation,” arXiv preprint arXiv:1802.06955 (2018).

14. T. Laibacher, T. Weyde, and S. Jalali, “M2U-Net: Effective and efficient retinal vessel segmentation for resource-constrained environments,” arXiv preprint arXiv:1811.07738 (2018).

15. D. S. W. Ting, L. R. Pasquale, L. Peng, J. P. Campbell, A. Y. Lee, R. Raman, G. S. W. Tan, L. Schmetterer, P. A. Keane, and T. Y. Wong, “Artificial intelligence and deep learning in ophthalmology,” Br. J. Ophthalmol. 103(2), 167–175 (2019).

16. G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, J. A. W. M. van der Laak, B. van Ginneken, and C. I. Sánchez, “A survey on deep learning in medical image analysis,” Med. Image Anal. 42, 60–88 (2017).

17. F. G. Venhuizen, B. van Ginneken, B. Liefers, F. van Asten, V. Schreur, S. Fauser, C. Hoyng, T. Theelen, and C. I. Sánchez, “Deep learning approach for the detection and quantification of intraretinal cystoid fluid in multivendor optical coherence tomography,” Biomed. Opt. Express 9(4), 1545–1569 (2018).

18. C. S. Lee, A. J. Tyring, N. P. Deruyter, Y. Wu, A. Rokem, and A. Y. Lee, “Deep-learning based, automated segmentation of macular edema in optical coherence tomography,” Biomed. Opt. Express 8(7), 3440–3448 (2017).

19. M. Treder, J. L. Lauermann, and N. Eter, “Automated detection of exudative age-related macular degeneration in spectral domain optical coherence tomography using deep learning,” Graefes Arch. Clin. Exp. Ophthalmol. 256(2), 259–265 (2018).

20. C. S. Lee, D. M. Baughman, and A. Y. Lee, “Deep learning is effective for the classification of OCT images of normal versus age-related macular degeneration,” arXiv preprint arXiv:1612.04891 (2016).

21. D. S. Kermany, M. Goldbaum, W. Cai, C. C. S. Valentim, H. Liang, S. L. Baxter, A. McKeown, G. Yang, X. Wu, F. Yan, J. Dong, M. K. Prasadha, J. Pei, M. Y. L. Ting, J. Zhu, C. Li, S. Hewett, J. Dong, I. Ziyar, A. Shi, R. Zhang, L. Zheng, R. Hou, W. Shi, X. Fu, Y. Duan, V. A. N. Huu, C. Wen, E. D. Zhang, C. L. Zhang, O. Li, X. Wang, M. A. Singer, X. Sun, J. Xu, A. Tafreshi, M. A. Lewis, H. Xia, and K. Zhang, “Identifying medical diagnoses and treatable diseases by image-based deep learning,” Cell 172(5), 1122–1131 (2018).

22. L. Fang, D. Cunefare, C. Wang, R. H. Guymer, S. Li, and S. Farsiu, “Automatic segmentation of nine retinal layer boundaries in OCT images of non-exudative AMD patients using deep learning and graph search,” Biomed. Opt. Express 8(5), 2732–2744 (2017).

23. S. K. Devalla, P. K. Renukanand, B. K. Sreedhar, S. Perera, J.-M. Mari, K. S. Chin, T. A. Tun, N. G. Strouthidis, T. Aung, A. H. Thiery, and M. J. A. Girard, “DRUNET: A dilated-residual U-Net deep learning network to digitally stain optic nervehead tissues in optical coherence tomography images,” arXiv preprint arXiv:1803.00232 (2018).

24. J. Hamwood, D. Alonso-Caneiro, S. A. Read, S. J. Vincent, and M. J. Collins, “Effect of patch size and network architecture on a convolutional neural network approach for automatic segmentation of OCT retinal layers,” Biomed. Opt. Express 9(7), 3049–3066 (2018).

25. A. G. Roy, S. Conjeti, S. P. K. Karri, D. Sheet, A. Katouzian, C. Wachinger, and N. Navab, “ReLayNet: retinal layer and fluid segmentation of macular optical coherence tomography using fully convolutional networks,” Biomed. Opt. Express 8(8), 3627–3642 (2017).

26. F. G. Venhuizen, B. van Ginneken, B. Liefers, M. J. J. P. van Grinsven, S. Fauser, C. Hoyng, T. Theelen, and C. I. Sánchez, “Robust total retina thickness segmentation in optical coherence tomography images using convolutional neural networks,” Biomed. Opt. Express 8(7), 3292–3316 (2017).

27. C. S. Lee, A. J. Tyring, Y. Wu, S. Xiao, A. S. Rokem, N. P. Deruyter, Q. Zhang, A. Tufail, R. K. Wang, and A. Y. Lee, “Generating retinal flow maps from structural optical coherence tomography with artificial intelligence,” arXiv preprint arXiv:1802.08925 (2018).

28. Y. Jia, O. Tan, J. Tokayer, B. Potsaid, Y. Wang, J. J. Liu, M. F. Kraus, H. Subhash, J. G. Fujimoto, J. Hornegger, and D. Huang, “Split-spectrum amplitude-decorrelation angiography with optical coherence tomography,” Opt. Express 20(4), 4710–4725 (2012).

29. Y. Guo, A. Camino, M. Zhang, J. Wang, D. Huang, T. Hwang, and Y. Jia, “Automated segmentation of retinal layer boundaries and capillary plexuses in wide-field optical coherence tomographic angiography,” Biomed. Opt. Express 9(9), 4429–4442 (2018). [CrossRef]   [PubMed]  

30. D. P. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” AIP Conf. Proc. 1631, 58–62 (2014).

31. K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on imagenet classification,” in Proceedings of the IEEE International Conference on Computer Vision (2015), pp. 1026–1034. [CrossRef]  

32. R. F. Spaide, J. G. Fujimoto, N. K. Waheed, S. R. Sadda, and G. Staurenghi, “Optical coherence tomography angiography,” Prog. Retin. Eye Res. 64, 1–55 (2018). [CrossRef]   [PubMed]  

33. P. L. Nesper, P. K. Roberts, A. C. Onishi, H. Chai, L. Liu, L. M. Jampol, and A. A. Fawzi, “Quantifying microvascular abnormalities with increasing severity of diabetic retinopathy using optical coherence tomography angiography,” Invest. Ophthalmol. Vis. Sci. 58(6), BIO307 (2017). [CrossRef]   [PubMed]  

34. A. Camino, Y. Jia, J. Yu, J. Wang, L. Liu, and D. Huang, “Automated detection of shadow artifacts in optical coherence tomography angiography,” Biomed. Opt. Express 10(3), 1514–1531 (2019). [CrossRef]   [PubMed]  

35. T. Niki, K. Muraoka, and K. Shimizu, “Distribution of capillary nonperfusion in early-stage diabetic retinopathy,” Ophthalmology 91(12), 1431–1439 (1984). [CrossRef]   [PubMed]  

36. M. Zhang, T. S. Hwang, J. P. Campbell, S. T. Bailey, D. J. Wilson, D. Huang, and Y. Jia, “Projection-resolved optical coherence tomographic angiography,” Biomed. Opt. Express 7(3), 816–828 (2016). [CrossRef]   [PubMed]  

37. J. Wang, M. Zhang, T. S. Hwang, S. T. Bailey, D. Huang, D. J. Wilson, and Y. Jia, “Reflectance-based projection-resolved optical coherence tomography angiography,” Biomed. Opt. Express 8(3), 1536–1548 (2017). [CrossRef]   [PubMed]  



Figures (8)

Fig. 1 Data acquisition for MEDnet-V2. (A) Segmentation results of the retinal layer boundaries on a B-scan. (B) Definition of the superficial vascular complex (SVC) slab. (C) SVC angiogram produced by maximum projection of the OCTA data within the SVC slab. (D) Definition of the inner retina slab. (E) Reflectance image of the inner retina produced by mean projection of OCT data within the inner retina slab. (F) Thickness map of the inner retina. ILM: inner limiting membrane; NFL: nerve fiber layer; GCL: ganglion cell layer; IPL: inner plexiform layer; INL: inner nuclear layer; OPL: outer plexiform layer; ONL: outer nuclear layer; EZ: ellipsoid zone; RPE: retinal pigment epithelium; BM: Bruch's membrane; SVC: superficial vascular complex; DVC: deep vascular complex; B: boundary.
Fig. 2 Overview of the MEDnet-V2 network architecture. (A) OCT reflectance image of the inner retina. (B) Gaussian-filtered reflectance intensity map of the inner retina. (C) Inner retinal thickness map. (D) En face angiogram of the superficial vascular complex. (E1-E3) Three convolutional subnetworks with the same structure. (F) Detection result with probability maps for perfusion loss (blue) and signal reduction artifacts (yellow).
Fig. 3 (A) Network architecture of subnetworks in MEDnet-V2. (B) Multi-scale convolutional block. (C-D) Residual blocks from ResNet.
Fig. 4 Manual delineation of ground truth for training. (A) The in-house graphical user interface software. (B) Ground truth maps for nonperfusion area (green) and signal reduction artifacts (yellow), delineated by three experts and overlaid on the superficial vascular complex angiograms. (C) The final ground truth map overlaid on the superficial vascular complex angiogram.
Fig. 5 Representative input data set. (A) En face angiogram of superficial vascular complex from a patient with diabetic retinopathy. (B) Inner retinal thickness map. (C) Reflectance image acquired by projecting the reflectance OCT data within the inner retina. (D) The ground truth map of the nonperfusion area (green) and signal reduction artifact (yellow) overlaid on the superficial vascular complex angiogram.
Fig. 6 MEDnet-V2 results on healthy-control scans with simulated signal reduction artifacts. (A1-D1) Reference scan under normal conditions. (A2-D2) Scan with simulated pupil vignetting. (A3-D3) Scan with simulated floater shadows. (A4-D4) Scan with 1 diopter defocus. (A5-D5) Scan with 2 diopters defocus. (A6-D6) Scan with 3 diopters defocus. First row (A), en face inner retinal reflectance images. Second row (B), en face angiograms of the superficial vascular complex. Third row (C), ground truth (green) of the nonperfusion areas, overlaid on the en face angiograms. Last row (D), the predicted nonperfusion areas (blue) and signal reduction artifacts (yellow) by MEDnet-V2, overlaid on the en face angiograms.
Fig. 7 Results of nonperfusion area detection on clinical cases. (A1-D1) Signal reduction artifacts connected to the macular area in a healthy control. (A2-D2) A healthy case with signal reduction artifacts caused by floater shadows and vignetting. (A3-D3) A mild to moderate diabetic retinopathy (DR) case with signal reduction artifacts. (A4-D4) A severe DR case with no signal reduction artifacts. (A5-D5) A severe DR case with strong signal reduction artifacts. First row (A), inner retinal reflectance en face images. Second row (B), en face superficial vascular complex angiograms. Third row (C), the ground truth (green) of the nonperfusion areas, overlaid on the en face angiograms. Last row (D), the predicted nonperfusion areas (blue) and signal reduction artifacts (yellow) by MEDnet-V2, overlaid on the en face angiograms.
Fig. 8 The effect of defocus on nonperfusion area detection by MEDnet-V2.

Tables (3)

Table 1 Data set used in MEDnet-V2

Table 2 Agreement (in pixels) between automated detection and manual delineation of nonperfusion area (mean ± standard deviation)

Table 3 Intra-visit repeatability of MEDnet-V2 and manual delineation on NPA detection (SSI≥55)

Equations (5)


$$ M = \varphi\left( \frac{1}{N} \sum_{i=1}^{N} G(h, \sigma_i) \ast \left( I - \bar{I} + \frac{1}{N} \right) \right). $$
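The reflectance-map formula above can be sketched in a few lines; a minimal numpy/scipy sketch, assuming G(h, σ_i) denotes Gaussian smoothing of the mean-centered reflectance image I and φ a sigmoid squashing. The function name `reflectance_feature` and the σ values are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def reflectance_feature(I, sigmas=(2.0, 4.0, 8.0)):
    """Average of Gaussian-smoothed, mean-centered reflectance over N scales,
    squashed into (0, 1) by a sigmoid (one reading of the formula above)."""
    N = len(sigmas)
    centered = I - I.mean() + 1.0 / N                      # I - I_bar + 1/N
    averaged = sum(gaussian_filter(centered, s) for s in sigmas) / N
    return 1.0 / (1.0 + np.exp(-averaged))                 # phi as sigmoid

# Example on a random 64x64 "reflectance image"
M = reflectance_feature(np.random.default_rng(0).random((64, 64)))
```

The sigmoid keeps the feature map bounded in (0, 1), which matches its use as a normalized input channel to the network.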
$$ L = \sum_{i=1}^{N} J_i \times w_i, \qquad \sum_{i=1}^{N} w_i = 1. $$
$$ J = \left( 1 - \frac{\sum_x y(x) \times \hat{y}(x) + \alpha}{\sum_x \left( y(x) + \hat{y}(x) \right) - \sum_x y(x) \times \hat{y}(x) + \alpha} \right) \times \alpha. $$
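A minimal numpy sketch of the weighted Jaccard loss above, assuming y and ŷ are per-class binary or probability maps, α is a smoothing constant that prevents division by zero, and the per-class weights w_i sum to 1. Function names are illustrative.

```python
import numpy as np

def jaccard_loss(y_true, y_pred, alpha=1.0):
    """Smoothed Jaccard (IoU) loss for one class, scaled by alpha."""
    inter = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred) - inter
    return (1.0 - (inter + alpha) / (union + alpha)) * alpha

def total_loss(y_trues, y_preds, weights):
    """Weighted sum of per-class Jaccard losses; weights must sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * jaccard_loss(t, p)
               for t, p, w in zip(y_trues, y_preds, weights))
```

A perfect prediction gives zero loss; a fully disjoint prediction approaches the α scale factor.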
$$ \text{Accuracy} = \frac{TP + TN}{TP + FP + TN + FN}, \qquad \text{Specificity} = \frac{TN}{TN + FP}, $$
$$ \text{Sensitivity} = \frac{TP}{TP + FN}, \qquad \text{Dice} = \frac{2 \times TP}{2 \times TP + FP + FN}. $$
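The four agreement metrics above follow directly from pixel-wise confusion counts between two binary masks; a minimal numpy sketch (the function name and example masks are illustrative).

```python
import numpy as np

def segmentation_metrics(y_true, y_pred):
    """Pixel-wise accuracy, specificity, sensitivity, and Dice
    computed from two binary masks."""
    t, p = y_true.astype(bool), y_pred.astype(bool)
    TP = np.sum(t & p)     # true positives
    TN = np.sum(~t & ~p)   # true negatives
    FP = np.sum(~t & p)    # false positives
    FN = np.sum(t & ~p)    # false negatives
    return {
        "accuracy":    (TP + TN) / (TP + FP + TN + FN),
        "specificity": TN / (TN + FP),
        "sensitivity": TP / (TP + FN),
        "dice":        2 * TP / (2 * TP + FP + FN),
    }
```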
$$ P = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} s_i^2 }. $$
$$ C = \frac{P}{\frac{1}{N} \sum_{i=1}^{N} \mu_i}. $$
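One reading of the repeatability formulas above: P pools the within-eye standard deviations s_i over N eyes, and the coefficient of variation C divides P by the mean of the per-eye means μ_i. The square root in P is assumed from the standard pooled-variance definition; function names are illustrative.

```python
import numpy as np

def pooled_sd(sds):
    """Pooled standard deviation over N repeated scans:
    root of the mean of the per-eye variances."""
    s = np.asarray(sds, dtype=float)
    return np.sqrt(np.mean(s ** 2))

def coefficient_of_variation(sds, means):
    """Pooled SD divided by the mean of the per-eye means."""
    return pooled_sd(sds) / np.mean(means)
```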