Optica Publishing Group

Automated diagnosis and segmentation of choroidal neovascularization in OCT angiography using deep learning

Open Access

Abstract

Accurate identification and segmentation of choroidal neovascularization (CNV) are essential for the diagnosis and management of exudative age-related macular degeneration (AMD). Projection-resolved optical coherence tomographic angiography (PR-OCTA) enables both cross-sectional and en face visualization of CNV. However, CNV identification and segmentation remain difficult even with PR-OCTA due to the presence of residual artifacts. In this paper, a fully automated CNV diagnosis and segmentation algorithm using convolutional neural networks (CNNs) is described. This study used a clinical dataset, including scans both with and without CNV, and scans of eyes with different pathologies. Furthermore, no scans were excluded due to image quality. In testing, all CNV cases were distinguished from non-CNV controls with 100% sensitivity and 95% specificity. The mean intersection over union of CNV membrane segmentation was as high as 0.88. By enabling fully automated categorization and segmentation, the proposed algorithm should offer benefits for CNV diagnosis, visualization, and monitoring.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Age-related macular degeneration (AMD) is a leading cause of vision loss and irreversible blindness [1–3]. AMD is characterized as neovascular based on the presence of choroidal neovascularization (CNV), a pathological condition in which new vessels grow from the choroid into the outer retina [3–6]. CNV often results in vision loss because it can result in subretinal hemorrhage, lipid exudation, subretinal fluid, intraretinal fluid, or formation of fibrotic scars [7,8]. Fluorescein angiography (FA) and indocyanine green angiography (ICGA) are traditionally used for CNV identification and visualization, but drawbacks to dye-based angiography include that it provides only two-dimensional visualization of vascular networks, that invasive intravenous contrast dye can lead to nausea and anaphylaxis [9], and that long acquisition times make high-volume and multiple follow-up angiograms impractical. A promising alternative approach is optical coherence tomographic angiography (OCTA), which measures flow signal in vivo by evaluating motion contrast between subsequent OCT B-scans at the same location [10,11]. As opposed to conventional dye-based imaging modalities, OCTA is non-invasive, has rapid acquisition, is high-resolution, and generates three-dimensional datasets. However, OCTA is susceptible to several imaging artifacts [11–13]. Projection artifacts cause specious flow signal in deeper anatomical layers. CNV assessment in particular suffers, since its proper visualization requires images of the outer retina, where projection artifacts are especially prominent due to proximity to the highly reflective retinal pigment epithelium (RPE). Recently, projection-resolved (PR) OCTA [14–16] has proven adept at removing projection artifacts, and has consequently shown diagnostic potential and enabled detailed quantification of CNV [17–19].

Because it is non-invasive and images are acquired easily and rapidly in the clinical setting, PR-OCTA provides the opportunity to develop routine imaging for CNV detection and monitoring. Such monitoring has major clinical potential, since early detection of CNV and of conversion to exudative CNV is crucial for successful intervention. Furthermore, improved quantification of CNV features may provide vital indicators of disease progression [20]. However, the presence of artifacts in OCTA images requires careful interpretation. In such an environment, even evaluating images for the presence of CNV can be time consuming for clinicians. Furthermore, CNV metrics such as vessel density or morphology require membrane and vessel segmentation as a first step, but feature extraction may go awry when artifacts interfere with image analysis. These concerns argue for robust software automation capable of accurately identifying CNV and precisely segmenting it in real-world datasets that may include poor-quality scans or highly pathological scans in which CNV is not present.

A previous attempt at automated CNV segmentation was saliency-based [21]. This algorithm uses a saliency map to highlight dominant objects with strong distinctiveness, defined by brightness, orientation contrast, and position distance. The crux of this method is that CNV flow signal is higher than artifacts or background noise. However, even with PR-OCTA, large persistent artifacts in the outer retina can sometimes remain. The saliency-based approach will segment such artifacts as CNV; furthermore, large CNV membranes filling most of the angiogram cannot be fully segmented with such an approach. Finally, the saliency-based algorithm always segments CNV, regardless of whether it actually exists in an input scan. An important first step for a completely automated approach, then, is the ability to classify scans based on the presence or absence of CNV. However, CNV segmentation using the en face outer retinal angiogram is compromised by a variety of image features, including residual projection and motion artifacts, as well as background noise. On scans where artifacts are small and have lower signal than the CNV, our previous saliency-based algorithm could segment the CNV properly (Fig. 1, case 1). However, on scans where the area and signal of residual artifacts are similar to the CNV, the algorithm retained false positives on the saliency map (Fig. 1, case 2). Since the saliency algorithm segments CNV based on the degree of saliency, larger CNV vessels with strong signal are easier to separate from artifacts than smaller CNV vessels. In cases with very large CNV, the large thick CNV vessels have a much higher flow signal than the smaller CNV capillaries, and the saliency algorithm may mistakenly segment these smaller vessels as background (Fig. 1, case 3). In non-CNV control cases, it is important for an automated CNV segmentation algorithm to clean up the background noise instead of segmenting false positives. The saliency algorithm also fails in this case (Fig. 1, case 4). Therefore, an algorithm using CNNs was proposed in this study to overcome each of these limitations.


Fig. 1. CNV segmentation on challenging scans using a saliency-based algorithm. Small residual projection artifacts are excluded in the saliency map (A1&B1, highlighted by white arrows). Strong residual artifacts in CNV and non-CNV scans were over-segmented in the saliency map, providing false positives (A2&B2, A4&B4, highlighted by red arrows), while large CNV was under-segmented in the saliency map, producing false negatives (A3&B3, highlighted by green arrows).


Convolutional neural networks (CNNs), a prominent artificial intelligence (AI) technique, provide a feasible way to accomplish the goals of this study. Published results indicate the high performance of CNNs adapted to complex tasks such as object classification and segmentation [22]. In the context of structural and angiographic OCT, CNNs have been used to identify glaucoma [23], segment non-perfusion areas [24,25], and perform retinal layer segmentation [26,27]. However, an automated CNV diagnosis and quantification system has not previously been developed. As CNV represents the most important pathological development in AMD, this is a major limitation in current clinical practice.

In this study, we developed an algorithm based on two convolutional neural networks (CNNs). The algorithm classifies input scans based on the presence or absence of a segmented CNV membrane and then, if CNV is present, segments the CNV vasculature the membrane encloses. To accomplish these tasks, we trained two separate CNNs, as detailed below. These networks perform complementary tasks, and together comprise a robust system for CNV characterization.

2. Data acquisition and preprocessing

OCTA datasets with a large range of signal strength index (SSI) were collected from the retina clinics at the Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA and Shanxi Eye Hospital, Taiyuan, Shanxi, PR China. The study was conducted in compliance with the Declaration of Helsinki.

Participants were scanned using a $70\;kHz$ commercial OCTA system (RTVue-XR; Optovue, Fremont, CA) with a central wavelength of $840\;nm$. The scan area was $3 \times 3\;mm$ and centered on the fovea. Two repeat B-scans were collected at the same position, and retinal flow was detected using the commercial split-spectrum amplitude-decorrelation angiography (SSADA) algorithm [10]. One X-fast and one Y-fast scan were obtained and registered to suppress motion artifacts. Patients with CNV secondary to neovascular AMD were enrolled. Control eyes without CNV included eyes with non-neovascular AMD, diabetic retinopathy (DR), branch retinal vein/artery occlusion (BRVO/BRAO), central serous chorioretinopathy (CSC), and healthy eyes. No scans were excluded due to low image quality.

Input to the CNN consisted of several en face images, each useful for CNV identification or segmentation in some capacity. Generating the en face retinal angiograms used in this study requires anatomic slab segmentation; this was accomplished using a semi-automatic approach based on graph search [28,29] and implemented in our in-house OCTA processing toolkit. For our study aims, segmentation of the inner limiting membrane (ILM), the outer border of the outer plexiform layer (OPL), and Bruch’s membrane (BM) was pertinent, and was used to construct inner and outer retinal images (Fig. 2). The input images consisted of the uncorrected original inner and outer retinal en face angiograms, as well as slab-subtracted [21,30,31] and projection-resolved outer retinal angiograms [14] (Fig. 3). Most of these images are useful for artifact removal. PR-OCTA achieves the clearest angiograms in the outer retina, but some residual projection artifacts remain that could be confused for CNV. Slab subtraction is a faster and simpler method for projection artifact removal wherein the flow signal from superficial slabs is subtracted from deeper ones; compared to PR-OCTA it retains fewer regions with spurious signal, but true vessels are often interrupted or erased, severely disrupting vascular morphology. Original uncorrected angiograms of the outer retina have the opposite problem, as they are rich in projection artifacts, with many false flow pixels mimicking superficial vasculature. Including all four of these angiograms (original inner and outer angiograms, slab-subtracted outer retina, and projection-resolved outer retina) allowed the CNN to efficiently differentiate artifacts from true signal, since the original angiograms could corroborate the location of false flow, the slab-subtracted angiograms could identify regions with a high probability of containing CNV, and the PR angiograms provide excellent image quality in the region containing CNV.
Finally, an outer retinal structural volume was also included in the CNN input; since CNV appears along with elevation of the RPE, this slab can facilitate artifact removal in regions where the RPE is normal (Fig. 2B1), detached (Fig. 2B2), or lost (Fig. 2B3). To generate tidy outer volumes of the same size, all A-lines in the outer slab were resampled to the same voxel depth for data alignment (Fig. 4).
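The two preprocessing steps described above, slab subtraction and outer-slab A-line resampling, can be sketched in a few lines of NumPy. This is a minimal illustration under our own assumptions: the function names, the (x, y, z) array layout, and the fixed output depth of 64 voxels are hypothetical choices, not the actual implementation of the in-house toolkit.

```python
import numpy as np

def slab_subtract(outer_angio, inner_angio):
    """Slab subtraction: remove flow projected from superficial slabs by
    subtracting the inner en face angiogram from the outer one, clipping
    negative residuals to zero."""
    return np.clip(outer_angio - inner_angio, 0.0, None)

def resample_outer_slab(volume, opl_idx, bm_idx, depth=64):
    """Resample each A-line of the outer retinal slab (OPL to BM) onto a
    fixed number of voxels so that all volumes share one depth.
    volume: (x, y, z) structural OCT; opl_idx/bm_idx: (x, y) surface indices."""
    x, y, _ = volume.shape
    out = np.empty((x, y, depth), dtype=float)
    target = np.linspace(0.0, 1.0, depth)  # normalized depth coordinates
    for i in range(x):
        for j in range(y):
            z0, z1 = int(opl_idx[i, j]), int(bm_idx[i, j])
            aline = volume[i, j, z0:z1 + 1]
            src = np.linspace(0.0, 1.0, aline.size)
            out[i, j] = np.interp(target, src, aline)  # linear resampling
    return out
```

Linear interpolation (`np.interp`) is used here only for concreteness; any depth-wise resampling scheme that aligns the OPL and BM surfaces across A-lines would serve the same purpose.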


Fig. 2. Comparison of non-CNV scans, including a healthy eye (column 1), diabetic retinopathy (DR, column 2), and dry AMD (column 3), with a CNV (wet AMD, column 4) scan, shown as en face outer retinal angiograms (row A) and cross-sectional structural OCT overlaid with OCTA (row B) showing inner retinal (violet), choroidal (red), and pathological outer retinal flow (yellow). Slab segmentation lines are the inner limiting membrane (violet), outer border of the outer plexiform layer (yellow), and Bruch’s membrane (green). White dotted lines in row A indicate the locations of the cross sections in row B. Red arrows indicate pathologies in the outer retina.



Fig. 3. Input angiographic image set. (A) Original (A1) and projection-resolved (PR) OCTA (A2) with inner retinal (violet), choroidal (red), and outer retinal (yellow) flow overlaid on structural OCT; (B) inner retinal angiogram, with white dotted line indicating the position of the B-scans in (A); (C) outer retinal angiogram generated from the original OCTA demonstrated in (A1); (D) outer retinal angiogram processed by slab subtraction; (E) PR outer retinal angiogram. In (E) the entire CNV is preserved but some residual projection artifacts persist.



Fig. 4. Generation of outer retinal structural volume input. (A) Original structural OCT volume; (B) extracted outer retinal volume; (C) original cross-sectional OCT, with anatomic slab segmentation overlaid in violet (inner limiting membrane, ILM), yellow (outer plexiform layer, OPL), and green (Bruch’s membrane, BM); (D) segmented outer retinal cross section, resampled so that the volume has a constant voxel depth.


3. CNV identification and segmentation using convolutional neural networks

With the proposed algorithm, CNV scans are identified based on the presence of a CNV membrane, the mass of disrupted tissue within the outer retina into which CNV vessels grow. CNV vessels are then segmented on the identified CNV scans.

3.1 Algorithm outline

The proposed algorithm incorporates two CNNs, one for CNV membrane identification and segmentation (CNN-M) and the other for pixel-wise vessel segmentation (CNN-V) (Fig. 5). The data pipeline is as follows. First, both the structural and angiographic image sets are fed into CNN-M for CNV membrane segmentation. CNV is diagnosed based on the presence of a detected CNV membrane. If no membrane is detected, the algorithm classifies the scan as CNV-free; however, even in some scans without CNV, CNN-M can be fooled into classifying a residual artifact as CNV membrane. Since such residual interference is inevitable, a size cutoff threshold, estimated by maximizing identification sensitivity on the training dataset, was applied. If a membrane above the size cutoff is segmented, the input is diagnosed as CNV, and the PR outer retinal angiogram is multiplied by the segmented CNV membrane probability map to suppress interference from the background. Finally, the structural volume, the angiographic image set, and the membrane-probability-weighted PR outer retinal angiogram are fed into CNN-V for CNV vessel segmentation.
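The pipeline above can be summarized as a short control-flow sketch. The two trained networks are passed in as stand-in callables, and all names and the dictionary layout are hypothetical; the 0.5 probability threshold and the 49-pixel size cutoff follow the values reported in the Results section.

```python
import numpy as np

def run_pipeline(angio_set, structure, cnn_m, cnn_v,
                 prob_thresh=0.5, size_cutoff_px=49):
    """Sketch of the two-stage CNV diagnosis/segmentation pipeline.
    cnn_m and cnn_v are stand-ins for the trained networks; each returns
    a per-pixel probability map."""
    membrane_prob = cnn_m(angio_set, structure)       # stage 1: CNN-M
    membrane_mask = membrane_prob > prob_thresh
    if membrane_mask.sum() < size_cutoff_px:          # size cutoff check
        return {"cnv_present": False, "membrane": None, "vessels": None}
    # Weight the PR outer retinal angiogram by the membrane probability
    # to suppress background interference before vessel segmentation.
    weighted = angio_set["pr_outer"] * membrane_prob
    vessel_prob = cnn_v(angio_set, structure, weighted)  # stage 2: CNN-V
    return {"cnv_present": True,
            "membrane": membrane_mask,
            "vessels": vessel_prob > prob_thresh}
```

In use, a scan whose largest detected membrane falls below the cutoff is reported as CNV-free and CNN-V is never invoked, mirroring the diagnosis-then-segmentation order described in the text.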


Fig. 5. Outline of the proposed automated CNV identification and segmentation method. Input consists of the original inner retinal angiogram and original, slab-subtracted, and projection-resolved (PR) outer retinal angiograms, and volumetric structural data from the outer retina. We trained two separate CNNs in order to segment the CNV membrane and vessels, respectively. The first one (CNN-M) segments CNV membrane and outputs a mask corresponding to its location (if it is present). The second (CNN-V) segments CNV vascular pixels within the CNV membrane output by the first CNN.


3.2 CNV membrane segmentation using convolutional neural network

Repeated pooling layers are used in many image processing networks. They benefit image classification by compressing the key information for decision-making and extracting features across different scales. However, the width of CNV vessels can be as small as a single pixel. Repeated pooling layers may be problematic because they decrease feature resolution and may therefore remove thin vessels. To preserve feature resolution while maintaining segmentation across multiple scales, one alternative is to use larger kernels. With this approach, however, the memory and computational cost would be overly burdensome. Instead, our network design replaced most pooling layers in the convolutional layers with atrous kernels, which do not reduce feature resolution [32,33]. Atrous kernels dilate $3\times 3$ kernels by inserting zeros between the kernel elements. The atrous kernel with size $1$ is the original $3\times 3$ kernel; inserting one zero between the elements creates an atrous kernel of size $2$, inserting two zeros creates size $3$, and so on. As the size of the atrous kernel increases, the field of view is enlarged, but the memory and computational cost remain equivalent to the original $3\times 3$ kernel.
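The dilation scheme just described can be made concrete with a short NumPy sketch. Here the parameter `rate` corresponds to the kernel "size" above: rate 1 is the dense $3\times 3$ kernel, rate 2 inserts one zero between elements, and so on. The function name is our own; deep learning frameworks implement the same idea via a dilation parameter on the convolution itself.

```python
import numpy as np

def dilate_kernel(kernel, rate):
    """Build an atrous (dilated) kernel by inserting rate-1 zeros between
    the elements of a dense k x k kernel. The spatial footprint grows to
    k + (k-1)*(rate-1) per side, but the number of nonzero weights (and
    hence the compute/memory cost) is unchanged."""
    k = kernel.shape[0]
    size = k + (k - 1) * (rate - 1)
    out = np.zeros((size, size), dtype=kernel.dtype)
    out[::rate, ::rate] = kernel  # place original weights on a strided grid
    return out
```

For a $3\times 3$ kernel this gives a $5\times 5$ footprint at rate 2 and a $7\times 7$ footprint at rate 3, matching the enlarged field of view described above while keeping only nine active weights.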

The CNN architectures used for the two parts of the designed algorithm (membrane segmentation, CNN-M, and vessel segmentation, CNN-V) have different designs. For CNN-M (Fig. 6), features were first extracted separately from the structural volume and the angiographic image set. To feed the 3D outer retinal volume into a 2D CNN, each depth in the original 3D volume was input as a separate channel. The following concatenation and convolutional layers merged the structural and angiographic features and fed them into the encoder blocks for feature extraction. A single pooling layer was added after feature merging. The encoder blocks were designed using atrous kernels, with the dilation rate increasing in deeper layers. To make the best use of low-level features and to transmit the loss from deeper to shallower layers, a U-net-like architecture was applied in the decoder section. Moreover, the decision-making layer was parallelized using atrous kernels to reference features at multiple scales, with the dilation rates varying from $1$ to $32$, increasing by multiples of $2$. A softmax activation outputs the CNV membrane probability map.


Fig. 6. CNN architecture for CNV membrane segmentation (CNN-M). The atrous kernel dilation rates ($Rate = 1\;to\;32$) are annotated below each encoder block. The number of kernels is annotated below each convolutional layer. Labels $I$ and $I/2$ indicate operations on the full ($304\times 304$-pixel) and half-sized ($152\times 152$-pixel) images, respectively.


Complex tasks often call for more convolutional layers and kernels, but memory considerations impose a limit on the number the network can use. The number of kernels in the encoder and decoder layers was therefore fixed at 32. A densely connected CNN [34] structure (modified to include atrous kernels) was applied in the encoder blocks, and low-level features were concatenated to deeper levels (Fig. 7). The dilation rate of each encoder block varied as shown in Fig. 6.


Fig. 7. Encoder block architecture. The number of kernels is annotated below each convolutional layer. Dots at intersections along the lines indicate connections between layers.


3.3 CNV vessel segmentation using convolutional neural network

The CNN-M output is a membrane probability map. Pixels with probabilities higher than $0.5$ were identified as belonging to the CNV membrane area. Here, the primary task (locating the membrane mask) relies on detection across multiple scales, a challenge distinct from vessel segmentation (since vessels mostly vary between $1$ and $5$ pixels in width). The CNN architecture used for membrane segmentation was therefore simplified for CNV vessel segmentation (Fig. 8). The pooling layers in the feature-merging section were removed to keep the resolution sufficient to segment any CNV vessel. The dilation rates of the atrous kernels were also reduced to $rate = 1,\;2,\;4,\;8$ in the encoder and decision-making blocks.


Fig. 8. CNN architecture for CNV vessel segmentation (CNN-V). The atrous kernel dilation rates ($Rate = 1\;to\;8$) are annotated below each encoder block. The number of kernels is annotated below the convolutional layer. The label beside each block indicates the image size; in this case, the network operates on the full-sized ($304\times 304$-pixel) image.


3.4 Training

3.4.1 Training dataset

The training dataset included both CNV and non-CNV control cases. The CNV patients were diagnosed by retinal specialists, with CNV due to AMD visible on PR-OCTA. The non-CNV control cases consisted of healthy eyes and eyes with other retinal diseases, including non-neovascular AMD, diabetic retinopathy (DR), branch retinal vein/artery occlusion (BRVO/BRAO), and central serous chorioretinopathy (CSC). A total of $1676$ scans, including repeat and follow-up scans, were collected from the same macular area centered on the fovea. We treated scans obtained from follow-up appointments as unique cases because the CNV patterns changed significantly between imaging sessions. No scan was excluded due to low image quality. The datasets used for training and testing are from completely different eyes (i.e., no single eye was included in both training and testing) and are listed in Table 1. To prevent bias, the datasets used for training and testing were randomly selected from the entire dataset. Additionally, while the number of scans used in the test set was smaller, the number of eyes was comparable to the training set. The CNN’s good performance on such a varied dataset indicates that the algorithm should generalize to other scans and that our results are not artificially inflated by overfitting. Finally, we also examined the algorithm’s performance on individual scans from the testing dataset that exhibit particular features that often confound OCTA data analysis; the algorithm’s performance on these representative scans is discussed below.


Table 1. Dataset for training and testing

3.4.2 Ground truth

A certified grader, an experienced clinician, manually segmented the ground truth CNV membrane used in training. For this purpose, we used PR outer retinal angiograms, since these include the fewest projection artifacts. To exclude any remaining artifacts, the grader also referred to the uncorrected original angiograms of the inner and outer retina. For small CNV in low-quality scans, B-scans were also reviewed to confirm the CNV position. The CNV membrane area was manually delineated (Fig. 9(B)). The Otsu algorithm with manual correction was applied within the graded CNV membrane area to generate the CNV vasculature ground truth. To avoid observer bias, the segmentation was reviewed by a second certified grader. In case of disagreement, the second grader corrected the ground truth and sent it back to the first grader for confirmation.


Fig. 9. Ground truth generation. (A) Outer retinal angiogram generated from projection-resolved (PR) OCTA; (B) CNV membrane outline drawn by an expert grader; (C) CNV vessel mask verified by an expert grader.


4. Results

4.1 CNV diagnostic accuracy

A highly sensitive algorithm for CNV identification is desirable because missed CNV may result in vision loss that is otherwise treatable. Likewise, high specificity is desirable to avoid mistakenly identifying CNV in non-CNV eyes. However, some residual artifacts are inevitable, and may mimic the appearance of CNV and so be erroneously segmented. Because the network used in this study sometimes misidentifies small residual artifacts as CNV, we incorporated a cutoff value for CNV membrane area. Any regions the CNN identified as CNV membrane that were smaller than this cutoff were re-classified as background. This step removed many false positive identifications from our results. We chose the size cutoff value to maximize detection sensitivity in order to minimize false negative diagnoses, since detecting conversion to wet AMD is a priority in AMD monitoring. Scans with CNV membrane areas smaller than $0.004\;mm^{2}$, equivalent to a $49$-pixel area in the image, were not considered to contain CNV. The sensitivity and specificity on our test data were $100\%$ and $95\%$, respectively (Table 2), indicating we successfully achieved the goal of no missed diagnoses on this dataset. The area under the receiver operating characteristic curve (AROC) was $0.997$, demonstrating reliable diagnostic performance.
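One simple way to realize this cutoff is to discard connected membrane regions smaller than the pixel threshold. The following pure-Python flood-fill sketch is illustrative only (the paper does not describe the implementation), and 4-connectivity is our assumption; a library connected-components routine would serve equally well.

```python
import numpy as np

def filter_small_regions(mask, min_px=49):
    """Re-classify connected membrane regions smaller than the size cutoff
    (49 px, about 0.004 mm^2 on a 3x3 mm, 304x304-pixel scan) as background.
    Uses an iterative 4-connected flood fill; no external dependencies."""
    mask = mask.astype(bool)
    out = np.zeros_like(mask)
    seen = np.zeros_like(mask)
    h, w = mask.shape
    for si in range(h):
        for sj in range(w):
            if mask[si, sj] and not seen[si, sj]:
                stack, region = [(si, sj)], []
                seen[si, sj] = True
                while stack:                      # grow one connected region
                    i, j = stack.pop()
                    region.append((i, j))
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w \
                                and mask[ni, nj] and not seen[ni, nj]:
                            seen[ni, nj] = True
                            stack.append((ni, nj))
                if len(region) >= min_px:         # keep only large regions
                    for i, j in region:
                        out[i, j] = True
    return out
```

A scan would then be classified as containing CNV only if the filtered mask is non-empty, matching the diagnostic rule described above.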


Table 2. CNV diagnostic accuracy

4.2 CNV segmentation accuracy

The CNV membrane segmentation accuracy was evaluated by intersection over union (IOU), precision, recall, and F1 score, which are defined by:

$$IOU = \frac{GT {\bigcap} Out}{GT {\bigcup} Out}$$
$$precision = \frac{TP}{TP + FP}$$
$$recall = \frac{TP}{TP + FN}$$
$$F1 = 2 \times \frac{precision \times recall}{precision + recall}$$
where $GT$ is the manually graded CNV membrane, $Out$ is the CNV membrane segmented by the proposed algorithm, and $TP$, $FP$, and $FN$ are the numbers of true positive, false positive, and false negative pixels, respectively. Overall, the algorithm achieved high scores on each of these metrics (Table 3). Compared with the saliency-based CNV detection method [21], segmentation accuracy was significantly improved using the proposed method.
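For reference, these four metrics can be computed directly from binary masks; the sketch below simply restates the definitions above in NumPy (function name our own).

```python
import numpy as np

def segmentation_metrics(gt, out):
    """IOU, precision, recall, and F1 score between a ground-truth mask
    (gt) and an algorithm output mask (out)."""
    gt, out = gt.astype(bool), out.astype(bool)
    tp = np.logical_and(gt, out).sum()    # pixels in both masks
    fp = np.logical_and(~gt, out).sum()   # segmented but not in ground truth
    fn = np.logical_and(gt, ~out).sum()   # in ground truth but missed
    iou = tp / (tp + fp + fn)             # |GT ∩ Out| / |GT ∪ Out|
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return iou, precision, recall, f1
```

Note that the intersection-over-union expression is equivalent to the set form given above, since the union is the sum of true positives, false positives, and false negatives.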


Table 3. Agreement between CNV membrane outputs and ground truth ($mean \pm std$)

Repeatability of CNV membrane segmentation performed manually, by the saliency-based method, and by the proposed method was measured by the coefficient of variation (CV) from 28 participants with repeated scans in the testing data (Table 4). Both the ground truth and the proposed method had low CV, indicating the good repeatability of the proposed method and the reliability of the ground truth used for training.
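The repeatability measure can be sketched as a per-participant coefficient of variation averaged over participants. This is our reading of the analysis; in particular, the use of the sample standard deviation (`ddof=1`) is an assumption, as the paper does not specify the estimator.

```python
import numpy as np

def mean_cv(areas_by_participant):
    """Mean coefficient of variation of repeated CNV membrane-area
    measurements: CV = std / mean per participant, averaged over
    participants. areas_by_participant: list of per-participant lists of
    areas from repeated scans."""
    cvs = []
    for areas in areas_by_participant:
        a = np.asarray(areas, dtype=float)
        cvs.append(a.std(ddof=1) / a.mean())  # sample std assumed
    return float(np.mean(cvs))
```

A lower value indicates that repeated scans of the same eye yield more consistent membrane areas, which is how Table 4 compares the manual, saliency-based, and proposed segmentations.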

In order to better elucidate these results, we also report the proposed algorithm’s performance on several exemplar scans exhibiting a variety of features. In scans with clear CNV vasculature and minor artifacts (Fig. 10(A)), the CNV membrane outline and vascular patterns are prominent. In the CNV membrane probability map obtained from CNN-M, the region with high probability matched well with the CNV membrane area. Multiplying the PR-OCTA outer retinal angiogram by the CNV membrane probability map suppresses residual artifacts outside the membrane area, improving the reliability of CNV vessel segmentation. CNN-V was also able to remove noise in the CNV inter-capillary space (Fig. 10(E1)). The proposed method not only demonstrated clear CNV vasculature (Fig. 10(E1)), but also excluded artifacts surrounding the CNV that might otherwise be mis-segmented (Fig. 10(E2)).


Fig. 10. CNV segmentation on scans with good image quality. (A) Projection-resolved (PR) outer retinal angiogram; (B) manually delineated CNV membrane (red outline) and vessel (white pixels) ground truths; (C) probability map output by the membrane segmentation CNN (CNN-M); (D) probability map output by the vessel segmentation CNN (CNN-V); (E) segmented CNV membrane (white outline, with $probability>0.5$) and vessels (with pixels of $probability>0.5$).



Table 4. Comparison of repeatability among ground truth, saliency-based method and our proposed method

The dataset used in this study also contained challenging scans that our previous saliency-based algorithm had difficulty analyzing correctly. One type of challenge is a large CNV membrane area with a wide range of flow rates (Fig. 11(A)). Especially in the membrane periphery, where vessels are generally smaller and have only low flow signal, the CNV area is difficult to distinguish. The saliency-based algorithm would both reject such peripheral CNV vessels (creating false negatives) and under-segment gaps in the CNV vasculature (Fig. 11(A1), highlighted by white star). Using our proposed method, the entire CNV membrane region showed high probability despite the influence of slow flow and large inter-capillary spaces (Fig. 11(C1)), and residual projection artifacts were excluded from the probability map (Fig. 11(C2)). To accomplish this, it was important to include all of the inner and outer retinal (original, slab-subtracted, and PR-OCTA) angiograms in the CNN inputs, since in tandem they could indicate the location of low-flow CNV vessels that might otherwise be mistaken for projection artifacts. After the residual artifacts in the PR outer retinal angiogram were excluded, the CNV vessels were further segmented with high probability by CNN-V.


Fig. 11. CNV segmentation on challenging scans containing a wide range of flow rates. (A) Projection-resolved (PR) outer retinal angiogram; (B) manually delineated CNV membrane (red outline) and vessel (white pixels) ground truths; (C) probability map output by the membrane segmentation CNN (CNN-M); (D) probability map output by the vessel segmentation CNN (CNN-V); (E) segmented CNV membrane (white outline, with $probability>0.5$) and vessels (with pixels of $probability>0.5$). Large inter-capillary spaces, highlighted by stars, were correctly included in the membrane area by the proposed algorithm.


Vessels in the CNV membrane with low flow may be faint enough to resemble projection artifacts, but in other cases projection artifacts are obtrusive enough to appear as prominent as any vessels in the CNV membrane (Fig. 12(A)). The saliency-based algorithm would mis-segment such artifacts as true CNV (Fig. 1(A2) & (B2)). Our proposed method can successfully distinguish real CNV from strong residual projection artifacts both in CNV cases (Fig. 12(A1)) and in a case diagnosed with retinal angiomatous proliferation (Fig. 12(A2)), in which the neovascular lesion lies on top of these intense residual projection artifacts because it grows from the inner retina down into the outer retina. As in the previous example with a large CNV membrane area, including each of the differently processed outer retinal angiograms enabled the trained network to distinguish true CNV from artifacts, since the angiographic image set and outer retinal structural volume yield features that uniquely identify artifacts and true signal.


Fig. 12. CNV segmentation on two cases with strong residual projection artifacts. Bottom row shows a special case with a retinal angiomatous proliferation lesion. (A) Projection-resolved (PR) outer retinal angiogram; (B) manually delineated CNV membrane (red outline) and vessel (white pixels) ground truths; (C) probability map output by the membrane segmentation CNN (CNN-M); (D) probability map output by the vessel segmentation CNN (CNN-V); (E) segmented CNV membrane (white outline, with $probability>0.5$) and vessels (with pixels of $probability>0.5$).


Another source of difficulty for CNV analysis is low scan quality (Fig. 13(A)). Two common sources of low scan quality are low signal strength and defocus. Defocus not only reduces signal strength, but also broadens capillaries and generally makes images less clear (Fig. 13(A)). In defocused scans, the membrane outline is consequently blurred and indistinct. At the same time, such low-quality scans are problematic for PR-OCTA correction, leading to more prevalent residual projection artifacts. As in the previous examples, the full angiographic image set was essential for correct exclusion of these artifacts, but as can be seen (Fig. 13, case 2), CNN-M still incorrectly segmented some projection artifacts. However, CNN-V further shrank the artifacts when determining the vessel probability (Fig. 13(D2)), yielding a vessel segmentation that was correct despite the false positive membrane segmentations. With the benefit of CNV membrane and vessel segmentation, the visualization of the CNV on defocused scans was dramatically improved, providing an image with clear boundaries and vasculature.


Fig. 13. CNV segmentation on scans with defocusing. (A) Projection-resolved (PR) outer retinal angiogram; (B) manually delineated ground truth of CNV membrane (red outline) and vessels (white pixels); (C) probability map output by the membrane segmentation CNN (CNN-M); (D) probability map output by the vessel segmentation CNN (CNN-V); (E) segmented CNV membrane (white outline, with $probability>0.5$) and vessels (with pixels of $probability>0.5$).


One more important advantage of the proposed algorithm over the saliency-based approach is its ability to correctly omit CNV from scans in which it is not present. These scans are challenging because, even in the absence of CNV, many scans contain spurious artifactual signal. In particular, in scans with a low signal strength index (SSI), the proprietary motion correction technology (MCT) software may fail to suppress motion artifacts after merging one X-fast and one Y-fast scan. The saliency-based approach identifies these artifacts as CNV (Fig. 1(A4) & (B4)), since they appear as bright as real CNV. They also pose problems for differentiating artifact from signal using the angiographic image set used in this study, since they do not share the same inter-image relationships as projection artifacts. The inclusion of the outer retinal reflectance image is useful in such cases, since CNV development induces structural changes in the retina that can differentiate afflicted eyes from healthy eyes or from eyes with a different outer retinal pathology. By including this reflectance information, the proposed method was able to correctly classify eyes as CNV-free, as shown for a dry-AMD case and a DR case in Fig. 14; CNV was detected in neither, indicating the proposed algorithm's robust performance.
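One simple way to turn a membrane probability map into a per-scan diagnosis is to flag a scan as CNV-positive only when the thresholded mask covers a sufficiently large area. The sketch below illustrates this idea; the area threshold and the decision rule itself are illustrative assumptions, not the criterion reported in the paper:

```python
import numpy as np

def diagnose_cnv(p_membrane, prob_thresh=0.5, min_area_px=50):
    """Classify a scan as CNV-positive if the thresholded membrane
    mask covers at least `min_area_px` pixels (illustrative rule)."""
    area = int((p_membrane > prob_thresh).sum())
    return area >= min_area_px, area

# A noise-only map (all probabilities low) is rejected, while a map
# with a coherent high-probability region is accepted.
rng = np.random.default_rng(0)
noise = rng.random((304, 304)) * 0.3    # everything below threshold
lesion = noise.copy()
lesion[100:150, 100:150] = 0.95         # 2500-pixel "membrane"
print(diagnose_cnv(noise)[0], diagnose_cnv(lesion)[0])  # False True
```

In practice a connected-component analysis would be more robust than a raw pixel count, since scattered false-positive pixels should not trigger a diagnosis.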


Fig. 14. The proposed method correctly classifies scans with no CNV present. Shown are a case with dry age-related macular degeneration (AMD; row 1) and diabetic retinopathy (DR; row 2). No CNV is delineated in ground truths (column B). Despite strong motion artifacts (column A), the proposed method’s probability map (column C&D) does not indicate any CNV, and so the algorithm correctly does not segment any membrane or vessels in the output (column E).


5. Discussion and conclusion

In this study, we demonstrated a new CNV identification and segmentation algorithm using deep learning. Two tasks were achieved in this work: first, the algorithm classified scans as CNV-present or CNV-absent; second, it successfully segmented the CNV membrane and vessels. The proposed method accomplished these tasks on a diverse dataset that included both CNV scans and scans with other pathologies, and our performance assessment did not exclude any scans due to low image quality. The high sensitivity, specificity, and AROC values achieved under these conditions indicate that our proposed method attained robust identification and segmentation.
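The diagnostic metrics referred to above follow the standard definitions. As a quick reference (using hypothetical confusion counts for illustration, not the study's actual data):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical counts: every CNV case caught (no false negatives),
# one non-CNV control flagged incorrectly.
sens, spec = diagnostic_metrics(tp=20, fn=0, tn=19, fp=1)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
# sensitivity=1.00, specificity=0.95
```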

Since CNV is a vision-threatening development in a common retinal disease, it has been the target of several studies seeking to use OCTA to quantify or visualize its scope. In clinical research, CNV membrane areas are often drawn manually or segmented by manually adjusted thresholding [35–38], but this is time-consuming, particularly because accurate measurement requires careful consideration of how artifacts affect the OCTA visualization. At the same time, automated approaches such as the saliency-based method [21,30,31] are readily foiled by artifacts, which are inevitable in clinical datasets. Because CNV lesions often grow from their periphery, accurate segmentation of the membrane area even when the peripheral vessels are small and dim means that the algorithm proposed here contributes more to CNV monitoring than the performance metrics evaluated above indicate. Furthermore, to the best of our knowledge, all previous attempts to automate CNV identification have limited their scope to membrane segmentation alone. Since CNV vessel morphology is associated with CNV treatment response [39], vessel segmentation is also highly desirable.

We believe that the work presented here represents a significant improvement upon previous CNV detection and segmentation algorithms. We have already discussed the limitations of our saliency-based algorithm [21]. We also previously published a distance-mapping approach [40]. This method, in common with the saliency algorithm, was an intensity-based algorithm that will necessarily detect some CNV in any input image, and so requires manual classification of images for the presence of CNV. The distance-mapping method is also vulnerable to disruption by projection and other artifacts. Our initial efforts on CNV segmentation were conducted on the slab-subtracted outer retinal angiogram [21,30], and Zhang et al. [41,42] also proposed a morphology- and edge-detection-based method that relies on slab subtraction to mitigate the most egregious effects of projection artifacts. But as noted (Fig. 3(D)), CNV vascular integrity is easily damaged by slab subtraction. Finally, both of these methods, as well as our saliency-based method, relied on hand-crafted features. CNV vascular patterns contain unique features that differentiate them from noise, but hand-crafted approaches cannot, in general, exploit these features for segmentation, whereas deep learning-based approaches can. The use of a CNN can circumvent many of these difficulties. Zhang et al. reported a CNN-based outer retinal structural abnormality detection algorithm that may detect CNV [43]; however, their approach relied exclusively on structural OCT data, which cannot by itself differentiate CNV from other pathologies that cause retinal layer disorganization. As a result, the specificity of this algorithm for CNV detection in clinical datasets may be compromised.

To the best of our knowledge, we are the first group to diagnose and segment CNV from OCTA using CNNs. By using a varied and information-rich input dataset, including outer retinal volumetric structural data and en face angiograms of the inner and outer retina with different levels and methods of error correction, the CNN-based algorithm was able to exclude the remaining projection artifacts and noise from the CNV membrane and vessels. Several other design choices contributed to the high performance of our proposed method. The designed CNNs utilized a modified dense network with atrous kernels in the encoder blocks. Features were extracted across multiple scales by increasing the atrous kernel dilation rate, and parallelized feature extraction across low and high levels helped to accelerate training. The number of kernels was reduced and kept constant to achieve a deeper network. Several groups have reported vessel segmentation algorithms [44–49], including some based on CNNs [45–49], but these accomplish general vessel segmentation rather than CNV-specific segmentation. Comparison with these algorithms would therefore be misleading, and is also difficult because of problems with fairly measuring vessel segmentations against both each other and the manually segmented ground truth. In particular, pixel-scale variation in the segmentation of small vessels can lead to low scores for common metrics such as the Dice coefficient or mean intersection over union, even when the ground truth and algorithm output essentially agree. Furthermore, in such cases there is little reason to prefer either the manually segmented ground truth or any specific algorithm output.
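The multi-scale effect of increasing the atrous dilation rate can be made concrete by computing the receptive field of a stack of 3×3 atrous convolutions. This is a back-of-the-envelope sketch; the doubling rates follow the $Rate = 1\;to\;32$ annotation in Fig. 6, and everything else (stride 1, one convolution per rate) is an illustrative assumption:

```python
def receptive_field(kernel_size, dilation_rates):
    """Receptive field of a stack of stride-1 atrous convolutions.

    Each layer with dilation r enlarges the field by r * (k - 1) pixels,
    so doubling the rate doubles that layer's contribution without
    adding any parameters.
    """
    rf = 1
    for r in dilation_rates:
        rf += r * (kernel_size - 1)
    return rf

# 3x3 kernels with rates doubling from 1 to 32, as annotated in Fig. 6.
rates = [1, 2, 4, 8, 16, 32]
print(receptive_field(3, rates))  # 127
```

A 127-pixel receptive field spans a large fraction of a 304×304-pixel en face angiogram, which is what lets the encoder relate a pixel to distant context such as a membrane boundary.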

While we achieved highly accurate CNV diagnosis and segmentation, there are several ways that future work could improve upon the results presented here. One issue is that layer segmentation of the ILM, OPL, and BM is required in pre-processing. Even though these layers were automatically segmented using a graph-search-based algorithm [29,42], manual correction was sometimes needed for the BM segmentation (for example, under large drusen). Future work could improve the automation of this step in order to achieve a fully automated data pipeline. Another issue is quantifying our algorithm's performance. In AMD scans, the entire CNV membrane area is easily quantified and compared to the manually delineated equivalent to extract, e.g., F1 scores. Similar comparison is difficult for the output vessel masks. Since vessels are thin, small (i.e., pixel-scale) discrepancies between the manually graded and CNN outputs can lead to significant differences when evaluating segmentation accuracy. And with small, dim vessels, it is difficult even for a human grader to say exactly which pixels correspond to true flow, and hence difficult to tell whether the human-graded ground truth or the CNN output is more correct. Quantitative comparisons between manual grading and the CNN are therefore potentially misleading. Still, qualitative comparisons like those presented above indicate reasonable performance.
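The metric instability described here, where pixel-scale disagreement on thin vessels collapses IoU even though the segmentations essentially agree, is easy to demonstrate on synthetic masks (a sketch with made-up data, not the study's scans):

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two binary masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

# A one-pixel-wide "vessel" shifted by a single pixel: the two masks
# trace the same vessel, yet IoU drops to zero.
mask_a = np.zeros((20, 20), dtype=bool)
mask_b = np.zeros((20, 20), dtype=bool)
mask_a[10, :] = True   # vessel on row 10
mask_b[11, :] = True   # same vessel, graded one pixel lower
print(iou(mask_a, mask_b))  # 0.0

# The same one-pixel shift on a thick membrane barely matters.
blob_a = np.zeros((20, 20), dtype=bool)
blob_b = np.zeros((20, 20), dtype=bool)
blob_a[5:15, 5:15] = True
blob_b[6:16, 5:15] = True
print(round(iou(blob_a, blob_b), 2))  # 0.82
```

This is why area-overlap metrics are informative for the membrane masks but potentially misleading for the vessel masks.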

Despite these limitations, the algorithm proposed in this work accomplishes the essential tasks for which it was intended. Transitioning CNV segmentation to a fully automated approach will not only increase the amount of information available during patient monitoring, but may also reveal previously hidden indicators of CNV progression and prognosis as more data accumulate. Manually tracing CNV vessels for every patient is not feasible, so the vessel segmentation presented here offers an essential capability for monitoring new and potentially better CNV biomarkers in the clinic.

Funding

National Institutes of Health (P30 EY010572, R01 EY024544, R01 EY027833); National Natural Science Foundation of China (81971697); Research to Prevent Blindness (unrestricted departmental funding grant, William & Mary Greve Special Scholar Award).

Disclosures

Oregon Health & Science University (OHSU) and Yali Jia have a significant financial interest in Optovue, Inc. These potential conflicts of interest have been reviewed and managed by OHSU.

References

1. The Eye Disease Prevalence Research Group, “Causes and prevalence of visual impairment among adults in the united states,” Arch. Ophthalmol. 122(4), 477–485 (2004). [CrossRef]  

2. The Eye Disease Prevalence Research Group, “Prevalence of age-related macular degeneration in the united states,” Arch. Ophthalmol. 122(4), 564–572 (2004). [CrossRef]  

3. R. D. Jager, W. F. Mieler, and J. W. Miller, “Age-related macular degeneration,” N. Engl. J. Med. 358(24), 2606–2617 (2008). [CrossRef]  

4. H. E. Grossniklaus and W. R. Green, “Choroidal neovascularization,” Am. J. Ophthalmol. 137(3), 496–503 (2004). [CrossRef]  

5. P. T. De Jong, “Age-related macular degeneration,” N. Engl. J. Med. 355(14), 1474–1485 (2006). [CrossRef]  

6. M. R. Hee, C. R. Baumal, C. A. Puliafito, J. S. Duker, E. Reichel, J. R. Wilkins, J. G. Coker, J. S. Schuman, E. A. Swanson, and J. G. Fujimoto, “Optical coherence tomography of age-related macular degeneration and choroidal neovascularization,” Ophthalmology 103(8), 1260–1270 (1996). [CrossRef]  

7. L. A. Donoso, D. Kim, A. Frost, A. Callahan, and G. Hageman, “The role of inflammation in the pathogenesis of age-related macular degeneration,” Surv. Ophthalmol. 51(2), 137–152 (2006). [CrossRef]  

8. P. E. Stanga, J. I. Lim, and P. Hamilton, “Indocyanine green angiography in chorioretinal diseases: indications and interpretation: an evidence-based update,” Ophthalmology 110(1), 15–21 (2003). [CrossRef]  

9. M. Lopez-Saez, E. Ordoqui, P. Tornero, A. Baeza, T. Sainza, J. Zubeldia, M. Baeza, and M. L. Baeza, “Fluorescein-induced allergic reaction,” Ann. Allergy, Asthma, Immunol. 81(5), 428–430 (1998). [CrossRef]  

10. Y. Jia, O. Tan, J. Tokayer, B. Potsaid, Y. Wang, J. J. Liu, M. F. Kraus, H. Subhash, J. G. Fujimoto, J. Hornegger, and D. Huang, “Split-spectrum amplitude-decorrelation angiography with optical coherence tomography,” Opt. Express 20(4), 4710–4725 (2012). [CrossRef]  

11. R. F. Spaide, J. G. Fujimoto, and N. K. Waheed, “Image artifacts in optical coherence angiography,” Retina 35(11), 2163–2180 (2015). [CrossRef]  

12. A. Camino, Y. Jia, G. Liu, J. Wang, and D. Huang, “Regression-based algorithm for bulk motion subtraction in optical coherence tomography angiography,” Biomed. Opt. Express 8(6), 3053–3066 (2017). [CrossRef]  

13. X. Wei, A. Camino, S. Pi, W. Cepurna, D. Huang, J. C. Morrison, and Y. Jia, “Fast and robust standard-deviation-based method for bulk motion compensation in phase-based functional oct,” Opt. Lett. 43(9), 2204–2207 (2018). [CrossRef]  

14. J. Wang, M. Zhang, T. S. Hwang, S. T. Bailey, D. Huang, D. J. Wilson, and Y. Jia, “Reflectance-based projection-resolved optical coherence tomography angiography,” Biomed. Opt. Express 8(3), 1536–1548 (2017). [CrossRef]  

15. M. Zhang, T. S. Hwang, J. P. Campbell, S. T. Bailey, D. J. Wilson, D. Huang, and Y. Jia, “Projection-resolved optical coherence tomographic angiography,” Biomed. Opt. Express 7(3), 816–828 (2016). [CrossRef]  

16. R. C. Patel, J. Wang, T. S. Hwang, M. Zhang, S. S. Gao, M. E. Pennesi, S. T. Bailey, B. J. Lujan, X. Wang, D. J. Wilson, D. Huang, and Y. Jia, “Plexus-specific detection of retinal vascular pathologic conditions with projection-resolved oct angiography,” Ophthalmol. Retin. 2(8), 816–826 (2018). [CrossRef]  

17. R. Patel, J. Wang, J. P. Campbell, L. Kiang, A. Lauer, C. Flaxel, T. Hwang, B. Lujan, D. Huang, S. T. Bailey, and Y. Jia, “Classification of choroidal neovascularization using projection-resolved optical coherence tomographic angiography,” Invest. Ophthalmol. Visual Sci. 59(10), 4285–4291 (2018). [CrossRef]  

18. K. V. Bhavsar, Y. Jia, J. Wang, R. C. Patel, A. K. Lauer, D. Huang, and S. T. Bailey, “Projection-resolved optical coherence tomography angiography exhibiting early flow prior to clinically observed retinal angiomatous proliferation,” Am. J. Ophthalmol. Case Reports 8, 53–57 (2017). [CrossRef]  

19. S. T. Bailey, O. Thaware, J. Wang, A. M. Hagag, X. Zhang, C. J. Flaxel, A. K. Lauer, T. S. Hwang, P. Lin, D. Huang, and Y. Jia, “Detection of nonexudative choroidal neovascularization and progression to exudative choroidal neovascularization using oct angiography,” Ophthalmol. Retin. 3(8), 629–636 (2019). [CrossRef]  

20. M. Al-Sheikh, N. A. Iafe, N. Phasukkijwatana, S. R. Sadda, and D. Sarraf, “Biomarkers of neovascular activity in age-related macular degeneration using optical coherence tomography angiography,” Retina 38(2), 220–230 (2018). [CrossRef]  

21. L. Liu, S. S. Gao, S. T. Bailey, D. Huang, D. Li, and Y. Jia, “Automated choroidal neovascularization detection algorithm for optical coherence tomography angiography,” Biomed. Opt. Express 6(9), 3564–3576 (2015). [CrossRef]  

22. U. Schmidt-Erfurth, A. Sadeghipour, B. S. Gerendas, S. M. Waldstein, and H. Bogunović, “Artificial intelligence in retina,” Prog. Retinal Eye Res. 67, 1–29 (2018). [CrossRef]  

23. S. Maetschke, B. Antony, H. Ishikawa, G. Wollstein, J. Schuman, and R. Garnavi, “A feature agnostic approach for glaucoma detection in oct volumes,” PLoS One 14(7), e0219126 (2019). [CrossRef]  

24. Y. Guo, A. Camino, J. Wang, D. Huang, T. S. Hwang, and Y. Jia, “Mednet, a neural network for automated detection of avascular area in oct angiography,” Biomed. Opt. Express 9(11), 5147–5158 (2018). [CrossRef]  

25. Y. Guo, T. T. Hormel, H. Xiong, B. Wang, A. Camino, J. Wang, D. Huang, T. S. Hwang, and Y. Jia, “Development and validation of a deep learning algorithm for distinguishing the nonperfusion area from signal reduction artifacts on oct angiography,” Biomed. Opt. Express 10(7), 3257–3268 (2019). [CrossRef]  

26. L. Fang, D. Cunefare, C. Wang, R. H. Guymer, S. Li, and S. Farsiu, “Automatic segmentation of nine retinal layer boundaries in oct images of non-exudative amd patients using deep learning and graph search,” Biomed. Opt. Express 8(5), 2732–2744 (2017). [CrossRef]  

27. P. Zang, J. Wang, T. T. Hormel, L. Liu, D. Huang, and Y. Jia, “Automated segmentation of peripapillary retinal boundaries in oct combining a convolutional neural network and a multi-weights graph search,” Biomed. Opt. Express 10(8), 4340–4352 (2019). [CrossRef]  

28. M. Zhang, J. Wang, A. D. Pechauer, T. S. Hwang, S. S. Gao, L. Liu, L. Liu, S. T. Bailey, D. J. Wilson, D. Huang, and Y. Jia, “Advanced image processing for optical coherence tomographic angiography of macular diseases,” Biomed. Opt. Express 6(12), 4661–4675 (2015). [CrossRef]  

29. Y. Guo, A. Camino, M. Zhang, J. Wang, D. Huang, T. Hwang, and Y. Jia, “Automated segmentation of retinal layer boundaries and capillary plexuses in wide-field optical coherence tomographic angiography,” Biomed. Opt. Express 9(9), 4429–4442 (2018). [CrossRef]  

30. Y. Jia, S. T. Bailey, D. J. Wilson, O. Tan, M. L. Klein, C. J. Flaxel, B. Potsaid, J. J. Liu, C. D. Lu, M. F. Kraus, J. G. Fujimoto, and D. Huang, “Quantitative optical coherence tomography angiography of choroidal neovascularization in age-related macular degeneration,” Ophthalmology 121(7), 1435–1444 (2014). [CrossRef]  

31. Y. Jia, S. T. Bailey, T. S. Hwang, S. M. McClintic, S. S. Gao, M. E. Pennesi, C. J. Flaxel, A. K. Lauer, D. J. Wilson, J. Hornegger, J. G. Fujimoto, and D. Huang, “Quantitative optical coherence tomography angiography of vascular abnormalities in the living human eye,” Proc. Natl. Acad. Sci. 112(18), E2395–E2402 (2015). [CrossRef]  

32. L.-C. Chen, G. Papandreou, F. Schroff, and H. Adam, “Rethinking atrous convolution for semantic image segmentation,” arXiv preprint arXiv:1706.05587 (2017).

33. L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs,” IEEE Trans. Pattern Anal. Mach. Intell. 40(4), 834–848 (2018). [CrossRef]  

34. G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2017), pp. 4700–4708.

35. A. D. Treister, P. L. Nesper, A. E. Fayed, M. K. Gill, R. G. Mirza, and A. A. Fawzi, “Prevalence of subclinical cnv and choriocapillaris nonperfusion in fellow eyes of unilateral exudative amd on oct angiography,” Transl. Vis. Sci. & Technol. 7(5), 19 (2018). [CrossRef]  

36. M. A. Bonini Filho, E. Talisa, D. Ferrara, M. Adhi, C. R. Baumal, A. J. Witkin, E. Reichel, J. S. Duker, and N. K. Waheed, “Association of choroidal neovascularization and central serous chorioretinopathy with optical coherence tomography angiography,” JAMA Ophthalmol. 133(8), 899–906 (2015). [CrossRef]  

37. L. Kuehlewein, M. Bansal, T. L. Lenis, N. A. Iafe, S. R. Sadda, M. A. Bonini Filho, E. Talisa, N. K. Waheed, J. S. Duker, and D. Sarraf, “Optical coherence tomography angiography of type 1 neovascularization in age-related macular degeneration,” Am. J. Ophthalmol. 160(4), 739–748.e2 (2015). [CrossRef]  

38. M. Inoue, C. Balaratnasingam, and K. B. Freund, “Optical coherence tomography angiography of polypoidal choroidal vasculopathy and polypoidal choroidal neovascularization,” Retina 35(11), 2265–2274 (2015). [CrossRef]  

39. P. L. Nesper, B. T. Soetikno, A. D. Treister, and A. A. Fawzi, “Volume-rendered projection-resolved oct angiography: 3d lesion complexity is associated with therapy response in wet age-related macular degeneration,” Invest. Ophthalmol. Visual Sci. 59(5), 1944–1952 (2018). [CrossRef]  

40. J. Xue, A. Camino, S. T. Bailey, X. Liu, D. Li, and Y. Jia, “Automatic quantification of choroidal neovascularization lesion area on oct angiography based on density cell-like p systems with active membranes,” Biomed. Opt. Express 9(7), 3208–3219 (2018). [CrossRef]  

41. A. Zhang, Q. Zhang, and R. K. Wang, “Minimizing projection artifacts for accurate presentation of choroidal neovascularization in oct micro-angiography,” Biomed. Opt. Express 6(10), 4130–4143 (2015). [CrossRef]  

42. Q. Zhang, C.-L. Chen, Z. Chu, F. Zheng, A. Miller, L. Roisman, J. R. de Oliveira Dias, Z. Yehoshua, K. B. Schaal, W. Feuer, G. Gregori, S. Kubach, L. An, P. F. Stetson, M. K. Durbin, P. J. Rosenfeld, and R. K. Wang, “Automated quantitation of choroidal neovascularization: a comparison study between spectral-domain and swept-source oct angiograms,” Invest. Ophthalmol. Visual Sci. 58(3), 1506–1513 (2017). [CrossRef]  

43. Y. Zhang, Z. Ji, Y. Wang, S. Niu, W. Fan, S. Yuan, and Q. Chen, “Mpb-cnn: a multi-scale parallel branch cnn for choroidal neovascularization segmentation in sd-oct images,” OSA Continuum 2(3), 1011–1027 (2019). [CrossRef]  

44. R. Perfetti, E. Ricci, D. Casali, and G. Costantini, “Cellular neural networks with virtual template expansion for retinal vessel segmentation,” IEEE Trans. Circuits Syst. II 54(2), 141–145 (2007). [CrossRef]  

45. H. Fu, Y. Xu, S. Lin, D. W. K. Wong, and J. Liu, “Deepvessel: Retinal vessel segmentation via deep learning and conditional random field,” in International Conference on Medical Image Computing and Computer-assisted Intervention (Springer, 2016), pp. 132–139.

46. A. Dasgupta and S. Singh, “A fully convolutional neural network based structured prediction approach towards the retinal vessel segmentation,” in 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), (IEEE, 2017), pp. 248–251.

47. A. Wu, Z. Xu, M. Gao, M. Buty, and D. J. Mollura, “Deep vessel tracking: A generalized probabilistic approach via deep learning,” in 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), (IEEE, 2016), 1363–1367.

48. C. Zhu, B. Zou, R. Zhao, J. Cui, X. Duan, Z. Chen, and Y. Liang, “Retinal vessel segmentation in colour fundus images using extreme learning machine,” Comput. Med. Imaging Graph. 55, 68–77 (2017). [CrossRef]  

49. J. Mo and L. Zhang, “Multi-level deep supervised networks for retinal vessel segmentation,” Int. J. Comput. Assist. Radiol. Surg. 12(12), 2181–2193 (2017). [CrossRef]  



Figures (14)

Fig. 1. CNV segmentation on challenging scans using a saliency-based algorithm. Small residual projection artifacts are excluded in the saliency map (A1&B1, highlighted by white arrows). Strong residual artifacts in CNV and non-CNV scans were over-segmented in the saliency map, producing false positives (A2&B2, A4&B4, highlighted by red arrows), while large CNV was under-segmented in the saliency map, producing false negatives (A3&B3, highlighted by green arrows).

Fig. 2. Comparison of non-CNV, including a healthy (column 1), diabetic retinopathy (DR, column 2), and dry AMD (column 3), and CNV (wet AMD, column 4) scans with en face outer retinal angiograms (row A) and cross-sectional structural OCT overlaid with OCTA (row B) showing inner retinal (violet), choroidal (red), and pathological outer retinal flow (yellow). Slab segmentation lines are the inner limiting membrane (violet), outer border of the outer plexiform layer (yellow), and Bruch’s membrane (green). White dotted lines in row A indicate the locations of the cross sections in row B. Red arrows indicate the pathologies in the outer retina.

Fig. 3. Input angiographic image set. (A) Original (A1) and projection-resolved (PR) OCTA (A2) with inner retinal (violet), choroidal (red), and outer retinal (yellow) flow overlaid on structural OCT; (B) inner retinal angiogram, with white dotted line indicating the position of the B-scans in (A); (C) outer retinal angiogram generated from the original OCTA demonstrated in (A1); (D) outer retinal angiogram processed by slab-subtraction; (E) PR outer retinal angiogram. In (E) the entire CNV is preserved but some residual projection artifacts persist.

Fig. 4. Generation of outer retinal structural volume input. (A) Original structural OCT volume; (B) extracted outer retinal volume; (C) original cross-sectional OCT, with anatomic slab segmentation overlaid in violet (inner limiting membrane, ILM), yellow (outer plexiform layer, OPL), and green (Bruch’s membrane, BM); (D) segmented outer retinal cross section, resampled so that the volume has a constant voxel depth.

Fig. 5. Outline of the proposed automated CNV identification and segmentation method. Input consists of the original inner retinal angiogram and original, slab-subtracted, and projection-resolved (PR) outer retinal angiograms, and volumetric structural data from the outer retina. We trained two separate CNNs in order to segment the CNV membrane and vessels, respectively. The first one (CNN-M) segments the CNV membrane and outputs a mask corresponding to its location (if it is present). The second (CNN-V) segments CNV vascular pixels within the CNV membrane output by the first CNN.

Fig. 6. CNN architecture for CNV membrane segmentation (CNN-M). The atrous kernel sizes ($Rate = 1\;to\;32$) are annotated below each encoder block. The number of kernels is annotated below each convolutional layer. Labels $I$ and $I/2$ indicate operations on the full ($304\times 304-pixel$) and half-sized ($152\times 152-pixel$) image, respectively.

Fig. 7. Encoder block architecture. The number of kernels is annotated below each convolutional layer. Dots at intersections along the lines indicate connections between layers.

Fig. 8. CNN architecture for CNV vessel segmentation (CNN-V). The atrous kernel sizes ($Rate = 1\;to\;8$) are annotated below each encoder block. The number of kernels is annotated below the convolutional layer. The label beside each block is the image size; in this case, the network operates on the full-sized ($304\times 304-pixel$) image.

Fig. 9. Ground truth generation. (A) outer retinal angiogram generated from projection-resolved (PR) OCTA; (B) CNV membrane outline drawn by an expert grader; (C) CNV vessel mask verified by an expert grader.

Fig. 10. CNV segmentation on scans with good image quality. (A) Projection-resolved (PR) outer retinal angiogram; (B) manually delineated CNV membrane (red outline) and vessel (white pixels) ground truths; (C) probability map output by the membrane segmentation CNN (CNN-M); (D) probability map output by the vessel segmentation CNN (CNN-V); (E) segmented CNV membrane (white outline, with $probability>0.5$) and vessels (with pixels of $probability>0.5$).

Fig. 11. CNV segmentation on challenging scans containing a wide range of flow rates. (A) Projection-resolved (PR) outer retinal angiogram; (B) manually delineated CNV membrane (red outline) and vessels (white pixels) ground truths; (C) probability map output by the membrane segmentation CNN (CNN-M); (D) probability map output by the vessel segmentation CNN (CNN-V); (E) segmented CNV membrane (white outline, with $probability>0.5$) and vessels (with pixels of $probability>0.5$). Large intercapillary spaces, highlighted by stars, were correctly included in the membrane area by the proposed algorithm.

Tables (4)

Table 1. Dataset for training and testing

Table 2. CNV diagnostic accuracy

Table 3. Agreement between CNV membrane outputs and ground truth (mean ± std)

Table 4. Comparison of repeatability among the ground truth, the saliency-based method, and the proposed method

Equations (4)

$$IoU = \frac{|GT \cap Out|}{|GT \cup Out|}$$
$$precision = \frac{TP}{TP + FP}$$
$$recall = \frac{TP}{TP + FN}$$
$$F1 = \frac{2 \times precision \times recall}{precision + recall}$$