
Automated segmentation of retinal layer boundaries and capillary plexuses in wide-field optical coherence tomographic angiography


Abstract

Advances in the retinal layer segmentation of structural optical coherence tomography (OCT) images have allowed the separation of capillary plexuses in OCT angiography (OCTA). With the increased scanning speeds of OCT devices and wider field images (≥10 mm on the fast axis), greater retinal curvature and anatomic variation have introduced new challenges. In this study, we developed a novel automated method to segment seven retinal layer boundaries and two retinal plexuses in wide-field OCTA images. The algorithm was initialized by a series of points forming a guidance point array that estimates the location of the retinal layer boundaries. A guided bidirectional graph search method, an improvement on our previous segmentation algorithm, was then used to search for the precise boundaries. We validated the method on normal and diseased eyes, demonstrating subpixel accuracy for all groups. By allowing independent visualization of the superficial and deep plexuses, this method shows potential for the detection of plexus-specific peripheral vascular abnormalities.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optical coherence tomography (OCT) [1] is an interferometric imaging technology capable of acquiring high-resolution, three-dimensional (3D) images of biological tissue such as the retina through non-invasive, non-contact laser scanning. It has been widely used in the diagnosis of ophthalmic diseases such as glaucoma [2], diabetic retinopathy (DR) [3], and age-related macular degeneration (AMD) [4] by quantifying the thicknesses of relevant slabs. OCT angiography (OCTA) is a novel clinical tool for the early diagnosis of diseases affecting the retinal circulation and for the assessment of their progression. Based on the variation of OCT signals between B-scans at the same position, OCTA can provide depth-resolved flow signals for the microvasculature. Prior studies have shown that slab-based OCTA improves the visualization and interpretation of OCTA volumes [5–7], and recent work has also shown that vascular abnormalities are better visualized by separating the retinal circulation into three vascular layers [8,9]. Therefore, automated segmentation of the retinal layer boundaries is essential to accurately assess anatomic thickness and capillary plexuses.

The segmentation of retinal layers is a challenging task that has been approached with a diversity of methods [10–29], all of which exploit the reflectance contrast between adjacent retinal layers [10–12]. These methods have relied on gradient information, as in active contours [13,14] and graph search [15–18], or on supervised machine learning, such as support vector machines [19], random forests [20], deep learning [21,22], probabilistic approaches [23], and other methods [24–29]. Advances in swept-source OCT (SS-OCT) technology have enabled wide-field OCT imaging that can evaluate larger portions of the retina [30]. However, wide-field OCT poses new challenges to existing segmentation algorithms. First, SS-OCT systems, which use 1050-nm center-wavelength lasers, have lower axial resolution and back-scattered reflectance contrast than commercial spectral-domain devices that use an 840-nm center wavelength; this reduces the number of pixels contained within retinal layers as well as the number of features that can be extracted for machine learning segmentation alternatives. Second, because of the large retinal curvature-associated aberration in the wider field of view, the focus of wide-field OCT is compromised in the peripheral regions. Third, retinal curvature and anatomic variation increase with the field of view. These characteristics make single-source path search algorithms (e.g., graph search) prone to local errors that can be propagated further by the search routine.

Previously, we developed a successful segmentation algorithm based on directional graph search for 3 × 3- and 6 × 6-mm scans of the retina [17]. To address the new challenges associated with wide-field scans, we propose here the guided bidirectional graph search (GB-GS) method, in which an array of points guides the graph search algorithm in two directions to identify the seven retinal boundaries. The method consists of three steps. First, a guidance point array (GPA) is found to represent the approximate positions of the boundaries. Then, a bidirectional graph search is applied from each GPA point not included in any previous path. Finally, the candidate paths are merged, using the GPA, to generate the final boundaries.

2. Methods

2.1 Data acquisition

The study was approved by the Institutional Review Board/Ethics Committee of Oregon Health & Science University, and informed consent was collected from all participants in compliance with the Declaration of Helsinki. Volumetric scans of both eyes were acquired by a prototype 200-kHz SS-OCT system with a 1050-nm central wavelength, covering a 10-mm (fast-axis) × 8-mm (slow-axis) retinal region. Two repeated B-scans were taken at each of 400 raster positions, and each B-scan comprised 850 A-lines. B-scans at the same position were averaged to improve the signal-to-noise ratio of the structural OCT. The OCTA data were calculated by the split-spectrum amplitude-decorrelation angiography (SSADA) algorithm [31].

2.2 Preprocessing

First, we normalized each B-scan and then flattened it using the center of mass as a reference to prevent errors caused by significant tissue curvature [17]. Then, we generated gradient maps to emphasize the transitions between retinal layers with different reflectivity. We reduced speckle noise by applying a median filter (kernel size, width × height: 3 × 3) and a mean filter (kernel size, width × height: 7 × 3) that preserved the continuity of retinal layer boundaries primarily along the horizontal direction. Because the boundaries being segmented exhibited two different intensity transition modes (dark-to-light and light-to-dark; Fig. 1) [17], we generated two gradient maps, GA representing dark-to-light transitions and GB representing light-to-dark transitions (Eq. (1)):

$$G(x,z) = I(x,z) - I(x,z-1), \quad x = 1, 2, \ldots, N; \; z = 1, 2, \ldots, M$$
$$G_A(x,z) = \begin{cases} 1 - G(x,z), & G(x,z) > 0 \\ 1, & \text{otherwise} \end{cases} \qquad G_B(x,z) = \begin{cases} 1 - |G(x,z)|, & G(x,z) < 0 \\ 1, & \text{otherwise} \end{cases} \tag{1}$$
where $I(x,z)$ is the OCT reflectance value at position $(x,z)$, $M$ is the length of the A-scans in pixels, and $N$ is the width of the B-scans in pixels.
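To make Eq. (1) and the preceding filtering concrete, the following Python/NumPy sketch computes the two gradient maps from a normalized B-scan. It is our illustrative re-expression (the authors implemented their pipeline in MATLAB), and the function and variable names are ours:

import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def gradient_maps(bscan):
    """Compute the dark-to-light (GA) and light-to-dark (GB) gradient maps of
    Eq. (1) from a normalized B-scan of shape (M, N) = (depth, width)."""
    # Speckle reduction: 3 x 3 median filter, then a 7 x 3 (width x height)
    # mean filter that favors horizontal boundary continuity.
    smoothed = median_filter(bscan, size=(3, 3))
    smoothed = uniform_filter(smoothed, size=(3, 7))  # size order: (rows, cols)

    # Axial gradient G(x, z) = I(x, z) - I(x, z - 1); pad the first row so the
    # output keeps the input shape.
    G = np.diff(smoothed, axis=0, prepend=smoothed[:1, :])

    GA = np.where(G > 0, 1.0 - G, 1.0)           # dark-to-light transitions
    GB = np.where(G < 0, 1.0 - np.abs(G), 1.0)   # light-to-dark transitions
    return GA, GB

With this convention, strong transitions appear as small values in GA and GB, so the boundary searches described below look for minima.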

Fig. 1 Representation of the retinal layer boundaries that can be segmented by the algorithm. (A) A representative wide-field B-scan across the macula of a healthy subject before segmentation. (B) Segmentation of seven retinal boundaries: ILM (inner limiting membrane), NFL (nerve fiber layer), GCL (ganglion cell layer), IPL (inner plexiform layer), INL (inner nuclear layer), OPL (outer plexiform layer), ONL (outer nuclear layer), EZ (ellipsoid zone), RPE (retinal pigment epithelium), BM (Bruch’s membrane).


From the gradient map GA, we retrieved the boundaries between the vitreous and the inner limiting membrane (ILM) and between the inner nuclear layer (INL) and the outer plexiform layer (OPL), as well as the upper boundary of the ellipsoid zone (EZ) (Fig. 2(A)). From the gradient map GB, we retrieved the remaining four boundaries: between the nerve fiber layer (NFL) and the ganglion cell layer (GCL), between the inner plexiform layer (IPL) and the INL, between the OPL and the outer nuclear layer (ONL), and between the retinal pigment epithelium (RPE) and Bruch’s membrane (BM) (Fig. 2(B)).

Fig. 2 Two gradient maps used for layer segmentation. (A) Gradient map GA. Vitreous/ILM, INL/OPL, and EZ were segmented using this map. (B) Gradient map GB. NFL/GCL, IPL/INL, OPL/ONL, and RPE/BM were segmented using this map.


2.3 Guidance point array

In this step, we generated for each boundary an array of points indicating its approximate position, based on information extracted from the gradient maps. This GPA regulates the subsequent bidirectional graph search for the actual layer boundaries. GPAs were generated in a pre-determined order, taking advantage of the characteristics of the gradient maps and of retinal anatomy to minimize deviations from the correct boundaries (Fig. 3). First, the vitreous/ILM and upper EZ boundaries were processed from gradient map GA, as they exhibited the greatest contrast with the surrounding tissue. Then, using the EZ as the upper reference boundary, the set of points corresponding to the RPE/BM GPA was recognized from the GB gradient map. Subsequently, the upper reference boundary was fixed at the vitreous/ILM, and GB was used to extract the GPA for the OPL/ONL, which had the EZ as its lower reference boundary. We then extracted the GPAs for the IPL/INL and NFL/GCL, each serving as the lower reference boundary for the next. Finally, the GPA for the INL/OPL was generated from the gradient map GA, using the IPL/INL and OPL/ONL as the upper and lower reference boundaries, respectively. This ordering and its bounding references are summarized below.
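The dependency structure just described can be written out as an ordered table. This is a hypothetical summary in Python (names are ours, not from the authors' code); None marks a search window unbounded on that side:

# Search order of the GPAs and the reference boundaries bounding each search
# window (after Fig. 3).
GPA_ORDER = [
    # (boundary,     gradient map, upper reference, lower reference)
    ("Vitreous/ILM", "GA",         None,            None),
    ("EZ",           "GA",         None,            None),
    ("RPE/BM",       "GB",         "EZ",            None),
    ("OPL/ONL",      "GB",         "Vitreous/ILM",  "EZ"),
    ("IPL/INL",      "GB",         "Vitreous/ILM",  "OPL/ONL"),
    ("NFL/GCL",      "GB",         "Vitreous/ILM",  "IPL/INL"),
    ("INL/OPL",      "GA",         "IPL/INL",       "OPL/ONL"),
]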

Fig. 3 The search order of GPAs in the guided bidirectional graph search algorithm.


The first GPAs to be identified were those of the vitreous/ILM and upper EZ boundaries, which were not limited by any reference boundaries. To localize them, we first reduced speckle noise by down-sampling the gradient map GA and the B-scan by a factor of five, to a size of 170 × 208 pixels. The vitreous/ILM boundary plays a very important role in the subsequent operations. For its GPA identification, we compounded a new B-scan with enhanced contrast between the vitreous and the ILM by adding the gradient map GA to the normalized B-scan. We then binarized the enhanced B-scan by setting pixels with OCT reflectance values below the average reflectance to zero, which removes nonzero pixels in the vitreous. The first nonzero pixel in each A-line was then selected to form the GPA of the vitreous/ILM.
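A minimal sketch of this step follows; simple decimation is assumed for the down-sampling (the paper does not specify the method), and the helper name is ours:

import numpy as np

def ilm_guidance_points(bscan, GA, factor=5):
    """Approximate ILM depth for every `factor`-th A-line (illustrative)."""
    # Down-sample B-scan and gradient map by a factor of five (decimation here).
    small = bscan[::factor, ::factor]
    small_GA = GA[::factor, ::factor]

    # Enhance vitreous/ILM contrast by adding the gradient map to the B-scan,
    # then zero out pixels below the mean reflectance to clear the vitreous.
    enhanced = small + small_GA
    binary = enhanced > enhanced.mean()

    # First nonzero pixel down each A-line approximates the ILM (columns with
    # no foreground would need special handling in a real implementation).
    gpa = np.argmax(binary, axis=0)
    return gpa * factor  # map back to full-resolution depth indices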

The second GPA to be recognized was that of the upper EZ boundary. In the map GA, the lowest gradient value in each A-line belongs to either this boundary or the previously identified vitreous/ILM boundary, so after excluding the latter, the EZ is easy to identify. The binary image was then up-sampled to the original number of pixels, and the 170 GPA points identified were reassigned to the A-lines with indices 5n + 1 (n = 0, 1, …, 169).

After the first two GPAs were generated from the enhanced B-scans, the remaining five were obtained from the corresponding gradient maps, searching one of every five A-lines within the window between the corresponding upper and lower reference boundaries assigned above. The gradient maps were first enhanced by a horizontal gradient operator (Eq. (2)), and the first point with a GB' value below the threshold t = −0.02 was selected for the corresponding GPA (Fig. 4).

Fig. 4 Search for guidance points in an A-scan. Red lines indicate the positions of the NFL/GCL, IPL/INL, OPL/ONL, and RPE/BM in (A), (B), and (C). (A) The GB of one B-scan and an A-scan of interest (vertical blue line). (B) Gradient intensities of the A-scan. (C) Intensities of (B) after applying Eq. (2).


$$G_B' = G_B \ast \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix} \tag{2}$$

Due to the relatively low contrast of the image, points contained in the GPA were occasionally distant from the actual boundary (Fig. 5). Given the prevalence of noise and the relatively flat GPA curves in the wide field of view, we applied a mean filter (kernel size, width × height: 9 × 1) to the GPA to remove unreliable points and ensure the accuracy of the operation described below in Section 2.4.
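Combining Eq. (2), the threshold, and the outlier filtering, a hedged sketch of the GPA point selection is given below. The paper specifies the kernel shape, the threshold t = −0.02, and the 9 × 1 mean filter; the kernel sign convention, the search axis, and the 5-pixel rejection tolerance are our assumptions:

import numpy as np
from scipy.ndimage import convolve1d, uniform_filter1d

def select_guidance_points(GB, upper, lower, t=-0.02, step=5):
    """For every `step`-th A-line, pick the first depth between the reference
    boundaries where the enhanced gradient GB' falls below threshold t."""
    # Eq. (2): enhance GB with the gradient kernel [-1, 0, 1] along depth.
    GBp = convolve1d(GB, np.array([-1.0, 0.0, 1.0]), axis=0, mode='nearest')

    xs, zs = [], []
    for x in range(0, GB.shape[1], step):
        window = np.arange(int(upper[x]) + 1, int(lower[x]))  # restricted search
        hits = window[GBp[window, x] < t]
        if hits.size:                                         # first crossing only
            xs.append(x)
            zs.append(hits[0])

    # Drop unreliable points: compare each point with a 9 x 1 mean-filtered
    # version of the curve (the 5-pixel tolerance is our assumption).
    xs, zs = np.array(xs), np.array(zs, dtype=float)
    keep = np.abs(zs - uniform_filter1d(zs, size=9)) < 5.0
    return xs[keep], zs[keep].astype(int)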

Fig. 5 Removal of unreliable points from the GPA. (A) GPA points before filtering. (B) GPA points after filtering. Red asterisks indicate the points removed from the GPA.


2.4 Guided bidirectional graph search

Once the GPAs were identified, we implemented a guided bidirectional graph search algorithm for retinal layer segmentation (Fig. 6(A)). From any point S, we searched for graph points in two directions (left and right). For the next point L (or R), we considered five nearby candidate points in the adjacent A-line (Figs. 6(B-C)) and chose the one with the minimum gradient as the next node in the path. Unlike our previous directional graph search algorithm [17], we started from a virtual point located outside the image (Fig. 6(B)) and crossed a collection of points that may or may not fall in the GPA of the boundary under scrutiny. All GPA points crossed by this searched path were dropped from subsequent analysis (Figs. 7(A-B)), and the guided bidirectional graph search started again from the next GPA point not contained in any previous graph recognized for the current boundary (Fig. 7(B)). This process was repeated, generating a potentially different graph each time, until all GPA points belonged to one of the graphs (Figs. 7(B-E)). Then, all graphs thus generated were merged by the rationale explained in Section 2.5 below.

Fig. 6 (A) Graph search. (B) Directional graph search. The virtual start point, V, was located outside the image. (C) Guided bidirectional graph search. The start point, S, of any graph search is necessarily contained in the GPA. L and R were points searched by the bidirectional graph search algorithm. After concluding the graph search, a new graph was generated for the next GPA point not included in any of the previous graphs.


Fig. 7 Guided bidirectional graph search. Once the GPA was selected, a first graph search was performed starting from a virtual point outside of the image (A). GPA points located on the first path (red points) and points left out of the graph (blue asterisks) were identified, and a second path was created bidirectionally by graph search, starting from the first GPA point left out of the previous path (in B, blue star, red arrow). Blue asterisks crossed by the second path became red points and did not trigger the start of a future graph search. The process was repeated (C-E) until every blue asterisk formed part of a candidate path.


Although there were enough points in the GPA to support a point-to-point shortest path search, we preferred the bidirectional graph search to detect the boundary because we observed that some points in the GPA were outside the manually-segmented interfaces. Therefore, the graph of the layer boundary should not be forced to cross all GPA points, and a different boundary detection and merging scheme was necessary.
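A greedy sketch of a single bidirectional trace is shown below. It is our simplified reading of the search step (five candidate rows per move, minimum-gradient selection), not the authors' implementation; in the full method a new trace starts from each GPA point not crossed by an existing path, and the candidates are merged as in Section 2.5:

import numpy as np

def bidirectional_path(grad, seed_x, seed_z, radius=2):
    """Trace a path left and right from a seed on gradient map `grad`
    (depth x width), stepping to the minimum-gradient candidate among
    2 * radius + 1 = 5 rows in each adjacent A-line."""
    M, N = grad.shape
    path = np.empty(N, dtype=int)
    path[seed_x] = seed_z

    for direction in (+1, -1):            # search right, then left
        z = seed_z
        x = seed_x + direction
        while 0 <= x < N:
            lo, hi = max(z - radius, 0), min(z + radius, M - 1)
            z = lo + int(np.argmin(grad[lo:hi + 1, x]))  # min-gradient candidate
            path[x] = z
            x += direction
    return path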

2.5 Path merging

The preceding procedures generated several possible paths for each boundary in a B-scan. To obtain the final boundary, we evaluated the deviation of each candidate path from the GPA over sections of the B-scan (Fig. 8). For example, for an interval bounded by three consecutive GPA points with A-line indices a, b, and c, we selected the most accurate of all paths within this interval and assigned it to all A-lines with indices between a and b. To decide the most accurate path within an interval, we designed the evaluation function in Eq. (3):

$$u = \min_i \left( |p_i(a) - g(a)| + |p_i(b) - g(b)| + |p_i(c) - g(c)| \right) \tag{3}$$
where $p_i(x)$ is the value of the i-th candidate path at position x = a, b, c, and $g(x)$ is the GPA evaluated at point x; the path with the lowest u was chosen between a and b (Fig. 8). The process was then repeated for the A-lines in the following interval, i.e., with indices between b and c, and so on.
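A compact sketch of this merging rule follows, assuming each candidate path is stored as one depth value per A-line (variable names are ours):

import numpy as np

def merge_paths(paths, gpa_x, gpa_z):
    """Assign to each interval between consecutive GPA points the candidate
    path with the smallest summed deviation at the three bounding GPA points."""
    paths = np.asarray(paths)   # shape: (num_candidates, num_A-lines)
    final = paths[0].copy()     # A-lines outside the covered intervals keep
                                # the first path (a simplification here)
    for k in range(len(gpa_x) - 2):
        a, b, c = gpa_x[k], gpa_x[k + 1], gpa_x[k + 2]
        # Eq. (3): u = min_i |p_i(a)-g(a)| + |p_i(b)-g(b)| + |p_i(c)-g(c)|
        dev = (np.abs(paths[:, a] - gpa_z[k])
               + np.abs(paths[:, b] - gpa_z[k + 1])
               + np.abs(paths[:, c] - gpa_z[k + 2]))
        final[a:b] = paths[int(np.argmin(dev)), a:b]
    return final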

Fig. 8 Final boundary (red) after selection of the path with minimum deviation from the GPA points (Eq. (3)). Two intervals between GPA points a, b, and c are emphasized, and three different paths similar to those generated in Fig. 7 are represented in light blue, dark blue, and orange. According to Eq. (3), the pixels crossed by the light blue path were assigned to the final path between points a and b, and the pixels crossed by the dark blue path were assigned to the final path between b and c.


2.6 Segmentation of capillary plexuses

We extracted two vascular plexuses from the segmented OCTA volume (Figs. 9(A)-(B)): the superficial vascular complex (SVC) (Fig. 9(C)) and the deep vascular complex (DVC) (Fig. 9(D)). En face angiograms of the capillary plexuses were generated by the maximum projection of the OCTA flow signal within each slab.
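The projection step itself is simple; a minimal sketch follows, assuming per-A-line boundary surfaces from the segmentation above and an array layout of our choosing:

import numpy as np

def en_face_angiogram(octa, top, bottom):
    """Maximum-project OCTA flow between two boundary surfaces.
    octa: (depth, width, n_bscans); top/bottom: (width, n_bscans) depth indices."""
    M = octa.shape[0]
    z = np.arange(M)[:, None, None]
    slab = (z >= top[None, :, :]) & (z < bottom[None, :, :])
    # Flow (decorrelation) values are nonnegative, so masking with 0 is safe.
    return np.where(slab, octa, 0).max(axis=0)   # en face image: (width, n_bscans)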

Fig. 9 The positions of the two inner retinal plexuses defined for wide-field OCTA scans (10 × 8-mm). (A) Segmented structural OCT scan from a healthy eye. (B) The upper and lower boundaries of the two vascular plexuses. The superficial vascular complex (SVC) was defined between the vitreous/ILM (red line) and the SVC/deep vascular complex (DVC) boundary (green line). The SVC/DVC boundary was defined between the vitreous/ILM and the IPL/INL [17], represented in (A). The DVC was defined between the SVC/DVC boundary and the OPL/ONL (blue line). (C) En face angiogram of the SVC. The vertical yellow line in (C) marks the position of the B-scan slice in (A). (D) En face angiogram of the DVC.


3. Results

3.1 Study population

We tested our segmentation method on normal eyes and on eyes with glaucoma, diabetic retinopathy, and retinitis pigmentosa (Table 1). In all cases, seven boundaries were segmented: the vitreous/ILM, NFL/GCL, IPL/INL, INL/OPL, OPL/ONL, EZ, and RPE/BM.

Table 1. Tested wide-field OCT volumetric data

3.2 Segmentation performance

We ran the GB-GS algorithm in MATLAB R2017a on a desktop PC equipped with an Intel Core i7-6700K CPU at 4.0 GHz and 32 GB of RAM. The average run time of our algorithm was 0.3 seconds per B-scan. Our method correctly segmented retinal layer boundaries, even in areas of large vessel shadows (Fig. 10(A)) and small cysts (Figs. 10(B-C)). Segmentation errors were present in areas of extremely low contrast between layers (Fig. 10(D)); in areas with retinal neovascularization, which can significantly distort the surface of the ILM (Fig. 10(E)); and in an area with a partially separated epiretinal membrane (Fig. 10(F)).

Fig. 10 Retinal segmentation results. (A-C) Correct segmentation. The positions marked by red arrows were correctly segmented, even though the boundaries were affected by shadows (A) and small cysts (B-C). (D-F) Examples of incorrect segmentation. The red arrows indicate the areas where the segmentation failed owing to extremely low contrast (D), retinal neovascularization (E), and a partially separated epiretinal membrane (F).


To evaluate segmentation accuracy, we compared the automatic segmentation results, without any manual corrections, with manual segmentation performed by a masked grader. For each eye, 20 B-scans of one volumetric data set were randomly selected for evaluation. The position of the manual boundary was subtracted from the position of the automatic boundary in all A-lines under scrutiny, and the segmentation accuracy was determined (Table 2). Subpixel accuracy was achieved for all four groups, with the most accurate boundary being the vitreous/ILM, which has the highest perceived contrast.
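The tabulated accuracy can be reproduced from the per-A-line differences; in the sketch below, the summary statistics (mean and standard deviation, with an optional micron conversion) are our assumption about how Table 2 was compiled:

import numpy as np

def boundary_error(auto_z, manual_z, axial_um_per_px=None):
    """Signed per-A-line difference between automatic and manual boundary
    positions, summarized as mean and standard deviation."""
    diff = np.asarray(auto_z, float) - np.asarray(manual_z, float)
    if axial_um_per_px is not None:
        diff *= axial_um_per_px   # report in microns instead of pixels
    return diff.mean(), diff.std()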

Table 2. Difference in segmentation between manual grading and automated grading for different clinical cases

Thanks to the stability and robustness of GB-GS, our method can also be used to segment smaller field-of-view OCT scans (3 × 3- and 6 × 6-mm). To evaluate the performance on these images, we randomly selected 20 volumetric scans acquired by a 70-kHz commercial AngioVue system (RTVue-XR; Optovue, Inc.) (Table 3). The segmentation errors were compared with those of our previous algorithm developed by Zhang et al. [17] as well as the publicly available OCTExplorer software (downloaded from https://www.iibi.uiowa.edu/oct-reference) [27,32,33].

Table 3. Tested AngioVue OCT volumetric data

We randomly selected 31 B-scans from each volumetric scan, for a total of 620 B-scans. To each B-scan, we applied the three methods to segment the seven retinal boundaries. The segmentation results of the three methods were compared with manual grading (Table 4). Our method was superior to the other two methods on at least five of the seven boundaries.

Table 4. Differences in segmentation between segmentation algorithms and manual grading for OCT scans with different fields of view

3.3 Clinical applications

To evaluate the benefits of our segmentation method in the computation of clinically useful parameters, we applied it to the detection of the nonperfusion area in one eye with DR. Capillary nonperfusion is an important feature of DR [34,35], and its quantification may be an important biomarker of disease progression. In particular, the larger scanning area of wide-field OCTA will likely improve the sensitivity of this metric for early stages of the disease, because the manifestations of capillary dropout in DR begin in the peripheral retina rather than the central macula.

Using our automated segmentation method, we segmented each layer on the structural OCT B-scans (Fig. 11(A)). En face angiograms of SVC and DVC flow were generated (Figs. 11(B-C)), and a slab-subtraction algorithm [6,7,36] was applied to reduce the prevalence of projection artifacts in the DVC. Then, we generated a nonperfusion map (Fig. 11(D)) using an automated algorithm developed previously [6,7]. The resulting images demonstrated areas of capillary nonperfusion, covering 7.04 mm², that were specific to individual plexuses (Figs. 11(B-C)), allowing plexus-specific detection of nonperfusion in OCTA.
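For orientation only, the slab-subtraction idea can be caricatured in a few lines; the actual algorithm is described in refs. [6,7,36], and the uniform weighting below is our simplification:

import numpy as np

def slab_subtract(dvc, svc, weight=1.0):
    """Suppress flow projected from the SVC onto the DVC en face angiogram by
    subtracting a weighted SVC image and clipping negatives to zero."""
    return np.clip(dvc - weight * svc, 0.0, None)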

Fig. 11 Segmentation results from a representative diabetic retinopathy case. (A) Segmentation of layer boundaries. (B) En face angiogram of the superficial vascular complex. The yellow line in (B) marks the position of the B-scan slice in (A). (C) En face angiogram of the deep vascular complex. (D) Nonperfusion area in the superficial vascular complex angiogram. (E) Retinal thickness map between vitreous/ILM and RPE/BM.


Another possible use of wide-field OCTA is the identification of neovascularization in eyes with DR. A 10 × 25-mm wide-field OCTA, produced by montaging four scans, demonstrates a large area of neovascularization temporal to the macula (Fig. 12). Because wide-field OCTA visualizes the neovascularization clearly without leakage, quantification of neovascularization is possible, allowing objective monitoring of treatment response.

Fig. 12 Wide-field OCTA of a patient with proliferative diabetic retinopathy. A large area of neovascularization (yellow) temporal to the macula is present. This image was montaged from four 10 × 8-mm scans; the total size is 10 × 25-mm. The traditional 3 × 3- and 6 × 6-mm commercial OCTA fields at the central macular area are indicated by the dashed squares, respectively. Unlike fluorescein angiograms, OCTA demonstrates the neovascularization clearly without leakage and allows for quantification.


4. Discussion

In this study, we demonstrated an improvement over our previous graph search retinal layer segmentation algorithm and the OCTExplorer algorithm, achieving a more accurate delineation of the seven layer boundaries imaged by wide-field OCT scanning. The method was able to segment both healthy and diseased retinas, including hypo-reflective regions affected by vascular shadows and retinal cysts.

The main advantage of this algorithm is the ability to accurately segment retinal layers over a large scanning area. Traditional OCTA had been restricted from its inception to narrow fields of view, i.e., 3 × 3-, 4.5 × 4.5-, and 6 × 6-mm, which are still standard in commercial machines. Wide-field OCTA is a natural evolution of this technology, compelled by the clinical demand for better visualization of the peripheral retina. Stitching multiple images together by registration is an alternative way to generate retinal angiograms of larger size, and it is inherently better to montage a few wide-field scans (e.g., 10 × 6-mm) than numerous narrow-field scans (e.g., 6 × 6-mm). For instance, the angiogram in Fig. 12 was generated by montaging four 10 × 8-mm scans, whereas at least ten 6 × 6-mm scans would be needed to cover the same area. However, the advantage of wide-field scanning comes at the expense of more challenging segmentation across ultra-wide B-scans. Our GB-GS-based method can accurately segment not only the macular area but also the optic disc and peripheral retinal regions.

Recently, segmentation of retinal layers and pathological structures has also been accomplished by supervised machine learning methods such as deep learning [21,22]. An advantage of our guided graph search method is that, unlike deep learning solutions, it does not need a large annotated data set for network training; it is therefore suitable for small studies, for data acquired by lab-built prototype devices, and for diseases in which even manual segmentation of boundaries is uncertain and could introduce confusion during training. Moreover, the machine learning methods reported previously only generated probability maps and still needed a post-processing step (e.g., graph search or conditional random fields) to produce sharp boundaries, whereas our results show that the method proposed here generalizes to different retinal pathologies. This method is also superior to previous graph search solutions in that it considers the laminar structure of the retina and performs the search in two directions, relying on the GPA to prevent graph deviations from the anatomically connected boundaries. Finally, segmentation is performed faster than machine learning alternatives owing to the lower computational requirements.

The limitations of the software can be summarized as follows. First, the method depends strongly on the gradient information at the layer boundaries and might fail for acquisitions with extremely low contrast between layers. Second, because the boundaries are detected in a fixed order, the segmentation of each boundary is sensitive to errors in the previously segmented boundaries that define its upper and lower limits. To mitigate this issue, the boundaries least likely to be segmented erroneously, owing to their high contrast, were chosen to precede the segmentation of the boundaries more likely to be affected by disease.

5. Conclusions

We proposed a novel automatic segmentation method to find seven retinal layer boundaries in wide-field OCTA images. Our algorithm showed subpixel accuracy in both normal and diseased eyes. The extraction of thin slab boundaries over a large area has great potential to improve the diagnosis and progression assessment of diseases. This is especially true for diseases that begin in the peripheral retina and affect large areas, such as DR and inherited retinal diseases, for which evaluation by OCTA was limited in the past to a small field of view.

Funding

National Institutes of Health (Bethesda, MD) (R01 EY027833, R01 EY023285, DP3 DK104397, R01 EY024544, P30 EY010572); William & Mary Greve Special Scholar Award from Research to Prevent Blindness (New York, NY)

Disclosures

Oregon Health & Science University (OHSU), David Huang, and Yali Jia have a significant financial interest in Optovue, Inc. These potential conflicts of interest have been reviewed and managed by OHSU.

References and links

1. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991).

2. G. D. Joshi, J. Sivaswamy, and S. R. Krishnadas, “Optic disk and cup segmentation from monocular color retinal images for glaucoma assessment,” IEEE Trans. Med. Imaging 30(6), 1192–1205 (2011).

3. J. C. Bavinger, G. E. Dunbar, M. S. Stem, T. S. Blachley, L. Kwark, S. Farsiu, G. R. Jackson, and T. W. Gardner, “The effects of diabetic retinopathy and pan-retinal photocoagulation on photoreceptor cell function as assessed by dark adaptometry,” Invest. Ophthalmol. Vis. Sci. 57(1), 208–217 (2016).

4. H. R. Coleman, C.-C. Chan, F. L. Ferris 3rd, and E. Y. Chew, “Age-related macular degeneration,” Lancet 372(9652), 1835–1845 (2008).

5. J. P. Campbell, M. Zhang, T. S. Hwang, S. T. Bailey, D. J. Wilson, Y. Jia, and D. Huang, “Detailed vascular anatomy of the human retina by projection-resolved optical coherence tomography angiography,” Sci. Rep. 7(1), 42201 (2017).

6. Y. Jia, S. T. Bailey, T. S. Hwang, S. M. McClintic, S. S. Gao, M. E. Pennesi, C. J. Flaxel, A. K. Lauer, D. J. Wilson, J. Hornegger, J. G. Fujimoto, and D. Huang, “Quantitative optical coherence tomography angiography of vascular abnormalities in the living human eye,” Proc. Natl. Acad. Sci. U.S.A. 112(18), E2395–E2402 (2015).

7. Y. Jia, S. T. Bailey, D. J. Wilson, O. Tan, M. L. Klein, C. J. Flaxel, B. Potsaid, J. J. Liu, C. D. Lu, M. F. Kraus, J. G. Fujimoto, and D. Huang, “Quantitative optical coherence tomography angiography of choroidal neovascularization in age-related macular degeneration,” Ophthalmology 121(7), 1435–1444 (2014).

8. T. S. Hwang, M. Zhang, K. Bhavsar, X. Zhang, J. P. Campbell, P. Lin, S. T. Bailey, C. J. Flaxel, A. K. Lauer, D. J. Wilson, D. Huang, and Y. Jia, “Visualization of 3 distinct retinal plexuses by projection-resolved optical coherence tomography angiography in diabetic retinopathy,” JAMA Ophthalmol. 134(12), 1411–1419 (2016).

9. T. S. Hwang, A. M. Hagag, J. Wang, M. Zhang, A. Smith, D. J. Wilson, D. Huang, and Y. Jia, “Automated quantification of nonperfusion areas in 3 vascular plexuses with optical coherence tomography angiography in eyes of patients with diabetes,” JAMA Ophthalmol. 136(8), 929–936 (2018).

10. D. C. DeBuc, “A review of algorithms for segmentation of retinal image data using optical coherence tomography,” in Image Segmentation, P.-G. Ho, ed. (InTech, 2011), pp. 15–54.

11. D. Koozekanani, K. Boyer, and C. Roberts, “Retinal thickness measurements from optical coherence tomography using a Markov boundary model,” IEEE Trans. Med. Imaging 20(9), 900–916 (2001).

12. D. Cabrera Fernández, H. M. Salinas, and C. A. Puliafito, “Automated detection of retinal layer structures on optical coherence tomography images,” Opt. Express 13(25), 10200–10216 (2005).

13. A. Mishra, A. Wong, K. Bizheva, and D. A. Clausi, “Intra-retinal layer segmentation in optical coherence tomography images,” Opt. Express 17(26), 23719–23728 (2009).

14. A. Yazdanpanah, G. Hamarneh, B. R. Smith, and M. V. Sarunic, “Segmentation of intra-retinal layers from optical coherence tomography images using an active contour approach,” IEEE Trans. Med. Imaging 30(2), 484–496 (2011).

15. S. J. Chiu, X. T. Li, P. Nicholas, C. A. Toth, J. A. Izatt, and S. Farsiu, “Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation,” Opt. Express 18(18), 19413–19428 (2010).

16. P. P. Srinivasan, S. J. Heflin, J. A. Izatt, V. Y. Arshavsky, and S. Farsiu, “Automatic segmentation of up to ten layer boundaries in SD-OCT images of the mouse retina with and without missing layers due to pathology,” Biomed. Opt. Express 5(2), 348–365 (2014).

17. M. Zhang, J. Wang, A. D. Pechauer, T. S. Hwang, S. S. Gao, L. Liu, L. Liu, S. T. Bailey, D. J. Wilson, D. Huang, and Y. Jia, “Advanced image processing for optical coherence tomographic angiography of macular diseases,” Biomed. Opt. Express 6(12), 4661–4675 (2015).

18. M. K. Garvin, M. D. Abràmoff, R. Kardon, S. R. Russell, X. Wu, and M. Sonka, “Intraretinal layer segmentation of macular optical coherence tomography images using optimal 3-D graph search,” IEEE Trans. Med. Imaging 27(10), 1495–1505 (2008).

19. K. A. Vermeer, J. van der Schoot, H. G. Lemij, and J. F. de Boer, “Automated segmentation by pixel classification of retinal layers in ophthalmic OCT images,” Biomed. Opt. Express 2(6), 1743–1756 (2011).

20. A. Lang, A. Carass, M. Hauser, E. S. Sotirchos, P. A. Calabresi, H. S. Ying, and J. L. Prince, “Retinal layer segmentation of macular OCT images using boundary classification,” Biomed. Opt. Express 4(7), 1133–1152 (2013).

21. L. Fang, D. Cunefare, C. Wang, R. H. Guymer, S. Li, and S. Farsiu, “Automatic segmentation of nine retinal layer boundaries in OCT images of non-exudative AMD patients using deep learning and graph search,” Biomed. Opt. Express 8(5), 2732–2744 (2017).

22. A. G. Roy, S. Conjeti, S. P. K. Karri, D. Sheet, A. Katouzian, C. Wachinger, and N. Navab, “ReLayNet: retinal layer and fluid segmentation of macular optical coherence tomography using fully convolutional networks,” Biomed. Opt. Express 8(8), 3627–3642 (2017).

23. F. Rathke, S. Schmidt, and C. Schnörr, “Probabilistic intra-retinal layer segmentation in 3-D OCT images using global shape regularization,” Med. Image Anal. 18(5), 781–794 (2014).

24. H. Ishikawa, D. M. Stein, G. Wollstein, S. Beaton, J. G. Fujimoto, and J. S. Schuman, “Macular segmentation with optical coherence tomography,” Invest. Ophthalmol. Vis. Sci. 46(6), 2012–2017 (2005).

25. V. Kajić, B. Považay, B. Hermann, B. Hofer, D. Marshall, P. L. Rosin, and W. Drexler, “Robust segmentation of intraretinal layers in the normal human fovea using a novel statistical model based on texture and shape analysis,” Opt. Express 18(14), 14730–14744 (2010).

26. M. Baroni, P. Fortunato, and A. La Torre, “Towards quantitative analysis of retinal features in optical coherence tomography,” Med. Eng. Phys. 29(4), 432–441 (2007).

27. B. Antony, M. D. Abràmoff, L. Tang, W. D. Ramdas, J. R. Vingerling, N. M. Jansonius, K. Lee, Y. H. Kwon, M. Sonka, and M. K. Garvin, “Automated 3-D method for the correction of axial artifacts in spectral-domain optical coherence tomography images,” Biomed. Opt. Express 2(8), 2403–2416 (2011).

28. Q. Dai and Y. Sun, “Automated layer segmentation of optical coherence tomography images,” in 2011 4th International Conference on Biomedical Engineering and Informatics (BMEI) (IEEE, 2011), pp. 142–146.

29. F. Shi, X. Chen, H. Zhao, W. Zhu, D. Xiang, E. Gao, M. Sonka, and H. Chen, “Automated 3-D retinal layer segmentation of macular optical coherence tomography images with serous pigment epithelial detachments,” IEEE Trans. Med. Imaging 34(2), 441–452 (2015).

30. V. Raiji, A. Walsh, and S. Sadda, “Future directions in retinal optical coherence tomography,” Retinal Physician 9, 33–37 (2012).

31. Y. Jia, O. Tan, J. Tokayer, B. Potsaid, Y. Wang, J. J. Liu, M. F. Kraus, H. Subhash, J. G. Fujimoto, J. Hornegger, and D. Huang, “Split-spectrum amplitude-decorrelation angiography with optical coherence tomography,” Opt. Express 20(4), 4710–4725 (2012).

32. M. K. Garvin, M. D. Abràmoff, X. Wu, S. R. Russell, T. L. Burns, and M. Sonka, “Automated 3-D intraretinal layer segmentation of macular spectral-domain optical coherence tomography images,” IEEE Trans. Med. Imaging 28(9), 1436–1447 (2009).

33. K. Li, X. Wu, D. Z. Chen, and M. Sonka, “Optimal surface segmentation in volumetric images—a graph-theoretic approach,” IEEE Trans. Pattern Anal. Mach. Intell. 28(1), 119–134 (2006).

34. Early Treatment Diabetic Retinopathy Study Research Group, “Early treatment diabetic retinopathy study design and baseline patient characteristics. ETDRS report number 7,” Ophthalmology 98(5), 741–756 (1991).

35. M. S. Ip, A. Domalpally, J. K. Sun, and J. S. Ehrlich, “Long-term effects of therapy with ranibizumab on diabetic retinopathy severity and baseline risk factors for worsening retinopathy,” Ophthalmology 122(2), 367–374 (2015).

36. L. Liu, S. S. Gao, S. T. Bailey, D. Huang, D. Li, and Y. Jia, “Automated choroidal neovascularization detection algorithm for optical coherence tomography angiography,” Biomed. Opt. Express 6(9), 3564–3576 (2015).
