Abstract
Advances in the retinal layer segmentation of structural optical coherence tomography (OCT) images have allowed the separation of capillary plexuses in OCT angiography (OCTA). With the increased scanning speeds of OCT devices and wider field images (≥10 mm on the fast axis), greater retinal curvature and anatomic variations have introduced new challenges. In this study, we developed a novel automated method to segment seven retinal layer boundaries and two retinal plexuses in wide-field OCTA images. The algorithm was initialized by a series of points forming a guidance point array that estimates the location of retinal layer boundaries. A guided bidirectional graph search method, an improvement on our previous segmentation algorithm, was then used to search for the precise boundaries. We validated the method on normal and diseased eyes, demonstrating subpixel accuracy for all groups. By allowing independent visualization of the superficial and deep plexuses, this method shows potential for the detection of plexus-specific peripheral vascular abnormalities.
© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
1. Introduction
Optical coherence tomography (OCT) [1] is an interferometric imaging technology capable of acquiring high-resolution, three-dimensional (3D) images of biological tissue such as the retina through non-invasive and non-contact laser scanning. It has been widely used in the diagnosis of ophthalmic diseases, such as glaucoma [2], diabetic retinopathy (DR) [3], and age-related macular degeneration (AMD) [4], by quantifying the thicknesses of relevant slabs. OCT angiography (OCTA) is a novel clinical tool for the early diagnosis and progression assessment of diseases affecting the retinal circulation. Based on the variation of OCT signals between B-scans at the same position, OCTA can provide depth-resolved flow signals for the microvasculature. Prior studies have shown that slab-based OCTA can improve the visualization and interpretation of OCTA volumes [5–7], and recent studies also showed that vascular abnormalities are better visualized by separating the retinal circulation into three vascular layers [8,9]. Therefore, automated segmentation of the retinal layer boundaries is essential to accurately assess anatomic thickness and capillary plexuses.
The segmentation of retinal layers is a challenging task that has been approached through a diversity of methods [10–29]. They all exploit the reflectance contrast between adjacent retinal layers to distinguish them [10–12]. These methods have relied on gradient information for active contours [13,14] and graph search [15–18], or conversely on training supervised machine learning methods such as support vector machines [19], random forests [20], deep learning [21,22], probability-based approaches [23], and other methods [24–29]. Advances in swept-source OCT (SS-OCT) technology have enabled wide-field OCT imaging to evaluate larger portions of the retina [30]. However, wide-field OCT poses new challenges to existing segmentation algorithms. First, SS-OCT systems, using 1050-nm center wavelength lasers, have lower axial resolution and back-scattered reflectance contrast than spectral-domain commercial devices that use an 840-nm center wavelength, which reduces the number of pixels contained within retinal layers as well as the number of features that can be extracted for machine learning segmentation alternatives. Second, due to the large retinal curvature-associated aberration in the wider field of view, the focusing of wide-field OCT is compromised in the peripheral regions. Third, retinal curvature and anatomic variations increase as the field of view increases. These characteristics make single-source path search algorithms (e.g., graph search) prone to local errors that can be propagated further by the search routine.
Previously, we developed a successful segmentation algorithm based on directional graph search for 3 × 3- and 6 × 6-mm scans of the retina [17]. To address the new challenges associated with wide-field scans, we propose here the Guided Bidirectional Graph Search (GB-GS) method, in which an array of points is used to guide the graph search algorithm in two directions to identify the seven retinal boundaries. The method consists of three steps. First, a guidance point array (GPA) was found to represent the approximate positions of the boundaries. Then, a bidirectional graph search was applied at each point contained in the GPA but not included in any previous path. Finally, the resulting candidate paths were merged to generate the final boundaries.
2. Methods
2.1 Data acquisition
The study was approved by an Institutional Review Board/Ethics Committee of Oregon Health & Science University, and informed consent was collected from all participants, in compliance with the Declaration of Helsinki. Volumetric scans of both eyes were acquired by a prototype 200-kHz SS-OCT system with a 1050-nm central wavelength covering the 10-mm (fast-axis) × 8-mm (slow-axis) retinal regions. Two repeated B-scans were taken at each of 400 raster positions, and each B-scan was comprised of 850 A-lines. B-scans at the same position were averaged to improve signal-to-noise ratio of the structural OCT. The OCTA data was calculated by the split-spectrum amplitude decorrelation angiography (SSADA) algorithm [31].
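The inter-B-scan decorrelation underlying SSADA can be illustrated with a simplified sketch. The single-spectral-split, two-repeat form below is an assumption for illustration only; the full algorithm [31] splits the spectrum into several bands and averages the decorrelation across splits and repeats.

```python
def decorrelation(a1, a2):
    """Decorrelation between two repeated B-scan amplitudes at one voxel.

    Simplified single-split form of the SSADA estimator: identical
    amplitudes give D = 0 (static tissue), while amplitudes that
    fluctuate between repeats (flowing blood) give D > 0.
    """
    return 1.0 - (a1 * a2) / (0.5 * (a1**2 + a2**2))

# Static voxel: identical amplitudes -> no decorrelation
print(decorrelation(100.0, 100.0))           # 0.0
# Flow voxel: amplitude changes between repeats -> positive decorrelation
print(round(decorrelation(100.0, 60.0), 4))  # 0.1176
```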
2.2 Preprocessing
First, we normalized the B-scan and then flattened it using the center of mass as reference to prevent errors caused by significant tissue curvature [17]. Then, we generated gradient maps to emphasize the transitions between retinal layers with different reflectivity. We reduced the speckle noise by applying a median filter (kernel size, width × height: 3 × 3) and a mean filter (kernel size, width × height: 7 × 3) that preserved the continuity of retinal layer boundaries primarily along the horizontal direction. Because the boundaries being segmented exhibited two different intensity transition modes (Fig. 1) (light-to-dark and dark-to-light) [17], we generated two gradient maps, g_{d→l} representing dark-to-light transitions and g_{l→d} representing light-to-dark transitions (Eq. (1)):

g_{d→l}(x, z) = I(x, z − 1) − I(x, z),
g_{l→d}(x, z) = I(x, z + 1) − I(x, z),   (1)

where I(x, z) was the OCT reflectance value at position (x, z), Z was the length of A-scans in pixels (1 ≤ z ≤ Z), and X was the width of B-scans in pixels (1 ≤ x ≤ X). From the gradient map g_{d→l}, we retrieved the boundaries between the vitreous and the inner limiting membrane (ILM), between the inner nuclear layer (INL) and the outer plexiform layer (OPL), as well as the upper boundary of the ellipsoid zone (EZ) (Fig. 2(A)). From the gradient map g_{l→d}, we retrieved the remaining four boundaries: between the nerve fiber layer (NFL) and the ganglion cell layer (GCL), between the inner plexiform layer (IPL) and the INL, between the OPL and the outer nuclear layer (ONL), and between the retinal pigment epithelium (RPE) and Bruch’s membrane (BM) (Fig. 2(B)).
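As a sketch of this step, the two gradient maps can be computed as axial differences of the reflectance image; the exact difference form used here is an assumption, chosen so that both maps are negative at their respective boundaries, consistent with the minimum-gradient search described in Section 2.4 (the median/mean filtering is omitted).

```python
def gradient_maps(I):
    """Axial gradient maps of a B-scan I[z][x] (rows = depth z, cols = A-lines x).

    g_dl is negative at dark-to-light transitions (reflectance increases
    with depth); g_ld is negative at light-to-dark transitions.
    """
    Z, X = len(I), len(I[0])
    g_dl = [[0.0] * X for _ in range(Z)]
    g_ld = [[0.0] * X for _ in range(Z)]
    for z in range(Z):
        for x in range(X):
            if z > 0:
                g_dl[z][x] = I[z - 1][x] - I[z][x]
            if z < Z - 1:
                g_ld[z][x] = I[z + 1][x] - I[z][x]
    return g_dl, g_ld

# Toy A-line: dark vitreous (0.1), bright layer (0.9), dark layer (0.2)
I = [[0.1], [0.1], [0.9], [0.9], [0.2]]
g_dl, g_ld = gradient_maps(I)
print(round(g_dl[2][0], 6))  # -0.8 at the dark-to-light boundary
print(round(g_ld[3][0], 6))  # -0.7 at the light-to-dark boundary
```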
2.3 Guidance point array
In this step, we generated for each boundary an array of points indicating its approximate position based on information extracted from the gradient maps. This GPA regulates the subsequent bidirectional graph search for the actual layer boundaries. GPAs were generated in a pre-determined order, taking advantage of the characteristics of the gradient maps and retinal anatomy to minimize deviations from the correct boundaries (Fig. 3). First, the vitreous/ILM and upper EZ boundaries were processed from the gradient map g_{d→l}, as they exhibited the greatest contrast with surrounding tissue. Then, using the EZ layer as the upper boundary, the set of points corresponding to the RPE/BM’s GPA was recognized from the gradient map g_{l→d}. Subsequently, the upper boundary was fixed at the vitreous/ILM boundary, and was used to sequentially extract the GPA for the OPL/ONL, which had the EZ layer as the lower boundary. Then we extracted the GPAs for the IPL/INL and NFL/GCL, for which each GPA served as the lower boundary of the next GPA. Finally, the GPA for the INL/OPL was generated from the gradient map g_{d→l}, using the IPL/INL and OPL/ONL as the upper and lower boundaries, respectively.
The first GPAs to be identified were the vitreous/ILM and upper EZ boundaries, which were not limited by any reference boundaries. To localize them, we first reduced speckle noise by down-sampling the gradient map and the B-scan by a factor of five to a size of 170 × 208 pixels. The vitreous/ILM boundary plays a very important role in subsequent operations. For the GPA identification, we compounded a new B-scan with enhanced contrast between the vitreous and the ILM by adding the gradient map to the normalized B-scan. We then binarized the enhanced B-scan by setting to zero all pixels with OCT reflectance values below the average reflectance value, which removes nonzero pixels in the vitreous. Then, the first nonzero pixel in each A-line was selected to form the GPA of the vitreous/ILM.
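The vitreous/ILM guidance-point extraction can be sketched as follows: threshold the enhanced B-scan at its mean and take the first suprathreshold pixel per A-line. This is a simplified sketch of the step above; the down-sampling and gradient-compounding stages are omitted.

```python
def ilm_gpa(B):
    """Guidance points for the vitreous/ILM boundary from an enhanced B-scan.

    B[z][x] is the (gradient-enhanced) reflectance. Pixels below the mean
    are treated as zero to suppress vitreous noise, then the first
    remaining pixel in each A-line (top to bottom) becomes the guidance
    point (None if an A-line has no suprathreshold pixel).
    """
    Z, X = len(B), len(B[0])
    mean = sum(sum(row) for row in B) / (Z * X)
    gpa = []
    for x in range(X):
        z_hit = next((z for z in range(Z) if B[z][x] > mean), None)
        gpa.append(z_hit)
    return gpa

# Toy B-scan: retina starts at depth 2 in column 0 and depth 3 in column 1
B = [[0.1, 0.1],
     [0.1, 0.1],
     [0.9, 0.1],
     [0.9, 0.9]]
print(ilm_gpa(B))  # [2, 3]
```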
The second GPA to be recognized was the upper EZ boundary. In the map g_{d→l}, either this boundary or the previously identified vitreous/ILM boundary contains the lowest gradient value in each A-line; with the vitreous/ILM already known, the EZ is therefore easy to identify. Then, the binary image was up-sampled to the original number of pixels, and the 170 GPA points identified were reassigned to the A-lines with indices 5n + 1 (n = 0…169).
After the first two GPAs were generated from enhanced B-scans, the remaining five were obtained from the corresponding gradient maps, searching one of every five A-lines restricted to the corresponding upper and lower boundaries assigned above. The gradient maps were first enhanced by a horizontal gradient operator (Eq. (2)), and in each searched A-line the first point with enhanced gradient value below the threshold t = −0.02 was selected for the corresponding GPA (Fig. 4).
Due to the relatively low contrast of the image, points contained in the GPA were occasionally distant from the actual boundary (Fig. 5). Given the prevalence of noise and the relatively flat GPA curves in the wide field of view, we used a mean filter (kernel size, width × height: 9 × 1) on the GPA to remove unreliable points and ensure the accuracy of the operation described below in Section 2.4.
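The 9 × 1 mean filter on the guidance point array can be sketched as a sliding-window average over the guidance points; shrinking the window at the array ends is an assumption, since the boundary handling is not specified above.

```python
def smooth_gpa(gpa, width=9):
    """Mean-filter a guidance point array to suppress unreliable points.

    Each point is replaced by the average of up to `width` neighboring
    points along the B-scan; the window shrinks at the array ends.
    """
    half = width // 2
    out = []
    for i in range(len(gpa)):
        window = gpa[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

# One outlier point (depth 90) among guidance points near depth 50
gpa = [50, 50, 50, 50, 90, 50, 50, 50, 50]
print(smooth_gpa(gpa))
```

After filtering, the outlier at the center is pulled from depth 90 to below 55, close to its neighbors.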
2.4 Guided bidirectional graph search
Once the GPAs were identified, we implemented a guided bidirectional graph search algorithm for retinal layer segmentation (Fig. 6(A)). For any point S, we searched for graph points in two directions (left and right). For the next point L (or R), we appointed 5 nearby candidate points in the adjacent A-line (Figs. 6(B-C)) and chose the one with minimum gradient as the next node in the path. Unlike our previous directional graph search algorithm [17], we started from a virtual point located outside the image (Fig. 6(B)) and crossed a collection of points that may or may not fall in the GPA of the boundary under scrutiny. All GPA points crossed by this searched path were dropped from subsequent analysis (Figs. 7(A-B)), and guided bidirectional graph search started again from the next GPA point that was not contained in any previous graph recognized for the current boundary (Fig. 7(B)). This process was repeated, generating each time a potentially different graph until all GPA points belonged to one of the graphs (Figs. 7(B-E)). Then, all graphs thus generated were merged by the rationale explained in Section 2.5 below.
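A minimal sketch of one guided bidirectional search pass is given below, under the assumption that the 5 candidate nodes lie within ±2 pixels in the adjacent A-line and that the minimum-gradient candidate is chosen; the virtual start node outside the image and the GPA bookkeeping across multiple passes (Fig. 7) are omitted.

```python
def bidirectional_search(gmap, seed_x, seed_z, reach=2):
    """Trace one candidate boundary path through a gradient map.

    Starting from a seed guidance point (seed_x, seed_z), the path is
    extended A-line by A-line to the right and to the left; at each step
    the next node is the minimum-gradient pixel among the candidates
    within +/-reach rows of the current node in the adjacent A-line
    (5 candidates for reach=2, as in Fig. 6).
    """
    Z, X = len(gmap), len(gmap[0])

    def step(x_range):
        z, path = seed_z, {}
        for x in x_range:
            cand = range(max(0, z - reach), min(Z, z + reach + 1))
            z = min(cand, key=lambda zz: gmap[zz][x])  # minimum gradient wins
            path[x] = z
        return path

    path = {seed_x: seed_z}
    path.update(step(range(seed_x + 1, X)))       # search to the right
    path.update(step(range(seed_x - 1, -1, -1)))  # search to the left
    return [path[x] for x in range(X)]

# Gradient map with a boundary along depth 2 that dips to depth 3 mid-scan
g = [[0, 0, 0, 0, 0],
     [0, 0, 0, 0, 0],
     [-9, -9, 0, -9, -9],
     [0, 0, -9, 0, 0],
     [0, 0, 0, 0, 0]]
print(bidirectional_search(g, seed_x=2, seed_z=3))  # [2, 2, 3, 2, 2]
```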
Although there were enough points in the GPA to support a point-to-point shortest path search, we preferred the bidirectional graph search to detect the boundary because we observed that some points in the GPA were outside the manually-segmented interfaces. Therefore, the graph of the layer boundary should not be forced to cross all GPA points, and a different boundary detection and merging scheme was necessary.
2.5 Path merging
The preceding procedures generated several possible paths for each boundary in a B-scan. To obtain the final boundary, we evaluated the deviation of each candidate path from the GPA in sections of a B-scan (Fig. 8). For example, from an interval bounded by three points of the GPA with indices a, b, and c, we selected the most accurate of all paths within this interval and assigned it to all A-lines with indices between a and b. To decide the most accurate path within an interval, we designed the evaluation function in Eq. (3):

E_k = |p_k(a) − G(a)| + |p_k(b) − G(b)| + |p_k(c) − G(c)|,   (3)

where p_k(x) was the value of the k-th candidate path at position x = a, b, c, G(x) was the GPA evaluated at points x, and the path with the lowest E_k was chosen between a and b (Fig. 8). Then, the process was repeated for the A-lines in the following interval, i.e., with indices between b and c, etc.

2.6 Segmentation of capillary plexuses
We extracted two vascular plexuses from the segmented OCTA volume (Figs. 9(A)-(B)): the superficial vascular complex (SVC) (Fig. 9(C)) and the deep vascular complex (DVC) (Fig. 9(D)). En face angiograms of the capillary plexuses were generated by the maximum projection of OCTA flow signals within the slab.
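The maximum projection within a segmented slab can be sketched as follows; the volume indexing flow[z][y][x] and pixel-unit boundaries are assumptions for illustration.

```python
def en_face_angiogram(flow, top, bottom):
    """Maximum projection of OCTA flow within a slab.

    flow[z][y][x] is the decorrelation volume; top[y][x] and bottom[y][x]
    are the segmented upper and lower slab boundaries (in pixels). Each
    en face pixel is the maximum flow value between the two boundaries.
    """
    Y, X = len(top), len(top[0])
    return [[max(flow[z][y][x] for z in range(top[y][x], bottom[y][x] + 1))
             for x in range(X)] for y in range(Y)]

# Toy volume: 4 depths, 1 row, 2 columns; the slab shifts with depth
flow = [
    [[0.1, 0.0]],
    [[0.5, 0.2]],
    [[0.2, 0.9]],
    [[0.0, 0.3]],
]
print(en_face_angiogram(flow, top=[[0, 1]], bottom=[[2, 3]]))  # [[0.5, 0.9]]
```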
3. Results
3.1 Study population
We tested our segmentation method on normal eyes and eyes with glaucoma, diabetic retinopathy, and retinitis pigmentosa (Table 1). For all cases the seven layers were segmented to identify the vitreous/ILM, NFL/GCL, IPL/INL, INL/OPL, OPL/ONL, EZ, and RPE/BM.
3.2 Segmentation performance
We ran the GB-GS algorithm in Matlab R2017a on a desktop PC equipped with an Intel(R) Core(TM) i7-6700K @4.0GHz CPU and 32 GB RAM. The average run time of our algorithm was 0.3 seconds per B-scan. Our method correctly segmented retinal layer boundaries, even in the areas of large vessel shadows (Fig. 10(A)) and small cysts (Figs. 10(B-C)). Segmentation errors were present in areas of extremely low contrast between layers (Fig. 10(D)); in areas with retinal neovascularization, which could significantly affect the surface of the ILM (Fig. 10(E)); and in an area with a partially separated epiretinal membrane (Fig. 10(F)).
To evaluate segmentation accuracy, we compared the automatic segmentation results with manual segmentation performed by a masked grader. For each eye, 20 B-scans of one volumetric data set were randomly selected for evaluation. The position of the manual boundary was subtracted from the position of the automatic boundary without any manual corrections in all A-lines under scrutiny, and the segmentation accuracy was determined (Table 2). Subpixel accuracy was present for the four groups, with the most accurate being the vitreous/ILM boundary, which is the one with the highest perceived contrast.
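The accuracy evaluation above (automatic minus manual boundary position per A-line) can be sketched as below. The micrometer conversion parameter is a placeholder, not the system's actual axial pixel size, and the specific statistics reported in Table 2 may differ.

```python
def boundary_error(auto, manual, axial_res_um=None):
    """Signed and absolute segmentation error between two boundaries.

    auto and manual give boundary depth (in pixels) per A-line; returns
    the mean signed difference and mean absolute difference, optionally
    converted to micrometers when an axial pixel size is supplied.
    """
    diffs = [a - m for a, m in zip(auto, manual)]
    mean_signed = sum(diffs) / len(diffs)
    mean_abs = sum(abs(d) for d in diffs) / len(diffs)
    if axial_res_um is not None:
        mean_signed *= axial_res_um
        mean_abs *= axial_res_um
    return mean_signed, mean_abs

# Errors of +/-1 pixel cancel in the signed mean but not the absolute mean
print(boundary_error([10, 11, 12, 13], [10, 12, 12, 12]))  # (0.0, 0.5)
```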
Thanks to the stability and robustness of GB-GS, our method can also be used to segment small-field-of-view OCT scans (3 × 3- and 6 × 6-mm). To evaluate the performance on these images, we randomly selected 20 volumetric scans acquired by a 70-kHz commercial AngioVue system (RTVue-XR; Optovue, Inc.) (Table 3). The segmentation errors were compared to our previous algorithm developed by Zhang et al. [17] as well as the publicly available OCTExplorer software (downloaded from https://www.iibi.uiowa.edu/oct-reference) [27,32,33].
We randomly selected 31 B-scans from each volumetric scan, for a total of 620 B-scans. For each B-scan, we applied each of the three methods to segment the seven retinal boundaries. The segmentation results of the three methods were compared to manual grading (Table 4). Our method outperformed the other two methods on at least five of the seven boundaries.
3.3 Clinical applications
To evaluate the benefits of our segmentation method in the computation of clinically useful parameters, we applied it to the detection of the nonperfusion area in one eye with DR. Capillary nonperfusion is an important feature of DR [34,35], and its quantification may be an important biomarker of disease progression. In particular, the larger scanning area of wide-field OCTA will likely improve the sensitivity of this metric for early stages of the disease because the manifestations of capillary dropout in DR begin in the peripheral retina rather than the central macula.
Using our automated segmentation method, we segmented each layer on a structural OCT B-scan (Fig. 11(A)). The en face angiogram of the SVC and DVC flow were generated (Figs. 11(B-C)), and a slab subtraction algorithm [6,7,36] was applied to reduce the prevalence of projection artifacts in the DVC. Then, we generated a nonperfusion map (Fig. 11(D)) using an automated algorithm developed previously [6,7]. The resulting images demonstrated areas of capillary nonperfusion over 7.04 mm2 that were specific to individual plexuses (Figs. 11(B-C)), allowing plexus-specific detection of nonperfusion in OCTA.
Another possible use of wide-field OCTA is identification of neovascularization in DR eyes. A 10 × 25-mm wide-field OCTA, produced by montaging four scans, demonstrates a large area of neovascularization temporal to the macula (Fig. 12). Because wide-field-OCTA visualizes the neovascularization clearly without leakage, quantification of neovascularization is possible, allowing objective monitoring of treatment response.
4. Discussion
In this study, we demonstrated an improvement over our previous graph search retinal layer segmentation algorithm and OCTExplorer algorithm to achieve a more accurate delineation of the seven layer boundaries imaged by wide-field OCT scanning. The method was able to segment both healthy and diseased retinas, including hypo-reflective regions affected by vascular shadows and retinal cysts.
The main advantage of this algorithm is the ability to accurately segment retinal layers over a large scanning area. Traditional OCTA has been restricted from its inception to narrow fields of view, i.e., 3 × 3-, 4.5 × 4.5-, and 6 × 6-mm, which are still standard in commercial machines. Wide-field OCTA is a natural evolution of this technology, compelled by the clinical demand for better visualization of the peripheral retina. Stitching many images by registration techniques is an alternative way to generate retinal angiograms of larger size, and it is inherently better to montage a few wide-field scans (e.g., 10 × 6-mm) than numerous narrow-field scans (e.g., 6 × 6-mm). For instance, the angiogram represented in Fig. 12 was generated by montaging four 10 × 8-mm scans, whereas at least ten 6 × 6-mm scans would be needed to represent the same area. However, the advantage of wide-field scanning comes at the expense of more challenging segmentation across ultra-wide B-scans. Our GB-GS-based method can handle not only the macular area but also the optic disc region and the peripheral retina.
Recently, segmentation of retinal layers and pathological structures has also been accomplished by alternative supervised machine learning methods such as deep learning [21,22]. An advantage of our current guided graph search method is that unlike deep learning solutions, it does not need a large, annotated data set to be used for network training, and hence it is suitable for small studies, for data acquired by lab-built prototype devices, and for diseases in which even manual segmentation of boundaries is uncertain and could introduce confusion during training. Moreover, the machine learning methods reported previously only generated probability maps and still needed a post-processing step (e.g., graph search or conditional random fields) to generate sharp boundaries. In contrast, our results show that the method proposed here is generalizable to different retinal pathologies. This method is superior to previous graph search solutions in that it considers the laminar structure of the retina and performs the search in two directions, relying on the GPA to prevent graph deviations from the anatomically connected boundaries. Finally, segmentation is performed faster than machine learning alternatives owing to the lower computational requirements.
The limitations of the software can be summarized as follows. First, the method depends strongly on the gradient information at the layer boundaries and might fail for acquisitions with extremely low contrast between layers. Second, due to the order in which boundary detection is defined, the segmentation of each boundary is sensitive to any errors in the previously segmented graphs bounding its upper and lower limits. To mitigate this issue, the boundaries least likely to be erroneously segmented, owing to their higher contrast, were chosen to precede the segmentation of the boundaries more likely to be affected by disease.
5. Conclusions
We proposed a novel automatic segmentation method to find seven retinal layer boundaries in wide-field OCTA images. Our algorithm showed sub-pixel accuracy in both normal and diseased eyes. The extraction of thin slab boundaries over a large area has great potential for use in the improved diagnosis and progression assessment of diseases. This is especially true for diseases that begin in the peripheral retina and affect large areas, such as DR and inherited retinal diseases, where evaluation by OCTA was limited in the past to a small field of view.
Funding
National Institutes of Health (Bethesda, MD) (R01 EY027833, R01 EY023285, DP3 DK104397, R01 EY024544, P30 EY010572); William & Mary Greve Special Scholar Award from Research to Prevent Blindness (New York, NY)
Disclosures
Oregon Health & Science University (OHSU), David Huang and Yali Jia, have a significant financial interest in Optovue, Inc. These potential conflicts of interest have been reviewed and managed by OHSU.
References and links
1. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991). [CrossRef] [PubMed]
2. G. D. Joshi, J. Sivaswamy, and S. R. Krishnadas, “Optic disk and cup segmentation from monocular color retinal images for glaucoma assessment,” IEEE Trans. Med. Imaging 30(6), 1192–1205 (2011). [CrossRef] [PubMed]
3. J. C. Bavinger, G. E. Dunbar, M. S. Stem, T. S. Blachley, L. Kwark, S. Farsiu, G. R. Jackson, and T. W. Gardner, “The effects of diabetic retinopathy and pan-retinal photocoagulation on photoreceptor cell function as assessed by dark adaptometry,” Invest. Ophthalmol. Vis. Sci. 57(1), 208–217 (2016). [CrossRef] [PubMed]
4. H. R. Coleman, C.-C. Chan, F. L. Ferris 3rd, and E. Y. Chew, “Age-related macular degeneration,” Lancet 372(9652), 1835–1845 (2008). [CrossRef] [PubMed]
5. J. P. Campbell, M. Zhang, T. S. Hwang, S. T. Bailey, D. J. Wilson, Y. Jia, and D. Huang, “Detailed Vascular Anatomy of the Human Retina by Projection-Resolved Optical Coherence Tomography Angiography,” Sci. Rep. 7(1), 42201 (2017). [CrossRef] [PubMed]
6. Y. Jia, S. T. Bailey, T. S. Hwang, S. M. McClintic, S. S. Gao, M. E. Pennesi, C. J. Flaxel, A. K. Lauer, D. J. Wilson, J. Hornegger, J. G. Fujimoto, and D. Huang, “Quantitative optical coherence tomography angiography of vascular abnormalities in the living human eye,” Proc. Natl. Acad. Sci. U.S.A. 112(18), E2395–E2402 (2015). [CrossRef] [PubMed]
7. Y. Jia, S. T. Bailey, D. J. Wilson, O. Tan, M. L. Klein, C. J. Flaxel, B. Potsaid, J. J. Liu, C. D. Lu, M. F. Kraus, J. G. Fujimoto, and D. Huang, “Quantitative optical coherence tomography angiography of choroidal neovascularization in age-related macular degeneration,” Ophthalmology 121(7), 1435–1444 (2014). [CrossRef] [PubMed]
8. T. S. Hwang, M. Zhang, K. Bhavsar, X. Zhang, J. P. Campbell, P. Lin, S. T. Bailey, C. J. Flaxel, A. K. Lauer, D. J. Wilson, D. Huang, and Y. Jia, “Visualization of 3 distinct retinal plexuses by projection-resolved optical coherence tomography angiography in diabetic retinopathy,” JAMA Ophthalmol. 134(12), 1411–1419 (2016). [CrossRef] [PubMed]
9. T. S. Hwang, A. M. Hagag, J. Wang, M. Zhang, A. Smith, D. J. Wilson, D. Huang, and Y. Jia, “Automated quantification of nonperfusion areas in 3 vascular plexuses with optical coherence tomography angiography in eyes of patients with diabetes,” JAMA Ophthalmol. 136(8), 929–936 (2018). [CrossRef] [PubMed]
10. D. C. DeBuc, “A review of algorithms for segmentation of retinal image data using optical coherence tomography,” in Image Segmentation, P.-G. Ho, ed. (InTech, 2011), pp. 15–54.
11. D. Koozekanani, K. Boyer, and C. Roberts, “Retinal thickness measurements from optical coherence tomography using a Markov boundary model,” IEEE Trans. Med. Imaging 20(9), 900–916 (2001). [CrossRef] [PubMed]
12. D. Cabrera Fernández, H. M. Salinas, and C. A. Puliafito, “Automated detection of retinal layer structures on optical coherence tomography images,” Opt. Express 13(25), 10200–10216 (2005). [CrossRef] [PubMed]
13. A. Mishra, A. Wong, K. Bizheva, and D. A. Clausi, “Intra-retinal layer segmentation in optical coherence tomography images,” Opt. Express 17(26), 23719–23728 (2009). [CrossRef] [PubMed]
14. A. Yazdanpanah, G. Hamarneh, B. R. Smith, and M. V. Sarunic, “Segmentation of intra-retinal layers from optical coherence tomography images using an active contour approach,” IEEE Trans. Med. Imaging 30(2), 484–496 (2011). [CrossRef] [PubMed]
15. S. J. Chiu, X. T. Li, P. Nicholas, C. A. Toth, J. A. Izatt, and S. Farsiu, “Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation,” Opt. Express 18(18), 19413–19428 (2010). [CrossRef] [PubMed]
16. P. P. Srinivasan, S. J. Heflin, J. A. Izatt, V. Y. Arshavsky, and S. Farsiu, “Automatic segmentation of up to ten layer boundaries in SD-OCT images of the mouse retina with and without missing layers due to pathology,” Biomed. Opt. Express 5(2), 348–365 (2014). [CrossRef] [PubMed]
17. M. Zhang, J. Wang, A. D. Pechauer, T. S. Hwang, S. S. Gao, L. Liu, L. Liu, S. T. Bailey, D. J. Wilson, D. Huang, and Y. Jia, “Advanced image processing for optical coherence tomographic angiography of macular diseases,” Biomed. Opt. Express 6(12), 4661–4675 (2015). [CrossRef] [PubMed]
18. M. K. Garvin, M. D. Abràmoff, R. Kardon, S. R. Russell, X. Wu, and M. Sonka, “Intraretinal layer segmentation of macular optical coherence tomography images using optimal 3-D graph search,” IEEE Trans. Med. Imaging 27(10), 1495–1505 (2008). [CrossRef] [PubMed]
19. K. A. Vermeer, J. van der Schoot, H. G. Lemij, and J. F. de Boer, “Automated segmentation by pixel classification of retinal layers in ophthalmic OCT images,” Biomed. Opt. Express 2(6), 1743–1756 (2011). [CrossRef] [PubMed]
20. A. Lang, A. Carass, M. Hauser, E. S. Sotirchos, P. A. Calabresi, H. S. Ying, and J. L. Prince, “Retinal layer segmentation of macular OCT images using boundary classification,” Biomed. Opt. Express 4(7), 1133–1152 (2013). [CrossRef] [PubMed]
21. L. Fang, D. Cunefare, C. Wang, R. H. Guymer, S. Li, and S. Farsiu, “Automatic segmentation of nine retinal layer boundaries in OCT images of non-exudative AMD patients using deep learning and graph search,” Biomed. Opt. Express 8(5), 2732–2744 (2017). [CrossRef] [PubMed]
22. A. G. Roy, S. Conjeti, S. P. K. Karri, D. Sheet, A. Katouzian, C. Wachinger, and N. Navab, “ReLayNet: retinal layer and fluid segmentation of macular optical coherence tomography using fully convolutional networks,” Biomed. Opt. Express 8(8), 3627–3642 (2017). [CrossRef] [PubMed]
23. F. Rathke, S. Schmidt, and C. Schnörr, “Probabilistic intra-retinal layer segmentation in 3-D OCT images using global shape regularization,” Med. Image Anal. 18(5), 781–794 (2014). [CrossRef] [PubMed]
24. H. Ishikawa, D. M. Stein, G. Wollstein, S. Beaton, J. G. Fujimoto, and J. S. Schuman, “Macular segmentation with optical coherence tomography,” Invest. Ophthalmol. Vis. Sci. 46(6), 2012–2017 (2005). [CrossRef] [PubMed]
25. V. Kajić, B. Považay, B. Hermann, B. Hofer, D. Marshall, P. L. Rosin, and W. Drexler, “Robust segmentation of intraretinal layers in the normal human fovea using a novel statistical model based on texture and shape analysis,” Opt. Express 18(14), 14730–14744 (2010). [CrossRef] [PubMed]
26. M. Baroni, P. Fortunato, and A. La Torre, “Towards quantitative analysis of retinal features in optical coherence tomography,” Med. Eng. Phys. 29(4), 432–441 (2007). [CrossRef] [PubMed]
27. B. Antony, M. D. Abràmoff, L. Tang, W. D. Ramdas, J. R. Vingerling, N. M. Jansonius, K. Lee, Y. H. Kwon, M. Sonka, and M. K. Garvin, “Automated 3-D method for the correction of axial artifacts in spectral-domain optical coherence tomography images,” Biomed. Opt. Express 2(8), 2403–2416 (2011). [CrossRef] [PubMed]
28. Q. Dai and Y. Sun, “Automated layer segmentation of optical coherence tomography images,” in 2011 4th International Conference on Biomedical Engineering and Informatics (BMEI) (IEEE, 2011), 57(10), pp. 142–146. [CrossRef]
29. F. Shi, X. Chen, H. Zhao, W. Zhu, D. Xiang, E. Gao, M. Sonka, and H. Chen, “Automated 3-D retinal layer segmentation of macular optical coherence tomography images with serous pigment epithelial detachments,” IEEE Trans. Med. Imaging 34(2), 441–452 (2015). [CrossRef] [PubMed]
30. V. Raiji, A. Walsh, and S. Sadda, “Future directions in retinal optical coherence tomography,” Retinal Physician 9, 33–37 (2012).
31. Y. Jia, O. Tan, J. Tokayer, B. Potsaid, Y. Wang, J. J. Liu, M. F. Kraus, H. Subhash, J. G. Fujimoto, J. Hornegger, and D. Huang, “Split-spectrum amplitude-decorrelation angiography with optical coherence tomography,” Opt. Express 20(4), 4710–4725 (2012). [CrossRef] [PubMed]
32. M. K. Garvin, M. D. Abràmoff, X. Wu, S. R. Russell, T. L. Burns, and M. Sonka, “Automated 3-D intraretinal layer segmentation of macular spectral-domain optical coherence tomography images,” IEEE Trans. Med. Imaging 28(9), 1436–1447 (2009). [CrossRef] [PubMed]
33. K. Li, X. Wu, D. Z. Chen, and M. Sonka, “Optimal surface segmentation in volumetric images-A graph-theoretic approach,” IEEE Trans. Pattern Anal. Mach. Intell. 28(1), 119–134 (2006). [CrossRef] [PubMed]
34. Early Treatment Diabetic Retinopathy Study Research Group, “Early Treatment Diabetic Retinopathy Study design and baseline patient characteristics. ETDRS report number 7,” Ophthalmology 98(5), 741–756 (1991). [CrossRef] [PubMed]
35. M. S. Ip, A. Domalpally, J. K. Sun, and J. S. Ehrlich, “Long-term effects of therapy with ranibizumab on diabetic retinopathy severity and baseline risk factors for worsening retinopathy,” Ophthalmology 122(2), 367–374 (2015). [CrossRef] [PubMed]
36. L. Liu, S. S. Gao, S. T. Bailey, D. Huang, D. Li, and Y. Jia, “Automated choroidal neovascularization detection algorithm for optical coherence tomography angiography,” Biomed. Opt. Express 6(9), 3564–3576 (2015). [CrossRef] [PubMed]