Abstract
This paper presents a generalized framework for segmenting closed-contour anatomical and pathological features using graph theory and dynamic programming (GTDP). More specifically, the GTDP method previously developed for quantifying retinal and corneal layer thicknesses is extended to segment objects such as cells and cysts. The presented technique relies on a transform that maps closed-contour features in the Cartesian domain into lines in the quasi-polar domain. The features of interest are then segmented as layers via GTDP. Application of this method to segment closed-contour features in several ophthalmic image types is shown. Quantitative validation experiments for retinal pigmented epithelium cell segmentation in confocal fluorescence microscopy images attest to the accuracy of the presented technique.
©2012 Optical Society of America
1. Introduction
Fast, accurate, and objective detection and quantification of imaging biomarkers are crucial for the study and diagnosis of ophthalmic diseases. Due to the time-consuming and subjective nature of manual segmentation, considerable work has been done in recent years to automate the segmentation of ocular structures. For example, many layer segmentation algorithms have been developed to delineate the retinal [1–6], choroid-scleral [7,8], and corneal layers [9,10] on optical coherence tomography (OCT) images [11].
However, in addition to layered structures, there is also a need to automatically segment and quantify closed-contour features in ophthalmic images. Examples of closed-contour anatomical and pathological structures include retinal pigmented epithelial (RPE) cells in confocal fluorescence microscopy images of flat-mounted retina [12], intra-retinal cysts in OCT images of patients with diabetic macular edema (DME) [13,14], and photoreceptors in adaptive optics scanning ophthalmoscope (AOSO) images of the retina [15–18]. Previous methods have been presented for segmenting closed-contour features, including fluid or detachments on OCT images of the retina [19–21], drusen in fundus photographs [22,23] or in OCT images [24], geographic atrophy in fundus auto-fluorescence images [25], vessel lumens in intravascular OCT images [26], corneal cells [27–31], and photoreceptors on AOSO images [32–37]. However, these segmentation algorithms were often developed for specific ophthalmic applications, rather than as a general framework applicable to both layer and closed-contour features. Furthermore, in some cases such as with photoreceptor or corneal cell identification, cell counts or densities were achieved but the actual structure size and shape were not necessarily determined [27,29,31–35,37].
To segment circular biological structures, transformation of images into the polar domain has proven to be an effective tool within the medical community [38–41]. This is due to the observation that circles in the Cartesian domain are represented as lines in the polar domain. Using this transform, the mathematical operations and manipulations required to segment the image are greatly simplified since these operations no longer need to be performed radially. However, for non-circular, closed-contour structures, the polar transform is no longer adequate since oblong or convoluted features deviate from a flat path.
In this paper, we present a general framework to segment arbitrarily shaped, closed-contour features. We previously developed a framework for layer segmentation based on graph theory and dynamic programming (GTDP) [6] to segment retinal and corneal layers on spectral domain optical coherence tomography (SDOCT) images [5,6,10]. Section 2 of this paper extends the application of our layer segmentation technique to include closed-contour segmentation. The algorithm is based on the projection of closed-contour features in the Cartesian domain into lines in a virtual domain, which we term the quasi-polar domain. In the quasi-polar domain, features of interest are then segmented as layers using our GTDP technique. In Section 3, we apply the framework to segment RPE cells in confocal microscopy images of an age-related macular degeneration (AMD) mouse model, and in Section 4.1 we evaluate the algorithm’s effectiveness in detail. Lastly, to highlight the flexibility of the presented framework, Section 4.2 illustrates the segmentation of closed-contour features for other ophthalmic applications, including cysts on SDOCT and cone photoreceptor cells on AOSO images.
2. A generalized closed-contour segmentation framework using GTDP
This section introduces a generalized framework for closed-contour feature segmentation. The method is an extension of our previously presented technique for layer segmentation via GTDP [6]. The core steps are outlined in Fig. 1 and discussed in detail in the following subsections. Section 3 then provides the specific algorithmic steps required to implement this framework for RPE cell segmentation in confocal fluorescence microscopy images.
2.1. Pilot structure estimation
The first step is to attain pilot estimates of all structures to segment. These estimates should contain information about both the location and shape of the structures. Structure localization prevents the algorithm from confusing the target structure with similarly-shaped extraneous features, while shape estimation results in a more accurate quasi-polar transform, as will be discussed in Section 2.2.
For a grayscale image $I$ in the Cartesian domain, the pilot estimates can be represented as a binary image $B$, where

$$B(i,j) = \begin{cases} 1, & \text{if pixel } (i,j) \text{ lies within a pilot estimate} \\ 0, & \text{otherwise} \end{cases} \quad (1)$$

and $(i,j)$ denotes the pixel at the ith row and jth column of the image. Many basic techniques such as thresholding, contours, local minima, or the multiscale generalized likelihood ratio test [42] can be used to attain this estimate, and Section 3.2 describes one such method for pilot structure estimation. To isolate a single structure, $I$ and $B$ can both be cropped to smaller image blocks containing only the structure of interest and its pilot estimate.
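For concreteness, the following MATLAB sketch produces such a binary pilot image by simple thresholding, one of the basic techniques named above; the file name, threshold, and minimum-area value are illustrative assumptions rather than values from our implementation.

% Pilot estimation by thresholding (a minimal sketch; the threshold and
% minimum area are illustrative assumptions).
I = im2double(imread('cells.tif'));            % grayscale Cartesian image I
B = I < 0.3;                                   % dark interiors -> B(i,j) = 1
B = bwareaopen(B, 20);                         % discard tiny spurious estimates
S = regionprops(B, 'Centroid', 'BoundingBox'); % location and shape per structure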
2.2. Image transformation into the quasi-polar domain
This section explains the quasi-polar transform in depth. In its simplest form, the quasi-polar transform can be described by a polar transform followed by a flattening step. Figure 2 shows an illustration, where a circle, an oval, and an arbitrary shape in the Cartesian domain (Figs. 2(a-c)) are first transformed into the polar domain (Figs. 2(d-f)). Information about the shape of the pilot estimate is then used to flatten the structure into a line in the quasi-polar domain (Fig. 2(g)). Once transformed into the quasi-polar domain, the structure can be segmented as if it were a layer using the GTDP technique.
It can also be observed from Fig. 2 that a simple polar transform is sufficient to map closed-contour shapes (Figs. 2(a-c)) into layered structures (Figs. 2(d-f)). Motivation for a further transformation into the quasi-polar domain (Fig. 2(g)) stems from the tendency for shorter geometric paths to be found when using the GTDP technique, especially in low signal-to-noise ratio (SNR) imaging scenarios. This is due to the observation that a cut (segmentation) with fewer traversed nodes often yields a lower total weight. As a result, oblong features are disadvantaged since their paths in the polar domain are longer than the shortest geometric path (Figs. 2(e, f)). By performing an extra flattening step, the path of the oblong structure now reflects the shortest geometric distance.
The quasi-polar transform is implemented using the following three steps:
- 1. Map pixels from the Cartesian domain into the polar domain based on their distance and angle from a reference pixel and axis, respectively. The reference pixel $\tilde{p}$ can be any single pixel; however, it ideally lies at the center of the closed-contour feature to facilitate its flatness in the polar domain. The centroid of the pilot estimate is therefore a good choice for $\tilde{p}$, where
$$\tilde{p} = \frac{1}{K}\sum_{k=1}^{K} p_k \quad (2)$$
for the set of K pixels $p_k$ where $B(p_k) = 1$. Example binary images are shown in Figs. 2(a-c) with $\tilde{p}$ marked as a yellow asterisk and the reference axis defined as the j-axis. Using the Cartesian binary image $B$ and Eq. (3), generate the polar-transformed binary image $B_p$ (Figs. 2(d-f)), where $B_p(r, \theta)$ denotes the pixel in $B_p$ with a radius r and angle θ from the reference pixel and axis, respectively. Then, assuming a unit step size, where $0 \le r \le r_{\max}$ and $0 \le \theta < 2\pi$,
$$B_p(r, \theta) = B(\tilde{i} + r\sin\theta,\ \tilde{j} + r\cos\theta), \quad (3)$$
where $\tilde{p} = (\tilde{i}, \tilde{j})$.
- 2. Find a function $f(\theta)$ that best estimates the boundary between the background and the pilot estimate in $B_p$. This can be achieved by taking the vertical gradient of the image, or by using the GTDP layer segmentation technique described in Section 2.3 with edge weights determined by the vertical gradient of $B_p$.
- 3. Generate the quasi-polar binary image $B_q$ (Fig. 2(g)), where for all columns $\theta_k$ in $B_p$, the kth column in $B_q$ is determined by
$$B_q\big(r + \max_{\theta} f(\theta) - f(\theta_k),\ \theta_k\big) = B_p(r, \theta_k), \quad (4)$$
which shifts each column so that the boundary estimate $f(\theta)$ is flattened to a constant radius.
Use the exact transformation mapping (Steps 1 and 3, Eqs. (3) and (4)) to then transform the grayscale image $I$ into its quasi-polar equivalent, $I_q$.
Since Step 3 flattens the pilot estimate instead of the structure itself, in general the structure is not perfectly flat in $I_q$. This also implies that a pilot estimate whose shape is closer to that of the actual structure results in a flatter structure in the quasi-polar domain.
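To make the transform concrete, the following MATLAB sketch implements Steps 1-3 under the notation above. The angular sampling, the interpolation choices, and the use of the outermost pilot-estimate pixel per column as the boundary estimate f(θ) (in place of a GTDP cut) are simplifying assumptions.

% Quasi-polar transform sketch (Eqs. (3)-(4)); I is the cropped grayscale
% block and B its pilot estimate, assumed to lie inside the block.
c  = regionprops(B, 'Centroid');                    % centroid = reference pixel
ic = c(1).Centroid(2);  jc = c(1).Centroid(1);
[rows, cols] = size(I);
rMax  = floor(min([ic, jc, rows - ic, cols - jc])); % stay inside the block
theta = (0:359)*pi/180;                             % unit angular step
[TH, R] = meshgrid(theta, (0:rMax)');
Ip = interp2(I, jc + R.*cos(TH), ic + R.*sin(TH), 'linear', 0);         % Eq. (3)
Bp = interp2(double(B), jc + R.*cos(TH), ic + R.*sin(TH), 'nearest', 0) > 0;
f = zeros(1, numel(theta));                         % boundary estimate f(theta)
for k = 1:numel(theta)
    idx = find(Bp(:,k), 1, 'last');                 % outermost estimate pixel
    if isempty(idx), idx = 1; end
    f(k) = idx;
end
Iq = Ip;
for k = 1:numel(theta)                              % flatten each column, Eq. (4)
    Iq(:,k) = circshift(Ip(:,k), max(f) - f(k));
end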
2.3. Layer segmentation using GTDP
Once the image is transformed into the quasi-polar domain, it can be segmented using our GTDP-based automatic layer segmentation algorithm [6]. In summary, this technique represents the image as a graph containing nodes (i.e., image pixels) and edges connecting adjacent pixels. We assign weights to each of the edges based on a priori information about the structure, preferring paths through the structure boundaries. Our GTDP technique includes automatic endpoint initialization, in which an additional column is added to either side of the image with minimal vertical edge weights. As we detailed in [6], the start and end nodes can then be arbitrarily defined anywhere within these two columns. Finally, any optimization method that finds the minimum weighted path from the start node to the end node (e.g., Dijkstra’s method [43]) can be used to segment the structure.
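A minimal sketch of this graph construction and cut is shown below, assuming a quasi-polar image Iq. The simple gradient-based weights stand in for the application-specific cost functions, and MATLAB's digraph and shortestpath (from later releases) replace an explicit Dijkstra implementation.

% GTDP layer segmentation sketch: nodes are pixels of Iq; rightward edges
% connect each pixel to its three neighbors in the next column.
g = mat2gray(imfilter(Iq, [1; -1], 'replicate'));   % vertical gradient in [0,1]
g = [zeros(size(g,1),1), g, zeros(size(g,1),1)];    % two added endpoint columns
[nr, nc] = size(g);
id = @(r, c) (c - 1)*nr + r;                        % linear node index
src = []; dst = []; w = [];
for c = 1:nc-1
    for dr = -1:1
        r   = max(1, 1 - dr):min(nr, nr - dr);
        src = [src, id(r, c)];
        dst = [dst, id(r + dr, c + 1)];
        w   = [w, 2 - g(r, c)' - g(r + dr, c + 1)' + 1e-5];  % low at boundaries
    end
end
for c = [1, nc]                                     % minimal vertical edge weights
    src = [src, id(1:nr-1, c), id(2:nr, c)];        % within the endpoint columns
    dst = [dst, id(2:nr, c), id(1:nr-1, c)];        % (automatic initialization)
    w   = [w, 1e-5*ones(1, 2*(nr - 1))];
end
G = digraph(src, dst, w);
p = shortestpath(G, id(1,1), id(1,nc));             % minimum-weighted path
[cutRow, cutCol] = ind2sub([nr, nc], p);            % the segmented layer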
2.4. Transformation back into the Cartesian domain
Once the structure is segmented in the quasi-polar domain, the segmentation information is transformed back into the Cartesian domain by applying the inverse of Steps 1 and 3 from Section 2.2.
2.5. Segmentation of subsequent structures
After the first structure is segmented, Steps 1-3 in Section 2.2 are repeated for each subsequent structure (e.g. for each individual cell or cyst). We can achieve this either by sequential segmentation of individual features, or by parallelizing the segmentation to make this process more efficient.
3. Implementation for RPE cell segmentation
AMD is the leading cause of irreversible blindness in Americans older than 60 years [44]. In a recent study, Ding et al. showed that anti-amyloid therapy protects against RPE damage and vision loss in an APOE4 mouse model of AMD [12]. In [12], the average size and density of the cells were used as biomarkers for disease progression. To attain these quantitative metrics, 22,495 cells were analyzed using customized semi-automatic segmentation software. To achieve a drastically faster segmentation rate, this section describes an implementation of the generalized closed-contour segmentation algorithm from Section 2 to fully automatically segment RPE cells in confocal microscopy images of flat-mounted retina of normal and AMD mouse models.
3.1. Data set
In the following, we describe our fully automatic segmentation method as tailored to the data set from Ding et al.’s report published in PNAS [12]. However, the adaptation of our method to any other data set is straightforward. The data set in [12] included 23 confocal microscopy fluorescence images (0.64 × 0.64 mm²) of central RPE flat mounts from 17 mice with approximately 1000 cells per image. RPE cells were stained with a rabbit antibody against ZO-1 (40–2200, 1:100; Invitrogen) and imaged on a Nikon Eclipse C1 microscope.
3.2. Pilot estimation of cell morphology
We first normalized the intensities of the image $I$ from [12]. We then smoothed $I$ using an 11 × 11 pixel Gaussian filter. Next, we computed the extended-minima transform [45], which found all minima and suppressed those with a depth less than 0.05. To generate the pilot estimate for all cells, we set $B(z) = 1$ for all points z lying within the convex hull [46] of each minimum. To localize the desired cell and its corresponding pilot estimate, we first cropped $I$ and $B$ to 201 × 201 pixel image blocks, corresponding to the largest conceivable cell size in our model. Figure 3(a) shows the cropped cell image, and Fig. 3(b) shows the cropped pilot estimate image. Both images were centered at the centroid $\tilde{p}$ computed for the desired pilot estimate. Since the cropped block contained other cell estimates, we then set all pixels outside the desired estimate in the cropped $B$ to zero.
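The following MATLAB sketch mirrors these steps; the Gaussian standard deviation and the assumption that the cropped block lies fully inside the image are illustrative choices of this sketch.

% Pilot estimation of cell morphology (Section 3.2 sketch).
I  = mat2gray(I);                               % normalize intensities
h  = fspecial('gaussian', [11 11], 2);          % 11 x 11 filter (sigma assumed)
Is = imfilter(I, h, 'replicate');               % smoothed image
M  = imextendedmin(Is, 0.05);                   % suppress minima shallower than 0.05
B  = bwconvhull(M, 'objects');                  % convex hull of each minimum
% Localize one cell: crop 201 x 201 blocks centered on its estimate centroid.
c    = regionprops(B, 'Centroid');
half = 100;
ic = round(c(1).Centroid(2));  jc = round(c(1).Centroid(1));
Ic = I(ic-half:ic+half, jc-half:jc+half);
Bc = B(ic-half:ic+half, jc-half:jc+half);
Bc = bwselect(Bc, half + 1, half + 1);          % keep only the desired estimate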
3.3. Image transformation into the quasi-polar domain
We next transformed the cropped cell image (Figs. 3(a, d, f)) by following Steps 1-3 in Section 2.2, with the following implementation details:
- ▪ After generating $B_p$ in Step 1, we set its first row to one (the logical OR combination of $B_p$ and the reference pixel), since the centroid $\tilde{p}$ lies within the cell.
- ▪ For Step 2, we first segmented the boundary in $B_p$ using the GTDP layer segmentation technique. We then smoothed the segmentation using a moving average filter with a span of 1%. This smoothed cut was set as the function $f(\theta)$ (Fig. 3(c)), which provided the shape information necessary to flatten the image.
- ▪ For Step 3, the cropped image was denoised using MATLAB’s Wiener filter (The MathWorks, Natick, MA). We then set all connected components in the binary image smaller than 20 pixels to zero. Finally, we transformed the denoised image into its quasi-polar equivalent, $I_q$.
While Fig. 3(f) shows the resulting quasi-polar transformed cell image $I_q$, Fig. 3(e) shows the quasi-polar transformed pilot estimate. As can be seen, the transform yielded a fully flattened pilot estimate and an approximately flattened cell structure.
3.4. Cell segmentation using GTDP
Once the quasi-polar transformed image was generated, we used our GTDP layer segmentation framework to segment the cell boundary. To do so, we defined the following three cost functions:
- (1) Penalize bright nodes further from the centroid, to avoid segmenting multiple cells at once (the first cost function in Eq. (7)).
- (2) Penalize nodes that include the pilot estimates of other cells, also to avoid segmenting multiple cells at once (the second cost function in Eq. (7)).
- (3) Penalize nodes that fall below the threshold criteria used for pilot estimation (the third cost function in Eq. (7)).
We implemented these three cost functions using MATLAB (The MathWorks, Natick, MA) notation, operating on corresponding columns of $I_q$ and $B_q$ (Eq. (6)). We then combined these cost functions into a single per-node cost (Eq. (7)). Finally, we calculated the edge weights from the combined costs of the two nodes that each edge connects, where $w_{ab}$ is the weight assigned to the edge connecting pixels a and b. The notation normalize(x, y, z) indicates a normalization of the values x to range from y to z.
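A hedged sketch of this assembly is shown below; c1, c2, and c3 are placeholders for the three cost images of Eq. (6), whose exact expressions are not reproduced here, and the weight floor wMin is an assumption.

% Cost combination and edge-weight assignment (sketch).
normalize = @(x, y, z) (x - min(x(:))) / (max(x(:)) - min(x(:))) * (z - y) + y;
c    = normalize(c1 + c2 + c3, 0, 1);    % combined per-node cost (cf. Eq. (7))
wMin = 1e-5;                             % small positive weight floor
wab  = @(a, b) c(a) + c(b) + wMin;       % weight of the edge connecting a and b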
Prior to finding the shortest path, we discarded all edges from the graph containing nodes in previously segmented cells to avoid segmenting them again. We also excluded the edges that allowed those cells to be included within a newly segmented cell. We then found the minimum-weighted path p using Dijkstra’s method, where r and θ represent the row and column axes of $I_q$, respectively. Since the first and last columns of $I_q$ correspond to the same radial line in the Cartesian domain, the path heights at the two endpoints must be equal; in practice, however, this is not guaranteed when using automatic endpoint initialization (Section 2.3). Therefore, if the two endpoint heights differed, we set both endpoints equal to the first and found the corresponding minimum-weighted path, then set both endpoints equal to the second and found a second path. The ultimate selected path (Fig. 3(f), magenta) was the one that passed through the brighter pixels of $I_q$ overall.
3.5. Cell transformation back into the Cartesian domain
We transformed the segmentation back into the Cartesian domain (Fig. 3(g)) by following the method in Section 2.4. However, since the cell was segmented using only a cropped section of the image, we shifted the coordinates of the segmented layer back to their appropriate location on the original-sized image. The result was a binary edge image $E$, where $E(z) = 1$ for all pixels z coinciding with the segmentation. For all consecutive path pixels that were unconnected, we used linear interpolation to connect the dots and create a closed contour, as sketched below.
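In the sketch, xs and ys denote the column and row coordinates of the back-transformed path and are assumptions of this illustration.

% Connect consecutive, unconnected path pixels by linear interpolation,
% rasterized onto the binary edge image E; connecting the last point back
% to the first in the same way closes the contour.
for n = 1:numel(xs)-1
    m  = max(abs(xs(n+1) - xs(n)), abs(ys(n+1) - ys(n))) + 1;
    xi = round(linspace(xs(n), xs(n+1), m));
    yi = round(linspace(ys(n), ys(n+1), m));
    E(sub2ind(size(E), yi, xi)) = true;
end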
3.6. Iteration for all cells
To segment all other cells, we iterated the steps outlined in Sections 3.3 through 3.5 for all pilot estimates of the cells in the image. To prevent gaps from occurring between adjacent cells, we created a preference for already-segmented cell borders by brightening the intensity values $I(z)$ for all z where $E(z) = 1$, as sketched below.
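In this sketch, segmentOneCell is a hypothetical helper standing in for the steps of Sections 3.3-3.5.

% Iterate over all pilot estimates, brightening segmented borders so that
% adjacent cells snap to shared edges (segmentOneCell is hypothetical).
E = false(size(I));                      % accumulated binary edge image
for n = 1:numel(centroids)
    E = E | segmentOneCell(I, B, centroids(n));
    I(E) = max(I(:));                    % preference for existing borders
end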
3.7. Refinement
The gold standard segmentation from [12] used in our validation study (Section 4.1) separated cells by drawing a line through the middle of the cell border with a single pixel thickness. Since the presented algorithm did not necessarily yield a single-pixel cell border, this section describes the post-processing steps required to match our automatic segmentation with the gold standard protocol, and to remove any erroneous cell segmentations.
First, we removed all cells smaller than 50 pixels by setting $E(z) = 0$ for all z lying within the cell, merging them into neighboring cells. We then generated the skeleton [47] of $E$ and removed all spurs [47]. Next, we examined individual cell edges by setting the branch points in $E$ equal to zero. We defined the minimum edge threshold t based on the intensities $I(z)$ of the set of all pixels z where $E(z) = 1$. If all pixels for a given cell edge fell below t, or if 80% of the pixels fell below t and there were at least 15 such pixels, then the cell edge was removed from $E$. Once we removed all invalid cell edges, we restored the branch points on $E$ and removed any newly created spurs or pixels no longer connected to a cell edge, as sketched below.
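These refinement steps map onto standard MATLAB morphology as sketched here; the removal of small cells is omitted, and t is the minimum edge intensity threshold described above.

% Refinement sketch: skeletonize borders, prune spurs, and drop weak edges.
S  = bwmorph(E, 'skel', Inf);            % one-pixel-thick cell borders
S  = bwmorph(S, 'spur', Inf);            % remove spurs
BP = bwmorph(S, 'branchpoints');         % junctions between cell edges
cc = bwconncomp(S & ~BP);                % individual cell edges
for n = 1:cc.NumObjects
    v = I(cc.PixelIdxList{n});           % intensities along this edge
    if all(v < t) || (mean(v < t) >= 0.8 && nnz(v < t) >= 15)
        S(cc.PixelIdxList{n}) = false;   % remove invalid (weak) edge
    end
end
S = S | BP;                              % restore branch points
S = bwmorph(S, 'spur', Inf);             % remove newly created spurs
S = bwareaopen(S, 2);                    % drop pixels no longer connected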
4. Experimental results
This section shows the closed-contour segmentation results obtained using the procedures described in Section 3. Section 4.1 evaluates in detail the RPE cell segmentation algorithm described in Section 3, while Section 4.2 briefly illustrates the segmentation of closed-contour features for other ophthalmic applications.
4.1. RPE cell segmentation results
To validate our fully automatic segmentation results against the gold standard, we compared the cell count and mean cell size for each of the 23 images in [12]. To avoid biasing our results, these images were not used during the algorithm development stage. Instead, a single confocal microscopy fluorescence image of an AMD mouse model not included in the validation data set was used to train the algorithm.
We did not alter the data in Ding’s study in any way, with one slight exception. In [12], closed-contour features smaller than 100 pixels were absorbed into neighboring cells, since such regions were likely a result of incorrect manual segmentation. In our replication of the results from [12], we instead discarded these regions from our quantitative comparison, since such regions were negligible. In [12], regions of the image considered invalid due to imaging errors, flat-mount preparation errors, or cells being cut off by image borders were marked separately and discarded from the analysis. In this study, we used the markings from [12] to ignore exactly the same invalid regions in our analysis. Note that the above two types of outliers occupied a very small percentage of the total area segmented.
Quantitative results for the RPE cell segmentation algorithm described in Section 3 are shown in Table 1. The average error between automatic segmentation and the gold standard was 1.49 ± 1.44% for the cell count and 1.32 ± 1.53% for the mean cell area. The median error was found to be 0.97% and 0.71% for the detected number of cells and mean cell area, respectively. Qualitative results for this validation study are shown in Fig. 4, where Fig. 4(a) (corresponding to Image 16 in Table 1) is the automatically segmented confocal fluorescence image of an AMD flat-mounted mouse retina, and Figs. 4(b-g) are zoomed-in cell segmentation results. Figure 5 shows two confocal images of RPE cells (Figs. 5(a, d)) along with their gold standard (Figs. 5(b, e)) and automatic (Figs. 5(c, f)) segmentation results. The image shown in Figs. 5(a-c) corresponds to Image 9 in Table 1 with approximately median error, and Figs. 5(d-f) show Image 10 from Table 1 with maximum error. Lastly, Fig. 6 shows zoomed-in images of Images 6 and 9 from Table 1 comparing the gold standard (Figs. 6(a-e)) to automatic segmentation (Figs. 6(f-j)). The images in Figs. 6(a-c) are examples of erroneous gold standard segmentation, while Fig. 6(j) is an example of erroneous automatic segmentation. Their corresponding images (Figs. 6(e-h)) are examples of accurate segmentation. Finally, Figs. 6(d, i) show an undetermined case where it is unclear which segmentation is correct.
All 23 images used in the study and their corresponding manual and automatic segmentation data are available at http://www.duke.edu/~sf59/Chiu_BOE_2012_dataset.htm. The “single click” algorithm was coded in MATLAB (The MathWorks, Natick, MA) and resulted in an average computation time of 2.95 minutes per image (with an average of 964 cells per image) on a desktop computer with a 64-bit operating system, Core i7 2600K CPU (Intel, Mountain View, CA), solid state hard drive, and 16 GB of RAM. This time included the overhead required for reading and writing operations.
4.2. Segmentation results for other applications
Section 2 presents a generalized segmentation framework for closed-contour structures, and Sections 3 and 4.1 discuss the implementation and results for RPE cell segmentation. We have additionally implemented the algorithm to segment other ophthalmic features. These include intra-retinal cysts seen on SDOCT images (Spectralis; Heidelberg Engineering, Heidelberg, Germany) of patients with DME (Fig. 7). We have also included automatic segmentation of cone photoreceptors on AOSO images [18] of the retina (Fig. 8). To segment the cysts and photoreceptors, the same basic framework as described in Section 3 was utilized. Changes required to adapt the algorithm to different image types included modifying the image filters and pilot estimation techniques, changing the edge weights, and using different refinement steps to exclude erroneous segmentations (e.g., removing segmentations of hypo-reflective blood vessels in SDOCT images).
5. Discussion
To the best of our knowledge, no other technique has been reported for fully automatically segmenting confocal fluorescence images of RPE cells. The qualitative results shown in Fig. 4 demonstrate accurate automatic cell segmentation despite the presence of image artifacts, low signal-to-noise ratio, or inaccurate pilot estimates. Furthermore, the cell shape and size were not constraining factors for the presented automatic algorithm.
Quantitative results show that our fully automatic algorithm accurately segmented RPE cells on confocal images of flat-mounted mouse retina with AMD, yielding cell count and mean cell area measurements with an average error of less than 1.5%. Automatic segmentation errors similar to Fig. 6(j) occurred due to the presence of bright imaging artifacts, which erroneously altered values in the first cost function of Eq. (6). However, as shown in Figs. 6(a-c), even the gold standard segmentation reported in PNAS contained errors as well. Such errors will naturally occur in most manual or semi-automatic segmentation tasks required for large-scale studies, where multiple human experts subjectively grade images on different dates. Furthermore, since manual correction is extremely time consuming, cells with inconsequential errors (Fig. 6(b)) may not have been a priority to correct. As a result, a 0% error from the gold standard does not imply perfect segmentation.
To further reduce the effect of pilot estimation and achieve more accurate segmentation, one may employ an iterative approach, in which the result of segmentation for one iteration is used as the pilot estimate for the next iteration. In addition, application of modern adaptive denoising methods to raw images, which remove imaging artifacts without altering the anatomic structures, may further improve the automated segmentation performance [48,49].
Our motivation for using cell count and area as validation metrics stems from the need to quantify the extent of cell damage present in different groups of diseased mice [12]. While error in the absolute position of each automatically segmented cell may have been a more direct validation metric, the presence of errors in the gold standard made this infeasible. Furthermore, the cell boundary is several pixels thick, making it difficult to assign a “true” boundary position. While the gold standard often divided cells through the middle of the cell border, this was not always the case. As a result, we deemed cell count and area to be more meaningful and viable validation metrics.
Finally, we showed preliminary results demonstrating the segmentation of cysts seen in SDOCT images of eyes with DME (Fig. 7) and cone photoreceptors seen in AOSO images of the retina (Fig. 8). Prior studies on retinal fluid segmentation have focused on the detection of larger regions of intra-retinal fluid [19] or fluid constrained by retinal layer segmentation [21]. However, our algorithm is sensitive to small changes in intra-retinal fluid buildup. With regards to cone photoreceptor segmentation, many previous works assumed a given photoreceptor size, shape, or packing [32–35,37]. Our method relaxes these constraints, allowing for a future version of the algorithm capable of segmenting both rod and cone photoreceptors in a single image. Lastly, while we plan to validate these algorithms in future studies, we believe these preliminary results are encouraging, as they show the flexibility of our algorithm to be extended to different structures, diseases, and imaging modalities.
6. Conclusion
In summary, we demonstrated the extension of our automatic GTDP framework to segment not only layered, but also closed-contour structures. Implementation of the algorithm for RPE cell segmentation in confocal fluorescence images of flat-mounted AMD mouse retina resulted in an accurate extraction of cell count and average cell size. We believe that such a tool will be extremely useful for future studies, which use cell morphology as a biomarker for the onset and progression of disease. While validation has yet to be shown for other ophthalmic applications, we have demonstrated preliminary results for intra-retinal cyst segmentation in an SDOCT image as well as cone photoreceptor segmentation in an AOSO image. This is highly encouraging for reducing the time and manpower required for segmenting closed-contour features in ophthalmic studies.
Acknowledgments
We would like to thank Dr. Jindong Ding for his supervision on the semi-automatic segmentation of RPE cells, Dr. Priyatham Mettu for his expertise on DME cyst segmentation, Prof. Glenn Jaffe for providing the image in Fig. 7(a), and Linkun (Sam) Xi for manual correction of the RPE cells. This work was supported in part by the American Health Assistance Foundation, NIH (1R21EY021321-01A1, NIH P30 EY-005722), Research to Prevent Blindness (Duke’s 2011 Unrestricted Grant Award), the North Carolina Biotechnology Center, and the John T. Chambers Scholarship.
Publisher’s note: This paper was corrected on June 28, 2012, to add a funding source.
References and links
1. A. Yazdanpanah, G. Hamarneh, B. R. Smith, and M. V. Sarunic, “Segmentation of intra-retinal layers from optical coherence tomography images using an active contour approach,” IEEE Trans. Med. Imaging 30(2), 484–496 (2011). [CrossRef] [PubMed]
2. K. A. Vermeer, J. van der Schoot, H. G. Lemij, and J. F. de Boer, “Automated segmentation by pixel classification of retinal layers in ophthalmic OCT images,” Biomed. Opt. Express 2(6), 1743–1756 (2011). [CrossRef] [PubMed]
3. Y. Y. Liu, M. Chen, H. Ishikawa, G. Wollstein, J. S. Schuman, and J. M. Rehg, “Automated macular pathology diagnosis in retinal OCT images using multi-scale spatial pyramid and local binary patterns in texture and shape encoding,” Med. Image Anal. 15(5), 748–759 (2011). [CrossRef] [PubMed]
4. Q. Yang, C. A. Reisman, K. Chan, R. Ramachandran, A. Raza, and D. C. Hood, “Automated segmentation of outer retinal layers in macular OCT images of patients with retinitis pigmentosa,” Biomed. Opt. Express 2(9), 2493–2503 (2011). [CrossRef] [PubMed]
5. S. J. Chiu, J. A. Izatt, R. V. O’Connell, K. P. Winter, C. A. Toth, and S. Farsiu, “Validated automatic segmentation of AMD pathology including drusen and geographic atrophy in SD-OCT images,” Invest. Ophthalmol. Vis. Sci. 53(1), 53–61 (2012). [CrossRef] [PubMed]
6. S. J. Chiu, X. T. Li, P. Nicholas, C. A. Toth, J. A. Izatt, and S. Farsiu, “Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation,” Opt. Express 18(18), 19413–19428 (2010). [CrossRef] [PubMed]
7. V. Kajić, M. Esmaeelpour, B. Považay, D. Marshall, P. L. Rosin, and W. Drexler, “Automated choroidal segmentation of 1060 nm OCT in healthy and pathologic eyes using a statistical model,” Biomed. Opt. Express 3(1), 86–103 (2012). [CrossRef] [PubMed]
8. L. Duan, M. Yamanari, and Y. Yasuno, “Automated phase retardation oriented segmentation of chorio-scleral interface by polarization sensitive optical coherence tomography,” Opt. Express 20(3), 3353–3366 (2012). [CrossRef] [PubMed]
9. J. Eichel, A. Mishra, P. Fieguth, D. Clausi, and K. Bizheva, “A novel algorithm for extraction of the layers of the cornea,” in Canadian Conference on Computer and Robot Vision, 2009. CRV '09 (IEEE, 2009), pp. 313–320.
10. F. LaRocca, S. J. Chiu, R. P. McNabb, A. N. Kuo, J. A. Izatt, and S. Farsiu, “Robust automatic segmentation of corneal layer boundaries in SDOCT images using graph theory and dynamic programming,” Biomed. Opt. Express 2(6), 1524–1538 (2011). [CrossRef] [PubMed]
11. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991). [CrossRef] [PubMed]
12. J. D. Ding, L. V. Johnson, R. Herrmann, S. Farsiu, S. G. Smith, M. Groelle, B. E. Mace, P. Sullivan, J. A. Jamison, U. Kelly, O. Harrabi, S. S. Bollini, J. Dilley, D. Kobayashi, B. Kuang, W. Li, J. Pons, J. C. Lin, and C. B. Rickman, “Anti-amyloid therapy protects against retinal pigmented epithelium damage and vision loss in a model of age-related macular degeneration,” Proc. Natl. Acad. Sci. U.S.A. 108(28), E279–E287 (2011). [CrossRef] [PubMed]
13. M. R. Hee, C. A. Puliafito, J. S. Duker, E. Reichel, J. G. Coker, J. R. Wilkins, J. S. Schuman, E. A. Swanson, and J. G. Fujimoto, “Topography of diabetic macular edema with optical coherence tomography,” Ophthalmology 105(2), 360–370 (1998). [CrossRef] [PubMed]
14. T. Otani, S. Kishi, and Y. Maruyama, “Patterns of diabetic macular edema with optical coherence tomography,” Am. J. Ophthalmol. 127(6), 688–693 (1999). [CrossRef] [PubMed]
15. J. Liang, D. R. Williams, and D. T. Miller, “Supernormal vision and high-resolution retinal imaging through adaptive optics,” J. Opt. Soc. Am. A 14(11), 2884–2892 (1997). [CrossRef] [PubMed]
16. H. Hofer, L. Chen, G. Y. Yoon, B. Singer, Y. Yamauchi, and D. R. Williams, “Improvement in retinal image quality with dynamic correction of the eye’s aberrations,” Opt. Express 8(11), 631–643 (2001). [CrossRef] [PubMed]
17. A. Roorda, F. Romero-Borja, W. Donnelly III, H. Queener, T. Hebert, and M. Campbell, “Adaptive optics scanning laser ophthalmoscopy,” Opt. Express 10(9), 405–412 (2002). [PubMed]
18. R. F. Cooper, A. M. Dubis, A. Pavaskar, J. Rha, A. Dubra, and J. Carroll, “Spatial and temporal variation of rod photoreceptor reflectance in the human retina,” Biomed. Opt. Express 2(9), 2577–2589 (2011). [CrossRef] [PubMed]
19. D. C. Fernández, “Delineating fluid-filled region boundaries in optical coherence tomography images of the retina,” IEEE Trans. Med. Imaging 24(8), 929–945 (2005). [CrossRef] [PubMed]
20. C. Ahlers, C. Simader, W. Geitzenauer, G. Stock, P. Stetson, S. Dastmalchi, and U. Schmidt-Erfurth, “Automatic segmentation in three-dimensional analysis of fibrovascular pigmentepithelial detachment using high-definition optical coherence tomography,” Br. J. Ophthalmol. 92(2), 197–203 (2008). [CrossRef] [PubMed]
21. G. Quellec, K. Lee, M. Dolejsi, M. K. Garvin, M. D. Abramoff, and M. Sonka, “Three-dimensional analysis of retinal layer texture: identification of fluid-filled regions in SD-OCT of the macula,” IEEE Trans. Med. Imaging 29(6), 1321–1330 (2010). [CrossRef] [PubMed]
22. R. T. Smith, J. K. Chan, T. Nagasaki, U. F. Ahmad, I. Barbazetto, J. Sparrow, M. Figueroa, and J. Merriam, “Automated detection of macular drusen using geometric background leveling and threshold selection,” Arch. Ophthalmol. 123(2), 200–206 (2005). [CrossRef] [PubMed]
23. A. D. Mora, P. M. Vieira, A. Manivannan, and J. M. Fonseca, “Automated drusen detection in retinal images using analytical modelling algorithms,” Biomed. Eng. Online 10(1), 59 (2011). [CrossRef] [PubMed]
24. S. Farsiu, S. J. Chiu, J. A. Izatt, and C. A. Toth, “Fast detection and segmentation of drusen in retinal optical coherence tomography images,” Proc. SPIE 6844, 68440D (2008). [CrossRef]
25. N. Lee, A. F. Laine, and R. T. Smith, “A hybrid segmentation approach for geographic atrophy in fundus auto-fluorescence images for diagnosis of age-related macular degeneration,” in 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2007. EMBS 2007 (IEEE, 2007), pp. 4965–4968.
26. S. Tsantis, G. C. Kagadis, K. Katsanos, D. Karnabatidis, G. Bourantas, and G. C. Nikiforidis, “Automatic vessel lumen segmentation and stent strut detection in intravascular optical coherence tomography,” Med. Phys. 39(1), 503–513 (2012). [CrossRef] [PubMed]
27. S. V. Patel, J. W. McLaren, J. J. Camp, L. R. Nelson, and W. M. Bourne, “Automated quantification of keratocyte density by using confocal microscopy in vivo,” Invest. Ophthalmol. Vis. Sci. 40(2), 320–326 (1999). [PubMed]
28. F. J. Sanchez-Marin, “Automatic segmentation of contours of corneal cells,” Comput. Biol. Med. 29(4), 243–258 (1999). [CrossRef] [PubMed]
29. A. Ruggeri, E. Grisan, and J. Jaroszewski, “A new system for the automatic estimation of endothelial cell density in donor corneas,” Br. J. Ophthalmol. 89(3), 306–311 (2005). [CrossRef] [PubMed]
30. M. E. Díaz, G. Ayala, R. Sebastian, and L. Martínez-Costa, “Granulometric analysis of corneal endothelium specular images by using a germ-grain model,” Comput. Biol. Med. 37(3), 364–375 (2007). [CrossRef] [PubMed]
31. A. H. Karimi, A. Wong, and K. Bizheva, “Automated detection and cell density assessment of keratocytes in the human corneal stroma from ultrahigh resolution optical coherence tomograms,” Biomed. Opt. Express 2(10), 2905–2916 (2011). [CrossRef] [PubMed]
32. B. Xue, S. S. Choi, N. Doble, and J. S. Werner, “Photoreceptor counting and montaging of en-face retinal images from an adaptive optics fundus camera,” J. Opt. Soc. Am. A 24(5), 1364–1372 (2007). [CrossRef] [PubMed]
33. K. Y. Li and A. Roorda, “Automated identification of cone photoreceptors in adaptive optics retinal images,” J. Opt. Soc. Am. A 24(5), 1358–1363 (2007). [CrossRef] [PubMed]
34. M. Pircher, J. S. Kroisamer, F. Felberer, H. Sattmann, E. Götzinger, and C. K. Hitzenberger, “Temporal changes of human cone photoreceptors observed in vivo with SLO/OCT,” Biomed. Opt. Express 2(1), 100–112 (2011). [CrossRef] [PubMed]
35. M. Mujat, R. D. Ferguson, A. H. Patel, N. Iftimia, N. Lue, and D. X. Hammer, “High resolution multimodal clinical ophthalmic imaging system,” Opt. Express 18(11), 11607–11621 (2010). [CrossRef] [PubMed]
36. R. S. Jonnal, O. P. Kocaoglu, Q. Wang, S. Lee, and D. T. Miller, “Phase-sensitive imaging of the outer retina using optical coherence tomography and adaptive optics,” Biomed. Opt. Express 3(1), 104–124 (2012). [CrossRef] [PubMed]
37. K. Loquin, I. Bloch, K. Nakashima, F. Rossant, and M. Paques, “Photoreceptor detection in in-vivo adaptive optics images of the retina: towards a simple interactive tool for the physicians,” in 2011 IEEE International Symposium on Biomedical Imaging: from Nano to Macro (IEEE 2011), pp. 191–194.
38. C. A. Glasbey and M. J. Young, “Maximum a posteriori estimation of image boundaries by dynamic programming,” J. R. Stat. Soc. Ser. C Appl. Stat. 51(2), 209–221 (2002). [CrossRef]
39. S. Timp and N. Karssemeijer, “A new 2D segmentation method based on dynamic programming applied to computer aided detection in mammography,” Med. Phys. 31(5), 958–971 (2004). [CrossRef] [PubMed]
40. Z. Yan, B. J. Matuszewski, S. Lik-Kwan, and C. J. Moore, “A novel medical image segmentation method using dynamic programming,” in International Conference on Medical Information Visualisation—BioMedical Visualisation, 2007 (MediVis 2007) (IEEE, 2007), pp. 69–74.
41. S. Lu, “Accurate and efficient optic disc detection and segmentation by a circular transformation,” IEEE Trans. Med. Imaging 30(12), 2126–2133 (2011). [CrossRef] [PubMed]
42. S. Farsiu, J. Christofferson, B. Eriksson, P. Milanfar, B. Friedlander, A. Shakouri, and R. Nowak, “Statistical detection and imaging of objects hidden in turbid media using ballistic photons,” Appl. Opt. 46(23), 5805–5822 (2007). [CrossRef] [PubMed]
43. E. W. Dijkstra, “A note on two problems in connexion with graphs,” Numerische Mathematik 1(1), 269–271 (1959). [CrossRef]
44. N. M. Bressler, “Age-related macular degeneration is the leading cause of blindness,” JAMA 291(15), 1900–1901 (2004). [CrossRef] [PubMed]
45. P. Soille, Morphological Image Analysis: Principles and Applications (Springer, 1999).
46. T. Cormen, C. Leiserson, R. Rivest, and C. Stein, Introduction to Algorithms (The MIT Press, 2001).
47. R. Gonzalez and R. Woods, Digital Image Processing, 3rd ed. (Prentice Hall, 2007).
48. H. Takeda, S. Farsiu, and P. Milanfar, “Robust kernel regression for restoration and reconstruction of images from sparse noisy data,” in 2006 IEEE International Conference on Image Processing (IEEE, 2006), pp. 1257–1260.
49. L. Fang, S. Li, Q. Nie, J. A. Izatt, C. A. Toth, and S. Farsiu, “Sparsity-based denoising of spectral domain optical coherence tomography images,” Biomed. Opt. Express 3(5), 927–942 (2012). [CrossRef]