## Abstract

Assessment of serous retinal detachment plays an important role in the diagnosis of central serous chorioretinopathy (CSC). In this paper, we propose an automatic, three-dimensional segmentation method to detect both neurosensory retinal detachment (NRD) and pigment epithelial detachment (PED) in spectral domain optical coherence tomography (SD-OCT) images. The proposed method involves constructing a probability map from training samples using random forest classification. The probability map is constructed from a linear combination of structural texture, intensity, and layer thickness information. Then, a continuous max flow optimization algorithm is applied to the probability map to segment the retinal detachment-associated fluid regions. Experimental results from 37 retinal SD-OCT volumes from cases of CSC demonstrate the proposed method can achieve a true positive volume fraction (TPVF), false positive volume fraction (FPVF), positive predictive value (PPV), and dice similarity coefficient (DSC) of 92.1%, 0.53%, 94.7%, and 93.3%, respectively, for NRD segmentation and 92.5%, 0.14%, 80.9%, and 84.6%, respectively, for PED segmentation. The proposed method can be an automatic tool to evaluate serous retinal detachment and has the potential to improve the clinical evaluation of CSC.

© 2017 Optical Society of America

## 1. Introduction

Central serous chorioretinopathy (CSC) is an idiopathic disease of the posterior pole of the retina, which often appears as serous retinal detachment accompanied by leakage of altered retinal pigment epithelium (RPE) [1–3]. Spectral domain optical coherence tomography (SD-OCT), which capitalizes on recent advances in optical imaging, is the primary imaging modality for the diagnosis of CSC. State-of-the-art SD-OCT devices can non-invasively capture a high-definition, cross-sectional profile of retinal layers and pathological changes in the macular area, and thus allow ophthalmologists to make detailed anatomical assessments for proper CSC treatment [4, 5].

Serous retinal detachment, such as pigment epithelial detachment (PED) and neurosensory retinal detachment (NRD) as shown in Fig. 1, is a prominent characteristic of CSC. Mechanical stress resulting from increased intra-choroidal pressure reduces RPE adhesion and alters hydro-ionic RPE regulation, which in turn causes PED. NRD is often associated with a mechanical abrasion resulting from an active flow through a break in the RPE [6]. Therefore, the segmentation of serous retinal detachment-associated fluid is important for evaluating the severity and progression of chorioretinal impairment.

Several automatic or semi-automatic methods have been proposed for the segmentation of fluid-filled regions with abnormal retinal structures. Many of them are 2-D methods that segment those retinal abnormalities based on 2-D OCT slices. Fernández [7] applied a deformable model to outline the lesion boundaries with manual initialization. Novosel *et al.* [8] proposed an automatic, locally-adaptive approach to segment the subretinal fluid caused by CSC, exploiting the local attenuation differences between the layers near the fluid. Various spatial constraints have been incorporated into segmentation models so that spatial consistency between successive OCT slices could be exploited. Through label propagation and higher-order constraints, Wang *et al.* [9] presented an interactive, fluid-associated region segmentation algorithm for consecutive OCT B-scans. Wang *et al.* [10] utilized a voting strategy in 2-D OCT slices with different axes to eliminate false positives in cases of diabetic macular edema (DME) after fuzzy level set-based segmentation. Farsiu *et al.* [11] proposed a gradient vector flow-based deformable method for the segmentation of drusen in retinal OCT images, which estimates the normal RPE layer by enforcing a local convexity condition and curve fitting.

Recently, machine learning approaches have shown competitive performance for the segmentation of retinal abnormalities. Starting with layer segmentation [12–14], retinal fluid-filled regions were detected using various classifiers, including K nearest neighbor [15], random forest [16], kernel regression [17] and support vector machine [18] with retinal layer-dependent feature selection [19]. Some reports utilized machine learning as a coarse segmentation step. Chen *et al.* [20] adopted a probability map from training samples as seeds for a graph cut search method to segment the symptomatic exudate-associated derangements (SEADs). Sun *et al.* [21] proposed a framework for serous PED segmentation that combined AdaBoost classification and a shape-constrained graph cut.

Jointly segmenting both fluid and retinal layers is potentially valuable because it might allow clinically relevant interpretation of fluids within the context of retinal layers. Novosel *et al.* [22] proposed an approach to jointly segment the retinal layers and lesions using a framework of loosely coupled level sets, where lesions are modeled as an additional space-variant layer delineated by auxiliary interfaces. Montuoro *et al.* [23] proposed a supervised learning framework to segment retinal layers along with two kinds of fluid, which utilizes the complex interaction between different retinal structures to refine the results. Roy *et al.* [24] proposed a deep learning-based end-to-end framework named ReLayNet, which transforms the layer and lesion segmentation problem into a classification problem. Fang *et al.* [25] also proposed a layer segmentation method for OCT images with AMD using a deep neural network; the method employs a convolutional neural network to obtain coarse results based on a probability map and then refines them using a graph search method.

Despite recent advancements, automatic serous retinal detachment segmentation remains a challenging task because of several critical problems. First, the accuracy of layer detection might be influenced by the presence of abnormal retinal structures. Second, the fluid-filled regions sometimes appear as regions with weak contours and an ill-defined shape. Furthermore, both NRD- and PED-associated fluids have similar intensities and shape profiles, which makes automatic segmentation even more difficult.

In this paper, we propose a continuous max flow approach using a probability map for serous retinal detachment segmentation. The probability map is learned from training samples, and combines structural texture, intensity, and retinal layer thickness information using a region-restricted training strategy. Then, a continuous max flow model [26–29] is utilized to segment the retinal abnormalities based on the obtained probability map. The learned probability map contains processed high-level information for discriminating between NRD, PED, and other related retinal tissue. We then apply the continuous max flow approach to the probability map, as the segmentation problem is better handled in the feature space of high-level representation. Our method affords the following advantages:

- 1. Our methodology consists of 3-D processing steps except for curve fitting in pre-processing and post-processing.
- 2. We utilize a probability map representation combining various features, which improves the segmentation results for the continuous max flow optimization.
- 3. The proposed approach can sequentially segment PED- and NRD-associated fluid by using different restricted regions.

## 2. Methodology

#### 2.1 Method overview

Figure 2 illustrates the flowchart of the proposed method. First, structural texture and intensity scores are calculated by a random forest classification, and a thickness score is constructed from the clustering on the thickness map obtained after layer segmentation. These scores indicate the degree of confidence of the corresponding voxels belonging to the retinal detachment-associated fluid, according to the respective features. Then, those three types of feature information are linearly combined to build the probability map, which denotes the degree of membership of each voxel belonging to the fluid. Finally, a continuous max flow approach is applied to obtain the refined segmentation on the probability map.

#### 2.2 Generation of probability map

In this step, we extract the structural texture and intensity score from the denoised SD-OCT volume by random forest classification. The thickness score is computed from the thickness map generated by the layer segmentation. Then, three types of feature information are linearly combined to construct the probability map, which is viewed as a high-level representation of the SD-OCT volume.

### 2.2.1 Pre-processing

SD-OCT images are subject to speckle noise caused by the low temporal coherence and high spatial coherence of the optical beams [30], which significantly degrades the performance of image processing algorithms; in particular, heavy speckle noise might negatively influence the segmentation of retinal layers and fluid contours. Fang *et al.* [25] applied a median filter to the normalized SD-OCT volume and learned the required filter sizes from the convolutional neural network structure. In our protocol, we apply the widely used bilateral filter for speckle noise reduction and edge preservation, which facilitates the layer segmentation and feature extraction procedures [31, 32]. We used standard deviation values of 3.0 for the spatial parameter and 0.5 for the range parameter in our study.
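As an illustration, a naive 2-D bilateral filter with these parameter values can be sketched as follows (the `bilateral_filter` name and the `radius` window truncation are our own, and intensities are assumed normalized to [0, 1]; the paper does not specify an implementation):

```python
import numpy as np

def bilateral_filter(img, sigma_space=3.0, sigma_range=0.5, radius=4):
    """Naive 2-D bilateral filter: Gaussian weighting in space and in intensity."""
    h, w = img.shape
    pad = np.pad(img, radius, mode='reflect')
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    gs = np.exp(-(xx**2 + yy**2) / (2 * sigma_space**2))   # spatial kernel
    out = np.zeros_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2*radius + 1, j:j + 2*radius + 1]
            # range kernel: down-weight pixels with dissimilar intensity
            gr = np.exp(-(patch - img[i, j])**2 / (2 * sigma_range**2))
            wgt = gs * gr
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out
```

In practice each B-scan of the denoised volume would be filtered this way (or with an optimized library routine) before layer segmentation.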

Layer segmentation is another pre-processing step. Here, we adopt a 3-D graph search algorithm [33–35] to delineate the ILM and RPE floor (Fig. 1) using the well-established Iowa Reference Algorithm [36]. The 3-D graph search algorithm converts the multiple retinal surface segmentation problem into finding a minimum closed set in a node-weighted digraph with appropriate feasibility constraints, and it can effectively solve this optimization problem in low-order polynomial time. We used OCTExplorer 3.6 for layer segmentation based on optimized 3-D graph search [33–35]. On our OCT data, the software worked well for ILM and RPE layer segmentation even in the presence of other abnormalities. Furthermore, our approach tolerates imperfect layer segmentation because we only need candidate regions for the fluid segmentation.

After the layer segmentation, we calculate an interpolated normal RPE [37] using a curve fitting procedure to handle the deformation of the retinal structure. First, a second-order polynomial fit is applied for coarse interpolation using the leftmost and rightmost 100 pixels of the elevated RPE, after which points farther than 10 pixels from the interpolated curve are removed. This assumes that there are no dramatic retinal structure changes caused by PED in those edge regions. Then, we apply the second-order polynomial fit again using the refined points.
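The two-pass fitting procedure can be sketched per B-scan as follows (function and variable names are ours; `x` and `z` are the column index and segmented RPE height of each A-scan):

```python
import numpy as np

def fit_normal_rpe(x, z, margin=100, thresh=10):
    """Two-pass quadratic fit of the normal RPE for one B-scan.
    x: A-scan column indices; z: segmented (possibly elevated) RPE height."""
    # coarse fit using the leftmost/rightmost `margin` columns, assumed PED-free
    edge = np.r_[np.arange(margin), np.arange(x.size - margin, x.size)]
    coef = np.polyfit(x[edge], z[edge], 2)
    # drop points farther than `thresh` pixels from the coarse curve, then refit
    keep = np.abs(z - np.polyval(coef, x)) <= thresh
    return np.polyfit(x[keep], z[keep], 2)
```

Evaluating the returned coefficients with `np.polyval` over all columns gives the interpolated normal RPE used for PED detection.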

### 2.2.2 Classification

To extract the structural texture score from the SD-OCT volume, we apply a random forest classification based on the 3-D structural and textural features. The output probability is considered to be a similarity in texture of the given voxel with respect to the fluid.

- 1. Feature extraction: The 3-D structural texture features of retinal layers can be used to identify the fluid-filled regions [15, 20]. A set of 47 features are extracted from each voxel, which are summarized in Table 1. The first 15 features (features 1-15) describe the local image structure, the next 30 features (features 16-45) describe the image texture, and the last two features (features 46 and 47) are the distances to the ILM and RPE floor, respectively.
- 2. Classifier training: We segment PED and NRD sequentially by using different restricted regions. In the training phase, we only consider voxels between the ILM and RPE floor for NRD detection, to discard voxels related to irrelevant tissue types. For PED detection, we allow an additional ± 20 voxels in the z-direction (axis shown in Fig. 3(a)) between the RPE floor and the interpolated RPE. The detection of NRD and PED can then be treated as two separate binary classification problems, which might allow PED and NRD to overlap due to the specification of the search regions. To accelerate training, the SD-OCT volume is downsampled from 512 × 128 × 1024 voxels to 256 × 128 × 256 voxels. We apply a random forest algorithm by constructing 60 decision trees with a minimum leaf size of 10. For each volume, we used 5,000 positive (object) voxels and 15,000 negative (background) voxels to train the classifier in a 5-fold cross-validation fashion, training and testing the random forest classifier for each group separately. We sample more negative voxels because the background occupies a much larger area than the objects. The prediction score produced by the random forest classifier is used as our structural texture score, which denotes the similarity in texture of the given voxel with respect to the fluid (Fig. 3(d), 3(g)).
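A minimal sketch of this training step, using scikit-learn with the stated tree count and minimum leaf size (the synthetic 4-D features below merely stand in for the 47 features of Table 1, and the class imbalance mirrors the 5,000/15,000 sampling):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# stand-in features: the real pipeline uses the 47 structural/texture/distance features
pos = rng.normal(loc=0.3, scale=0.1, size=(5000, 4))   # fluid voxels
neg = rng.normal(loc=0.6, scale=0.1, size=(15000, 4))  # background voxels (3x more)
X = np.vstack([pos, neg])
y = np.r_[np.ones(len(pos)), np.zeros(len(neg))]

# 60 trees with a minimum leaf size of 10, as stated in the paper
clf = RandomForestClassifier(n_estimators=60, min_samples_leaf=10, random_state=0)
clf.fit(X, y)

# the class-1 probability is the structural texture score S_t in [0, 1]
structural_texture_score = clf.predict_proba(X)[:, 1]
```

In the full method this prediction is run voxel-wise over the restricted region of each downsampled test volume.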

Assume ${S}_{t}\in [0,1]$ is the predicted score of each voxel by random forest classification, and $\widehat{I}$ and $\sigma $ denote the mean and the standard deviation of the intensity of those voxels with ${S}_{t}>0.8$, respectively. We define ${S}_{i}=\mathrm{exp}(-||I-\widehat{I}|{|}^{2}/2{\sigma}_{i}{}^{2})$ as the intensity score for each voxel (Fig. 3(e), 3(h)), where ${\sigma}_{i}$ equals two times the standard deviation $\sigma $.
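The intensity score definition translates directly into code; a small sketch under the assumption that `I` and `St` are flattened voxel arrays (the function name is ours):

```python
import numpy as np

def intensity_score(I, St, conf_thresh=0.8):
    """S_i = exp(-(I - I_hat)^2 / (2 sigma_i^2)) with sigma_i = 2 * sigma,
    where I_hat and sigma are the mean/std of voxels scored above conf_thresh."""
    confident = I[St > conf_thresh]          # voxels the classifier is sure about
    I_hat, sigma = confident.mean(), confident.std()
    sigma_i = 2.0 * sigma
    return np.exp(-(I - I_hat) ** 2 / (2.0 * sigma_i ** 2))
```

Voxels whose intensity is close to the mean intensity of high-confidence fluid voxels thus receive a score near 1.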

### 2.2.3 Construction of probability map

In this step, we linearly combine the various feature information to construct the probability map. The intensity and thickness scores are obtained based on two assumptions: (1) the fluid always shows up as a low reflection region and (2) the fluid region is associated with a larger thickness than the normal part of the retina.

The thickness map is calculated between the ILM and RPE for NRD detection, and between the RPE and interpolated RPE for PED detection (Fig. 3(b), 3(c)). Then, we apply fuzzy c-means clustering to the thickness map and obtain three categories [40]. The distance between the ILM and RPE floor is plotted with a red-green-blue (from thick to thin) color scale in Fig. 3(b), 3(c). The thickness map reveals a few distinct clusters that match the general shape of our object. The category with the highest thickness is considered the object, and the category with the lowest thickness is considered the background. The category with moderate thickness is merged with either the object or the background. Since we are mainly interested in the likelihood of a pixel in the thickness map belonging to the fluid, the degree of membership in the highest-thickness class is assigned to the corresponding pixels on the map (i.e., columns in B-scans), which we denote as the thickness score ${S}_{m}$. The lighter blue columns in Fig. 3(f) and 3(i) indicate the restricted regions used for detecting candidate fluids.
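A minimal 1-D fuzzy c-means sketch illustrates how the thickness score could be derived (the paper cites [40] for the clustering; the deterministic quantile initialization and function names are ours):

```python
import numpy as np

def fuzzy_cmeans_1d(x, c=3, m=2.0, n_iter=50):
    """Minimal fuzzy c-means on 1-D thickness values; returns centers, memberships."""
    centers = np.quantile(x, np.linspace(0.1, 0.9, c))    # deterministic init
    for _ in range(n_iter):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9   # point-center distances
        w = d ** (-2.0 / (m - 1.0))
        u = w / w.sum(axis=0)                              # memberships sum to 1
        um = u ** m
        centers = um @ x / um.sum(axis=1)                  # weighted center update
    return centers, u

def thickness_score(thickness_map):
    """S_m: membership of each column in the highest-thickness cluster."""
    centers, u = fuzzy_cmeans_1d(thickness_map.ravel())
    return u[np.argmax(centers)].reshape(thickness_map.shape)
```

Columns belonging clearly to the thickest cluster receive a score near 1, matching the role of ${S}_{m}$ in the probability map.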

Finally, the probability map is generated through a linear combination of the three types of feature information as follows:

$$P(x)={\lambda}_{1}{S}_{t}(x)+{\lambda}_{2}{S}_{i}(x)+{\lambda}_{3}{S}_{m}(x), \qquad (3)$$

where ${\lambda}_{1}$, ${\lambda}_{2}$ and ${\lambda}_{3}$ are weight parameters, subject to ${\lambda}_{1}+{\lambda}_{2}+{\lambda}_{3}=1$. In the end, the probability map can be viewed as a high-level representation of the SD-OCT volume.

#### 2.3 Serous retinal detachment segmentation

### 2.3.1 Continuous max flow model applied to the generated probability map

Let ${R}_{o}$ and ${R}_{b}$ denote the retinal detachment-associated fluid region and the background, respectively. The label function for each voxel $x$ can be denoted as:

$$u(x)=\begin{cases}1, & x\in {R}_{o}\\ 0, & x\in {R}_{b}\end{cases}$$

This segmentation problem can be formulated as a spatially continuous min-cut problem [26], which enforces the smoothness of the segmented region and minimizes its surface simultaneously:

$$\min_{u(x)\in[0,1]} \int_{\Omega}(1-u(x))\,{C}_{s}(x)\,dx+\int_{\Omega}u(x)\,{C}_{t}(x)\,dx+\int_{\Omega}C(x)\,|\nabla u(x)|\,dx, \qquad (4)$$

where ${C}_{s}(x)$ and ${C}_{t}(x)$ are the source and sink capacities derived from the probability map, and $C(x)$ weights the total-variation smoothness term.

According to previous reports [27, 28], Eq. (4) is equivalent to the following continuous max flow problem:

$$\max_{{p}_{s},{p}_{t},p}\int_{\Omega}{p}_{s}(x)\,dx \quad \text{s.t.}\quad |p(x)|\le C(x),\;\; {p}_{s}(x)\le {C}_{s}(x),\;\; {p}_{t}(x)\le {C}_{t}(x),\;\; \nabla\cdot p(x)-{p}_{s}(x)+{p}_{t}(x)=0, \qquad (5)$$

where ${p}_{s}$, ${p}_{t}$ and $p$ denote the source, sink and spatial flows, respectively.

### 2.3.2 Iterative optimization

To optimize Eq. (5), we define its augmented Lagrangian function as [26]:

$$L({p}_{s},{p}_{t},p,u)=\int_{\Omega}{p}_{s}\,dx+\int_{\Omega}u\,(\nabla\cdot p-{p}_{s}+{p}_{t})\,dx-\frac{c}{2}{\Vert \nabla\cdot p-{p}_{s}+{p}_{t}\Vert}^{2},$$

where $c>0$ is the augmentation parameter and the label function $u$ acts as the Lagrange multiplier. The flows and the label function are then updated alternately:

- • Optimize spatial flow $p$ by fixing the other variables:

$${p}^{k+1}=\arg\max_{|p(x)|\le C(x)} -\frac{c}{2}{\left\Vert \nabla\cdot p-\left({p}_{s}^{k}-{p}_{t}^{k}+{u}^{k}/c\right)\right\Vert}^{2},$$

which can be solved by the gradient-projection algorithm [38]. The source flow ${p}_{s}$, sink flow ${p}_{t}$, and label function $u$ are then updated pointwise in closed form [26].
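The alternating updates can be sketched in 2-D with numpy (a simplified illustration, not the authors' C++ implementation: the capacities are taken directly as ${C}_{s}=P$ and ${C}_{t}=1-P$, and the flow projection is approximated by a per-component clamp rather than an isotropic norm projection):

```python
import numpy as np

def continuous_max_flow(prob, alpha=0.1, cc=0.3, steps=0.16, n_iter=300):
    """Simplified 2-D continuous max flow (after Yuan et al. [26-28]).
    prob: probability map in [0, 1]; returns the relaxed label function u."""
    h, w = prob.shape
    Cs, Ct = prob.copy(), 1.0 - prob          # source/sink capacities from the map
    u = (prob > 0.5).astype(float)            # label function / Lagrange multiplier
    ps = np.minimum(Cs, Ct)                   # feasible initial source flow
    pt = ps.copy()
    px = np.zeros((h, w + 1))                 # horizontal spatial flow (on edges)
    py = np.zeros((h + 1, w))                 # vertical spatial flow (on edges)
    for _ in range(n_iter):
        divp = px[:, 1:] - px[:, :-1] + py[1:, :] - py[:-1, :]
        # gradient ascent on the spatial flow ...
        pts = divp - ps + pt - u / cc
        px[:, 1:-1] += steps * (pts[:, 1:] - pts[:, :-1])
        py[1:-1, :] += steps * (pts[1:, :] - pts[:-1, :])
        # ... followed by an approximate projection onto |p| <= alpha
        px = np.clip(px, -alpha, alpha)
        py = np.clip(py, -alpha, alpha)
        divp = px[:, 1:] - px[:, :-1] + py[1:, :] - py[:-1, :]
        # closed-form pointwise updates of source flow, sink flow and label
        ps = np.minimum(divp + pt + (1.0 - u) / cc, Cs)
        pt = np.minimum(ps - divp + u / cc, Ct)
        u = u - cc * (divp - ps + pt)
    return np.clip(u, 0.0, 1.0)
```

Thresholding the returned $u$ (e.g., at 0.5) yields the binary fluid segmentation; in the paper the optimization is applied slice-stacked in 3-D on the probability map.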

### 2.3.3 Post-processing

Low reflection regions, such as artifacts and blood vessels, might cause misclassification. We eliminate these potential false positives based on the size and position of the detected regions. Regions smaller than 100 voxels in each B-scan and regions lying beyond the candidate fluid region determined from the thickness map are removed, since such regions are most likely false positives. We prefer to keep true positives rather than aggressively removing false positives, and the size threshold depends largely on prior knowledge of the experimental data set. In practice, we extend the scope of the candidate fluid region by an additional ± 117 µm ( ± 10 pixels) along the x-axis and ± 236 µm ( ± 5 pixels) along the y-axis (axes shown in Fig. 3) on the thickness map. The x-axis and y-axis coordinates are the same on the thickness map and the original SD-OCT volume.
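The size-based filtering step can be sketched with `scipy.ndimage` connected-component labeling (the function name is ours; the paper applies the threshold per B-scan):

```python
import numpy as np
from scipy import ndimage

def remove_small_regions(mask, min_size=100):
    """Drop connected components smaller than `min_size` voxels from a binary mask."""
    labels, n = ndimage.label(mask)          # label connected components
    sizes = np.bincount(labels.ravel())      # component sizes (index 0 = background)
    keep = sizes >= min_size
    keep[0] = False                          # never keep the background label
    return keep[labels]
```

The position-based filtering is then applied by intersecting the surviving regions with the (extended) candidate region from the thickness map.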

## 3. Experiments

#### 3.1. Data set and configuration

In this study, a data set of 37 SD-OCT volumes from cases diagnosed with CSC was acquired (25 cases with only NRD, 6 cases with only PED, and 6 cases with both NRD and PED) by a Cirrus SD-OCT device (Carl Zeiss Meditec, Inc., Dublin, CA, USA). Each SD-OCT volume contains 512 × 128 × 1024 voxels, with a voxel size of 11.7 × 47.2 × 2.0 μm. This study was approved by the Institutional Review Board of The First Affiliated Hospital of Nanjing Medical University. Two specialists with more than 10 years of experience in fundus disease drew the manual boundaries. They were instructed to delineate a smooth boundary for NRD between the neurosensory retina and the underlying RPE, which corresponds to the accumulation of a clear or lipid-rich exudate in the subretinal space. They also traced the elevated RPE along with the overlying retina from the remaining Bruch's membrane to delineate the PED contour, which is caused by the accumulation of fluid.

We tested the proposed algorithm on a PC with an Intel Core i7-870 CPU @ 2.93 GHz and 16 GB of memory. The 3-D textural information is known to be discriminative for identifying fluid-filled regions [15, 20]. Therefore, we set a relatively large weight (${\lambda}_{1}$ > 0.5) for the predicted score (based on the texture). The intensity and thickness scores are treated as supplemental information to aid the segmentation, and thus they have lower weights (${\lambda}_{2}$ for intensity and ${\lambda}_{3}$ for thickness) than the predicted score. We explored many sets of weights (${\lambda}_{2}$ and ${\lambda}_{3}$ each in the range [0.1-0.5]) and empirically set the weights considering the tradeoff between the true positive rate and false positive rate. In our experiments, the weight parameters ${\lambda}_{1}$, ${\lambda}_{2}$, and ${\lambda}_{3}$ in Eq. (3) are set to 0.55, 0.2, and 0.25, respectively, for NRD segmentation and 0.6, 0.3, and 0.1, respectively, for PED segmentation. The running times for random forest classification on the downsampled SD-OCT volumes (implemented in Matlab R2016a) and continuous max flow-based optimization (implemented in C++) are 440 ± 49 s and 31 ± 5 s, respectively, for NRD segmentation and 122 ± 20 s and 23 ± 2 s, respectively, for PED segmentation. The continuous max flow optimization converges quickly because the probability map mainly contains discriminative information to distinguish between the object and the background.

#### 3.2. Metrics for segmentation performance

For a quantitative evaluation, the performance of the segmentation results is measured by the true positive volume fraction (TPVF), false positive volume fraction (FPVF), positive predictive value (PPV) [39], and dice similarity coefficient (DSC) as follows:

$$\mathrm{TPVF}=\frac{|{V}_{seg}\cap {V}_{ref}|}{|{V}_{ref}|},\quad \mathrm{FPVF}=\frac{|{V}_{seg}\setminus {V}_{ref}|}{|U\setminus {V}_{ref}|},\quad \mathrm{PPV}=\frac{|{V}_{seg}\cap {V}_{ref}|}{|{V}_{seg}|},\quad \mathrm{DSC}=\frac{2\,|{V}_{seg}\cap {V}_{ref}|}{|{V}_{seg}|+|{V}_{ref}|},$$

where ${V}_{seg}$ and ${V}_{ref}$ denote the automatically segmented and reference fluid volumes, respectively, and $U$ denotes the whole scan volume.
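For binary volumes, these metrics can be computed directly (a sketch assuming FPVF is normalized by the non-object volume, following [39]; the function name is ours):

```python
import numpy as np

def volume_metrics(seg, ref):
    """TPVF, FPVF, PPV and DSC between binary segmentation and reference volumes."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    tp = np.logical_and(seg, ref).sum()
    fp = np.logical_and(seg, ~ref).sum()
    tpvf = tp / ref.sum()                      # fraction of the reference recovered
    fpvf = fp / (~ref).sum()                   # false volume relative to non-object
    ppv = tp / seg.sum()                       # precision
    dsc = 2 * tp / (seg.sum() + ref.sum())     # Dice overlap
    return tpvf, fpvf, ppv, dsc
```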

#### 3.3. Evaluation of serous retinal detachment

For qualitative evaluation, we show representative segmentation results from different CSC cases. Figure 5 shows results from CSC cases with only NRD. We compare our method with the initial classification by random forest and the fuzzy level set-based segmentation in [10]. The fuzzy level set method results in under-segmentation due to highly reflective artifacts inside the fluid region. The initial random forest classification results include obvious false positive regions, which have an intensity profile similar to that of the fluid region. Those false positives can be suppressed by the thickness score in the probability map of our approach: regions associated with a small retinal thickness are not considered fluid regions caused by serous retinal detachment. Compared to the level set-based segmentation [10] shown in Fig. 5, our method reduces false positives in two ways. First, we apply multiple types of features spanning multiple scales, rather than single-scale structural texture features, to obtain a more robust and discriminative representation for fluid classification. Second, the continuous max flow enforces the smoothness of the segmented region and minimizes its surface simultaneously.

We also test our method on CSC cases with only PED. Figure 6 shows that highly reflective holes inside the fluid are misclassified in the initial classification, while our approach tackles this problem by adding probability information based on the thickness score and minimizing the fluid surface via the continuous max flow optimization. We compare our method with a layer segmentation-based method [13], which detects the PED footprint between the segmented elevated RPE and the estimated normal RPE floor. Its large smoothness constraint might lead to under-segmentation of the fluid, as shown by the yellow arrow in Fig. 6(a).

The most challenging cases are the patients with both NRD and PED. Figure 7 compares the results of the reference standard, the initial classification, and our method in such a case. Although a small NRD-associated fluid region is present, our method achieves good agreement with the manual segmentation method, and shows fewer false positives than the random forest classification results.

For a quantitative evaluation, Tables 2 and 3 summarize the TPVF, FPVF, PPV, and DSC of the various approaches against the reference standards for NRD and PED, respectively. Our method achieves a TPVF that is close to that of the initial classification, but has better FPVF, PPV and DSC. Therefore, the proposed method can be viewed as a refinement procedure after the initial random forest classification, eliminating most of the false positive regions. The proposed method is also superior to the other two algorithms in overall performance. These results indicate that the machine learning-based segmentation performs better than the pure image processing-based approaches, as machine learning methods can utilize the high-level representation learned from the training samples.

For statistical analysis, we apply a linear correlation analysis and Bland-Altman reproducibility approach to compare the proposed method with the reference standard of the specialist. The linear correlation results and Bland-Altman plots for the analysis of NRD and PED segmentation are shown in Fig. 8 and Fig. 9, respectively. Figure 10 and Fig. 11 demonstrate the reproducibility assessment of manual tracings of two specialists. Our method demonstrates a high correlation with the manual results of the specialist for NRD $({r}^{2}=0.99)$ and PED $({r}^{2}=0.91)$ segmentation. The Bland-Altman results confirm stable agreement between the automated method and the reference standard. Simultaneous segmentation of NRD and PED is challenging, and our results demonstrate a similar performance whether we are segmenting one type of abnormality or both types of abnormality.

## 4. Discussion and conclusion

Assessment of serous retinal detachment plays a vital role in CSC diagnosis and treatment. Automatic segmentation of serous retinal detachment is a challenging task, especially when NRD and PED occur simultaneously: abnormal retinal structure, weak contours, and ill-defined fluid-filled regions make the task difficult, and NRD- and PED-associated fluids have similar intensities and shape profiles. In this paper, we propose a 3-D automatic segmentation method for serous retinal detachment in cases of CSC. First, a probability map is constructed from training samples using random forest classification. The probability map is a linear combination of structural texture, intensity, and layer thickness information. Then, we apply a continuous max flow optimization algorithm on this probability map to segment the fluid regions associated with serous retinal detachment. Our method is fully 3-D and can handle both NRD and PED segmentation in SD-OCT images from cases of CSC, whereas most previously published studies focus on only one type of abnormality.

Compared with the pure image processing-based segmentation methods, our method achieves a higher overall performance. The learning procedure can effectively extract the high-level representation from the raw intensity values for each voxel. Therefore, our method is robust and can avoid misclassifying other fluid-like regions, such as low reflection artifacts and blood vessels. Moreover, although NRD and PED share similar intensities and shape profiles, we can discriminate between these two types of abnormalities using a region restriction learning strategy. The post-processing steps based on the candidate region’s size and position restricted by the thickness map are utilized to eliminate any potential false positives.

The proposed method significantly improves the initial classification results by using the probability map. The probability map flexibly represents the voxel score for the segmentation task by combining various feature information depending on the target abnormality. For example, we force a higher weight for retinal thickness for PED segmentation compared to that used for NRD segmentation, because the thickness between the elevated RPE floor and interpolated RPE is an important factor for judging PED. Furthermore, the continuous max flow optimization aims to force the smoothness of the segmented region and minimize its surface simultaneously. Thus, the problem of highly reflective artifacts inside the fluid can be tackled, as shown in Fig. 6.

Compared with other state-of-the-art approaches, our method achieves desirable performance: (1) our method obtains TPVF, FPVF and DSC of 92.1%, 0.53% and 93.3% for NRD segmentation, and 92.5%, 0.14% and 84.6% for PED segmentation; (2) Chen *et al.* [20] achieved a TPVF and FPVF of 86.5% and 1.7% for SEAD segmentation in AMD subjects; (3) Montuoro *et al.* [23] reported a DSC of 60% for fluid segmentation on a public data set; (4) Roy *et al.* [24] achieved a DSC of 77% for fluid segmentation using a deep neural network; (5) Novosel *et al.* [22] obtained a TPR, FPR and Dice of 93%, 12% and 89% for fluid segmentation in CSC patients. Note that this comparison is indirect, as the methods were evaluated on different data sets with different types of fluid.

Segmenting retinal lesions without retinal layers might discard potentially valuable clinical information about the retina, and exploring their complex interaction can yield improved segmentation on severely diseased cases. Novosel *et al.* [22] extended the loosely coupled level set model to jointly segment retinal layers and lesions in topology-disrupting retinal diseases, where lesions were modeled as an additional space-variant layer denoted by auxiliary interfaces. Montuoro *et al.* [23] applied 3-D graph search on the trained probability map and utilized the auto-context methodology to refine the results, exploiting the complex interaction between different retinal structures. The complementary information between retinal layers and the associated lesions is useful for fluid detection and segmentation, which we intend to explore in future work.

There are several limitations to improve upon in future studies. Some very small fluid regions are treated as artifacts by our method and thus can hardly be detected (shown in Fig. 7(a)). However, this defect does not significantly influence the quantitative evaluation, as we segment large structures well. Figure 7(b) shows a false positive in the 3-D segmentation results, which indicates a high probability of the false positive region belonging to the fluid; we therefore need to extract more robust and discriminative features to eliminate such false positives in future studies. In addition, we believe the manual threshold on the label function ${u}^{*}(x)$ from Eq. (11) can be adjusted to trade off the true positive and false positive detection rates as needed in clinical practice. Our results come from 37 cases, and thus our findings need to be validated on a larger cohort in a future study. Although our framework is currently applied to detect serous retinal detachment in cases of CSC, we believe it can be applied to the segmentation of other abnormalities, such as symptomatic exudate-associated derangement (SEAD), with only minor modification.

In summary, our 3-D automatic segmentation method demonstrates good agreement with the manual segmentation method both qualitatively and quantitatively. Our method can be used as an automatic tool to evaluate serous retinal detachment, including NRD and PED, in cases of CSC, and has the potential to improve the clinical treatment of CSC.

## Conflicts of interest

The authors declare that there are no conflicts of interest related to this article.

## Funding

National Natural Science Foundation of China (NSFC) (61671242, 61672279, 61701222); Fundamental Research Funds for the Central Universities (30920140111004); Jiangsu Government Scholarship for Overseas Studies; Institute for Basic Science (IBS-R015-D1); National Research Foundation of Korea (NRF-2016R1A2B4008545); Natural Science Foundation for Universities of Jiangsu (17KJB510026).

## References and links

**1. **K. K. Dansingani, C. Balaratnasingam, S. Mrejen, M. Inoue, K. B. Freund, J. M. Klancnik Jr, and L. A. Yannuzzi, “Annular lesions and catenary forms in chronic central serous chorioretinopathy,” Am. J. Ophthalmol. **166**, 60–67 (2016). [CrossRef] [PubMed]

**2. **R. Hua, L. Liu, C. Li, and L. Chen, “Evaluation of the effects of photodynamic therapy on chronic central serous chorioretinopathy based on the mean choroidal thickness and the lumen area of abnormal choroidal vessels,” Photodiagn. Photodyn. Ther. **11**(4), 519–525 (2014). [CrossRef] [PubMed]

**3. **Y. Kuroda, S. Ooto, K. Yamashiro, A. Oishi, H. Nakanishi, H. Tamura, N. Ueda-Arakawa, and N. Yoshimura, “Increased choroidal vascularity in central serous chorioretinopathy quantified using swept-source optical coherence tomography,” Am. J. Ophthalmol. **169**, 199–207 (2016). [CrossRef] [PubMed]

**4. **S. J. Ahn, T. W. Kim, J. W. Huh, H. G. Yu, and H. Chung, “Comparison of features on SD-OCT between acute central serous chorioretinopathy and exudative age-related macular degeneration,” Ophthalmic Surg. Lasers Imaging **43**(5), 374–382 (2012). [CrossRef] [PubMed]

**5. **P. Roberts, B. Baumann, J. Lammer, B. Gerendas, J. Kroisamer, W. Bühl, M. Pircher, C. K. Hitzenberger, U. Schmidt-Erfurth, and S. Sacu, “Retinal Pigment Epithelial Features in Central Serous Chorioretinopathy Identified by Polarization-Sensitive Optical Coherence Tomography,” Invest. Ophthalmol. Vis. Sci. **57**(4), 1595–1603 (2016). [CrossRef] [PubMed]

**6. **A. Daruich, A. Matet, A. Dirani, E. Bousquet, M. Zhao, N. Farman, F. Jaisser, and F. Behar-Cohen, “Central serous chorioretinopathy: recent findings and new physiopathology hypothesis,” Prog. Retin. Eye Res. **48**, 82–118 (2015). [CrossRef] [PubMed]

**7. **D. C. Fernández, “Delineating fluid-filled region boundaries in optical coherence tomography images of the retina,” IEEE Trans. Med. Imaging **24**(8), 929–945 (2005). [CrossRef] [PubMed]

**8. **J. Novosel, Z. Wang, H. de Jong, M. van Velthoven, K. A. Vermeer, and L. J. van Vliet, “Locally-adaptive loosely-coupled level sets for retinal layer and fluid segmentation in subjects with central serous retinopathy,” in Proceedings of IEEE International Symposium on Biomedical Imaging (IEEE, 2016), pp. 702–705. [CrossRef]

**9. **T. Wang, Z. Ji, Q. Sun, Q. Chen, S. Yu, W. Fan, S. Yuan, and Q. Liu, “Label propagation and higher-order constraint-based segmentation of fluid-associated regions in retinal SD-OCT images,” Inf. Sci. **358**, 92–111 (2016). [CrossRef]

**10. **J. Wang, M. Zhang, A. D. Pechauer, L. Liu, T. S. Hwang, D. J. Wilson, D. Li, and Y. Jia, “Automated volumetric segmentation of retinal fluid on optical coherence tomography,” Biomed. Opt. Express **7**(4), 1577–1589 (2016). [CrossRef] [PubMed]

**11. **S. Farsiu, S. J. Chiu, J. A. Izatt, and C. A. Toth, “Fast detection and segmentation of drusen in retinal optical coherence tomography images,” Proc. SPIE **6844**, 68440D (2008). [CrossRef]

**12. **P. A. Dufour, L. Ceklic, H. Abdillahi, S. Schröder, S. De Zanet, U. Wolf-Schnurrbusch, and J. Kowal, “Graph-based multi-surface segmentation of OCT data using trained hard and soft constraints,” IEEE Trans. Med. Imaging **32**(3), 531–543 (2013). [CrossRef] [PubMed]

**13. **F. Shi, X. Chen, H. Zhao, W. Zhu, D. Xiang, E. Gao, M. Sonka, and H. Chen, “Automated 3-D retinal layer segmentation of macular optical coherence tomography images with serous pigment epithelial detachments,” IEEE Trans. Med. Imaging **34**(2), 441–452 (2015). [CrossRef] [PubMed]

**14. **B. J. Antony, A. Lang, E. K. Swingle, O. Al-Louzi, A. Carass, S. Solomon, P. A. Calabresi, S. Saidha, and J. L. Prince, “Simultaneous segmentation of retinal surfaces and microcystic macular edema in SDOCT volumes,” Proc. SPIE **9784**, 97841C (2016).

**15. **G. Quellec, K. Lee, M. Dolejsi, M. K. Garvin, M. D. Abràmoff, and M. Sonka, “Three-dimensional analysis of retinal layer texture: identification of fluid-filled regions in SD-OCT of the macula,” IEEE Trans. Med. Imaging **29**(6), 1321–1330 (2010). [CrossRef] [PubMed]

**16. **A. Lang, A. Carass, E. K. Swingle, O. Al-Louzi, P. Bhargava, S. Saidha, H. S. Ying, P. A. Calabresi, and J. L. Prince, “Automatic segmentation of microcystic macular edema in OCT,” Biomed. Opt. Express **6**(1), 155–169 (2015). [CrossRef] [PubMed]

**17. **S. J. Chiu, M. J. Allingham, P. S. Mettu, S. W. Cousins, J. A. Izatt, and S. Farsiu, “Kernel regression based segmentation of optical coherence tomography images with diabetic macular edema,” Biomed. Opt. Express **6**(4), 1172–1194 (2015). [CrossRef] [PubMed]

**18. **T. Hassan, M. U. Akram, B. Hassan, A. M. Syed, and S. A. Bazaz, “Automated segmentation of subretinal layers for the detection of macular edema,” Appl. Opt. **55**(3), 454–461 (2016). [CrossRef] [PubMed]

**19. **X. Xu, K. Lee, L. Zhang, M. Sonka, and M. Abramoff, “Stratified sampling voxel classification for segmentation of intraretinal and subretinal fluid in longitudinal clinical OCT data,” IEEE Trans. Med. Imaging **34**(7), 1616–1623 (2015). [CrossRef] [PubMed]

**20. **X. Chen, M. Niemeijer, L. Zhang, K. Lee, M. D. Abramoff, and M. Sonka, “Three-dimensional segmentation of fluid-associated abnormalities in retinal OCT: probability constrained graph-search-graph-cut,” IEEE Trans. Med. Imaging **31**(8), 1521–1531 (2012). [CrossRef] [PubMed]

**21. **Z. Sun, H. Chen, F. Shi, L. Wang, W. Zhu, D. Xiang, C. Yan, L. Li, and X. Chen, “An automated framework for 3D serous pigment epithelium detachment segmentation in SD-OCT images,” Sci. Rep. **6**(1), 21739 (2016). [CrossRef] [PubMed]

**22. **J. Novosel, K. A. Vermeer, J. H. de Jong, Z. Wang, and L. J. van Vliet, “Joint segmentation of retinal layers and focal lesions in 3-D OCT data of topologically disrupted retinas,” IEEE Trans. Med. Imaging **36**(6), 1276–1286 (2017). [CrossRef] [PubMed]

**23. **A. Montuoro, S. M. Waldstein, B. S. Gerendas, U. Schmidt-Erfurth, and H. Bogunović, “Joint retinal layer and fluid segmentation in OCT scans of eyes with severe macular edema using unsupervised representation and auto-context,” Biomed. Opt. Express **8**(3), 1874–1888 (2017). [CrossRef] [PubMed]

**24. **A. G. Roy, S. Conjeti, S. P. K. Karri, D. Sheet, A. Katouzian, C. Wachinger, and N. Navab, “ReLayNet: retinal layer and fluid segmentation of macular optical coherence tomography using fully convolutional networks,” Biomed. Opt. Express **8**(8), 3627–3642 (2017). [CrossRef]

**25. **L. Fang, D. Cunefare, C. Wang, R. H. Guymer, S. Li, and S. Farsiu, “Automatic segmentation of nine retinal layer boundaries in OCT images of non-exudative AMD patients using deep learning and graph search,” Biomed. Opt. Express **8**(5), 2732–2744 (2017). [CrossRef] [PubMed]

**26. **J. Yuan, E. Bae, and X. C. Tai, “A study on continuous max-flow and min-cut approaches,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2010), pp. 2217–2224. [CrossRef]

**27. **J. Yuan, E. Bae, X. C. Tai, and Y. Boykov, “A continuous max-flow approach to Potts model,” in Proceedings of European Conference on Computer Vision (Springer, 2010), pp. 379–392. [CrossRef]

**28. **M. Rajchl, J. Yuan, J. A. White, E. Ukwatta, J. Stirrat, C. M. Nambakhsh, F. P. Li, and T. M. Peters, “Interactive hierarchical-flow segmentation of scar tissue from late-enhancement cardiac MR images,” IEEE Trans. Med. Imaging **33**(1), 159–172 (2014). [CrossRef] [PubMed]

**29. **W. Qiu, J. Yuan, E. Ukwatta, Y. Sun, M. Rajchl, and A. Fenster, “Prostate segmentation: An efficient convex optimization approach with axial symmetry using 3-D TRUS and MR images,” IEEE Trans. Med. Imaging **33**(4), 947–960 (2014). [CrossRef] [PubMed]

**30. **J. Rogowska and M. E. Brezinski, “Evaluation of the adaptive speckle suppression filter for coronary optical coherence tomography imaging,” IEEE Trans. Med. Imaging **19**(12), 1261–1266 (2000). [CrossRef] [PubMed]

**31. **Q. Chen, T. Leng, L. Zheng, L. Kutzscher, J. Ma, L. de Sisternes, and D. L. Rubin, “Automated drusen segmentation and quantification in SD-OCT images,” Med. Image Anal. **17**(8), 1058–1072 (2013). [CrossRef] [PubMed]

**32. **C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 1998), pp. 839–846. [CrossRef]

**33. **M. K. Garvin, M. D. Abràmoff, X. Wu, S. R. Russell, T. L. Burns, and M. Sonka, “Automated 3-D intraretinal layer segmentation of macular spectral-domain optical coherence tomography images,” IEEE Trans. Med. Imaging **28**(9), 1436–1447 (2009). [CrossRef] [PubMed]

**34. **K. Li, X. Wu, D. Z. Chen, and M. Sonka, “Optimal surface segmentation in volumetric images-a graph-theoretic approach,” IEEE Trans. Pattern Anal. Mach. Intell. **28**(1), 119–134 (2006). [CrossRef] [PubMed]

**35. **K. Lee, M. Niemeijer, M. K. Garvin, Y. H. Kwon, M. Sonka, and M. D. Abramoff, “Segmentation of the optic disc in 3-D OCT scans of the optic nerve head,” IEEE Trans. Med. Imaging **29**(1), 159–168 (2010). [CrossRef] [PubMed]

**36. **M. D. Abràmoff, M. K. Garvin, and M. Sonka, “Retinal imaging and image analysis,” IEEE Rev. Biomed. Eng. **3**, 169–208 (2010). [CrossRef] [PubMed]

**37. **F. M. Penha, P. J. Rosenfeld, G. Gregori, M. Falcão, Z. Yehoshua, F. Wang, and W. J. Feuer, “Quantitative imaging of retinal pigment epithelial detachments using spectral-domain optical coherence tomography,” Am. J. Ophthalmol. **153**(3), 515–523 (2012). [CrossRef] [PubMed]

**38. **A. Chambolle, “An algorithm for total variation minimization and applications,” J. Math. Imaging Vis. **20**(1), 89–97 (2004).

**39. **J. K. Udupa, V. R. Leblanc, Y. Zhuge, C. Imielinska, H. Schmidt, L. M. Currie, B. E. Hirsch, and J. Woodburn, “A framework for evaluating image segmentation algorithms,” Comput. Med. Imaging Graph. **30**(2), 75–87 (2006). [CrossRef] [PubMed]

**40. **M. Wu, Q. Chen, and X. He, “Automatic subretinal fluid segmentation of retinal SD-OCT images with neurosensory retinal detachment guided by enface fundus imaging,” IEEE Trans. Biomed. Eng. (2017).