Optica Publishing Group

Automated choroid segmentation based on gradual intensity distance in HD-OCT images

Open Access

Abstract

The choroid is an important structure of the eye and plays a vital role in the pathology of retinal diseases. This paper presents an automated choroid segmentation method for high-definition optical coherence tomography (HD-OCT) images, including Bruch’s membrane (BM) segmentation and choroidal-scleral interface (CSI) segmentation. An improved retinal nerve fiber layer (RNFL) complex removal algorithm is presented to segment the BM by considering the structural characteristics of the retinal layers. By analyzing the characteristics of CSI boundaries, we present a novel algorithm to generate a gradual intensity distance image. An improved 2-D graph search method with curve smoothness constraints is then used to obtain the CSI segmentation. Experimental results with 212 HD-OCT images from 110 eyes in 66 patients demonstrate that the proposed method achieves high segmentation accuracy. The mean choroid thickness difference and overlap ratio between our proposed method and outlines drawn by experts were 6.72 µm and 85.04%, respectively.

© 2015 Optical Society of America

1. Introduction

The choroid is a highly vascular structure of the eye. It provides metabolic support to the retinal pigment epithelium (RPE), supplies blood to the optic nerve and absorbs the excess light penetrating the retina [1]. The choroid plays a vital role in the pathophysiology of many conditions, such as glaucoma [2], serous chorioretinopathy [3], age-related macular degeneration (AMD) [4], choroidal melanoma [5] and Vogt-Koyanagi-Harada disease [6]. Changes in the choroid, in particular in its thickness, have been hypothesized to be of critical importance in these retinal diseases.

Recent reports have demonstrated successful examination and measurement of choroidal thickness in normal and pathologic states using spectral-domain optical coherence tomography (SD-OCT) instruments [1, 7–10]. The enhanced depth imaging (EDI) method of SD-OCT, which places the zero delay line close to the choroid to obtain high-resolution images of the choroid, has enabled quantitative measurement of choroidal thickness and volume, in addition to providing cross-sectional images of the choroid [1, 4, 7, 11–13]. The commercial spectral-domain device Cirrus HD-OCT can generate a high-definition 1-line raster image from 20 B-scans taken at a single location, which enables visualization of the border where choroidal tissue meets sclera and allows choroidal thickness measurements to be performed [14]. EDI-OCT has high agreement with conventional (non-EDI) OCT in the measurement of retinal thickness and volume in healthy eyes and in eyes of patients with chorioretinal disease [15]. The reproducibility of choroidal thickness measurements between images acquired with Cirrus HD-OCT and EDI-OCT is good [11, 15, 16].

Many methods have been proposed for retinal layer segmentation of OCT images, such as the active contour approach [17], intensity-based Markov boundary models [18], a method based on graph theory and dynamic programming [19], boundary classification [20], and graph-based multi-surface segmentation [21, 22]. However, these methods have certain limitations when applied to choroid segmentation, because the interface between the choroid and sclera is very weak and even invisible in some locations.

Recently, several studies have addressed automatic choroid segmentation in OCT images. Zhang et al. [23] presented an automatic segmentation method for the choroidal vessels in SD-OCT images, quantifying the vasculature thickness rather than the choroidal thickness. Kajic et al. [24] devised a two-stage statistical model based on texture and shape for choroidal segmentation of normal and pathologic eyes in a 1060 nm OCT system. Torzicky et al. [25] and Duan et al. [26] presented automated segmentation algorithms to detect the boundary between the choroid and sclera from polarization-sensitive optical coherence tomography (PS-OCT). For EDI-OCT images, graph-search theory was used to automatically segment the choroid [27, 28]. Hu et al. [29] adopted a graph-based multi-stage segmentation approach to identify the choroid in SD-OCT images, which is a semiautomatic method.

Since the position of the zero delay line for EDI-OCT images is closer to the choroid than that for HD-OCT images, the intensity properties of HD-OCT images differ from those of EDI-OCT images. Thus, the existing methods [25–28] are not suitable for choroid segmentation in HD-OCT images. Figure 1(a) shows an HD-OCT image and Fig. 1(b) the corresponding EDI-OCT image. The two images were downloaded from the Heidelberg Engineering website (https://www.heidelbergengineering.com/international/products/spectralis/imaging-modes/edi-oct/). The white regions in Figs. 1(c) and 1(d) approximately correspond to the high-reflectivity regions of Figs. 1(a) and 1(b), respectively. The high-reflectivity regions in the EDI-OCT image mainly belong to the RPE layer, while in the HD-OCT image the reflectivity of the RNFL is similar to that of the RPE, as shown in Figs. 1(c) and 1(d). Therefore, the assumption in Tian’s algorithm [27] that the highest intensities in A-scans belong to the ridge of the RPE is not appropriate for HD-OCT images. The visibility of the CSI boundary in the EDI-OCT image is better than that in the HD-OCT image. A similar comparison between conventional OCT and EDI-OCT images was shown in [30].

Fig. 1 Comparison between HD-OCT and EDI-OCT images. (a) HD-OCT image. (b) EDI-OCT image. (c) High reflectivity regions of HD-OCT image. (d) High reflectivity regions of EDI-OCT image.

This paper presents an automatic choroid segmentation method for conventional Cirrus HD-OCT images, which includes Bruch’s membrane (BM) segmentation and choroidal-scleral interface (CSI) segmentation. To the best of our knowledge, this is the first method to automatically segment the choroid in HD-OCT images. Figure 2 shows an HD-OCT image in which two retinal layers, the retinal nerve fiber layer (RNFL) and the RPE, and the choroid boundaries, BM and CSI, are labeled manually. Considering the characteristics of HD-OCT images, we present two different methods to segment the BM and CSI boundaries, respectively.

Fig. 2 (a) An original HD-OCT image. (b) Several structures are manually labeled. RNFL: Retinal Nerve Fiber Layer, ONL: Outer Nuclear Layer (dark stripe marked with a white straight line), IS/OS: Inner Segment/Outer Segment (bright thin stripe marked with an orange straight line), RPE: Retinal Pigment Epithelium (bright stripe marked with a black straight line), BM: Bruch’s Membrane (green curve), CSI: Choroidal-Scleral Interface (red curve). Vitreous: dark region above RNFL, Choroid: region between BM and CSI, Sclera: sub-CSI region.

2. Methods

2.1 BM segmentation and flattening

Since the BM boundary coincides with the outer boundary of the RPE layer, we propose an improved RPE estimation method. The basic process is as follows:

(1) Image denoising. Average filtering with a 5 × 1 template is used to reduce the noise in HD-OCT images.
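As an illustration of step (1), the 5 × 1 axial moving average can be sketched as follows. This is our own pure-Python illustration, not the authors' code; the function name and the clamped border handling are our choices:

```python
def axial_average_filter(img, k=5):
    """Smooth each column with a k-tap (k x 1) moving average.

    img: 2-D list of rows (img[r][c]). The window is clamped at the
    image borders. The paper uses a 5 x 1 averaging template.
    """
    nr, nc = len(img), len(img[0])
    half = k // 2
    out = [[0.0] * nc for _ in range(nr)]
    for c in range(nc):
        for r in range(nr):
            lo, hi = max(0, r - half), min(nr, r + half + 1)
            out[r][c] = sum(img[rr][c] for rr in range(lo, hi)) / (hi - lo)
    return out
```

Averaging only along the axial (vertical) direction suppresses speckle without blurring layer boundaries horizontally.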

(2) RNFL complex removal. Most of the high-reflectivity regions in HD-OCT images belong to the RNFL and RPE layers. To facilitate segmenting the RPE layer, the RNFL complex (defined as the region between the inner limiting membrane and the outer plexiform layer) must be removed first, because the estimation of the RPE layer is mainly based on high reflectivity. The inner surface of the RNFL complex is equivalent to the bottom boundary of the dark vitreous region shown in Fig. 2. Since the reflectivity of the vitreous region is obviously lower than that of the retina, a constant threshold can be used to extract the vitreous region (see Appendix A in [31]). The high-reflectivity regions close to the estimated inner boundary of the RNFL are then removed.

To further ensure that the remaining high-reflectivity regions belong to the RPE complex rather than the RNFL complex, we exploit the structural characteristics of the retinal layers: the outer nuclear layer (ONL) lies below the RNFL and above the RPE layer, and its reflectivity is lower than that of both. If there are high- and low-reflectivity regions above a remaining high-reflectivity region, it is assumed to be the RPE layer; otherwise it is removed as residual RNFL. The effect of the improved RNFL complex removal on BM segmentation is discussed in Section 3.3.

(3) BM segmentation. After the RNFL complex removal above, the remaining high-reflectivity regions mainly comprise the RPE and inner segment/outer segment (IS/OS) layers. Because the IS/OS layer may be connected only with the inner boundary of the RPE layer, we can consider the IS/OS and RPE layers as one complex. The pixels with large vertical gradient values near the outer boundary of the RPE complex, estimated from the high reflectivity, are taken as the initial BM boundary. Finally, the initial BM boundary is fitted with a 4th-order polynomial to generate the BM boundary.
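The final fitting step of (3) can be sketched in Python (our illustration, assuming NumPy; the function name is not from the paper):

```python
import numpy as np

def fit_bm_boundary(cols, rows, degree=4):
    """Fit a smooth BM boundary through the initial boundary points.

    cols/rows: column indices and axial row positions of the initial
    BM points (one per A-scan where a strong vertical gradient was
    found near the RPE complex). Returns the column range and the
    fitted row position for every column in it.
    """
    coeffs = np.polyfit(cols, rows, degree)      # 4th-order polynomial, as in step (3)
    all_cols = np.arange(min(cols), max(cols) + 1)
    return all_cols, np.polyval(coeffs, all_cols)
```

The low-order fit bridges A-scans where the gradient response was missing or noisy, yielding one BM row per column.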

To facilitate the following CSI segmentation, the HD-OCT image is shifted up or down based on the BM boundary so that the BM is straight. Figure 3 shows the BM segmentation result (green line) in the original and flattened images of Fig. 2.
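The flattening step can be sketched as follows (our own sketch, assuming NumPy; the helper name and zero-filling of vacated pixels are our choices):

```python
import numpy as np

def flatten_to_bm(img, bm_rows, target_row=None):
    """Shift each column vertically so the BM boundary becomes a
    straight horizontal line at target_row (mean BM row by default).
    img: 2-D array (rows = axial direction); bm_rows: BM row per column.
    """
    nr, nc = img.shape
    bm_rows = np.round(bm_rows).astype(int)
    if target_row is None:
        target_row = int(bm_rows.mean())
    out = np.zeros_like(img)
    for c in range(nc):
        shift = target_row - bm_rows[c]   # per-column up/down shift
        src = np.arange(nr) - shift       # source row for each destination row
        valid = (src >= 0) & (src < nr)
        out[valid, c] = img[src[valid], c]
    return out
```

After this shift the choroid-sclera region below the straightened BM can be cropped and processed independently of retinal curvature.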

Fig. 3 BM segmentation result (green line) in the original image (a) and the flattened image (b).

2.2 CSI segmentation

Since the CSI boundary is generally weak or nonexistent, it is difficult to segment the CSI effectively based on gradient or intensity information alone. The choroidal segmentation protocol [32] indicates that the sclera intensity below the CSI boundary should be homogeneous, with no jagged edges. Furthermore, we find that the sclera intensity below the CSI boundary decreases gradually because there are no blood vessels below the CSI, as shown in Fig. 4. Figure 4(b) is the result of smoothing Fig. 4(a) with average filtering. From Fig. 4, we can observe that the gradual intensity distance under the CSI is relatively long in the axial direction (Fig. 4(c)), which is used to generate the primary CSI boundary points. The gradual intensity distance of a point is defined as the number of pixels with monotonically decreasing intensity values in the axial direction under that point.

Fig. 4 Analysis of CSI boundary characteristics. (a) and (b) are the original and smoothed choroid-sclera parts of the flattened image (Fig. 3), respectively. (c) The intensity profile of one column corresponding to the dashed red line in (b), where the CSI boundary position corresponding to the position pointed out in (b) and its gradual intensity distance are marked.

2.3 Generation of gradual intensity distance image

Let $I$ be the smoothed choroid-sclera part of the flattened image, as shown in Fig. 4(b). The intensity difference in the axial direction is $I_a = [1\;\,{-1}]^{T} \ast I$, where $\ast$ denotes the convolution operator. For each column, the gradual intensity distance image $D$ is defined as:

$$D(x,y)=\begin{cases}D(x+1,y)+1 & \text{if } I_a(x,y)\ge I_a(x+1,y)\\ 0 & \text{if } I_a(x,y)< I_a(x+1,y)\end{cases}$$
where $x \in [1, N_r-1]$ and $y \in [1, N_c]$ are the row and column coordinates of the image $I$, respectively, and $N_r$ and $N_c$ are the height and width of $I$. The last (bottom) row of $D$ is initialized as $D(N_r, y) = 0$. Figure 5 shows the gradual intensity distance image of Fig. 4(b), where the intensity value represents the gradual distance in the axial direction. Figure 5 indicates that the gradual intensity distance is long under most of the CSI boundary.
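The bottom-up recursion above can be sketched in Python (our own implementation for illustration, assuming NumPy; it follows the case condition of the equation as written):

```python
import numpy as np

def gradual_intensity_distance(I):
    """Gradual intensity distance image D for a smoothed image I.

    I: 2-D array, rows = axial direction x. I_a is the axial
    difference image; D is filled bottom-up with D(Nr, y) = 0,
    incrementing while I_a(x, y) >= I_a(x+1, y).
    """
    I = np.asarray(I, dtype=float)
    nr, nc = I.shape
    Ia = np.zeros((nr, nc))
    Ia[:-1] = I[:-1] - I[1:]             # axial intensity difference
    D = np.zeros((nr, nc))
    for x in range(nr - 2, -1, -1):      # bottom-up recursion
        grow = Ia[x] >= Ia[x + 1]
        D[x, grow] = D[x + 1, grow] + 1  # distance keeps growing
        # columns where Ia(x,y) < Ia(x+1,y) reset to 0
    return D
```

A column whose intensity decays monotonically below a point accumulates a large distance at that point, which is exactly the sclera signature exploited here.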

Fig. 5 Gradual intensity distance image of Fig. 4(b), where the x-axis, y-axis and scale bar are also shown.

For each column, the position with the maximum difference value of the gradual intensity distance image is taken as the primary CSI boundary point:

$$CSI_0=\left\{(x_{CSI},y)\,\middle|\,x_{CSI}=\underset{x\in[1,N_r]}{\arg\max}\,D_x(x,y),\;y\in[1,N_c]\right\}$$
where $D_x = [1\;\,{-1}]^{T} \ast D$ denotes the difference image of $D$ in the axial direction. Figure 6(a) is the difference image of Fig. 5. The point with the maximum difference value in each column is marked with a red circle (Fig. 6(b)). Most of the maximum difference points are located on the CSI boundary, while a few primary CSI points wrongly lie in sclera regions. To reduce the number of wrong points, the primary points are projected in the horizontal direction. Figure 7(a) shows the projection of the red circle points in Fig. 6(b), where the horizontal coordinate represents the number of red circle points in the same row. Most CSI points are close together in the horizontal projection because we assume that the change of choroid thickness within one B-scan image is not large. Difference values farther than a fixed distance (set to 40 pixels experimentally in this paper) from the row with the maximum number of primary CSI points are set to 0. Figure 7(b) is the refined difference image of Fig. 6(a), and the refined primary CSI boundary points are marked with red circles (Fig. 7(c)). Based on the refined difference image, an improved 2-D graph search method is introduced in the following section to obtain accurate and smooth CSI segmentation results.
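The primary-point extraction and projection-based refinement can be sketched as follows (our illustration, assuming NumPy; `band` stands for the 40-pixel tolerance used in the paper):

```python
import numpy as np

def primary_csi_points(D, band=40):
    """Pick a primary CSI row in every column of the gradual
    intensity distance image D and flag outliers.

    For each column, the row with the maximum axial difference of D
    is the primary CSI point; points farther than `band` rows from
    the most populated row of the horizontal projection are dropped.
    """
    Dx = np.zeros_like(D)
    Dx[:-1] = D[:-1] - D[1:]                       # difference image D_x
    rows = Dx.argmax(axis=0)                       # primary CSI row per column
    counts = np.bincount(rows, minlength=D.shape[0])
    peak = counts.argmax()                         # most populated row
    keep = np.abs(rows - peak) <= band             # projection-based refinement
    return rows, keep
```

Columns flagged `False` would have their difference values zeroed before the graph search, as described above.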

Fig. 6 (a) Difference image of the gradual intensity distance image (Fig. 5). (b) The positions with the maximum difference are marked with red circles in the smoothed choroid-sclera parts (Fig. 4(b)).

Fig. 7 Projection of primary CSI points in the horizontal direction (a). The refined difference image (b) and CSI boundary points marked in red circles (c).

2.4 CSI boundary detection based on 2-D graph search

There are several boundary detection methods, such as the Canny operator, graph search [27] and dynamic programming [19]. In this paper, we propose an improved graph search method that differs from the method in [27]. Our method utilizes the gradual intensity distance difference image to construct a vertex-weighted graph for CSI boundary detection, whereas [27] searches for the CSI boundary based on CSI candidate points with Dijkstra’s algorithm [33]. After construction of the vertex-weighted graph, the CSI boundary is estimated by searching for the optimal path in the graph with the max-flow/min-cut algorithm [34]. Compared with traditional boundary detection methods and dynamic programming, our graph search method has two advantages: (1) traditional boundary detection methods and the dynamic programming method [19] rely mainly on the image gradient. Choroidal vessels in HD-OCT images make it difficult for these methods to detect CSI boundaries, because the gradient of the CSI boundary is generally weaker than that of the choroidal vessels. Our method, in contrast, relies mainly on the difference image of the gradual intensity distance image, which is robust to choroidal vessels, as shown in the section above. (2) Our method integrates neighbor information and smoothness constraints to segment the CSI boundary, constructing intra-arcs in the graph from one vertex to the next vertex in each column and inter-arcs between neighboring columns. Dynamic programming methods use 4-neighborhoods or 8-neighborhoods to construct a graph, without considering column constraints.

2.4.1 Graph construction

Recently, the graph search method has been extensively studied, and several improved algorithms have been proposed to segment 3-D or higher-dimensional objects [21, 22, 35]. We first briefly recall the original graph search model introduced in [36]. The goal is to simultaneously segment multiple surfaces representing the boundaries of a 3-D object in volume data. For detecting a surface, a cost value is assigned to each vertex in the volume image, inversely related to the likelihood that the desired surface contains that vertex. An optimal surface is then found with the graph search method such that 1) the surface is a feasible surface satisfying the smoothness constraints, and 2) the total cost of the surface is the minimum among all feasible surfaces defined in the volume image.

For a 2-D image, our goal is to detect the boundaries of 2-D objects using the graph search method. The boundary is a curve, not a surface, so the construction of the graph must be adjusted to our requirements. On the other hand, the main weakness of the original framework is that only voxel weights in the graph are used to calculate the surface cost. The relationships from one node to its adjacent nodes are treated as equally important, which can result in non-smooth surfaces. Building on the 3-D graph search method, we introduce an improved 2-D graph search method with curve smoothness constraints to detect smooth boundaries in 2-D images.

Consider a 2-D image I(y, x) of size Y × X, where Y and X denote the image size in the y and x directions, respectively. The curves in I are oriented as shown in Fig. 8(a). A curve is defined by a function C: y → C(y), where y ∈ {0, …, Y–1} and C(y) ∈ {0, …, X–1}. For each column, the node set {I(y, x) | 0 ≤ x < X} is denoted by c(y), which forms a column parallel to the x direction.

Fig. 8 The curve detection problem. (a) The curves Ci and Cj in a 2-D image. (b) The constructed directed graph of two adjacent columns. Dark solid lines are intra-arcs; orange lines are inter-arcs; green solid lines are smoothness-constraint arcs; and the dashed lines are optional.

A node-weighted directed graph $G=(V,E)$ is constructed from the image $I$ as follows: every node $V(y,x) \in V$ represents one and only one pixel $I(y,x) \in I$. Under the 4-neighbor setting, the column $c(y)$ is adjacent to $c(y-1)$ and $c(y+1)$. In the rest of the paper, the 4-neighbor setting is assumed.

Intra-column arcs $E_{intra}$: an arc from node $V(y,x)$ ($x>0$) to $V(y,x-1)$ is created within the same column, as shown in Fig. 8(b). If a node in a column belongs to the minimum closure set, all nodes below it in the same column must also belong to the closure set. The directed arcs with $+\infty$ weight are defined as follows:

$$E_{intra}=\left\{\left\langle V(y,x),V(y,x-1)\right\rangle \mid x>0\right\}$$

Inter-column arcs $E_{inter}$: in the original graph search model, the hard shape constraints are set by the height difference between adjacent columns [36], which ensures that the curve heights of neighboring columns satisfy $|C(y+1)-C(y)|\le\Delta_y$. A directed arc with $+\infty$ weight is created from each node $V(y,x)$ in column $c(y)$ to the node $V(y+1,\max(x-\Delta_y,0))$, as shown in Fig. 8(b) (solid orange lines). These hard shape constraints only limit which curves are feasible, because the weights of the arcs are infinite; the resulting curves are still not smooth. If finite weights are integrated into the graph, they penalize the curvature of the curves; arcs with finite weight are therefore regarded as curve smoothness constraints. For two neighboring columns $c(y)$ and $c(y')$, directed arcs with weight $f(\Delta I)$ are added to the graph from the nodes $V(y,x)$ to the nodes $V(y',x)$, as shown in Fig. 8(b) (solid green lines), i.e.

$$\begin{aligned}E_{inter}=\;&\left\{\left\langle V(y,x),V(y+1,\max(x-\Delta_y,0))\right\rangle \mid y\in\{0,\dots,Y-2\}\right\}\\ \cup\;&\left\{\left\langle V(y,x),V(y-1,\max(x-\Delta_y,0))\right\rangle \mid y\in\{1,\dots,Y-1\}\right\}\\ \cup\;&\left\{\left\langle V(y,x),V(y-1,x)\right\rangle \mid y\in\{1,\dots,Y-1\}\right\}\\ \cup\;&\left\{\left\langle V(y,x),V(y+1,x)\right\rangle \mid y\in\{0,\dots,Y-2\}\right\}\end{aligned}$$
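The arc sets above can be enumerated as a structural sketch (our own illustration; weights and the max-flow computation are omitted, and `delta` stands for the hard constraint $\Delta_y$):

```python
def build_graph_arcs(Y, X, delta=2):
    """Enumerate the arcs of the directed graph G = (V, E) for a
    Y x X image. Nodes are (y, x) pairs. `intra` arcs run down each
    column, `hard` arcs encode the +inf shape constraint between
    neighboring columns, and `smooth` arcs connect same-height nodes
    of neighboring columns (finite weight f in the paper).
    """
    intra = [((y, x), (y, x - 1)) for y in range(Y) for x in range(1, X)]
    hard, smooth = [], []
    for y in range(Y):
        for x in range(X):
            if y + 1 < Y:
                hard.append(((y, x), (y + 1, max(x - delta, 0))))
                smooth.append(((y, x), (y + 1, x)))
            if y - 1 >= 0:
                hard.append(((y, x), (y - 1, max(x - delta, 0))))
                smooth.append(((y, x), (y - 1, x)))
    return intra, hard, smooth
```

In a full implementation the hard arcs receive infinite capacity and the smooth arcs receive $f(\Delta I)$ before the min-cut is computed.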

2.4.2 Cost function calculation

To find the optimal curve, a weight is calculated for each node of $I(y,x)$ in a manner similar to [36]. Let $c_i(y,x)$ be the cost function defining the inverse likelihood that the pixel $I(y,x)$ belongs to the curve. The weight assigned to the node $V(y,x)$ is defined as

$$w_i(y,x)=\begin{cases}c_i(y,x) & \text{if } x=0\\ c_i(y,x)-c_i(y,x-1) & \text{otherwise}\end{cases}$$
We first apply our proposed gradual intensity distance algorithm on the restricted HD-OCT image, and then produce the difference image of the gradual intensity distance image, as shown in Fig. 7(b), which is assigned to the cost function $c_i(y,x)$.
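The node-weight transform above is a one-liner per column; a minimal sketch (our naming, with the cost image stored as `cost[y][x]`, i.e. one list per column):

```python
def node_weights(cost):
    """Node weights w(y, x) from the cost image c(y, x): the cost
    itself at x = 0 and the difference c(y,x) - c(y,x-1) elsewhere
    (the standard closed-set transform of the graph search model).
    cost: list of columns, cost[y][x].
    """
    return [[col[x] if x == 0 else col[x] - col[x - 1]
             for x in range(len(col))]
            for col in cost]
```

With this transform, the total weight of the closed set below a curve telescopes back to the sum of the curve's own costs.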

To incorporate the curve smoothness constraint, it is derived from the similarity of two neighboring pixels $I(y,x)$ and $I(y',x')$. For two adjacent pixels $I(y,x)$ and $I(y+1,x)$, the gray difference is denoted as $\Delta I=I(y,x)-I(y+1,x)$. The curve smoothness penalty function $f(\Delta I)$ is designed to restrain the curvature of the curve between two adjacent columns $c(y)$ and $c(y')$ as follows

$$f\left(I(y,x)-I(y',x')\right)=e^{-\frac{\left(I(y,x)-I(y',x')\right)^{2}}{2\sigma^{2}}}$$

In conclusion, the boundary segmentation problem is translated into an energy minimization problem via the optimal surface problem [36]. Let $c$ be a set of $n$ curves $c_i$; the total energy of $c$ is defined as

$$E(c)=\sum_{i=1}^{n}\sum_{I(y,x)\in c}c_i(y,x)+\alpha\sum_{i=1}^{n}\sum_{\left\langle I(y,x),I(y',x')\right\rangle\in c}f\left(I(y,x)-I(y',x')\right)$$
The first term is the boundary energy, equal to the total cost of all nodes on the curve. The boundary term acts as the image internal force that drives the curve to the desired object boundary. The second term is the curve smoothness constraint, represented by the similarity of two adjacent pixels in the x-direction. The parameter α is the weight adjusting the curve smoothness, taken as 10 experimentally in this paper. Its role is to penalize curvature deviations between two neighboring pixels, which guarantees curve smoothness. The main difference between the improved 2-D graph search method and state-of-the-art graph search methods is the weight of each arc. The weight in our method is related to the gradual intensity distance and the gray difference, while in state-of-the-art graph search methods several parameters must be trained to calculate the weight, which adds complexity. To obtain smoother CSI boundaries, we adopt a hard smoothness constraint in the horizontal direction similar to the state-of-the-art graph search methods.
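The energy of a single candidate curve can be evaluated directly; a minimal sketch (our illustration for one curve, so the outer sums over $i$ collapse; α = 10 follows the paper, while σ is an illustrative choice):

```python
import math

def curve_energy(cost, img, curve, alpha=10.0, sigma=8.0):
    """Total energy E(c) of one candidate curve: summed node costs
    plus alpha times the Gaussian penalty f on the gray difference
    of curve pixels in adjacent columns.

    cost, img: 2-D lists indexed [y][x]; curve: x-position per column y.
    """
    boundary = sum(cost[y][curve[y]] for y in range(len(curve)))
    smooth = sum(math.exp(-(img[y][curve[y]] - img[y + 1][curve[y + 1]]) ** 2
                          / (2 * sigma ** 2))
                 for y in range(len(curve) - 1))
    return boundary + alpha * smooth
```

The graph search does not enumerate curves this way, of course; the min-cut finds the minimizer of this energy over all feasible curves at once.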

Figure 9 shows the CSI segmentation results with the original and improved methods. The CSI boundary of the improved method is smoother than that of the original graph search method, for example in the region marked by the yellow circle, because the curve smoothness constraint prevents large differences between neighboring columns, which is more consistent with the choroidal segmentation protocol (no jagged edges).

Fig. 9 CSI segmentation comparison between the original method and the improved method. (a) The segmentation obtained using the original graph search method. (b) The segmentation achieved using the proposed graph search with the curve smoothness constraint.

2.5 Evaluation study

To quantitatively evaluate the segmentations, we compared the results of our automated segmentation method with manual segmentations of the choroid drawn by expert readers. Quality control was established by having a third party grade the manual raters’ practice segmentations; feedback was given to improve their accuracy before the raters performed the segmentations for the study.

Two different analyses were adopted for this evaluation: (1) a quantification of the discrepancies found in the manual segmentations drawn by different readers (inter-observer agreement) and by the same reader at different times (intra-observer agreement); (2) a comparison of the differences between the automated method and the manual segmentations against the observed inter-observer and intra-observer agreements. Observer agreement was evaluated on all of the test data from the manual segmentations by the two experts.

We employed eight metrics to assess the differences between pairs of segmentation methods: correlation coefficient (cc), p-value of the Mann-Whitney U-test (p-value), absolute boundary difference (ABD), mean boundary difference (MBD), thickness difference (TD), overlap ratio (Overlap), overestimated ratio (Overest) and underestimated ratio (Undest). The ABD and MBD measure the absolute and mean differences in the axial direction for each A-scan, respectively. Their mean and standard deviation (std) values are computed across the different B-scans available in the data set:

$$\overline{ABD}(X;Y)=\frac{1}{K}\sum_{k=1}^{K}\frac{1}{n}\sum_{i=1}^{n}\left|X_{ki}-Y_{ki}\right|$$
$$std(ABD)(X;Y)=\sqrt{\frac{1}{K}\sum_{k=1}^{K}\left(\frac{1}{n}\sum_{i=1}^{n}\left|X_{ki}-Y_{ki}\right|-\overline{ABD}(X;Y)\right)^{2}}$$
$$\overline{MBD}(X;Y)=\frac{1}{K}\sum_{k=1}^{K}\frac{1}{n}\sum_{i=1}^{n}\left(X_{ki}-Y_{ki}\right)$$
$$std(MBD)(X;Y)=\sqrt{\frac{1}{K}\sum_{k=1}^{K}\left(\frac{1}{n}\sum_{i=1}^{n}\left(X_{ki}-Y_{ki}\right)-\overline{MBD}(X;Y)\right)^{2}},$$
where $X_{ki}$ and $Y_{ki}$ indicate the coordinates of the $i$th column in the axial direction of B-scan $k$, produced by method $X$ and grader $Y$, respectively, and $K$ denotes the number of B-scans. The TD, Overlap, Overest and Undest metrics measure the choroid area difference. Their mean and standard deviation values are computed in the same way as for the ABD:
$$\overline{TD}(X;Y)=\frac{1}{K}\sum_{k=1}^{K}\frac{1}{n}\sum_{i=1}^{n}\left(T_{X_{ki}}-T_{Y_{ki}}\right)$$
$$std(TD)(X;Y)=\sqrt{\frac{1}{K}\sum_{k=1}^{K}\left(\frac{1}{n}\sum_{i=1}^{n}\left(T_{X_{ki}}-T_{Y_{ki}}\right)-\overline{TD}(X;Y)\right)^{2}}$$
$$\overline{Overlap}(X;Y)=\frac{1}{K}\sum_{k=1}^{K}\frac{\left|X_{k}\cap Y_{k}\right|}{\left|X_{k}\cup Y_{k}\right|}$$
$$std(Overlap)(X;Y)=\sqrt{\frac{1}{K}\sum_{k=1}^{K}\left(\frac{\left|X_{k}\cap Y_{k}\right|}{\left|X_{k}\cup Y_{k}\right|}-\overline{Overlap}(X;Y)\right)^{2}}$$
$$\overline{Overest}(X;Y)=\frac{1}{K}\sum_{k=1}^{K}\frac{\left|\bar{X}_{k}\cap Y_{k}\right|}{\left|X_{k}\right|}$$
$$std(Overest)(X;Y)=\sqrt{\frac{1}{K}\sum_{k=1}^{K}\left(\frac{\left|\bar{X}_{k}\cap Y_{k}\right|}{\left|X_{k}\right|}-\overline{Overest}(X;Y)\right)^{2}}$$
$$\overline{Undest}(X;Y)=\frac{1}{K}\sum_{k=1}^{K}\frac{\left|X_{k}\cap \bar{Y}_{k}\right|}{\left|X_{k}\right|}$$
$$std(Undest)(X;Y)=\sqrt{\frac{1}{K}\sum_{k=1}^{K}\left(\frac{\left|X_{k}\cap \bar{Y}_{k}\right|}{\left|X_{k}\right|}-\overline{Undest}(X;Y)\right)^{2}},$$
where $T_{X_{ki}}$ and $T_{Y_{ki}}$ indicate the choroidal thicknesses of the $i$th column in the axial direction of B-scan $k$, produced by method $X$ and grader $Y$, respectively. $X_k$ and $Y_k$ indicate the regions inside the segmented choroid boundaries of B-scan $k$, and $\bar{X}_k$ and $\bar{Y}_k$ indicate the complements of $X_k$ and $Y_k$, respectively. The operators $\cup$ and $\cap$ denote union and intersection, respectively. The correlation coefficients were computed between the areas of the choroid segmented by our method and the segmentations drawn by hand by the expert graders (as well as between different graders and sessions), measuring the linear dependence using each scan as an observation. We used the Mann-Whitney U-test to measure possible statistical differences in the measured area between our segmentations and those drawn by hand.
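The area-based metrics for a single B-scan can be sketched as follows (our illustration, assuming NumPy and boolean choroid masks; the per-scan values would then be averaged over the $K$ B-scans as in the equations above):

```python
import numpy as np

def region_metrics(X, Y):
    """Overlap, overestimated and underestimated ratios between two
    binary choroid masks X (method) and Y (grader) for one B-scan,
    following the area definitions above.
    """
    X, Y = np.asarray(X, bool), np.asarray(Y, bool)
    inter = np.logical_and(X, Y).sum()
    union = np.logical_or(X, Y).sum()
    overlap = inter / union                          # |X ∩ Y| / |X ∪ Y|
    overest = np.logical_and(~X, Y).sum() / X.sum()  # |X̄ ∩ Y| / |X|
    undest = np.logical_and(X, ~Y).sum() / X.sum()   # |X ∩ Ȳ| / |X|
    return overlap, overest, undest
```

The overlap ratio here is the Jaccard index of the two choroid regions.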

3. Results

The algorithm was implemented in Matlab and run on a 2.83 GHz Pentium 4 PC with 3.37 GB of memory. We obtained 212 HD-OCT images from 110 eyes in 66 patients to test our automated choroid segmentation algorithm. All of the images were obtained using a Cirrus HD-OCT device (Carl Zeiss Meditec, Inc., Dublin, CA). Each HD-OCT image has a size of 1024 × 1024 pixels and a 2 mm axial depth (corresponding to 1024 pixels). The acquisition time of one HD-OCT image is 0.08 seconds in theory, but in practice it is influenced by the device speed and patient cooperation, so taking one image actually requires more than 3 seconds. The segmentation time for each image is approximately 9 seconds. Following segmentation, each HD-OCT image was cropped by 150 pixels on each side to remove the influence of completely dark columns introduced by the imaging device.

3.1 Quantitative evaluation

Table 1 shows the variability in the choroid segmentations by different graders (inter-observer agreement) and by the same grader in different sessions (intra-observer agreement), where Ai represents the segmentations of the first grader in the ith session and Bi those of the second grader in the ith session. ABD_BM and ABD_CSI are the absolute boundary differences of the BM and CSI boundaries, respectively; MBD_BM and MBD_CSI are the corresponding mean boundary differences. For the assessment of the differences between the segmentations done by different graders, the union of both sessions was considered for each grader: A1&2 and B1&2 represent the first and second grader, respectively.

Table 1. Within-expert and between-expert correlation coefficients (cc), paired U-test p-values, absolute boundary difference, mean boundary difference, thickness difference, overlap ratio, overestimated ratio and underestimated ratio evaluation between the manual segmentations.

We compared the segmentations produced by our algorithm with those drawn by the two graders in the test images by creating an “average expert (Avg. Exp.) segmentation” for the BM and CSI boundaries, respectively. This was generated by averaging the four segmentations drawn by the two experts in the two separate sessions. Table 2 summarizes the results of this comparison. Figure 10 shows the thickness difference of each column of all tested images between our segmentation method and the average expert segmentations.

Table 2. Correlation coefficients (cc), paired U-test p-values, absolute boundary difference, mean boundary difference, thickness difference, overlap ratio, overestimated ratio and underestimated ratio between our segmentation method (Our Seg.) and expert segmentations.

Fig. 10 Thickness difference of each column of all tested images, where the blue line is the mean thickness difference of each column.

3.2 Qualitative evaluation

Figure 11 shows the segmentation results for a choroid with dark columns on each side, where the green and red curves are the BM and CSI boundaries, respectively. To exclude the influence of the dark columns, the HD-OCT images were cropped by 150 pixels (the maximum width of the dark columns in our test images) on each side. Figure 12 shows the segmentation results for a thin choroid: the choroid is thinner than normal due to high myopia. In addition, there are intra-retinal fluid and retinal blood vessels, as marked in Fig. 12(a), which influence the choroid segmentation because the intra-retinal fluid disturbs the retinal layer structure and the retinal blood vessels decrease the contrast of the choroidal regions. Figure 13 shows the segmentation results for a weak CSI boundary. Due to the influence of choroidal blood vessels (marked in Fig. 13(a)), the CSI boundary is weak or even lost.

Fig. 11 Segmentation results for choroid with dark columns on each side. (a) Original image. (b) Our method. (c) Expert 1. (d) Expert 2.

Fig. 12 Segmentation results for thin choroid. (a) Original image. (b) Our method. (c) Expert 1. (d) Expert 2.

Fig. 13 Segmentation results for weak CSI boundary. (a) Original image. (b) Our method. (c) Expert 1. (d) Expert 2.

Figure 14 shows two segmentation mistakes that occur when an epiretinal membrane with low reflectivity (indicated by the white arrow in Fig. 14(a)) or a large choroidal vessel (indicated by the white arrow in Fig. 14(b)) is present in an HD-OCT image. For the upper-left image in Fig. 14, the absolute boundary differences of BM and CSI are 17.16 µm and 38.78 µm, respectively, and the overlap is 36.87%. For the upper-right image, the absolute boundary differences of BM and CSI are 4.75 µm and 20.67 µm, respectively, and the overlap is 91.76%. Thus, the epiretinal membrane influences both the BM and CSI segmentation, because the CSI segmentation depends on the BM segmentation, whereas the large choroidal vessel only influences the CSI segmentation locally.

Fig. 14 Two segmentation mistakes. Segmentation results of our method (Upper row) and one expert (Bottom row). (a) Epiretinal membrane. (b) Large choroidal vessel.

3.3 BM segmentation comparison

To make the RPE segmentation algorithm [31] proposed for SD-OCT images suitable for BM segmentation of HD-OCT images, we presented an improved algorithm that considers the structural characteristics of the retinal layers, as described in the Methods section. Figure 15 shows an example of BM segmentation with Chen’s algorithm [31] and with the improved algorithm.

Fig. 15 BM segmentation comparison. (a) Original image. (b) Estimated high reflectivity regions. (c) Remained high reflectivity regions after Chen’s RNFL complex removal. (d) BM segmentation result with Chen’s algorithm. (e) Remained high reflectivity regions after our RNFL complex removal. (f) BM segmentation result with our improved algorithm.

Table 3 shows the absolute boundary difference and mean boundary difference of the BM segmentation results of Chen’s algorithm [31] and our improved algorithm for all test images. The mean absolute boundary difference of our improved algorithm is lower than that of Chen’s algorithm, and the standard deviations of the absolute and mean boundary differences of our algorithm are clearly lower than those of Chen’s algorithm. Figure 16 shows the mean boundary difference of Chen’s algorithm and our improved algorithm for each image.


Table 3. Absolute boundary difference (ABD) and mean boundary difference (MBD) of the BM segmentation results for Chen’s algorithm (Chen’s Alg.), our improved algorithm (Our Imp. Alg.) and average expert (Avg. Exp.) segmentations.


Fig. 16 Mean boundary difference comparison for all test images


3.4 CSI segmentation comparison

Since no CSI segmentation method currently exists for HD-OCT images, we compared our CSI segmentation with Tian’s method [27] for EDI-OCT choroidal images. Here, both methods used the same BM segmentation to ensure the comparability of their CSI segmentations. Since, as analyzed in the introduction, the highest intensities are not suitable for BM segmentation of HD-OCT images, our improved BM segmentation algorithm was used for both.

Figure 17 shows the CSI segmentation comparison with Tian’s method [27]. In Tian’s method, points with locally minimum intensity below the BM boundary are taken as CSI candidate points. Due to the influence of speckle noise and choroidal vasculature, there were many wrong CSI candidate points (shown in Fig. 17(d)), which makes the subsequent CSI segmentation difficult. The edge cost image (Fig. 17(f)) indicates that the gradual intensity distance is a more robust characteristic of the CSI boundary than local intensity minima, because most points with low edge cost values were located at the real CSI boundary. Based on the edge cost image, the accurate CSI boundary can be obtained with the improved graph search, as shown in Figs. 17(g) and 17(b).
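The gradual intensity distance that underlies the edge cost image can be illustrated with a short NumPy sketch. This is our own simplified reading of the recursion, not the authors' code: for each pixel it counts the length of the run of non-increasing intensities extending downward in its column, which is large just above the smooth intensity transition at the CSI.

```python
import numpy as np

def gradual_intensity_distance(img):
    """Column-wise gradual intensity distance for a flattened
    choroid-sclera image (rows = depth, columns = A-scans).
    D[x, y] = D[x+1, y] + 1 if I(x, y) >= I(x+1, y), else 0."""
    rows, cols = img.shape
    D = np.zeros((rows, cols), dtype=int)
    for y in range(cols):
        # bottom-up recursion: D at the last row is 0 by definition
        for x in range(rows - 2, -1, -1):
            if img[x, y] >= img[x + 1, y]:
                D[x, y] = D[x + 1, y] + 1
            else:
                D[x, y] = 0
    return D
```

A per-column candidate boundary position could then be taken at the maximum of `D` (or of its vertical difference image, as the paper does) in each column, e.g. `np.argmax(D, axis=0)`.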


Fig. 17 Comparison of CSI segmentation between Tian’s and our methods. (a) Original image. (b) CSI segmentation results with Tian’s (red curve) and our (green curve) methods. (c) Flattened choroid-sclera parts. (d) CSI candidate points (red circle) with Tian’s method. (e) Primary CSI boundary (yellow curve) with Tian’s method. (f) CSI edge cost image with our method. (g) CSI segmentation result (green curve) with our method.


Table 4 shows the absolute boundary difference and mean boundary difference of the CSI segmentation results of Tian’s algorithm [27] and our algorithm for all test images. Both differences of our algorithm were lower than those of Tian’s algorithm, as were their standard deviations. Thus, Tian’s algorithm is not suitable for the choroid segmentation of HD-OCT images.


Table 4. Absolute boundary difference (ABD) and mean boundary difference (MBD) of the CSI segmentation results for Tian’s algorithm (Tian’s Alg.), our algorithm (Our Alg.) and average expert (Avg. Exp.) segmentations.

4. Discussion

Table 1 indicates that the correlations between measured choroid areas were very high, both for segmentations drawn by different graders and for segmentations drawn by the same graders at different times. The absolute boundary difference of the BM boundaries was markedly lower than that of the CSI boundaries, because the BM boundary is generally easier to distinguish than the CSI boundary, as shown in Fig. 1. For the same reason, the standard deviations of ABD_BM and MBD_BM were also lower than those of ABD_CSI and MBD_CSI. The choroidal thickness difference is small, which indicates that manual choroid segmentation is feasible. The overlap ratios in choroid areas were high both within and between graders, and the overestimated and underestimated ratios were low. The high p-values in the U-test indicate that no statistically significant differences were found in the distribution of areas. These results indicate that measurement and quantification of the choroid in HD-OCT images is robust and repeatable, whether segmentations are made by different graders or by the same grader at different times.
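The region-agreement measures used throughout this evaluation (overlap, overestimated and underestimated ratios) can be sketched for a single image as follows. This is a minimal NumPy illustration under our own reading of the evaluation equations, with `ref` taken as the reference mask and `seg` as the mask being compared, not the authors' implementation.

```python
import numpy as np

def region_agreement(ref, seg):
    """Agreement between two binary choroid masks:
    overlap      = |ref ∩ seg| / |ref ∪ seg|
    overest      = |~ref ∩ seg| / |ref|   (area in seg outside ref)
    undest       = |ref ∩ ~seg| / |ref|   (area in ref missed by seg)"""
    ref = ref.astype(bool)
    seg = seg.astype(bool)
    return {
        "overlap": (ref & seg).sum() / (ref | seg).sum(),
        "overest": (~ref & seg).sum() / ref.sum(),
        "undest": (ref & ~seg).sum() / ref.sum(),
    }
```

Averaging these per-image values over all K test images, and taking the standard deviation of the per-image values, gives the summary statistics reported in the tables.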

Table 2 indicates that the correlations between the areas computed by our method and those produced by the experts were high, and close to the within-expert and between-expert correlations for the same data set. Since the BM boundary is more visible than the CSI boundary, the absolute and mean boundary differences of the BM boundaries were markedly lower than those of the CSI boundaries, and the standard deviation of the CSI boundaries was larger than that of the BM boundaries. The choroid thickness obtained with our method is close to that of the manual segmentations: the thickness differences in most columns are small, and the mean thickness difference of each column (blue line) is close to 0, as shown in Fig. 10. The measured overlap ratios were lower than those between segmentations drawn by individual graders, but still showed good agreement (around 85.04% with the average expert segmentation). The overestimated ratios were higher than the underestimated ratios, which indicates that the choroid areas obtained with our method were generally larger than those drawn by the experts. The p-value (p < 0.05) against the average expert segmentation in the U-test indicates that statistically significant differences were found in the distribution of areas.

Figures 11-13 show that the proposed method obtains satisfactory results under different conditions, as verified by comparison with the experts’ segmentations. The CSI boundaries produced by our method are smooth, which satisfies the choroidal segmentation protocol [29].

Although our method is effective in most cases, several failures occur, as shown in Fig. 14. An epiretinal membrane with low reflectivity splits the highly reflective RNFL, so parts of the RNFL regions are retained after the RNFL complex removal, and the segmented choroid boundary under the epiretinal membrane becomes inaccurate. A large choroidal vessel greatly distorts the CSI boundary, which makes the CSI smoothness constraint of the proposed method unsuitable at the strongly distorted boundary.

Figure 15 demonstrates that the previous RNFL complex removal proposed for SD-OCT images is not well suited to HD-OCT images, because many RNFL regions still remained after the RNFL complex removal, as shown in Fig. 15(c). With the improved algorithm, most of the RNFL regions were removed, as shown in Fig. 15(e). The removal of RNFL regions is important for BM segmentation based on the high-reflectivity property. Figures 15(d) and 15(f) are the BM segmentation results based on Figs. 15(c) and 15(e), respectively, and indicate that the improved algorithm obtains a more accurate BM segmentation than Chen’s algorithm [31]. Figure 16 shows that for most images the BM segmentation with Chen’s algorithm was the same as that with our improved algorithm, but for several images Chen’s results were not acceptable, with mean boundary differences larger than 35µm. Thus, the improved algorithm is more robust than Chen’s algorithm.

In conclusion, we present an automated choroid segmentation method for HD-OCT images that exploits the characteristics of the choroid. For BM segmentation, an improved algorithm based on reflectivity and vertical gradient is proposed; to better distinguish the high-reflectivity regions (RNFL complex and RPE), the structural characteristics of the retinal layers (namely, that the ONL lies below the RNFL and above the RPE layer) are also used. For CSI segmentation, an improved graph search algorithm with curve smoothness constraints is proposed, where the edge cost image of the graph search is constructed from the difference image of the gradual intensity distance image. Quantitative and qualitative experimental results demonstrate that the proposed method achieves relatively good segmentation performance for normal and abnormal cases when compared to expert segmentations. The comparisons with the previous methods in sections 3.3 and 3.4 show that the BM segmentation of Chen’s method (for SD-OCT images) and the CSI segmentation of Tian’s method (for EDI-OCT images) are not applicable to HD-OCT images, because the intensity properties of SD-OCT, EDI-OCT and HD-OCT images differ. The algorithm does have limitations when large lesion regions (epiretinal membranes) or large choroidal vessels are present. Future refinement and development of this algorithm will be pursued to improve the segmentation of the choroid layers in these cases.
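The role of the curve smoothness constraint in the CSI boundary search can be illustrated with a simple dynamic-programming analogue. This sketch is not the paper's exact graph construction (which builds a directed graph with intra- and inter-arcs); it only shows how a hard limit on the row change between adjacent columns yields a smooth minimum-cost boundary on an edge cost image.

```python
import numpy as np

def trace_boundary(cost, max_jump=2):
    """Minimum-cost left-to-right path through a 2-D edge cost image,
    with the row change between adjacent columns limited to max_jump
    (a hard smoothness constraint). Returns one row index per column."""
    rows, cols = cost.shape
    acc = cost.astype(float).copy()      # accumulated cost
    back = np.zeros((rows, cols), dtype=int)
    for y in range(1, cols):
        for x in range(rows):
            lo, hi = max(0, x - max_jump), min(rows, x + max_jump + 1)
            prev = acc[lo:hi, y - 1]     # reachable predecessors
            j = int(np.argmin(prev))
            acc[x, y] += prev[j]
            back[x, y] = lo + j
    # backtrack from the cheapest endpoint in the last column
    path = [int(np.argmin(acc[:, -1]))]
    for y in range(cols - 1, 0, -1):
        path.append(int(back[path[-1], y]))
    return path[::-1]
```

With a small `max_jump`, the recovered boundary cannot follow locally attractive but jagged detours such as vessel walls, which mirrors the motivation for the smoothness constraint described above.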

Acknowledgments

This work was supported by the Six Talent Peaks Project in Jiangsu Province under Grant no. 2014-SWYY-024, the National Basic Research Program of China (973 Program) under Grant no. 2013CB967500, the Fundamental Research Funds for the Central Universities under Grant no. 30920140111004, Qing Lan Project under Grant no. D040201, and the National Natural Science Foundation of China under Grant no. 81070743/81001427.

References and links

1. R. Margolis and R. F. Spaide, “A pilot study of enhanced depth imaging optical coherence tomography of the choroid in normal eyes,” Am. J. Ophthalmol. 147(5), 811–815 (2009). [CrossRef]   [PubMed]  

2. Z. Q. Yin, T. J. Vaegan, T. J. Millar, P. Beaumont, and S. Sarks, “Widespread choroidal insufficiency in primary open-angle glaucoma,” J. Glaucoma 6(1), 23–32 (1997). [CrossRef]   [PubMed]  

3. M. Gemenetzi, G. De Salvo, and A. J. Lotery, “Central serous chorioretinopathy: an update on pathogenesis and treatment,” Eye (Lond.) 24(12), 1743–1756 (2010). [CrossRef]   [PubMed]  

4. R. F. Spaide, “Age-related choroidal atrophy,” Am. J. Ophthalmol. 147(5), 801–810 (2009). [CrossRef]   [PubMed]  

5. V. L. Torres, N. Brugnoni, P. K. Kaiser, and A. D. Singh, “Optical coherence tomography enhanced depth imaging of choroidal tumors,” Am. J. Ophthalmol. 151(4), 586–593 (2011). [CrossRef]   [PubMed]  

6. I. Maruko, T. Iida, Y. Sugano, H. Oyamada, T. Sekiryu, T. Fujiwara, and R. F. Spaide, “Subfoveal choroidal thickness after treatment of Vogt-Koyanagi-Harada disease,” Retina 31(3), 510–517 (2011). [CrossRef]   [PubMed]  

7. R. F. Spaide, H. Koizumi, and M. C. Pozzoni, “Enhanced depth imaging spectral-domain optical coherence tomography,” Am. J. Ophthalmol. 146(4), 496–500 (2008). [CrossRef]   [PubMed]  

8. H. Tanabe, Y. Ito, and H. Terasaki, “Choroid is thinner in inferior region of optic disks of normal eyes,” Retina 32(1), 134–139 (2012). [CrossRef]   [PubMed]  

9. C. V. Regatieri, L. Branchini, J. G. Fujimoto, and J. S. Duker, “Choroidal imaging using spectral-domain optical coherence tomography,” Retina 32(5), 865–876 (2012). [CrossRef]   [PubMed]  

10. T. A. Moreno, R. V. O’Connell, S. J. Chiu, S. Farsiu, M. T. Cabrera, R. S. Maldonado, D. Tran-Viet, S. F. Freedman, D. K. Wallace, and C. A. Toth, “Choroid development and feasibility of choroidal imaging in the preterm and term infants utilizing SD-OCT,” Invest. Ophthalmol. Vis. Sci. 54(6), 4140–4147 (2013). [CrossRef]   [PubMed]  

11. L. Branchini, C. V. Regatieri, I. Flores-Moreno, B. Baumann, J. G. Fujimoto, and J. S. Duker, “Reproducibility of choroidal thickness measurements across three spectral domain optical coherence tomography systems,” Ophthalmology 119(1), 119–123 (2012). [CrossRef]   [PubMed]  

12. J. Chhablani, G. Barteselli, H. Wang, S. El-Emam, I. Kozak, A. L. Doede, D. U. Bartsch, L. Cheng, and W. R. Freeman, “Repeatability and reproducibility of manual choroidal volume measurements using enhanced depth imaging optical coherence tomography,” Invest. Ophthalmol. Vis. Sci. 53(4), 2274–2280 (2012). [CrossRef]   [PubMed]  

13. L. Shao, L. Xu, C. X. Chen, L. H. Yang, K. F. Du, S. Wang, J. Q. Zhou, Y. X. Wang, Q. S. You, J. B. Jonas, and W. B. Wei, “Reproducibility of subfoveal choroidal thickness measurements with enhanced depth imaging by spectral-domain optical coherence tomography,” Invest. Ophthalmol. Vis. Sci. 54(1), 230–233 (2013). [CrossRef]   [PubMed]  

14. V. Manjunath, M. Taha, J. G. Fujimoto, and J. S. Duker, “Choroidal thickness in normal eyes measured using Cirrus HD optical coherence tomography,” Am. J. Ophthalmol. 150(3), 325–329 (2010). [CrossRef]   [PubMed]  

15. S. Y. Park, S. M. Kim, Y. M. Song, J. Sung, and D. I. Ham, “Retinal thickness and volume measured with enhanced depth imaging optical coherence tomography,” Am. J. Ophthalmol. 156(3), 557–566 (2013). [CrossRef]   [PubMed]  

16. S. Lee, N. Fallah, F. Forooghian, A. Ko, K. Pakzad-Vaezi, A. B. Merkur, A. W. Kirker, D. A. Albiani, M. Young, M. V. Sarunic, and M. F. Beg, “Comparative analysis of repeatability of manual and automated choroidal thickness measurements in nonneovascular age-related macular degeneration,” Invest. Ophthalmol. Vis. Sci. 54(4), 2864–2871 (2013). [CrossRef]   [PubMed]  

17. A. Yazdanpanah, G. Hamarneh, B. R. Smith, and M. V. Sarunic, “Segmentation of intra-retinal layers from optical coherence tomography images using an active contour approach,” IEEE Trans. Med. Imaging 30(2), 484–496 (2011). [CrossRef]   [PubMed]  

18. D. Koozekanani, K. Boyer, and C. Roberts, “Retinal thickness measurements from optical coherence tomography using a Markov boundary model,” IEEE Trans. Med. Imaging 20(9), 900–916 (2001). [CrossRef]   [PubMed]  

19. S. J. Chiu, X. T. Li, P. Nicholas, C. A. Toth, J. A. Izatt, and S. Farsiu, “Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation,” Opt. Express 18(18), 19413–19428 (2010). [CrossRef]   [PubMed]  

20. A. Lang, A. Carass, M. Hauser, E. S. Sotirchos, P. A. Calabresi, H. S. Ying, and J. L. Prince, “Retinal layer segmentation of macular OCT images using boundary classification,” Biomed. Opt. Express 4(7), 1133–1152 (2013). [CrossRef]   [PubMed]  

21. M. K. Garvin, M. D. Abràmoff, X. Wu, S. R. Russell, T. L. Burns, and M. Sonka, “Automated 3-D intraretinal layer segmentation of macular spectral-domain optical coherence tomography images,” IEEE Trans. Med. Imaging 28(9), 1436–1447 (2009). [CrossRef]   [PubMed]  

22. P. A. Dufour, L. Ceklic, H. Abdillahi, S. Schröder, S. De Dzanet, U. Wolf-Schnurrbusch, and J. Kowal, “Graph-based multi-surface segmentation of OCT data using trained hard and soft constraints,” IEEE Trans. Med. Imaging 32(3), 531–543 (2013). [CrossRef]   [PubMed]  

23. L. Zhang, K. Lee, M. Niemeijer, R. F. Mullins, M. Sonka, and M. D. Abràmoff, “Automated segmentation of the choroid from clinical SD-OCT,” Invest. Ophthalmol. Vis. Sci. 53(12), 7510–7519 (2012). [CrossRef]   [PubMed]  

24. V. Kajić, M. Esmaeelpour, B. Považay, D. Marshall, P. L. Rosin, and W. Drexler, “Automated choroidal segmentation of 1060 nm OCT in healthy and pathologic eyes using a statistical model,” Biomed. Opt. Express 3(1), 86–103 (2012). [CrossRef]   [PubMed]  

25. T. Torzicky, M. Pircher, S. Zotter, M. Bonesi, E. Götzinger, and C. K. Hitzenberger, “Automated measurement of choroidal thickness in the human eye by polarization sensitive optical coherence tomography,” Opt. Express 20(7), 7564–7574 (2012). [CrossRef]   [PubMed]  

26. L. Duan, M. Yamanari, and Y. Yasuno, “Automated phase retardation oriented segmentation of chorio-scleral interface by polarization sensitive optical coherence tomography,” Opt. Express 20(3), 3353–3366 (2012). [CrossRef]   [PubMed]  

27. J. Tian, P. Marziliano, M. Baskaran, T. A. Tun, and T. Aung, “Automatic segmentation of the choroid in enhanced depth imaging optical coherence tomography images,” Biomed. Opt. Express 4(3), 397–411 (2013). [CrossRef]   [PubMed]  

28. D. Alonso-Caneiro, S. A. Read, and M. J. Collins, “Automatic segmentation of choroidal thickness in optical coherence tomography,” Biomed. Opt. Express 4(12), 2795–2812 (2013). [CrossRef]   [PubMed]  

29. Z. Hu, X. Wu, Y. Ouyang, Y. Ouyang, and S. R. Sadda, “Semiautomated segmentation of the choroid in spectral-domain optical coherence tomography volume scans,” Invest. Ophthalmol. Vis. Sci. 54(3), 1722–1729 (2013). [CrossRef]   [PubMed]  

30. H. Danesh, R. Kafieh, H. Rabbani, and F. Hajizadeh, “Segmentation of choroidal boundary in enhanced depth imaging OCTs using a multiresolution texture based modeling in graph cuts,” Comput. Math. Methods Med. 2014 (2014).

31. Q. Chen, T. Leng, L. Zheng, L. Kutzscher, J. Ma, L. de Sisternes, and D. L. Rubin, “Automated drusen segmentation and quantification in SD-OCT images,” Med. Image Anal. 17(8), 1058–1072 (2013). [CrossRef]   [PubMed]  

32. D. A. Sim, P. A. Keane, H. Mehta, S. Fung, J. Zarranz-Ventura, M. Fruttiger, P. J. Patel, C. A. Egan, and A. Tufail, “Repeatability and reproducibility of choroidal vessel layer measurements in diabetic retinopathy using enhanced depth optical coherence tomography,” Invest. Ophthalmol. Vis. Sci. 54(4), 2893–2901 (2013). [CrossRef]   [PubMed]  

33. E. W. Dijkstra, “A note on two problems in connexion with graphs,” Numer. Math. 1(1), 269–271 (1959). [CrossRef]  

34. Y. Boykov and V. Kolmogorov, “An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision,” IEEE Trans. Pattern Anal. Mach. Intell. 26(9), 1124–1137 (2004). [CrossRef]   [PubMed]  

35. Q. Song, J. Bai, M. K. Garvin, M. Sonka, J. M. Buatti, and X. Wu, “Optimal multiple surface segmentation with shape and context priors,” IEEE Trans. Med. Imaging 32(2), 376–386 (2013). [CrossRef]   [PubMed]  

36. K. Li, X. Wu, D. Z. Chen, and M. Sonka, “Optimal surface segmentation in volumetric images--a graph-theoretic approach,” IEEE Trans. Pattern Anal. Mach. Intell. 28(1), 119–134 (2006). [CrossRef]   [PubMed]  



Figures (17)

Fig. 1
Fig. 1 Comparison between HD-OCT and EDI-OCT images. (a) HD-OCT image. (b) EDI-OCT image. (c) High reflectivity regions of HD-OCT image. (d) High reflectivity regions of EDI-OCT image.
Fig. 2
Fig. 2 (a) An original HD-OCT image. (b) Several structures are manually labeled. RNFL: Retinal Nerve Fiber Layer, ONL: Outer Nuclear Layer (dark stripe marked with a white straight line), IS/OS: Inner Segment/Outer Segment (bright thin stripe marked with an orange straight line), RPE: Retinal Pigment Epithelium (bright stripe marked with a black straight line), BM: Bruch’s Membrane (green curve), CSI: Choroidal-Scleral Interface (red curve). Vitreous: dark region above RNFL, Choroid: region between BM and CSI, Sclera: sub-CSI region.
Fig. 3
Fig. 3 BM segmentation result (green line) in the original image (a) and the flattened image (b).
Fig. 4
Fig. 4 Analysis of CSI boundary characteristics. (a) and (b) are the original and smoothed choroid-sclera parts of the flattened image (Fig. 3), respectively. (c) The intensity profile of one column corresponding to the dashed red line in (b), where the CSI boundary position corresponding to the position pointed out in (b) and its gradual intensity distance are marked.
Fig. 5
Fig. 5 Gradual intensity distance image of Fig. 4(b), where the x-axis, y-axis and scale bar are also shown.
Fig. 6
Fig. 6 (a) Difference image of the gradual intensity distance image (Fig. 5). (b) The positions with the maximum difference are marked with red circles in the smoothed choroid-sclera parts (Fig. 4(b)).
Fig. 7
Fig. 7 Projection of primary CSI points in the horizontal direction (a). The refined difference image (b) and CSI boundary points marked in red circles (c).
Fig. 8
Fig. 8 The curve detection problem. (a) the curves Ci and Cj in a 2-D image, (b) the constructed directed graph of two adjacent columns. Dark solid lines are intra arcs; orange lines are inter-arcs; green solid lines are smooth constraint; and the dashed lines are optional.
Fig. 9
Fig. 9 CSI segmentation comparison between the original method and the improved method. (a) The segmentation obtained using the original graph search method. (b) The segmentation achieved using the proposed graph search with the curve smoothness constraint.
Fig. 10
Fig. 10 Thickness difference of each column of all tested images, where the blue line is the mean thickness difference of each column.
Fig. 11
Fig. 11 Segmentation results for choroid with dark columns on each side. (a) Original image. (b) Our method. (c) Expert 1. (d) Expert 2.
Fig. 12
Fig. 12 Segmentation results for thin choroid. (a) Original image. (b) Our method. (c) Expert 1. (d) Expert 2.
Fig. 13
Fig. 13 Segmentation results for weak CSI boundary. (a) Original image. (b) Our method. (c) Expert 1. (d) Expert 2.

Tables (4)


Table 1 Within-expert and between-expert correlation coefficients (cc), paired U-test p-values, absolute boundary difference, mean boundary difference, thickness difference, overlap ratio, overestimated ratio and underestimated ratio evaluation between the manual segmentations.


Table 2 Correlation coefficients (cc), paired U-test p-values, absolute boundary difference, mean boundary difference, thickness difference, overlap ratio, overestimated ratio and underestimated ratio between our segmentation method (Our Seg.) and expert segmentations.


Equations (19)

$$D(x,y)=\begin{cases}D(x+1,y)+1 & \text{if } I_a(x,y)\ge I_a(x+1,y)\\ 0 & \text{if } I_a(x,y)< I_a(x+1,y)\end{cases}$$

$$CSI_0=\left\{(x_{CSI},y)\;\middle|\;x_{CSI}=\arg\max_{x\in[1,N_r]}D(x,y),\;y\in[1,N_c]\right\}$$

$$E_{intra}=\{\langle V(y,x),V(y,x-1)\rangle\mid x>0\}$$

$$E_{inter}=\begin{aligned}[t]&\{\langle V(y,x),V(y+1,\max(x-\Delta_y,0))\rangle\mid y\in\{0,\dots,Y-2\},\ \forall x\}\\ \cup\,&\{\langle V(y,x),V(y-1,\max(x-\Delta_y,0))\rangle\mid y\in\{1,\dots,Y-1\},\ \forall x\}\\ \cup\,&\{\langle V(y,x),V(y-1,x)\rangle\mid y\in\{1,\dots,Y-1\},\ \forall x\}\\ \cup\,&\{\langle V(y,x),V(y+1,x)\rangle\mid y\in\{0,\dots,Y-2\},\ \forall x\}\end{aligned}$$

$$w_i(y,x)=\begin{cases}c_i(y,x) & \text{if } x=0\\ c_i(y,x)-c_i(y,x-1) & \text{otherwise}\end{cases}$$

$$f\big(I(y,x)-I(y',x')\big)=e^{-\frac{\left(I(y,x)-I(y',x')\right)^2}{2\sigma^2}}$$

$$E(c)=\sum_{i=1}^{n}\sum_{I(y,x)\in c}c_i(y,x)+\alpha\sum_{i=1}^{n}\sum_{\langle I(y,x),I(y',x')\rangle\in c}f\big(I(y,x)-I(y',x')\big)$$

$$\overline{ABD}(X;Y)=\frac{1}{K}\sum_{k=1}^{K}\frac{1}{n}\sum_{i=1}^{n}\left|X_k^i-Y_k^i\right|$$

$$std(ABD)(X;Y)=\sqrt{\frac{1}{K}\sum_{k=1}^{K}\left(\frac{1}{n}\sum_{i=1}^{n}\left|X_k^i-Y_k^i\right|-\overline{ABD}(X;Y)\right)^2}$$

$$\overline{MBD}(X;Y)=\frac{1}{K}\sum_{k=1}^{K}\frac{1}{n}\sum_{i=1}^{n}\left(X_k^i-Y_k^i\right)$$

$$std(MBD)(X;Y)=\sqrt{\frac{1}{K}\sum_{k=1}^{K}\left(\frac{1}{n}\sum_{i=1}^{n}\left(X_k^i-Y_k^i\right)-\overline{MBD}(X;Y)\right)^2}$$

$$\overline{TD}(X;Y)=\frac{1}{K}\sum_{k=1}^{K}\frac{1}{n}\sum_{i=1}^{n}\left(T_{X_k^i}-T_{Y_k^i}\right)$$

$$std(TD)(X;Y)=\sqrt{\frac{1}{K}\sum_{k=1}^{K}\left(\frac{1}{n}\sum_{i=1}^{n}\left(T_{X_k^i}-T_{Y_k^i}\right)-\overline{TD}(X;Y)\right)^2}$$

$$\overline{Overlap}(X;Y)=\frac{1}{K}\sum_{k=1}^{K}\frac{|X_k\cap Y_k|}{|X_k\cup Y_k|}$$

$$std(Overlap)(X;Y)=\sqrt{\frac{1}{K}\sum_{k=1}^{K}\left(\frac{|X_k\cap Y_k|}{|X_k\cup Y_k|}-\overline{Overlap}(X;Y)\right)^2}$$

$$\overline{Overest}(X;Y)=\frac{1}{K}\sum_{k=1}^{K}\frac{|\bar{X}_k\cap Y_k|}{|X_k|}$$

$$std(Overest)(X;Y)=\sqrt{\frac{1}{K}\sum_{k=1}^{K}\left(\frac{|\bar{X}_k\cap Y_k|}{|X_k|}-\overline{Overest}(X;Y)\right)^2}$$

$$\overline{Undest}(X;Y)=\frac{1}{K}\sum_{k=1}^{K}\frac{|X_k\cap\bar{Y}_k|}{|X_k|}$$

$$std(Undest)(X;Y)=\sqrt{\frac{1}{K}\sum_{k=1}^{K}\left(\frac{|X_k\cap\bar{Y}_k|}{|X_k|}-\overline{Undest}(X;Y)\right)^2}$$