
Layer boundary evolution method for macular OCT layer segmentation


Abstract

Optical coherence tomography (OCT) is used to produce high resolution depth images of the retina and is now the standard of care for in-vivo ophthalmological assessment. It is also increasingly being used for evaluation of neurological disorders such as multiple sclerosis (MS). Automatic segmentation methods identify the retinal layers of the macular cube, providing consistent results without intra- and inter-rater variation, and are faster than manual segmentation. In this paper, we propose a fast multi-layer macular OCT segmentation method based on a fast level set method. Our framework uses contours in an approach optimized specifically for OCT layer segmentation over the whole macular cube. Our algorithm takes boundary probability maps from a trained random forest and iteratively refines the prediction to subvoxel precision. Evaluation on both healthy and multiple sclerosis subjects shows that our method is statistically better than a state-of-the-art graph-based method.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optical coherence tomography (OCT) is a three-dimensional (3D) imaging technique that emits a beam of light into the tissues to be examined and detects the reflected or back-scattered light from the tissues. This reflected light interferes with light that originated from the same source, and the reflectivity profile along the light beam can be derived from the interference signal and used to generate an A-scan. Sequences of A-scans are used to form two-dimensional images (B-scans) and three-dimensional volumes. The reflectivity profile of all A-scans at a fixed depth forms a C-scan. OCT acquires images with high resolution (<5 μm) [1], from which the individual retinal layers can be demarcated, qualitatively assessed, and quantified [2].

Ophthalmological assessment of retinal disease and neurological disorders often includes a segmentation of retinal cellular layers from OCT images [3–5]. Eight retinal subdivisions are commonly segmented: the retinal nerve fiber layer (RNFL), ganglion cell and inner plexiform complex (GCIP), inner nuclear layer (INL), outer plexiform layer (OPL), outer nuclear layer (ONL), inner segment (IS), outer segment (OS), and the retinal pigment epithelium (RPE) complex. The segmentation of these eight layers is accomplished by identifying nine boundaries: vitreous-RNFL or internal limiting membrane (ILM), RNFL-GCIP, GCIP-INL, INL-OPL, OPL-ONL, ONL-IS or external limiting membrane (ELM), IS-OS, OS-RPE, and RPE-choroid or Bruch’s membrane (BM). As a result, an OCT image is partitioned into ten regions (see Fig. 1). We note that there is some debate as to the appropriateness of the term IS-OS, with some suggestion that ellipsoid zone (EZ) may be more correct [6]. Since this is an unresolved matter, we have continued to use the term IS-OS.

Quantification of layer thickness is often used in the study of various ophthalmological and neurological diseases. For example, subjects with multiple sclerosis (MS) of all subtypes exhibit significant thinning of the inner retinal layers [3]. OCT is also used to study myopia [7], diabetes [5, 8–10], Alzheimer’s disease (AD) [11], and glaucoma [4, 12, 13]. Since manual segmentation of each layer is time consuming and prone to inter-rater variability, fast, automatic algorithms for retinal layer segmentation are preferred.

Fig. 1 A B-scan from a healthy subject with manual delineation of nine boundaries. The boundaries from top to bottom are ILM, RNFL-GCIP, GCIP-INL, INL-OPL, OPL-ONL, ELM, IS-OS, OS-RPE, and BM.

Various approaches have been developed for automated layer segmentation of OCT images [14–35], including methods based on shortest paths [17], active contours [18–20], statistical models [21], and level sets [22–25]. The graph method introduced by Li et al. [36] was originally developed for blood vessel segmentation before being repurposed by Garvin et al. [26] for OCT data. The underlying graph framework, with various improvements and extensions [14–16, 28–31], has become the predominant method in use today for OCT segmentation. Recently, a great amount of effort has been focused on exploring the use of deep networks, which have proven to be very effective in other medical imaging tasks, for OCT image segmentation. Fang et al. [33] developed a combined approach using both a deep neural network and a graph search, while He et al. [34] proposed a cascaded fully convolutional neural network to guarantee topological ordering of the layers.

Fig. 2 A flowchart of our proposed method.

This paper introduces a novel framework for retinal layer segmentation that builds on the multi-layer fast level set method (MLS) [37]. Our method takes probability maps for each boundary from a trained random forest as input. An external force field and an initial boundary location are calculated from the input probability maps. The initialization is further refined through iterative updates based on the external forces and internal forces. Five major extensions to MLS are employed: 1) a subvoxel location is calculated during surface evolution, which allows for better estimation of the smoothness force; 2) a simple Gaussian filtering process is used to calculate the smoothness force without the need to calculate curvature, which significantly reduces computational time; 3) the initial surface is generated using a shortest path method; 4) surface evolution is performed in a BM flattened space and the benefit of using a flat space is incorporated into an extra term in the internal force; and 5) vector field convolution [38] is used instead of gradient vector flow [39] for better performance. Our proposed layer boundary evolution (LBE) algorithm can be divided into two parts. The first part, in which nine boundary probability maps are calculated from a retinal OCT image, is largely adopted from [28]. Three steps are involved: preprocessing an OCT image, feature extraction, and boundary classification using a trained random forest. The second part uses the boundary probability maps from the trained random forest for generating two inputs to the layer boundary evolution: a force field and an initial prediction for each boundary. The final result is the output of layer boundary evolution, which iteratively refines the input prediction to a subvoxel accuracy (see Fig. 2). Since the first part is well documented in [28] and its source code is publicly available (http://www.nitrc.org/projects/aura_tools), we briefly introduce each step in the first part and then focus most of our discussion on the second part.

2. Method

2.1. Preprocessing

The preprocessing steps serve to transform an OCT volume into a common space so that the intensity and spatial coordinates of an input volume are consistent with the training data. An example of a preprocessed B-scan is shown in Fig. 3.

2.1.1. Retina boundary flattening

Flattening is a common preprocessing step performed by many retina segmentation algorithms [17, 26]. We first estimate the top and bottom boundaries of the retina, specifically the ILM and the BM. A B-scan is filtered by a Gaussian smoothing kernel with σ = 3 pixels and the derivative along each A-scan within the B-scan is computed. The locations of the ILM and BM in a particular A-scan are found as the locations with the largest positive and negative derivatives, respectively. Error points are found by comparing the estimated boundaries with their independently median filtered versions. Points that deviate more than 15 voxels from the median filtered surface are defined as outliers and are replaced by interpolated points. A BM flattened volume is obtained by vertically shifting each A-scan within its B-scan based on the estimated BM surface. Features related to the spatial coordinates of pixels contain more information in a retina boundary flattened volume. Our classifier described in Section 2.2 is trained using features calculated from this flattened volume and its corresponding manual delineation.
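As a rough illustration of this flattening step, the following Python sketch detects the ILM and BM from the smoothed derivative and shifts each A-scan. It is not the authors' implementation: σ = 3 and the 15-voxel outlier threshold follow the text, while the median-filter width, the flattening target row, and all function names are assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def flatten_bscan(bscan, sigma=3, outlier_thresh=15, median_width=51):
    # bscan: 2D array (depth x A-scans), rows indexed from the vitreous downward.
    smoothed = gaussian_filter(bscan, sigma=sigma)
    deriv = np.gradient(smoothed, axis=0)            # derivative along each A-scan
    ilm = np.argmax(deriv, axis=0)                   # largest positive derivative
    bm = np.argmin(deriv, axis=0)                    # largest negative derivative

    # Replace outlier points that deviate from a median-filtered copy of the boundary.
    bm_med = median_filter(bm.astype(float), size=median_width)
    good = np.flatnonzero(np.abs(bm - bm_med) <= outlier_thresh)
    bm = np.interp(np.arange(bm.size), good, bm[good].astype(float))

    # Flatten by vertically shifting each A-scan so the BM lies on a common row.
    flat = np.zeros_like(bscan)
    target = int(round(bm.mean()))
    for j in range(bscan.shape[1]):
        flat[:, j] = np.roll(bscan[:, j], target - int(round(bm[j])))
    return flat, ilm, bm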

Fig. 3 The outputs of our preprocessing steps. A B-scan from a healthy subject is shown (a). The ILM and BM are detected (b) and used to produce a BM flattened B-scan (c). The image in (d) shows a normalized flattened B-scan.

2.1.2. Intensity normalization

The automatic intensity rescaling and automatic real-time averaging performed by the OCT scanner may potentially cause a large intensity variation for a particular tissue type within one subject. These differences in dynamic range are not desirable especially during feature classification. An intensity consistent B-scan is achieved by computing a robust maximum Im and linearly rescaling intensities from the range [0,Im] to the range [0,1], with values greater than Im being clamped at 1. The robust maximum Im is found by first median filtering each individual A-scan within the same B-scan using a kernel size of 15 pixels. Then, Im is set to the value that is 5% larger than the maximum intensity of the entire median-filtered image.
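A minimal sketch of this normalization, assuming the B-scan is stored as a depth-by-A-scan array (the 15-pixel kernel and 5% margin follow the text; the rest is an assumption):

import numpy as np
from scipy.ndimage import median_filter

def normalize_bscan(bscan, kernel=15, margin=0.05):
    # Median filter each A-scan (column) independently with a 15-pixel kernel.
    filtered = median_filter(bscan, size=(kernel, 1))
    # Robust maximum I_m: 5% above the maximum of the median-filtered image.
    i_m = (1.0 + margin) * filtered.max()
    # Rescale [0, I_m] to [0, 1]; values above I_m are clamped at 1.
    return np.clip(bscan / i_m, 0.0, 1.0)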

2.2. Random forest classification

The boundary locations are estimated using a trained random forest classifier [28]. In total 27 features are calculated for every voxel in the preprocessed OCT volume. Those features are passed to decision trees in the forest, which provides information about whether the voxel belongs to a specific boundary or not. By combining the predictions from all decision trees, the probability of each voxel belonging to each boundary is computed. In short, the trained random forest takes features and outputs nine probability maps, each associated with one boundary (see Fig. 4). A probability map has the same size as the input OCT volume. To train the random forest, manual delineations are also preprocessed to have the same volume size and flattened BM.
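The feature design and training protocol are those of [28] and are not reproduced here; the sketch below only illustrates, with scikit-learn and assumed array shapes and label conventions, how per-voxel features map to nine boundary probability maps.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def boundary_probability_maps(features, rf):
    # features: (depth, ascans, bscans, 27) per-voxel features from the flattened volume.
    # rf: trained RandomForestClassifier with classes 0 (background) and 1..9 (boundaries).
    d, m, n, f = features.shape
    proba = rf.predict_proba(features.reshape(-1, f))     # (n_voxels, 10)
    proba = proba.reshape(d, m, n, -1)
    return np.moveaxis(proba[..., 1:], -1, 0)             # nine (depth, ascans, bscans) maps

# Training sketch (tree count is a placeholder, not the recommended value of [28]):
# rf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
# rf.fit(train_features.reshape(-1, 27), train_labels.ravel())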

Fig. 4 Shown are the random forest predicted boundary locations in a B-scan.

2.3. Surface representation

The fast level set method introduced by Shi et al. [40] uses a list instead of a level set function to represent a boundary or the zero level set. Their idea adapts naturally to boundaries in retinal OCT images. Specifically, the anatomical structure of human retina determines that retinal layers cannot fold on themselves. In other words, an A-scan can only intersect a specific layer boundary once in an OCT image. Therefore, for a given OCT image of size l×m×n, where l is the number of C-scans, m is the number of A-scans within a B-scan, and n is the number of B-scans within the 3D volume, a 2D array of size m×n is sufficient to represent a specific boundary. The jth row, kth column of the 2D array corresponds to the jth A-scan within the kth B-scan, and its value indicates the intersection location of that boundary in that A-scan. In other words, this 2D array represents the boundary height as shown in Fig. 5. We note that a layer boundary in a B-scan of size l×m is represented by a 1D array of size 1×m.
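The representation amounts to one 2D height array per boundary; a minimal sketch follows (values are illustrative, matching the example of Fig. 5 below).

import numpy as np

# For a volume with l depth samples, m A-scans per B-scan, and n B-scans, each
# boundary is an m x n array whose entry (j, k) holds the (possibly fractional)
# boundary height along the j-th A-scan of the k-th B-scan.
l, m, n = 496, 1024, 49
names = ["ILM", "RNFL-GCIP", "GCIP-INL", "INL-OPL", "OPL-ONL",
         "ELM", "IS-OS", "OS-RPE", "BM"]
surfaces = {name: np.zeros((m, n)) for name in names}   # nine 1024 x 49 arrays
# Fig. 5: height 320 at the 350th A-scan of the 40th B-scan (0-based indices):
surfaces["ILM"][349, 39] = 320.0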

Fig. 5 Representation of the ILM surface. The red line segment shows the height of the ILM (320 voxels in this case) at the 350th A-scan of the 40th B-scan. Therefore in the 2D array representation of the ILM surface, the value at location (350,40) is 320. For a 496×1024×49 OCT volume, since there are 1024 × 49 A-scans, the 2D array for the ILM surface is of size 1024 × 49. In total nine 1024 × 49 2D arrays are needed to fully represent all nine boundaries.

2.4. Force field

The external force F_ext used to drive the surface to the layer boundaries is computed from the probability map described in Section 2.2. A probability map can be thought of as an edge map of the corresponding boundary. We generate the external force field using vector field convolution (VFC) [38], which calculates the convolution of the probability map and a vector field kernel. Typically, all vectors in the vector field kernel point to the kernel origin; thus our external force always points to locations with the highest boundary probability. It can be used to guide our model to the boundary locations learned by the random forest. An example of the vector field kernel with its generated force field in a zoomed B-scan is shown in Fig. 6. During layer boundary evolution, we evolve surfaces vertically in a B-scan (see Section 2.6), which means only forces along an A-scan take effect; this is why we only show the vertical forces. The vector field kernel, however, is two-dimensional so that information from adjacent A-scans is incorporated. Compared with other methods, VFC produces a force field that is robust to noise. This is important for processing fuzzy edge maps, like the probability maps generated by a random forest. A comparison between VFC and gradient vector flow [39], which our previous approach used, found a threefold improvement in speed [38].
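A simplified VFC sketch, assuming an inverse-distance magnitude kernel in the spirit of Li and Acton [38]; the kernel radius and exponent gamma here are placeholders, not the values used in the paper.

import numpy as np
from scipy.signal import fftconvolve

def vfc_force(prob_map, radius=32, gamma=1.7):
    # Build a kernel whose vectors point toward the kernel origin with 1/r^gamma magnitude.
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1].astype(float)
    r = np.hypot(x, y)
    r[radius, radius] = 1.0                      # avoid division by zero at the origin
    mag = 1.0 / (r ** gamma)
    mag[radius, radius] = 0.0
    ky = -y / r * mag                            # vertical component (along A-scans)
    kx = -x / r * mag                            # lateral component
    fy = fftconvolve(prob_map, ky, mode='same')  # convolve the probability (edge) map
    fx = fftconvolve(prob_map, kx, mode='same')
    norm = np.max(np.hypot(fx, fy)) + 1e-12      # magnitudes normalized to [0, 1], as in the text
    return fy / norm, fx / norm                  # only the vertical force is used during evolution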

Fig. 6 The vector field kernel used is shown in (a). A probability map of ILM (b top) is convolved with this vector field kernel to produce the external force field. A magnified view of the force field (red arrows) is overlaid on the probability map and shown in (b bottom).

Internal forces are used to enforce smoothness of the output surface. The smoothness constraints for layer boundaries in retinal OCT are different from other boundaries. Specifically, boundaries have very different curvature near the fovea versus far from the fovea. Because of this, a universal smoothness constraint over the entire boundary may produce overly-smooth surfaces near the fovea and bumpy surfaces elsewhere. Using a flat space [25, 41, 42] can solve this problem by encoding the boundary shape prior information within the flat space. A flat surface in the flat space corresponds to a curvy surface near the fovea and a smooth surface away from the fovea. Therefore, enforcing a universal smoothness constraint in the flat space allows different smoothness characteristics in different regions. However, a smooth surface in flat space does not necessarily mean a smooth surface in native space, because the conversion from flat space to native space involves interpolation [42]. Instead of converting to flat space, we propose a new internal force F_int = F_s + F_p that accounts for the smoothness of the surface and uses a shape prior for the retinal boundaries.

Internal forces for smoothness regularization, F_s, are commonly computed using the curvature of the surface, which is computationally expensive to evaluate. Alternatively, we introduce a Gaussian filtering process that efficiently calculates F_s. Use of Gaussian smoothing to replace the curvature was first proposed in [40]. It is motivated by the observation that when the level set function is chosen as the signed distance function, the curvature takes a simpler form and equals the Laplacian of the level set function ϕ. From the theory of scale-space [43], we know that evolution of a function according to its Laplacian is equivalent to Gaussian filtering the function. We first filter the predicted surface B, where B is the 2D array of numbers representing the retinal surface as depicted in Fig. 5, with a Gaussian kernel of standard deviation s to produce a smoothed surface B_s. F_s is then computed as

F_s(i) = c_s (B_s(i) − B(i)) î,   (1)

where i is the A-scan index, c_s is a parameter that controls the strength of the force, and î is a unit vector along the direction of the A-scan. F_s is designed to always point to the smoothed surface; it regularizes a bumpy surface into a smooth one.

The second term F_p is computed using the difference between the smoothed initialization, B_p, and the predicted surface B as

F_p(i) = c_p (B_p(i) − B(i)) î,   (2)
where c_p is a parameter to control the strength of the force. Intuitively, F_p ensures that the evolution will not drive the surface too far from the initialization. Similar to the use of flat space, we assume that the initial surface encodes the boundary shape prior information, and F_p makes use of this prior shape without flat space conversion. When we impose a universal smoothness force F_s over the entire boundary, F_p allows the total internal force to act differently in different regions. Generally speaking, F_s near the fovea is large, but it will not oversmooth the boundary because F_p compensates for F_s and encourages a curvy surface when F_s tries to flatten out the surface. Both F_s and F_p were developed in a heuristic manner after the exploration of more traditional forces proved unsuccessful.

The external force F_ext computed from VFC is normalized to the range [0,1]. During the evolution described in Section 2.6, we compute the internal force F_int for a boundary based on the current predicted surface B. The total force F for evolution is computed as:

F = F_ext + F_int
  = F_ext + c_s (B_s − B) î + c_p (B_p − B) î.   (3)
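A compact sketch of the internal and total force of Eqs. (1)–(3), operating on the 2D height arrays; the parameter values shown are placeholders, with the per-boundary values chosen as described in Section 3.2.

import numpy as np
from scipy.ndimage import gaussian_filter

def internal_force(B, B_p, c_s, c_p, s):
    # B: current surface heights (m x n); B_p: smoothed initialization.
    B_s = gaussian_filter(B, sigma=s)      # smoothed current surface
    F_s = c_s * (B_s - B)                  # Eq. (1): smoothness force (vertical component)
    F_p = c_p * (B_p - B)                  # Eq. (2): shape-prior force
    return F_s + F_p

# Eq. (3): the total force at a grid point adds the vertical component of the
# precomputed VFC field, sampled at that grid point, to the internal force:
# F_total = F_ext_vertical + internal_force(B, B_p, c_s, c_p, s)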

2.5. Initialization

With an external force field free of noise, generating an initialization is a trivial problem as the forces point perfectly towards the true boundary. For example, we could simply initialize nine flat planes as the predicted surfaces. However, as a real force field is usually noisy, pixels away from the boundary may not have forces that point in the correct direction. If we initialize the surface far from the true boundary, it is likely that it will never evolve to the true boundary, because false boundary responses from noise in the neighborhood may overwhelm the force for a true boundary. Therefore, we want to initialize the surface close enough to the true boundary to guarantee that our initialization is within the capture range of the force fields. We use Dijkstra’s shortest path (DSP) algorithm [44] to find the shortest path using the B-scan probabilities.

To use DSP, we construct a graph within each B-scan for each boundary using its probability P(i,j) derived from the random forest. The vertex at the ith location in the jth A-scan only connects to vertices in the (j+1)th A-scan that are between i−1 and i+1. This ensures that the shortest path we find is smooth and, more importantly, it goes through each A-scan only once. The edge cost C(i,j) to the vertex at location i in the jth A-scan is defined as

C(i,j) = 1/(P(i,j) + 0.01) − 1.   (4)

The total distance is the sum of all edge costs from the previous vertices. We initialize all vertices in the first A-scan as visited, with the total distance equal to the edge cost described above. We then iteratively assess the vertex with the minimal total distance among all visited vertices. The algorithm stops when we mark a vertex in the last A-scan as visited. From that vertex, we trace back to the first A-scan. The coordinates of all vertices along the path are used as the initial surface location. For an input B-scan probability map, we find a smooth path that goes across all A-scans. Since the shortest path method provides no topology guarantee, its predicted surfaces may cross over each other, a problem most likely to occur near the fovea. To address this problem, after the nine boundary predictions are individually found in each B-scan, we fix ordering errors as we combine them. We first combine the ILM and RNFL-GCIP boundaries; if in some A-scans the boundary height of the RNFL-GCIP is close to or even larger than the boundary height of the ILM, we move the RNFL-GCIP boundary in those A-scans down so that the RNFL layer has a predefined minimal thickness. This process is iterated to combine all nine boundaries, guaranteeing that the initial boundaries have the correct layer ordering along every A-scan. Previously, we used a regression model to generate initial boundary locations from the retina thickness [42]. We find that the DSP is fast and robust and produces an initialization that is closer to the true boundary.
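A sketch of this per-B-scan initialization with Dijkstra's algorithm follows. It is an illustrative reading of the method, not the authors' code; the constant −1 in the edge cost of Eq. (4) is dropped here because every path crosses the same number of A-scans, so it does not change the optimum while keeping all edge weights non-negative, and the subsequent ordering correction is not shown.

import heapq
import numpy as np

def init_boundary(P):
    # P: (depth x n_ascans) boundary probability map for one B-scan.
    depth, width = P.shape
    cost = 1.0 / (P + 0.01)                          # edge cost of Eq. (4), constant offset dropped
    dist = np.full((depth, width), np.inf)
    prev = np.zeros((depth, width), dtype=int)
    dist[:, 0] = cost[:, 0]                          # vertices of the first A-scan start visited
    heap = [(dist[i, 0], i, 0) for i in range(depth)]
    heapq.heapify(heap)
    row = 0
    while heap:
        d, i, j = heapq.heappop(heap)
        if d > dist[i, j]:
            continue                                 # stale heap entry
        if j == width - 1:                           # first vertex reached in the last A-scan
            row = i
            break
        for ni in (i - 1, i, i + 1):                 # connect only to rows i-1, i, i+1
            if 0 <= ni < depth and d + cost[ni, j + 1] < dist[ni, j + 1]:
                dist[ni, j + 1] = d + cost[ni, j + 1]
                prev[ni, j + 1] = i
                heapq.heappush(heap, (dist[ni, j + 1], ni, j + 1))
    path = np.zeros(width, dtype=int)                # trace back to the first A-scan
    path[-1] = row
    for j in range(width - 1, 0, -1):
        path[j - 1] = prev[path[j], j]
    return path                                      # one boundary row per A-scan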

2.6. Layer boundary evolution

Layer boundary evolution (LBE) is an iterative process that refines the initial surfaces to the boundary locations indicated by the force fields described in Section 2.4. Our evolution scheme evolves the boundaries in the A-scan direction by adjusting values in the 2D arrays. Adding to or subtracting from a 2D array will move the boundary towards the inner or outer retina, respectively. This approach provides two benefits: firstly, we can easily find grid points adjacent to the boundary without the need of a level set function; secondly, the layer ordering can be easily maintained during the evolution.

Given the layered nature of the tissues under consideration within the retina, there is a topological requirement for OCT images. Specifically, layers cannot fold over themselves and the ordering of the eight layers between the vitreous and choroid needs to be maintained. Since only one location is recorded in each A-scan for each boundary using our surface representation, the first requirement is automatically satisfied. We postprocess the initialization to have the correct layer ordering, as described in Section 2.5. Evolving the boundary only in the A-scan direction helps keep the ordering of layers consistent with the underlying tissue ordering. Whenever a change of boundary height (a value change in the surface 2D array) would cause an adjacent layer to become thinner than the minimal thickness, we reset the boundary height so that the layer has exactly the defined minimal thickness. As shown in Fig. 7, before the current iteration, Point C and Point D are 0.9 pixels apart. If Point C tries to move down by more than 0.8 pixels, its location will be reset so that the distance between Point C and Point D is exactly 0.1 pixels, where 0.1 pixels is the predefined minimal layer thickness (for this example).
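The following minimal sketch illustrates this clamping rule (heights increase toward the inner retina; the 0.1-voxel minimal thickness is the example value from the text, and the per-layer minima used in practice may differ):

import numpy as np

def clamp_downward_move(proposed_upper, lower, min_thickness=0.1):
    # Keep the upper boundary at least min_thickness voxels above the lower one.
    return np.maximum(proposed_upper, lower + min_thickness)

def clamp_upward_move(upper, proposed_lower, min_thickness=0.1):
    # Symmetric rule when the lower boundary tries to move up.
    return np.minimum(proposed_lower, upper - min_thickness)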

Fig. 7 Shown are two surfaces in front of a B-scan. We highlight two A-scans (the 250th and the 500th A-scans at the 21st B-scan). The first A-scan intersects with the IPL-INL surface and INL-OPL surface at Points C and D; the second A-scan intersects with the IPL-INL surface and INL-OPL surface at Points A and B.

During each iteration, a decision will be made for every value in a 2D array B: whether to increase the value (move up towards the inner retina), decrease the value (move down towards the outer retina), or keep the surface position unchanged. For a boundary point B^n(j,k), this decision is made by looking at the combination of external and internal forces at its upper adjacent grid point, ceil(B^n(j,k)), which is computed as

F_u^n(j,k) = F_ext(ceil(B^n(j,k))) + F_int^n(ceil(B^n(j,k))),   (5)

and at its lower adjacent grid point, floor(B^n(j,k)), which is computed as

F_l^n(j,k) = F_ext(floor(B^n(j,k))) + F_int^n(floor(B^n(j,k))).   (6)

Function ceil() rounds its input up to the nearest integer, while function floor() rounds its input down to the nearest integer. Superscript n indicates the iteration index, and (j,k) is the A-scan index. While F_ext can be precomputed, we need to calculate F_int during each iteration. We use two arrows separated by a division line to represent the directions of the net forces F_u^n(j,k) and F_l^n(j,k); for example, the symbol ↓/↑ indicates that F_u^n(j,k) is pointing down while F_l^n(j,k) is pointing up. The decision rule for refining a predicted surface point B^n(j,k) in the nth iteration is described as follows:

  1. If the two forces are ↓/↑ (pointing toward each other), a zero level set location will be calculated as the predicted boundary location.
  2. If the two forces are ↓/↓ (both pointing down), the surface point moves down: B^{n+1}(j,k) ← floor(B^n(j,k)) − 0.5.
  3. If the two forces are ↑/↑ (both pointing up), the surface point moves up: B^{n+1}(j,k) ← ceil(B^n(j,k)) + 0.5.
  4. If the two forces are ↑/↓ (pointing away from each other), the surface point moves based only on F_s.

Case 1 has two net forces pointing at each other, which is the feature exhibited at a boundary. The zero level set location is calculated using the net force at the upper grid point F_u^n(j,k) and lower grid point F_l^n(j,k):

B^{n+1}(j,k) ← floor(B^n(j,k)) + |F_l^n(j,k)| / (|F_u^n(j,k)| + |F_l^n(j,k)|).   (7)

Although this subvoxel location may change in future iterations because of changes in the internal force, it is essential to calculate it so that boundary points in adjacent A-scans can use this more accurate location to better estimate the smoothness force F_s in later iterations. For cases 2 and 3, we evolve the boundary point to follow the force field; both cases indicate that the true boundary does not lie within the interval between the current upper and lower grid points. Case 4 accounts for the influence of noise or a bad initial surface. When F_u^n(j,k) and F_l^n(j,k) point away from each other, the combined force field cannot correctly infer the direction of the true boundary; this can result from noise in the probability map or a poor initial surface. In such cases, our model trusts the smoothness force F_s. Because the upper grid point, ceil(B(j,k)), is always one voxel higher than the lower grid point, floor(B(j,k)), it follows from Equation 1 that the smoothness force at the upper grid point is always smaller than that at the lower grid point. Thus, in subsequent iterations we will not have F_u^n(j,k) and F_l^n(j,k) pointing away from each other again.
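The per-point decision rule can be summarized by the following sketch (signed vertical force components, positive pointing toward the inner retina; integer-valued heights and the handling of exactly zero forces are ignored for brevity, and this is an interpretation of the rule rather than the authors' implementation):

from math import floor, ceil

def update_point(b, f_up, f_low, f_s):
    # b: current height B^n(j,k); f_up, f_low: net forces at ceil(b) and floor(b);
    # f_s: smoothness force used when the forces point away from each other.
    lo, hi = floor(b), ceil(b)
    if f_up < 0 and f_low > 0:
        # Case 1: forces point toward each other -> subvoxel zero level set location, Eq. (7).
        return lo + abs(f_low) / (abs(f_up) + abs(f_low))
    if f_up < 0 and f_low < 0:
        return lo - 0.5            # Case 2: both point down, move below the lower grid point
    if f_up > 0 and f_low > 0:
        return hi + 0.5            # Case 3: both point up, move above the upper grid point
    return b + f_s                 # Case 4: forces diverge, trust the smoothness force only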

3. Experiments and results

3.1. Data set

We test our LBE algorithm on macular OCT images with no observable intraretinal pathologies. We do this with data from the right eyes of 36 subjects obtained using a Spectralis OCT system (Heidelberg Engineering, Heidelberg, Germany). The research protocol was approved by the local Institutional Review Board, and written informed consent was obtained from all participants. Of the 36 subjects, 21 were diagnosed with MS while the remaining 15 were healthy controls. All scans were screened and found to be free of microcystic macular edema (a pathology sometimes found in MS subjects).

All scans were acquired using the Spectralis scanner’s automatic real-time function in order to improve image quality by averaging at least 12 images of the same location (the ART setting was fixed at 12). The resulting scans had signal-to-noise ratios (SNRs) between 20 dB and 38 dB, with a mean SNR of 30.6 dB. Macular raster scans (20° × 20°) were acquired with 49 B-scans, each B-scan having 1024 A-scans with 496 voxels per A-scan. The B-scan resolutions varied slightly between subjects and averaged 5.8 μm laterally and 3.9 μm axially. The through-plane distance (slice separation) averaged 123.6 μm between images, resulting in an imaging area of approximately 6 × 6 mm². We note that our data is of a higher resolution and better SNR than many other recent publications. The nine layer boundaries were manually delineated on all B-scans for all subjects by a single rater using an internally developed protocol and software tool. The manual delineations were performed by clicking on approximately 20–50 points along each layer border followed by interpolation between the points using a cubic B-spline. Visual feedback was used to move points to ensure a correct boundary.

3.2. Parameter selection

The random forest classifier was trained using 4 healthy subjects and 6 MS subjects. The recommended parameters of the RF classifier described in [28] are used. Since our initialization is very close to the final results, the maximum number of iterations is set to 10. This allows an accurate estimation of the boundary at a small computational cost.

There are three parameters related to the internal force that need to be determined in our proposed method (LBE) for a single boundary:

  1. c_s in Equation 1 weights the smoothness force;
  2. c_p in Equation 2 weights the shape prior force; and
  3. s defines the standard deviation of the Gaussian kernel used when computing B_s in Equation 1.

Different sets of parameters were found for each of the nine boundaries using a validation set of 2 healthy subjects and 3 MS subjects. We used the mean absolute boundary error as the metric when determining these parameters. The mean absolute boundary error is calculated as the absolute distance of each boundary from the manually delineated surface along each A-scan. These errors are first averaged across all A-scans in each subject and then averaged over all subjects for each boundary.
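In code, the metric reduces to a two-stage average over A-scans and then subjects; a minimal sketch with assumed array shapes:

import numpy as np

def mean_absolute_boundary_error(pred, truth):
    # pred, truth: lists of m x n height arrays (one per subject) for a single boundary.
    per_subject = [np.mean(np.abs(p - t)) for p, t in zip(pred, truth)]
    return float(np.mean(per_subject))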

Fig. 8 Shown are the effects of parameter selection on absolute boundary error for four boundaries in the validation set. Each sphere corresponds to an experiment on the validation set, and the parameters used are indicated by the spatial coordinates of the sphere. After all 1440 experiments, the resultant absolute errors were thresholded; errors more than 5% larger than the lowest error are shown as large red spheres. The remaining results were linearly mapped to [0,1], with the lowest error indicated by 0 (small blue spheres) and the largest errors indicated by 1 (large yellow spheres). The parameters chosen are indicated by a magenta triangle. The shaded green region indicates the combinations of parameters for which our method produces a lower absolute boundary error than AURA v3.4 when evaluated on the test data set. The black contour indicates the intersection of this region and each slice.

We ran 1440 experiments on the validation set for each boundary with different parameters and chose our parameters by selecting the combination of c_s, c_p, and s that produced the lowest mean absolute boundary error. We limited the search region of c_s and c_p to [0, 2.75]; otherwise, the internal force might have overwhelmed the external force, yielding a result that is simply a smoothed version of the initial surface. Figure 8 shows the effect of parameter selection on the mean absolute error of four boundaries. Each point in the 3D volume in Fig. 8 represents the resultant absolute error produced by a certain choice of the three parameters. For each boundary, the resulting mean absolute errors that are more than 5% larger than the lowest mean absolute error are shown as red spheres, while mean absolute errors within 5% of the best result are linearly mapped to [0,1], with the lowest mean absolute error indicated by 0 (small blue spheres) and the worst indicated by 1 (large yellow spheres). The best results are marked with a magenta triangle. Figure 8 also indicates the region (green) of parameters where LBE produces a lower mean absolute error than the graph method (AURA v3.4) we compare with in Section 3.3 over the remaining 21 subjects. The black contour is the intersection of this region and each slice. The RNFL-GCIP boundary (top right subplot) has no green region, indicating that no combination of the three parameters produces a better result than AURA v3.4, while for the IS-OS boundary (bottom left subplot), any combination of parameters produces a better result than AURA v3.4. The figures for the GCIP-INL and INL-OPL boundaries are very similar to that of the ILM (top left subplot), and the figures for OPL-ONL, ELM, and BM are similar to that of IS-OS (bottom left subplot); for space considerations, the figures for these five boundaries are not shown.

After determining the optimal c_s, c_p, and s for the validation set, we used them on the test data set; the mean absolute boundary error for iterations between 1 and 50 is shown in Fig. 9. This demonstrates that 10 iterations is an appropriate number to allow all boundaries to converge.

Fig. 9 Shown is the mean absolute boundary error (in pixels) for all nine boundaries from iteration 1 to iteration 50. Each pixel has a physical dimension of 3.9 μm axially and 5.8 μm laterally.

3.3. Results

We evaluated our method (LBE) on the remaining 21 subjects that were not used to train the random forest or for parameter selection. We compared with a previously reported graph cut method (AURA v2.11) [28] and its most recent version (AURA v3.4). AURA v2.11 has been independently shown to be one of the most accurate OCT retinal segmentation tools [45]. For the purpose of direct comparison, the recommended parameters [28] for the random forest are used for all three methods. An example of the manual delineations, as well as the results of the AURA v3.4 and LBE methods on the same subject, is shown in Fig. 10.

Fig. 10 A B-scan retinal OCT (a) and a magnified region near the fovea (b) are shown, along with corresponding manual delineation (c) and (d). The B-scan is segmented using AURA v3.4 (e) and (f), and our method (LBE) (g) and (h). The LBE mask is generated by rounding the boundary location to 0.1 voxel level.

Shown in Table 1 is the mean absolute boundary error of the three methods against the manual delineation. The mean absolute boundary error is calculated as the absolute distance of each boundary from the manually delineated surface along each A-scan. The standard deviations were calculated assuming that every error value is separate (i.e., errors were not averaged for each subject before calculation). It is observed that the mean absolute boundary error is smaller for LBE than for AURA v3.4 in 8 out of the 9 boundaries, the exception being the RNFL-GCIP boundary. A paired Wilcoxon signed rank test was used to test the significance of any improvement between AURA v3.4 and our method; 6 of the 9 boundaries reach significance (α level of 0.001). Two boundaries (OS-RPE and BM) lack statistical significance because of the large variance. AURA v3.4 has a lower mean absolute boundary error than LBE on one of the nine boundaries; however, this difference does not reach significance.
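A minimal sketch of the significance test, assuming per-subject error arrays for one boundary (variable names are placeholders):

from scipy.stats import wilcoxon

# err_lbe, err_aura: length-21 arrays of per-subject mean absolute errors for one boundary.
stat, p = wilcoxon(err_lbe, err_aura)
significant = p < 0.001          # alpha level used in the paper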

Table 1. Mean and standard deviations of absolute boundary error (in pixels) are shown. Each pixel corresponds to a physical dimension of 3.9 μm axially and 5.8 μm laterally. A paired Wilcoxon signed rank test was used to test the significance of any improvement between AURA v3.4 and our method (LBE). The method with the lowest mean absolute boundary error is bolded. Boundaries with significant differences (α level of 0.001) are marked. We include the result directly from the shortest path initialization (SP) to show that our method improves on the initialization. A paired Wilcoxon signed rank test (not shown in the table) between LBE and SP shows all boundaries except BM reach a p-value of less than 0.0001; the p-value for BM is 0.0143 due to the large variance in the LBE output.

We also computed the Dice Coefficient [46] between each of the three methods and the manual delineation; the results are shown in Table 2. The Dice Coefficient measures how much the segmentation results produced by an algorithm agree with the manual delineation. The Dice Coefficient has a range of [0,1], with higher values reflecting more agreement between the algorithm and the manual delineation. Since both the manual delineation and the segmentation results are boundary locations, we first converted the boundary locations to a layer mask by rounding each boundary location to the voxel level and then, in each A-scan, assigning the corresponding layer label from the current boundary voxel to the location of the boundary below. The resulting Dice coefficients for AURA v3.4 and LBE are very close, with five out of the eight layers showing statistically significant improvements (α level of 0.001).
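A rough sketch of the boundary-to-mask conversion and Dice computation described above (the first axis is indexed by boundary height, consistent with the 2D height arrays, and the exact handling of the boundary voxel itself is simplified):

import numpy as np

def boundaries_to_labels(surfaces, depth):
    # surfaces: (9, m, n) heights ordered from ILM (largest height) down to BM (smallest).
    rows = np.rint(surfaces).astype(int)
    labels = np.zeros((depth,) + surfaces.shape[1:], dtype=np.uint8)
    for layer in range(8):                     # layer k lies between boundary k and k+1
        top, bottom = rows[layer], rows[layer + 1]
        for j in range(surfaces.shape[1]):
            for k in range(surfaces.shape[2]):
                labels[bottom[j, k]:top[j, k], j, k] = layer + 1
    return labels

def dice(mask_a, mask_b):
    # mask_a, mask_b: binary masks for one layer from the algorithm and the manual delineation.
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())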

Table 2. Mean and standard deviations of the Dice Coefficient across the eight retinal layers. A paired Wilcoxon signed rank test was used to test the significance of any improvement between AURA v3.4 and our method (LBE). The method with the highest Dice Coefficient is bolded. Layers with statistically significant differences (α level of 0.001) are marked.

The computational performance of AURA v3.4 and LBE was also compared. Both algorithms were implemented in MATLAB 2018a and run on a four-core 3.1 GHz Linux computer. For our 21-subject test data set, the average computational time per subject for LBE is 93.4 (±7.4) seconds compared with 106 (±4.8) seconds for AURA v3.4. Typically, more than 80 seconds are spent producing the probability maps, which are common to both algorithms. The peak memory usage for our algorithm is 9.7 GB during the random forest classification, compared with 11.5 GB for AURA v3.4 during its graph cut search.

4. Conclusion

In this paper, we proposed a layer boundary evolution method that loosely originates from the fast level set method to replace the commonly used graph based method. We build our pipeline upon the state-of-the-art graph based method [28]: our method takes boundary probability maps from a trained random forest, and the result produced by the proposed method shows excellent performance in terms of both mean absolute boundary error and layer Dice coefficient. In particular, five out of nine boundary estimates are statistically better than the state-of-the-art graph cut method when using the same random forest classifier. In addition, a lower computational cost is required, which makes it an ideal alternative to the graph cut search for boundary refinement after feature classification. The mean absolute boundary error is less than 4 μm for all nine boundaries. Although we are unable to outperform deep learning based methods [47] in terms of speed (since the latter need only a few seconds to run on a GPU), our method provides topologically correct results and subvoxel resolution that most deep learning based methods cannot achieve. In terms of the run times of other methods, we note that OCTRIMA3D [48] has a run time of 28 seconds whereas AURA v2.11 on the same hardware had a run time of 152 seconds [48]. However, AURA v2.11 performed better in terms of mean absolute boundary error.

The data used to develop LBE is of higher quality than is typically acquired in a clinical OCT setting; however, it is similar to the data used to develop the state-of-the-art method [28]. Moreover, the random forest that generates the boundary classification probability maps is the same as in [28] and has been previously shown to be very robust across scanners and data sources [45, 49]. This provides us with some confidence about the general utility of our method across scanning platforms.

Future work will focus on replacing the current random forest classifier with a deep learning model to produce boundary probability maps using data-driven features. The major challenge will be producing a diffuse probability map from a deep network. Deep networks are often quite certain about the position of a boundary, with near-binary probability maps as output; that is, the true uncertainty of the deep network is not fully reflected in the network's output.

Funding

NIH/NEI (R01-EY024655); NIH/NINDS (R01-NS082347).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. S. Saidha, C. Eckstein, and J. N. Ratchford, “Optical coherence tomography as a marker of axonal damage in multiple sclerosis,” Int. J. Clin. Rev. (2010). [CrossRef]  

2. S. Saidha, E. S. Sotirchos, M. A. Ibrahim, C. M. Crainiceanu, J. M. Gelfand, Y. J. Sepah, J. N. Ratchford, J. Oh, M. A. Seigo, S. D. Newsome, L. J. Balcer, E. M. Frohman, A. J. Green, Q. D. Nguyen, and P. A. Calabresi, “Microcystic macular oedema, thickness of the inner nuclear layer of the retina, and disease characteristics in multiple sclerosis: a retrospective study,” The Lancet Neurol. 11, 963–972 (2012). [CrossRef]   [PubMed]  

3. S. Saidha, S. B. Syc, M. K. Durbin, C. Eckstein, J. D. Oakley, S. A. Meyer, A. Conger, T. C. Frohman, S. Newsome, J. N. Ratchford, E. M. Frohman, and P. A. Calabresi, “Visual dysfunction in multiple sclerosis correlates better with optical coherence tomography derived estimates of macular ganglion cell layer thickness than peripapillary retinal nerve fiber layer thickness,” Multiple Scler. J. 17, 1449–1463 (2011). [CrossRef]  

4. H. L. Rao, L. M. Zangwill, R. N. Weinreb, P. A. Sample, L. M. Alencar, and F. A. Medeiros, “Comparison of different spectral domain optical coherence tomography scanning areas for glaucoma diagnosis,” Ophthalmology 117, 1692–1699 (2010). [CrossRef]   [PubMed]  

5. D. C. DeBuc and G. M. Somfai, “Early detection of retinal thickness changes in diabetes using optical coherence tomography,” Med. Sci. Monit. 16, MT15–MT21 (2010).

6. G. Staurenghi, S. Sadda, U. Chakravarthy, and R. F. Spaide, “Proposed Lexicon for Anatomic Landmarks in Normal Posterior Segment Spectral-Domain Optical Coherence Tomography: The IN·OCT Consensus,” Ophthalmology 121, 1572–1578 (2014). [CrossRef]   [PubMed]  

7. S. H. Kang, S. W. Hong, S. K. Im, S. H. Lee, and M. D. Ahn, “Effect of myopia on the thickness of the retinal nerve fiber layer measured by cirrus hd optical coherence tomography,” Investig. Ophthalmol. & Vis. Sci. 51, 4075–4083 (2010). [CrossRef]  

8. T. Alasil, P. A. Keane, J. F. Updike, L. Dustin, Y. Ouyang, A. C. Walsh, and S. R. Sadda, “Relationship between optical coherence tomography retinal parameters and visual acuity in diabetic macular edema,” Ophthalmology 117, 2379–2386 (2010). [CrossRef]   [PubMed]  

9. H. W. Van Dijk, P. H. Kok, M. Garvin, M. Sonka, J. H. DeVries, R. P. Michels, M. E. van Velthoven, R. O. Schlingemann, F. D. Verbraak, and M. D. Abramoff, “Selective loss of inner retinal layer thickness in type 1 diabetic patients with minimal diabetic retinopathy,” Investig. Ophthalmol. & Vis. Sci. 50, 3404–3409 (2009). [CrossRef]  

10. J. C. Bavinger, G. E. Dunbar, M. S. Stem, T. S. Blachley, L. Kwark, S. Farsiu, G. R. Jackson, and T. W. Gardner, “The effects of diabetic retinopathy and pan-retinal photocoagulation on photoreceptor cell function as assessed by dark adaptometry,” Investig. Ophthalmol. & Vis. Sci. 57, 208–217 (2016). [CrossRef]  

11. B. Knoll, J. Simonett, N. J. Volpe, S. Farsiu, M. Ward, A. Rademaker, S. Weintraub, and A. A. Fawzi, “Retinal nerve fiber layer thickness in amnestic mild cognitive impairment: Case-control study and meta-analysis,” Alzheimer’s & Dementia: Diagn. Assess. & Dis. Monit. 4, 85–93 (2016).

12. J. S. Schuman, M. R. Hee, C. A. Puliafito, C. Wong, T. Pedut-Kloizman, C. P. Lin, E. Hertzmark, J. A. Izatt, E. A. Swanson, and J. G. Fujimoto, “Quantification of nerve fiber layer thickness in normal and glaucomatous eyes using optical coherence tomography: a pilot study,” Arch. Ophthalmol. 113, 586–596 (1995). [CrossRef]   [PubMed]  

13. A. Kanamori, M. Nakamura, M. F. Escano, R. Seya, H. Maeda, and A. Negi, “Evaluation of the glaucomatous damage on retinal nerve fiber layer thickness measured by optical coherence tomography,” Am. J. Ophthalmol. 135, 513–520 (2003). [CrossRef]   [PubMed]  

14. B. J. Antony, M. D. Abràmoff, M. Sonka, Y. H. Kwon, and M. K. Garvin, “Incorporation of texture-based features in optimal graph-theoretic approach with application to the 3-D segmentation of intraretinal surfaces in SD-OCT volumes,” in Proceedings of SPIE Medical Imaging (SPIE-MI 2012), San Diego, CA, February 4, vol. 8314, p. 83141G (2012).

15. B. J. Antony, M. S. Miri, M. D. Abràmoff, Y. H. Kwon, and M. K. Garvin, “Automated 3D segmentation of multiple surfaces with a shared hole: segmentation of the neural canal opening in SD-OCT volumes,” in 17th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2014), vol. 8673 of Lecture Notes in Computer Science (SpringerBerlin Heidelberg, 2014), pp. 739–746.

16. B. J. Antony, A. Lang, E. K. Swingle, O. Al-Louzi, A. Carass, S. D. Solomon, P. A. Calabresi, S. Saidha, and J. L. Prince, “Simultaneous Segmentation of Retinal Surfaces and Microcystic Macular Edema in SDOCT Volumes,” in Proceedings of SPIE Medical Imaging (SPIE-MI 2016), San Diego, CA, February 27–March 3, 2016, vol. 9784 (2016), p. 97841C.

17. S. J. Chiu, X. T. Li, P. Nicholas, C. A. Toth, J. A. Izatt, and S. Farsiu, “Automatic segmentation of seven retinal layers in sdoct images congruent with expert manual segmentation,” Opt. Express 18, 19413–19428 (2010). [CrossRef]   [PubMed]  

18. A. Yazdanpanah, G. Hamarneh, B. Smith, and M. Sarunic, “Intra-retinal layer segmentation in optical coherence tomography using an active contour approach,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, (Springer, 2009), pp. 649–656.

19. A. Mishra, A. Wong, K. Bizheva, and D. A. Clausi, “Intra-retinal layer segmentation in optical coherence tomography images,” Opt. Express 17, 23719–23728 (2009). [CrossRef]  

20. I. Ghorbel, F. Rossant, I. Bloch, S. Tick, and M. Paques, “Automated segmentation of macular layers in oct images and quantitative evaluation of performances,” Pattern Recognit. 44, 1590–1603 (2011). [CrossRef]  

21. V. Kajić, B. Považay, B. Hermann, B. Hofer, D. Marshall, P. L. Rosin, and W. Drexler, “Robust segmentation of intraretinal layers in the normal human fovea using a novel statistical model based on texture and shape analysis,” Opt. Express 18, 14730–14744 (2010). [CrossRef]  

22. J. Novosel, K. A. Vermeer, G. Thepass, H. G. Lemij, and L. J. Van Vliet, “Loosely coupled level sets for retinal layer segmentation in optical coherence tomography,” in Biomedical Imaging (ISBI), 2013 IEEE 10th International Symposium on, (Citeseer, 2013), pp. 1010–1013.

23. J. Novosel, G. Thepass, H. G. Lemij, J. F. de Boer, K. A. Vermeer, and L. J. van Vliet, “Loosely coupled level sets for simultaneous 3D retinal layer segmentation in optical coherence tomography,” Med. Image Analysis 26, 146–158 (2015). [CrossRef]  

24. J. Novosel, K. A. Vermeer, J. H. de Jong, Z. Wang, and L. J. van Vliet, “Joint Segmentation of Retinal Layers and Focal Lesions in 3-D OCT Data of Topologically Disrupted Retinas,” IEEE Trans. Med. Imag. 36, 1276–1286 (2017). [CrossRef]  

25. A. Carass, A. Lang, M. Hauser, P. A. Calabresi, H. S. Ying, and J. L. Prince, “Multiple-object geometric deformable model for segmentation of macular OCT,” Biomed. Opt. Express 5, 1062–1074 (2014). [CrossRef]   [PubMed]  

26. M. K. Garvin, M. D. Abramoff, X. Wu, S. R. Russell, T. L. Burns, and M. Sonka, “Automated 3-D intraretinal layer segmentation of macular spectral-domain optical coherence tomography images,” IEEE Transactions on Med. Imaging 28, 1436–1447 (2009). [CrossRef]  

27. A. Lang, A. Carass, E. Sotirchos, P. Calabresi, and J. L. Prince, “Segmentation of retinal OCT images using a random forest classifier,” in Proceedings of SPIE Medical Imaging (SPIE-MI 2013), Orlando, FL, February , 2013, vol. 8669 (2013), p. 86690R. [CrossRef]  

28. A. Lang, A. Carass, M. Hauser, E. S. Sotirchos, P. A. Calabresi, H. S. Ying, and J. L. Prince, “Retinal layer segmentation of macular oct images using boundary classification,” Biomed. Opt. Express 4, 1133–1152 (2013). [CrossRef]   [PubMed]  

29. A. Lang, A. Carass, P. A. Calabresi, H. S. Ying, and J. L. Prince, “An adaptive grid for graph-based segmentation in macular cube OCT,” in Proceedings of SPIE Medical Imaging (SPIE-MI 2014), San Diego, CA, February 15, 2014, vol. 9034 (2014), p. 90340A.

30. A. Lang, A. Carass, O. Al-Louzi, P. Bhargava, H. S. Ying, P. A. Calabresi, and J. L. Prince, “Longitudinal graph-based segmentation of macular OCT using fundus alignment,” in Proceedings of SPIE Medical Imaging (SPIE-MI 2015), Orlando, FL, February 21, 2015, vol. 9413 (2015), p. 94130M. [CrossRef]  

31. A. Lang, A. Carass, O. Al-Louzi, P. Bhargava, S. D. Solomon, P. A. Calabresi, and J. L. Prince, “Combined registration and motion correction of longitudinal retinal OCT data,” in Proceedings of SPIE Medical Imaging (SPIE-MI 2016), San Diego, CA, February 27–March 3, 2016, vol. 9784 (2016), p. 97840X. [CrossRef]  

32. A. Lang, A. Carass, A. K. Bittner, H. S. Ying, and J. L. Prince, “Improving graph-based OCT segmentation for severe pathology in Retinitis Pigmentosa patients,” in Proceedings of SPIE Medical Imaging (SPIE-MI 2017), Orlando, FL, February 11, 2017, vol. 10137 (2017), p. 101371M.

33. L. Fang, D. Cunefare, C. Wang, R. H. Guymer, S. Li, and S. Farsiu, “Automatic segmentation of nine retinal layer boundaries in OCT images of non-exudative AMD patients using deep learning and graph search,” Biomed. Opt. Express 8, 2732–2744 (2017). [CrossRef]   [PubMed]  

34. Y. He, A. Carass, Y. Yun, C. Zhao, B. M. Jedynak, S. D. Solomon, S. Saidha, P. A. Calabresi, and J. L. Prince, “Towards topological correct segmentation of macular OCT from cascaded FCNs,” in Fetal, Infant and Ophthalmic Medical Image Analysis, (Springer, 2017), pp. 202–209. [CrossRef]  

35. Y. Yun, A. Carass, A. Lang, J. L. Prince, and B. J. Antony, “Collaborative SDOCT Segmentation and Analysis Software,” in Proceedings of SPIE Medical Imaging (SPIE-MI 2017), Orlando, FL, February 11, 2017, vol. 10138 (2017), p. 1013813.

36. K. Li, X. Wu, D. Z. Chen, and M. Sonka, “Optimal Surface Segmentation in Volumetric Images - A Graph-Theoretic Approach,” IEEE Trans. Patt. Anal. Mach. Intell. 28, 119–134 (2006). [CrossRef]  

37. Y. Liu, A. Carass, S. D. Solomon, S. Saidha, P. A. Calabresi, and J. L. Prince, “Multi-layer fast level set segmentation for macular OCT,” in Biomedical Imaging (ISBI 2018), 2018 IEEE 15th International Symposium on, (IEEE, 2018), pp. 1445–1448.

38. B. Li and S. T. Acton, “Active contour external force using vector field convolution for image segmentation,” IEEE Transactions on Image Process. 16, 2096–2106 (2007). [CrossRef]  

39. C. Xu and J. L. Prince, “Snakes, shapes, and gradient vector flow,” IEEE Transactions on Image Process. 7, 359–369 (1998). [CrossRef]  

40. Y. Shi and W. C. Karl, “Real-time tracking using level sets,” in Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, vol. 2 (Citeseer, 2005), pp. 34–41.

41. A. Lang, A. Carass, B. M. Jedynak, S. D. Solomon, P. A. Calabresi, and J. L. Prince, “Intensity inhomogeneity correction of macular OCT using N3 and retinal flatspace,” in 13th International Symposium on Biomedical Imaging (ISBI 2016), (2016), pp. 197–200.

42. A. Lang, A. Carass, B. M. Jedynak, S. D. Solomon, P. A. Calabresi, and J. L. Prince, “Intensity inhomogeneity correction of SD-OCT data using macular flatspace,” Med. Image Analysis 43, 85–97 (2018). [CrossRef]  

43. P. Perona and J. Malik, “Scale-space and edge detection using anisotropic diffusion,” IEEE Transactions on pattern analysis and machine intelligence 12, 629–639 (1990). [CrossRef]  

44. E. W. Dijkstra, “A note on two problems in connexion with graphs,” Numer. Math. 1, 269–271 (1959). [CrossRef]  

45. J. Tian, B. Varga, E. Tatrai, P. Fanni, G. M. Somfai, W. E. Smiddy, and D. C. Debuc, “Performance evaluation of automated segmentation software on optical coherence tomography volume data,” J. Biophotonics 9, 478–489 (2016). [CrossRef]   [PubMed]  

46. L. R. Dice, “Measures of the amount of ecologic association between species,” Ecology 26, 297–302 (1945). [CrossRef]  

47. A. G. Roy, S. Conjeti, S. P. K. Karri, D. Sheet, A. Katouzian, C. Wachinger, and N. Navab, “ReLayNet: retinal layer and fluid segmentation of macular optical coherence tomography using fully convolutional networks,” Biomed. Opt. Express 8, 3627–3642 (2017). [CrossRef]   [PubMed]  

48. J. Tian, B. Varga, G. M. Somfai, W.-H. Lee, W. E. Smiddy, and D. Cabrera DeBuc, “Real-Time Automatic Segmentation of Optical Coherence Tomography Volume Data of the Macular Region,” PLoS ONE 10, 1–20 (2015). [CrossRef]  

49. P. Bhargava, A. Lang, O. Al-Louzi, A. Carass, J. L. Prince, P. A. Calabresi, and S. Saidha, “Applying an open-source segmentation algorithm to different OCT devices in Multiple Sclerosis patients and healthy controls: implications for clinical trials,” Multiple Scler. Int. (2015). [CrossRef]  
