
Correction propagation for user-assisted optical coherence tomography segmentation: general framework and application to Bruch’s membrane segmentation


Abstract

Optical coherence tomography (OCT) is a commonly used ophthalmic imaging modality. While OCT has traditionally been viewed cross-sectionally (i.e., as a sequence of B-scans), higher A-scan rates have increased interest in en face OCT visualization and analysis. The recent clinical introduction of OCT angiography (OCTA) has further spurred this interest, with chorioretinal OCTA being predominantly displayed via en face projections. Although en face visualization and quantitation are natural for many retinal features (e.g., drusen and vasculature), it requires segmentation. Because manual segmentation of volumetric OCT data is prohibitively laborious in many settings, there has been significant research and commercial interest in developing automatic segmentation algorithms. While these algorithms have achieved impressive results, the variability of image qualities and the variety of ocular pathologies cause even the most robust automatic segmentation algorithms to err. In this study, we develop a user-assisted segmentation approach, complementary to fully-automatic methods, wherein correction propagation is used to reduce the burden of manually correcting automatic segmentations. The approach is evaluated for Bruch’s membrane segmentation in eyes with advanced age-related macular degeneration.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optical coherence tomography (OCT) is a standard imaging modality in ophthalmology, where it is used for disease detection as well as for monitoring progression and treatment response. Ophthalmic OCT is typically acquired using a raster protocol, wherein a set of B-scans is sequentially collected at different positions on the retina to form an OCT volume. In clinical practice, OCT volumes have traditionally been viewed on a B-scan basis by scrolling through the 2-D cross-sections that comprise the volume. However, increases in A-scan rates have enabled denser, isotropic A-scan sampling, which, in turn, has increased the practicality of en face OCT visualization, wherein 2-D transverse planes or slabs/projections are viewed. Compared to B-scan approaches, there are several advantages of en face viewing and analysis: the transverse extent and spatial distribution of lesions (e.g., drusen) are naturally captured; specific retinal layers can be summarized in a single image, thus enabling rapid review; and disruptions in normal retinal anatomy can be detected by discontinuities in en face data (e.g., drusen-related elevations of the retinal pigment epithelium [RPE]). More recently, en face visualization has become increasingly important in the clinical context of visualizing OCT angiography (OCTA) data [1,2], because the chorioretinal microvasculature is largely oriented along en face surfaces.

While en face analysis can be performed by simply extracting a slab that is perpendicular to the OCT beam, it is often desirable to use a modified slab that follows one or more retinal layers. Extracting such layer-fitted slabs requires segmentation, which is complicated by the natural retinal curvature, retinal layer variations, and distortions introduced by pathology. Thus, retinal layer segmentation is a critical prerequisite for en face OCT/OCTA visualization and quantitation.

Related work for retinal layer segmentation can be roughly partitioned into three approaches. Graph-based approaches either construct a weighted graph and search for the shortest path [3,4] or approximate a surface by optimizing a cost function [5,6]. Model-based approaches use prior shape knowledge and adapt [7,8] or iteratively calculate [9] boundary shapes. Learning-based approaches classify retinal layers by using expert input or hand-crafted features [10,11], or by using end-to-end training schemes such as deep learning [12] to produce boundary segmentations [13–19].

Although automatic layer segmentation algorithms have achieved impressive results, manual segmentation remains the gold standard, particularly for the aforementioned layers and in the presence of severe pathology. Unfortunately, manual segmentation, which is typically performed on a B-scan basis, is prohibitively time consuming, with OCT volumes often comprising hundreds of B-scans. Rather than using an entirely automatic or manual approach, this paper develops a general, user-assisted segmentation framework with the aim of reducing—rather than eliminating—user input [20]. This paper is conceptually divided into two parts. First, we present the general framework for user-assisted segmentation and describe the principles of correction propagation, its key element. Second, we develop and evaluate a particular instantiation of the framework applied to segmentation of Bruch’s membrane in eyes with advanced age-related macular degeneration (AMD)—in particular, geographic atrophy (GA) and choroidal neovascularization (CNV).

2. General framework and correction propagation

2.1 General framework

The proposed user-assisted segmentation framework follows the general workflow presented in Fig. 1. With reference to this figure, the framework consists of three modules: in the first module, the OCT volume is automatically segmented; in the second module, the user identifies regions wherein the automatic segmentation is insufficiently accurate and corrects a subset of B-scans within these regions; and, in the third module, the user corrections are propagated to the other regions of the volume. Note that each of the modules is relatively independent of the particular implementations of the other modules, thus allowing for flexible and extensible usage.


Fig. 1. Our user-assisted segmentation framework is comprised of three modules: (1) initial automatic segmentation; (2) manual correction of segmentation errors over a subset of the volume; and (3) propagation of the segmentation corrections to other regions of the volume. The framework is designed so that each module is relatively independent of the particular implementations of the other modules.


The advantage of the proposed user-assisted segmentation framework is that, through the judicious usage of manual segmentation, correction propagation can greatly reduce the time needed to achieve volumetric segmentations with accuracies approaching those achieved with fully-manual segmentation.

The central component of our user-assisted segmentation framework is the correction propagation step, wherein a partial manual correction is automatically propagated to other uncorrected regions of the volume. While there are many approaches to correction propagation, there are common specifications and functionalities, which we review briefly below.

2.2 Interpolation versus re-segmentation

Correction propagation schemes can be roughly divided into interpolation (inpainting) schemes and re-segmentation schemes. Interpolation schemes perform propagation by interpolating user-corrected segmentations to uncorrected portions of the volume, and can range from simple spline fitting, to partial differential equation based methods (e.g., Laplacian interpolation), to deep-learning approaches. In contrast, re-segmentation schemes re-segment the uncorrected volumes by using the user-corrected segmentations to inform the re-segmentation; for example, by establishing internal boundary conditions to reduce the solution space. Interpolation schemes have the advantage that they typically perform well when the boundary to be segmented is slowly varying (along the direction of propagation) relative to the density of user-corrected segmentations. Moreover, interpolation approaches will typically converge to the true boundary in a predictable manner as the density of user-corrected segmentations increases. A disadvantage is that, for boundaries with rapid spatial variations (e.g., elevations of the RPE in the presence of drusen), interpolation approaches may require prohibitively dense user-corrections. In contrast, re-segmentation approaches can naturally accommodate rapid spatial variations, but may perform poorly when the boundary being segmented is not well visualized (e.g., in cases where the boundary is not visible due to low OCT signal). Of course, hybrid schemes combining interpolation and re-segmentation can be used to balance these advantages and disadvantages.
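
To make the interpolation family concrete, the following is a minimal sketch (not the method developed later in this paper) that propagates boundary positions from a sparse set of user-corrected B-scans to the remaining B-scans by per-A-scan linear interpolation along the B-scan axis; the NumPy implementation and the array layout are our assumptions for illustration.

```python
import numpy as np

def propagate_by_interpolation(boundary, corrected_ks):
    """Interpolation-style correction propagation (illustrative sketch).

    boundary     : (N_A, N_B) array of boundary row positions; the columns listed in
                   `corrected_ks` are trusted (user-corrected), the rest are not.
    corrected_ks : sorted sequence of corrected B-scan indices.
    Returns a (N_A, N_B) array in which every uncorrected column is replaced by the
    linear interpolation of the corrected columns along the B-scan axis.
    """
    corrected_ks = np.asarray(corrected_ks)
    n_a, n_b = boundary.shape
    out = np.empty(boundary.shape, dtype=float)
    all_ks = np.arange(n_b)
    for j in range(n_a):  # each A-scan position is interpolated independently along e_B
        out[j, :] = np.interp(all_ks, corrected_ks, boundary[j, corrected_ks])
    out[:, corrected_ks] = boundary[:, corrected_ks]  # keep the corrected columns exactly
    return out
```

A spline or Laplacian interpolant could be substituted for the linear interpolation without changing the structure of the sketch; re-segmentation schemes, by contrast, would re-run a segmentation algorithm on the image data between the corrected columns.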

2.3 Domain of correction propagation

In general, if the domain of correction propagation is the entire OCT volume, corrections made to a small sub-region of the volume can influence segmentations at all other locations within the volume. While there are situations in which such unconstrained propagation is desirable, it risks creating a “Whac-a-mole” scenario wherein correction propagation generates errors in previously error-free regions. To avoid this, a domain of correction (e.g., a rectangular region) can be specified; outside of this domain, correction propagation either does not occur, or occurs with tighter constraints. The approach of explicitly specifying a domain of correction works well when there are only a few regions that require correction. For numerous, spatially distributed regions of correction, explicitly specifying multiple domains of correction can become overly laborious. This point is elaborated upon in the Discussion section of this paper.

3. User-assisted pipeline for Bruch’s membrane segmentation using graph-cut-based correction propagation

In the sections below, we describe a user-assisted correction propagation scheme for segmenting Bruch’s membrane, the penta-layered structure situated between the choriocapillaris and the RPE. For reference, in normal eyes the RPE-to-choriocapillaris distance is ∼20 µm, as measured with OCT, and Bruch’s membrane thickness is 2–5 µm, as measured with light microscopy [21,22]. Bruch’s membrane segmentation arises in several contexts, including in OCTA-based assessment of the choriocapillaris. The details of our segmentation algorithm are provided in the sections below. Accompanying code is available at https://github.com/MIT-BOIB/CorrectionPropagation/, and mathematical notation is summarized in Table 2 of Appendix I.

To facilitate discussion, we introduce an orthogonal coordinate system $(\mathbf {e}_P, \mathbf {e}_A, \mathbf {e}_B)$, where $\mathbf {e}_P$ points along increasing pixel index (i.e., along the anterior-to-posterior axis); $\mathbf {e}_A$ points along increasing A-scan index (i.e., along one of the transverse directions), and $\mathbf {e}_B$ points along the direction of increasing B-scan index (i.e., along the other, orthogonal, transverse direction). We assume that the input is a motion-corrected OCT volume $\mathbf {V}: \mathcal {D} \to [0, 1]$, where $\mathcal {D} = \mathcal {I} \times \mathcal {J} \times \mathcal {K}$. Here, $\mathcal {I} = \left \{ 0, \ldots , N_P-1 \right \}$, $\mathcal {J} = \left \{ 0, \ldots , N_A-1 \right \}$, and $\mathcal {K}=\left \{0, \ldots , N_B-1\right \}$, where $N_P$, $N_A$, and $N_B$ are the number of pixels per A-scan, the number of A-scans per B-scan, and the number of B-scans per volume, respectively. In this notation, $\mathbf {V}(i,j,k) = \mathbf {V}_{i, j, k}$ denotes the value of the $i$-th pixel of the $j$-th A-scan of the $k$-th B-scan, the restriction $\left .\mathbf {V}\right |_{\mathcal {K} = \left \{ k \right \}}$ denotes the $k$-th OCT cross-section along the $\mathbf {e}_A$ axis, and the restriction $\left .\mathbf {V}\right |_{\mathcal {J} = \left \{ j \right \}}$ denotes the $j$-th OCT cross-section along the $\mathbf {e}_B$ axis. Because the optimal parameter values for our algorithm depend on the OCT instrument and imaging specifications (e.g., axial and transverse resolutions, sampling densities, and data up-sampling), in the following, the algorithm is presented with a variable parameterization. The particular parameter values used in this study are provided in Appendix II.
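
For readers who prefer code to index sets, the following sketch shows one way the above notation could map onto an array; the axis ordering (pixel, A-scan, B-scan) and the use of NumPy are our assumptions for illustration, not a description of the released implementation.

```python
import numpy as np

# Hypothetical dimensions: N_P pixels per A-scan, N_A A-scans per B-scan, N_B B-scans.
N_P, N_A, N_B = 640, 500, 500
V = np.random.rand(N_P, N_A, N_B).astype(np.float32)   # V : D -> [0, 1]

i, j, k = 100, 250, 250
voxel = V[i, j, k]      # V(i, j, k): the i-th pixel of the j-th A-scan of the k-th B-scan
bscan_k = V[:, :, k]    # restriction to K = {k}: the k-th cross-section along e_A (a B-scan)
cross_j = V[:, j, :]    # restriction to J = {j}: the j-th cross-section along e_B
```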

3.1 Module 1: automatic segmentation

The initial automatic layer segmentation is composed of (1) an optimized B-scan graph-cut algorithm, similar to that presented in Chiu et al. [4], which is used to segment the RPE, and (2) a Bruch’s membrane approximation algorithm, which estimates the Bruch’s membrane position from the segmented RPE. The algorithmic details are described below.

3.1.1 Flattening OCT B-scans relative to the RPE

B-scan graph-cut methods minimize the traversed path across the B-scan (i.e., along the $\mathbf {e}_A$ direction) and are therefore sensitive to the curvature of the retina (as it appears on the OCT B-scan), which is attributable to both the physiological retinal curvature and to differences in optical pathlengths. To mitigate this sensitivity, we “flatten” the B-scans to remove bulk retinal curvature. First, each B-scan is denoised with an anisotropic Gaussian filter with standard deviations $(\sigma _{f_i},\sigma _{f_j})$. Then, for the center B-scan, $\left .\mathbf {V}\right |_{\mathcal {K} = \left \{ c \right \}}$, with $c = \lfloor (N_B-1)/2 \rfloor$, the axial position of the RPE, which is assumed to be the brightest layer, is estimated for each A-scan as the position of the pixel having the maximal value along that A-scan. Next, random sampling consensus (RANSAC) [23] is used to fit a polynomial $\gamma ^c_{\mathbf {e}_A}$ of degree $d$ through the set of identified pixel positions (i.e., the presumptive RPE position). Finally, all A-scans of $\left .\mathbf {V}\right |_{\mathcal {K} = \left \{ c \right \}}$ are axially shifted to place the RPE contour at the middle axial pixel position. In particular, the $j$-th A-scan is axially shifted by an amount $\gamma ^c_{\mathbf {e}_A}(j) - \lfloor (N_P-1)/2\rfloor$.
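
A minimal sketch of the central-B-scan flattening step, assuming NumPy/SciPy, a (N_P × N_A) B-scan array, and a simple hand-rolled RANSAC loop; the tolerance and iteration count are illustrative values, not the parameters of the released code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def flatten_center_bscan(bscan, degree=3, n_iters=200, inlier_tol=5, seed=0):
    """Flattening of the central B-scan (illustrative sketch).

    bscan : (N_P, N_A) array, axial (pixel) dimension first, values in [0, 1].
    Returns (shifts, contour): the per-A-scan axial shifts gamma(j) - floor((N_P - 1)/2)
    and the fitted RANSAC polynomial contour gamma(j).
    """
    rng = np.random.default_rng(seed)
    n_p, n_a = bscan.shape
    smoothed = gaussian_filter(bscan, sigma=(4, 2))    # anisotropic denoising (sigma_fi, sigma_fj)
    rpe_rows = smoothed.argmax(axis=0)                 # presumptive RPE: brightest pixel per A-scan
    cols = np.arange(n_a)

    best_inliers, best_coeffs = -1, None
    for _ in range(n_iters):                           # simple RANSAC polynomial fit
        sample = rng.choice(n_a, size=degree + 1, replace=False)
        coeffs = np.polyfit(cols[sample], rpe_rows[sample], degree)
        inliers = np.abs(np.polyval(coeffs, cols) - rpe_rows) <= inlier_tol
        if inliers.sum() > best_inliers:
            best_inliers = inliers.sum()
            best_coeffs = np.polyfit(cols[inliers], rpe_rows[inliers], degree)

    contour = np.polyval(best_coeffs, cols)
    shifts = np.round(contour - (n_p - 1) // 2).astype(int)  # per-A-scan axial shift
    return shifts, contour
```

The bidirectional march to neighboring B-scans described next reuses the same argmax-based search, but restricted to a window of half-size $\Delta_{\mathbf{e}_A}$ around the fitted contour of the previously processed B-scan.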

With the central B-scan flattened, the algorithm proceeds in a bidirectional, multi-threaded march along the $+\mathbf {e}_B$ and $-\mathbf {e}_B$ directions to higher and lower indexed B-scans, respectively. Assuming relative continuity in the $\mathbf {e}_B$ direction—which is a valid assumption under the specification that the input volume is motion-corrected—the search for the brightest layer is constrained by the fitted RPE position of its neighboring B-scan. In particular, the search set for the RPE in the $j$-th A-scan of the $(c \pm n)$-th B-scan is $\left \{\gamma _{\mathbf {e}_A}^{c \pm (n-1)}(j) - \Delta _{\mathbf {e}_A} , \ldots , \gamma _{\mathbf {e}_A}^{c \pm (n-1)}(j)+\Delta _{\mathbf {e}_A} \right \}$, where $\Delta _{\mathbf {e}_A}$ is the half-size of the search space.

Note that with the proposed flattening scheme, the RPE must contain the maximum-intensity pixel in more than 50% of the A-scans of the initially flattened central B-scan. In cases where the retinal nerve fiber layer instead contains the maximum-intensity pixels over large portions of the B-scan, the flattening should be manually re-initialized from a different cross-section. Our framework provides functionality to test and adapt the flattening procedure.

3.1.2 Automatic RPE segmentation via graph-cut

After flattening $\mathbf {V}$, we perform a B-scan-wise RPE segmentation along the $\mathbf {e}_A$ axis by using the graph-cut approach [4,24,25]. Briefly, the graph-cut approach works by constructing, for each B-scan $\left .\mathbf {V}\right |_{\mathcal {K} = \left \{ k \right \}}$, a graph wherein there is one graph vertex for each pixel of the image. Vertices of neighboring pixels are connected, and the edge weights are derived from OCT B-scan information. The graph is then traversed by finding the shortest (weighted) path that connects one side of the B-scan to the other side of the B-scan. We define the graph used to segment the RPE of the $k$-th B-scan, $\left .\mathbf {V}\right |_{\mathcal {K} = \left \{ k \right \}}$, as the ordered triple $\mathcal {G}_k^{\textrm {RPE}} = \left (\mathcal {P}_k^{\textrm {RPE}}, \mathcal {E}_k^{\textrm {RPE}}, w_k^{\textrm {RPE}} \right )$, where $\mathcal {P}_k^{\textrm {RPE}}$ is the set of points/vertices (this set is typically denoted as $\mathcal {V}$, but we avoid this notation because $\mathbf {V}$ has already been used to denote the OCT volume), $\mathcal {E}_k^{\textrm {RPE}} \subset \mathcal {P}_k^{\textrm {RPE}} \times \mathcal {P}_k^{\textrm {RPE}}$ is the set of edges, and $w^{\textrm {RPE}} : \mathcal {E}^{\textrm {RPE}}_k \to \mathbb {R}^+ \cup \left \{+\infty \right \}$ is the edge-weight function. In particular, we let:

$$\mathcal{P}^{\textrm{RPE}}_k = \left\{ \left. p_{i,j} \right| (i, j) \in \mathcal{I} \times \mathcal{J} \right\}$$
$${\mathcal{E}^{\textrm{RPE}}_k} = {\bigcup_{(i, j) \in \mathcal{I} \times \mathcal{J}}} \mathcal{N}^{\textrm{RPE}}\left( p_{i,j} \right)$$
where $p_{i,j} \in \mathcal {P}_k^{\textrm {RPE}}$ is the point corresponding to the volume element $\mathbf {V}_{i, j, k}$ and $\mathcal {N}^{\textrm {RPE}}(p_{i,j})$ is the set of edges defined by:
$$\mathcal{N}^{\textrm{RPE}}(p_{i,j}) = \left\{ \left(p_{i,j}, p_{i-1, j+1} \right) \right\} \cup \left\{\left(p_{i,j}, p_{i, j+1}\right) \right\} \cup \left\{\left(p_{i,j}, p_{i+1, j+1} \right) \right\}$$
For RPE segmentation, we compute the edge-weights by using the OCT B-scan intensities, rather than using the OCT B-scan gradients, as done in Chiu et al. [4]. Specifically, each OCT B-scan $\left .\mathbf {V}\right |_{\mathcal {K} = \left \{ k \right \}}$ is bilaterally filtered, thus resulting in the filtered volume $\mathbf {V}^{\textrm {RPE}}$. Bilateral filtering weights both the spatial distance between pixels ($\sigma _s$) and their intensity difference ($\sigma _r$), thus maintaining edges while smoothing non-edges [26]. The graph weights are then computed as:
$$w^{\textrm{RPE}}_k \left(e_{i,j,m,n} \right) = \mathrm{exp}\left[2 - \left(\mathbf{V}^{\textrm{RPE}}(i,j,k) + \mathbf{V}^{\textrm{RPE}}(m,n,k)\right) + \epsilon\right]$$
where $e_{i,j,m,n} \in \mathcal {E}^{\textrm {RPE}}$ is the edge between graph vertices $p_{i,j}$ and $p_{m,n}$, with $p_{i,j},\; p_{m,n} \in \mathcal {P}_k^{\textrm {RPE}}$. Here, $\epsilon$ denotes a small-valued bias, the addition of which ensures that the graph weights are strictly positive. The exponential weighting scheme is used to prevent shortcuts through pathological features—such as drusen—that cause geometric distortions in the RPE. The shortest path is computed by using Dijkstra’s algorithm, thus yielding the automatic RPE segmentation of the flattened volume $\mathcal {S}^{\textrm {RPE}, f} : \mathcal {J} \times \mathcal {K} \to \mathcal {I}$. A representative RPE segmentation is shown in Fig. 2.
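
The following sketch illustrates the shortest-path computation described above using SciPy's sparse Dijkstra; the virtual source/sink construction, the tiny positive "zero" weights, and the dense Python loops are illustrative simplifications rather than the optimized implementation in the accompanying repository.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

def segment_rpe_bscan(bscan_filtered, eps=1e-3):
    """Shortest-path RPE segmentation of one bilaterally filtered B-scan (sketch).

    bscan_filtered : (N_P, N_A) array with values in [0, 1].
    Returns an array of length N_A with the RPE row index per A-scan. Two virtual
    vertices (source/sink) are connected to the first/last columns with near-zero
    weights so the path may start and end at any row.
    """
    n_p, n_a = bscan_filtered.shape

    def node(i, j):                                     # pixel (i, j) -> graph vertex index
        return i * n_a + j

    src, snk = n_p * n_a, n_p * n_a + 1
    rows, cols, wts = [], [], []
    for j in range(n_a - 1):
        for i in range(n_p):
            for di in (-1, 0, 1):                       # edges to the three right-hand neighbors
                m = i + di
                if 0 <= m < n_p:
                    w = np.exp(2.0 - (bscan_filtered[i, j] + bscan_filtered[m, j + 1]) + eps)
                    rows.append(node(i, j)); cols.append(node(m, j + 1)); wts.append(w)
    for i in range(n_p):                                # near-zero connections to virtual vertices
        rows.append(src); cols.append(node(i, 0)); wts.append(1e-12)
        rows.append(node(i, n_a - 1)); cols.append(snk); wts.append(1e-12)

    g = coo_matrix((wts, (rows, cols)), shape=(n_p * n_a + 2, n_p * n_a + 2)).tocsr()
    _, pred = dijkstra(g, directed=True, indices=src, return_predecessors=True)

    rpe = np.full(n_a, -1, dtype=int)
    v = pred[snk]                                       # back-track the shortest path
    while v != src and v >= 0:
        if v < n_p * n_a:
            i, j = divmod(v, n_a)
            rpe[j] = i
        v = pred[v]
    return rpe
```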


Fig. 2. Example of fully-automatic segmentation of the RPE and Bruch’s membrane in an OCT B-scan that intersects a CNV lesion. Note that, for clarity, the segmentation lines and B-scan are shown in their natural (unflattened) coordinate frames. In B-scans such as this one, where the RPE becomes separated from Bruch’s membrane, the automatic Bruch’s membrane segmentation can be anteriorly shifted from the true Bruch’s membrane position (turquoise arrows).


3.1.3 Automatic Bruch’s membrane segmentation via RPE relaxation

We compute a Bruch’s membrane segmentation by iterative relaxation of the RPE segmentation, $\mathcal {S}^{\textrm {RPE}, f}$, a process that achieves results similar to a convex hull approach [27]. The pseudo-code for the iterative relaxation is presented in Alg. 1 (Appendix III), with the key computations occurring in lines 7 and 8. In line 7, the current Bruch’s membrane segmentation is smoothed by using a mean filter of width $h_m$, thereby flattening elevated sections of the RPE. In line 8, the maximum of the shifted RPE segmentation and the current Bruch’s membrane segmentation is computed, thereby ensuring that the Bruch’s membrane segmentation does not move anterior to the shifted RPE position. The mean filtering and maximum operations are iteratively performed until steady state or until a specified maximum number of iterations is reached. Afterward, the estimated Bruch’s membrane position is posteriorly shifted by $\Delta _{\textrm {BM}}$ to account for the RPE segmentation running through the center of the RPE, rather than along its basement membrane. The result is an automatic Bruch’s membrane segmentation $\mathcal {S}^{\textrm {BM}, f}$ of the flattened volume (Fig. 2).

3.2 Module 2: manual correction

3.2.1 Domain of correction propagation and manual correction

After volumetrically inspecting the quality of the automatic Bruch’s membrane segmentation, $\mathcal {S}^{\textrm {BM}, f}$, the user draws a 2-D bounding box on the $\mathbf {e}_A \times \mathbf {e}_B$ plane (Fig. 3). This selection defines a restricted sub-domain, $\mathcal {D}^{\tilde {R}}$, with $\mathcal {D} \supseteq \mathcal {D}^{\tilde {R}} = \mathcal {I} \times \mathcal {J}^{R} \times \mathcal {K}^{R}$. The $\sim$ of $\tilde {R}$ indicates that the domain is only partially restricted—namely, along the transverse, but not axial, directions. Here, $\mathcal {J}\supseteq \mathcal {J}^{R} = \left \{ n_{\ell }, \ldots , n_{r} \right \}$, where $n_{\ell } \leq n_{r}$ are the indices of the left and right sides of the restricted domain, respectively (i.e., the sides of the restricted domain parallel to the $\mathbf {e}_B$ axis); $\mathcal {K}\supseteq \mathcal {K}^{R} = \left \{n_{b}, \ldots , n_{t}\right \}$ where $n_{b} \leq n_{t}$ are the indices of the bottom and top sides of the restricted domain, respectively (i.e., the sides of the restricted domain parallel to the $\mathbf {e}_A$ axis). The segmentation labels for all border pixels of the sub-domain $\mathcal {D}^{\tilde {R}}$ are assumed to be correct.


Fig. 3. Example of user-assisted segmentation correction within a restricted domain, for the same case as in Fig. 2. (Left panel) En face OCT plane, cropped from a larger field-of-view, that intersects a region of CNV. (Right panel) OCT B-scan, extracted from the en face OCT plane, also intersecting the region of CNV. Note that, for clarity, the segmentation lines and B-scan are shown in their natural (unflattened) coordinate frames. The teal contour corresponds to the automatically segmented internal limiting membrane; the orange contour corresponds to the automatically segmented RPE; the red contour corresponds to the automatically segmented Bruch’s membrane; and the dashed, light-green contour corresponds to the corrected Bruch’s membrane segmentation. The domain of correction, $\mathcal {D}^{\tilde {R}}$, is indicated by the dark-green dashed box in the left panel, the sides of which correspond to the vertical dark-green dashed lines in the OCT B-scan of the right panel. In this example, three OCT B-scans are corrected. Note that the B-scans are only corrected within $\mathcal {D}^{\tilde {R}}$.


With the restricted domain defined, the user then proceeds to manually correct a subset $\mathcal {K}^M \subseteq \mathcal {K}^R$ of the B-scans within this restricted domain, thereby creating a partially corrected segmentation of Bruch’s membrane $\mathcal {S}^{ {\tilde{\textrm C}\rm{BM}}, f} : \mathcal {J}^{R} \times \mathcal {K}^M \to \mathcal {I}$.

3.3 Module 3: correction propagation

3.3.1 Constructing a Bruch’s membrane graph

For re-segmentation of Bruch’s membrane, we construct a second graph, $\mathcal {G}_j^{\textrm {BM}, \tilde {R}} = \left (\mathcal {P}_j^{\textrm {BM},\tilde {R}}, \mathcal {E}_j^{\textrm {BM},\tilde {R}}, w_j^{\textrm {BM},\tilde {R}} \right )$, this time along the $\mathbf {e}_B$ direction. In particular, we let:

$$\mathcal{P}^{\textrm{BM},\tilde{R}}_j = \left\{ \left. p_{i,k} \right| (i, k) \in \mathcal{I} \times \mathcal{K}^{R} \right\}$$
$$\mathcal{E}^{\textrm{BM},\tilde{R}}_j = \mathop{\bigcup}\limits_{(i, k) \in \mathcal{I} \times \mathcal{K}^{R}} \mathcal{N}^{\textrm{BM},\tilde{R}}\left(p_{i,k} \right)$$
where $\mathcal {N}^{\textrm {BM},\tilde {R}}(p_{i,k})$ is the set of edges defined by:
$$\begin{aligned} \mathcal{N}^{\textrm{BM},\tilde{R}}(p_{i,k}) = &\left\{ \left(p_{i,k}, p_{i-3, k+1} \right) \right\} \cup \left\{\left(p_{i,k}, p_{i-2, k+1}\right) \right\} \cup \left\{\left(p_{i,k}, p_{i-1, k+1} \right) \right\} \cup \\ &\left\{ \left(p_{i,k}, p_{i, k+1} \right) \right\} \cup \left\{\left(p_{i,k}, p_{i+1, k+1}\right) \right\} \cup \left\{\left(p_{i,k}, p_{i+2, k+1} \right) \right\} \cup \left\{\left(p_{i,k}, p_{i+3, k+1} \right) \right\} \end{aligned}$$
The weights, $w^{\textrm {BM},\tilde {R}}_j$, are computed using the OCT gradient, rather than the OCT intensity, as was the case for $w^{\textrm {RPE}}_k$. This adaptation ensures that the shortest path along Bruch’s membrane does not incorrectly pass through the RPE, which has relatively high intensities. In particular, we calculate the smoothed axial gradient of the volume, $\mathbf {V}^{\textrm {BM}}$, by convolving each OCT B-scan with a kernel $Q: \mathcal {I} \times \mathcal {J} \to \mathbb {R}$:
$$\left. \mathbf{V}^{\textrm{BM}} \right|_{\mathcal{K} = \left\{ k \right\}} = \frac{1}{2}\left(\textrm{sgn}\left(\left. \mathbf{V} \right|_{\mathcal{K} = \left\{ k \right\}} * Q \right) + 1\right)\circ \left(\left. \mathbf{V} \right|_{\mathcal{K} = \left\{ k \right\}} * Q \right)$$
where $\textrm {sgn}$ denotes the sign (i.e., signum) function, $\circ$ denotes the Hadamard product (i.e., element-wise multiplication), and $*$ denotes convolution. Note that we specify $Q$ so that the gradient is along the $\mathbf {e}_P$ direction, and the smoothing is along the $\mathbf {e}_A$ direction (see Appendix II). Then, the graph weights are set as:
$$w^{\textrm{BM}}_j \left(e_{i,k,m,o} \right) = 2 - \left(\mathbf{V}^{\textrm{BM}}(i,j,k) + \mathbf{V}^{\textrm{BM}}(m,j,o)\right) + \epsilon$$
Note that $\mathbf {V}^{\textrm {BM}}$ takes values in $[0, 1]$, thus ensuring that the computed graph weights are strictly positive.
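
A sketch of the rectified, smoothed axial gradient defined above, assuming the kernel $Q$ listed in Appendix II; the normalization by the kernel's positive mass is our assumption, added so that $\mathbf{V}^{\textrm{BM}}$ lies in $[0, 1]$ as stated.

```python
import numpy as np
from scipy.signal import convolve2d

def bm_gradient_bscan(bscan):
    """Rectified, transversely smoothed axial gradient used for the Bruch's membrane
    graph weights (illustrative sketch). `bscan` is a (N_P, N_A) array in [0, 1]."""
    # Q from Appendix II: a [1, -1] axial difference replicated over three columns,
    # i.e., differentiation along e_P and smoothing along e_A.
    Q = np.array([[1.0, 1.0, 1.0],
                  [-1.0, -1.0, -1.0]])
    grad = convolve2d(bscan, Q, mode="same", boundary="symm")
    grad /= Q[Q > 0].sum()                    # assumed normalization so the result lies in [-1, 1]
    # 0.5 * (sgn(g) + 1) is 1 for g > 0 and 0 for g < 0, so only one gradient polarity survives.
    return 0.5 * (np.sign(grad) + 1.0) * grad
```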

3.3.2 Axial restriction of Bruch’s membrane graph

Before performing the re-segmentation of Bruch’s membrane, we use the manual Bruch’s membrane segmentation to restrict the graph $\mathcal {G}_j^{\textrm {BM},\tilde {R}}$ along the axial dimension, thereby creating $\mathcal {G}_j^{\textrm {BM}, R} = \left (\mathcal {P}_j^{\textrm {BM}, R}, \mathcal {E}_j^{\textrm {BM}, R}, w_j^{\textrm {BM}, R} \right )$. In particular,

$$\mathcal{P}_j^{\textrm{BM}, R}= \mathcal{P}_j^{\textrm{BM}} \cap \mathcal{B}_j$$
Here, $\mathcal {B}_j$ is a pixel “band” that straddles the manually segmented A-scans along the $j$-th OCT cross-section (along the $\mathbf {e}_B$ direction). In particular:
$$\mathcal{B}_j = \mathop{\bigcup}\limits_{ k \in \mathcal{K}^{R}} T\left( p_{L(k), k}\right)$$
where $L$ is a linear interpolating function:
$$L(k) = \left\lfloor (k-k_{-}(k))\frac{g(k_{+}(k))-g(k_{-}(k))}{k_{+}(k) - k_{-}(k)} + g(k_{-}(k))\right\rceil,$$
where $\lfloor \cdot \rceil$ denotes rounding to the nearest integer, $g =\left .\mathcal {S}^{ {\tilde{\textrm C}\rm{BM}}, f}\right |_{\mathcal {J} = \left \{j \right \}}$, and $k_{-}(k)$ and $k_{+}(k)$ return the indices of the manually corrected B-scans that straddle the index $k$:
$$k_{-}(k) = \underset{k^* \in \mathcal{K}^{M_-}_k}{\arg\min} \left|k^* - k \right|$$
$$k_{+}(k) = \underset{k^* \in \mathcal{K}^{M_+}_k}{\arg\min} \left|k^* - k \right|$$
where,
$$\mathcal{K}^{M_-}_k = \left\{\left. k^* \in \mathcal{K}^{M} \right| k^* \leq k \right\} \cup \left\{ n_b \right\}$$
$$\mathcal{K}^{M_+}_k = \left\{\left. k^* \in \mathcal{K}^{M} \right| k^* \geq k \right\} \cup \left\{ n_t \right\}$$
and $T(p_{i,k})$ is a “thickness” function defined as:
$$T(p_{i,k}) = \begin{cases} \left\{p_{i,k}\right\}, & k \in \mathcal{K}^M\\ \mathcal{I}^{\Delta_{\mathcal{B}}}_k(p_{i,k}), & \textrm{else} \end{cases}$$
with
$$\mathcal{I}_k^{\Delta_{\mathcal{B}}}(p_{i,k}) = \mathop{\bigcup}\limits_{i^*\in\mathcal{I},\; |i^*-i| \leq \Delta_{\mathcal{B}}} \left\{ p_{i^*, k}\right\}$$

An illustration of $\mathcal {B}_j$ and $\mathcal {P}_j^{\textrm {BM}, R}$ for $\Delta _{\mathcal {B}} = 2$ is shown in Fig. 4. The edges $\mathcal {E}_j^{\textrm {BM}, R}$ and edge weights $w_j^{\textrm {BM}, R}$ of the restricted graph are formed by restriction of $\mathcal {E}_j^{\textrm {BM}, \tilde {R}}$ and edge weights $w_j^{\textrm {BM},\tilde {R}}$ using $\mathcal {P}_j^{\textrm {BM}, R}$. Namely,

$$\mathcal{E}_j^{\textrm{BM}, R} = \left.\mathcal{E}_j^{\textrm{BM},\tilde{R}}\right|_{\mathcal{P}_j^{\textrm{BM}, R}}$$
and
$$w_j^{\textrm{BM}, R} = \left.w_j^{\textrm{BM},\tilde{R}}\right|_{\mathcal{E}_j^{\textrm{BM}, R}}$$
For each $j \in \mathcal {J}^R$, the shortest paths across $\mathcal {G}_j^{\textrm {BM}, R}$ are computed by using Dijkstra’s algorithm, thus yielding the corrected Bruch’s membrane segmentation on the flattened volume, $\mathcal {S}^{\textrm {CBM}, f}$. As a final step, the segmentation $\mathcal {S}^{\textrm {CBM}, f}$ is “unflattened” by applying the inverse of the flattening function, thereby yielding $\mathcal {S}^{\textrm {CBM}}$.
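
To illustrate how the band $\mathcal{B}_j$ can be constructed in practice, here is a minimal sketch using linear interpolation between the corrected "gate" B-scans and the sub-domain borders; the dictionary-of-sets representation is an illustrative choice, not the data structure of the released code.

```python
import numpy as np

def allowed_rows_band(n_b_idx, n_t_idx, corrected, delta_b, n_p):
    """Axial band B_j of allowed graph vertices along one e_B cross-section (sketch).

    n_b_idx, n_t_idx : first and last B-scan indices of the restricted domain.
    corrected        : dict {k: row} of trusted boundary rows; assumed to contain the
                       border indices n_b_idx and n_t_idx as well as the corrected B-scans.
    delta_b          : band half-thickness Delta_B (in pixels) at uncorrected B-scans.
    n_p              : number of pixels per A-scan.
    Returns a dict mapping each k in the restricted domain to its set of allowed rows.
    """
    gates = np.array(sorted(corrected))
    rows = np.array([corrected[k] for k in gates], dtype=float)
    band = {}
    for k in range(n_b_idx, n_t_idx + 1):
        center = int(round(np.interp(k, gates, rows)))   # L(k): interpolation between gates
        if k in corrected:
            band[k] = {corrected[k]}                     # gate: the path must pass through this pixel
        else:
            lo, hi = max(0, center - delta_b), min(n_p - 1, center + delta_b)
            band[k] = set(range(lo, hi + 1))
    return band
```

Restricting the vertex set of $\mathcal{G}_j^{\textrm{BM},\tilde{R}}$ to the rows in this band (and then pruning edges and weights accordingly) yields $\mathcal{G}_j^{\textrm{BM},R}$ as defined above.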


Fig. 4. Illustration of Bruch’s membrane restriction. The manually segmented points ($k_1$ and $k_2$) and the border points ($n_b$ and $n_t$) act as ‘gates’ through which the shortest path must pass. Between these points, the graph is restricted to the band $\mathcal {B}_j$ with thickness $\Delta _{\mathcal {B}}$ (here: $\Delta _{\mathcal {B}}=2$). $\mathcal {B}_j$ is constructed via linear interpolation between the manual inputs and sub-domain borders. White cells denote unconnected vertices.


4. Evaluation

4.1 Subjects

All data were acquired from eyes imaged at the ophthalmology clinic at the New England Eye Center at Tufts Medical Center (Boston, MA). The study was approved by the institutional review boards at the Massachusetts Institute of Technology and Tufts Medical Center, and written informed consent was obtained from all subjects. The research adhered to the Declaration of Helsinki and the Health Insurance Portability and Accountability Act. All subjects underwent a complete ophthalmic examination at the New England Eye Center, and patients with GA or CNV secondary to AMD were retrospectively identified. In total, 13 eyes (8 with GA, 5 with CNV) were identified for analysis (Table 1).


Table 1. Description of the datasets used to evaluate our proposed user-assisted Bruch’s membrane segmentation algorithm. All cases are $6\, \textrm{mm} \times 6\, \textrm{mm}$ volumes. Superscripts * and § indicate eyes of the same patient (i.e., OD/OS).

4.2 OCT imaging and data processing

All images were acquired using a 400 kHz prototype SS-OCT system operating at a center wavelength of 1050 nm [28]. The full-width-at-half-maximum axial and transverse resolutions in tissue were ∼9 µm and ∼20 µm, respectively. The incident power on the cornea was ∼1.8 mW.

OCT volumes were acquired over a $6\, \textrm{mm}\times 6\, \textrm{mm}$ field-of-view. A single volume consists of 500 A-scans per B-scan, 5 repeated B-scans per location, and 500 locations per volume, thereby corresponding to an isotropic ∼12 µm transverse sampling density. After Fourier transformation of the OCT fringes, the digital axial pixel resolution was ∼4.5 µm. OCT angiography (OCTA) images were computed using pairwise amplitude decorrelation of the 5 repeated B-scans, where the interscan time was ∼1.5 ms [29]. Because OCT volumes suffer from motion artifacts, in this study we adopted the approach of Kraus et al., wherein a pair of orthogonally oriented (“horizontal” and “vertical”) raster volumes are acquired, registered, and merged [30,31]. In addition to minimizing motion artifacts, volume merging also increases the signal-to-noise ratio. GA lesion boundaries were manually traced on the basis of OCT hyper-transmission on sub-RPE slabs formed by projecting the OCT volume from Bruch’s membrane to ∼340 µm posterior to Bruch’s membrane. OCT B-scans and fundus autofluorescence images were consulted in cases where the boundaries were unclear. CNV lesion boundaries were computed by manually tracing the CNV vasculature along the en face planes at each axial depth of the volume, and then taking the maximum boundary extent over all depths.
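
For context, the pairwise amplitude decorrelation referenced above can be sketched as below; this follows the standard decorrelation formulation of [29] without the split-spectrum step, and the registration, noise thresholding, and averaging details used in practice are omitted.

```python
import numpy as np

def pairwise_amplitude_decorrelation(repeats, eps=1e-6):
    """OCTA B-scan from N repeated, co-registered OCT amplitude B-scans (sketch).

    repeats : (N_rep, N_P, N_A) array of amplitude B-scans acquired at one location.
    Returns the mean pairwise decorrelation D in [0, 1]; high D suggests flow.
    """
    a, b = repeats[:-1], repeats[1:]                       # consecutive B-scan pairs
    d = 1.0 - (a * b) / (0.5 * (a ** 2 + b ** 2) + eps)    # decorrelation per pair
    return d.mean(axis=0)
```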

4.3 Evaluation metrics

The accuracy of the correction propagation algorithm for segmenting Bruch’s membrane was evaluated for each of the GA and CNV datasets relative to fully-manual segmentations. In particular, for each dataset, a reader (D. S.) drew a set of rectangular correction domains whose union covered the regions of GA or CNV (Fig. 6). The number and dimensions of the correction domains were selected in order to mimic those that might reasonably be chosen in practice. Four different correction “densities” were evaluated, corresponding to inter-correction spacings of $\Delta k \in \left \{24, 96, 180, 384 \right \}$ µm. That is, a spacing of $\Delta k$ corresponds to manual correction of every $(\Delta k / 12)$-th B-scan. For each correction density, the mean-absolute-difference (MAD) between the fully-manual (ground truth) and the user-assisted segmentation was computed. The MAD between the fully-manual and fully-automatic segmentation was also computed.
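
The MAD metric can be sketched as follows; the conversion to microns via the ∼4.5 µm axial pixel size and the optional mask restricting the evaluation to $\mathcal{D}^{\tilde{R}}$ are our assumptions about how the comparison is carried out.

```python
import numpy as np

def mad_microns(seg_a, seg_b, axial_pixel_um=4.5, mask=None):
    """Mean absolute difference (and its standard deviation) between two boundary
    segmentations, in microns. seg_a, seg_b : (N_A, N_B) arrays of boundary row indices;
    `mask` optionally restricts the evaluation to a sub-domain such as the correction domain."""
    diff = np.abs(seg_a.astype(float) - seg_b.astype(float)) * axial_pixel_um
    if mask is not None:
        diff = diff[mask]
    return diff.mean(), diff.std()
```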

4.4 Parameter selection

The parameter $\Delta _{\mathcal {B}}$ was optimized by using cases G7 and C3, which were selected on the basis of having the least accurate fully-automatic segmentations. In particular, a grid search for $\Delta _{\mathcal {B}} \in \left \{1, 2, 3, 4, 5 \right \}$ was performed. Other parameters were subjectively determined. All tunable parameters are listed in Appendix II.

5. Results

Computed over the entire field-of-view, the MAD $\pm$ $\sigma _{\textrm {MAD}}$ between the fully-automatic Bruch’s membrane segmentations and the fully-manual segmentations was $5.5 \pm 6.4$ µm for the GA dataset and $6.9 \pm 9.7$ µm for the CNV dataset. Considering only the domain of correction, $\mathcal {D}^{\tilde {R}}$, the results of the user-assisted correction propagation are summarized in Fig. 5. The spatial distributions of the segmentation errors before and after correction propagation are shown in Fig. 6 for example GA and CNV cases.


Fig. 5. Bar chart showing the user-assisted segmentation results for the GA and CNV datasets evaluated over $\mathcal {D}^{\tilde {R}}$; the dashed horizontal line indicates the axial digital (pixel) resolution, which is half of the axial optical resolution. MAD and $\sigma _{\textrm {MAD}}$ exhibit a monotonic decrease with an increasing number of corrected B-scans. The first bar of each dataset illustrates the MAD for the fully-automatic segmentation.



Fig. 6. Illustration of user-assisted Bruch’s membrane segmentation in an eye with GA (top row) and an eye with CNV (bottom row). For all panels, correction propagation was performed with a correction density corresponding to $\Delta k = 180$ µm. Note that, for clarity, the segmentation lines and B-scans are shown in their natural (unflattened) coordinate frames. (a) OCT B-scans extracted from the locations of the dashed lines in column-b and column-c. For each panel, the teal contour corresponds to the automatically segmented internal limiting membrane; the orange contour corresponds to the automatically segmented RPE; and the green lines correspond to the manually segmented (ground truth) Bruch’s membrane. In the panels labelled “Fully-Automatic,” the red contours correspond to the fully-automatic Bruch’s membrane segmentation. In panels labelled “User-Assisted,” the red contours correspond to the user-assisted segmentation achieved via correction propagation. (b) Signed-difference maps between the fully-automatic segmentations and fully-manual segmentations. (c) Signed-difference maps between the user-assisted segmentations and the fully-manual segmentations. The dark-green contours in the panels of column-b and column-c correspond to lesion boundaries. The black rectangles in the panels of column-b and column-c correspond to the domains of correction, $\mathcal {D}^{\tilde {R}}$.


6. Discussion

This work presents a general framework for user-assisted segmentation via correction propagation. The utility of this framework is illustrated by correcting the Bruch’s membrane segmentation in a small case series of eyes with GA and CNV. The evaluation demonstrates that segmentation accuracies can be substantially improved by correcting segmentations in only a subset of B-scans within the regions of error—in this study, the areas underlying regions of GA and CNV.

As noted in Section 3, one of the contexts in which accurate segmentation of Bruch’s membrane is required is in OCTA-based assessment of the choriocapillaris. Thus, it is interesting to examine the impact of user-assisted segmentation with respect to this application. Figure 7 shows example en face OCTA images of the choriocapillaris that correspond to the same GA and CNV eyes of Fig. 6. These en face OCTA images were formed by median projection of their respective OCTA volumes from Bruch’s membrane to 25 µm posterior to Bruch’s membrane. The most notable differences between the fully-automatic and user-assisted segmentations occur within the regions of GA and CNV. In the region of GA, the fully-automatic segmentation is posteriorly shifted relative to the fully-manual (and user-assisted) segmentations, which results in erroneous inclusions of larger choroidal vasculature. In the region of CNV, the fully-automatic segmentation is anteriorly shifted relative to the fully-manual (and user-assisted) segmentations, which results in both erroneous exclusions of CC vasculature and erroneous inclusions of CNV vasculature. Although not the motivation for this work, another important application of accurate Bruch’s membrane segmentation is in the detection, visualization, and analysis of CNV vasculature. An example of the effect of user-assisted segmentation correction on the visualization of CNV vascular patterning is given in Fig. 8. In future studies, we expect to evaluate the impact of user-assisted segmentation in the detection of sub-clinical CNV in asymptomatic eyes with intermediate AMD, the presence of which confers a marked increase in exudation risk [32,33].


Fig. 7. Effect of user-assisted segmentation on choriocapillaris OCTA slabs. The panels of the top row correspond to the full 6 mm $\times$ 6 mm fields-of-view, and the panels of the bottom row correspond to enlargements of those of the top row. All en face OCTA slabs were formed via median projection of the OCTA volume from Bruch’s membrane to ∼25 µm immediately posterior to Bruch’s membrane. Lesion boundaries are outlined in teal. User-assisted segmentation was performed as described in Fig. 6. (a) Choriocapillaris OCTA slab of an eye with GA (same as in Fig. 6) generated by using a fully-automatic segmentation. (b) Choriocapillaris OCTA slab of the same GA eye generated by using the user-assisted segmentation. (c) Choriocapillaris OCTA slab of an eye with CNV (same as in Fig. 6) generated by using a fully-automatic segmentation. (d) Choriocapillaris OCTA slab of the same CNV eye generated by using the user-assisted segmentation. Comparing panel-a and panel-b, the dominant effect of the user-assisted segmentation is a reduced appearance of larger choroidal vasculature within the region of atrophy (e.g., yellow arrow within teal outline). This reduction is a consequence of the fully-automatic segmentation being posteriorly shifted relative to the fully-manual (and user-assisted) segmentation, as illustrated in Fig. 6. Even beyond the GA margin, the choriocapillaris OCTA signal appears lower with the user-assisted segmentation than with the fully-automatic segmentation (e.g., yellow arrow beyond teal outline). Comparing panel-c and panel-d, the user-assisted segmentation results in a finer CC/CNV patterning, which, again, is particularly noticeable within the lesion margins. In some regions, the user-assisted segmentation results in higher choriocapillaris OCTA signals (e.g., red arrows); in others, the user-assisted segmentation results in lower choriocapillaris OCTA signals (e.g., yellow arrow).



Fig. 8. Effect of user-assisted segmentation on visualization of a CNV lesion. User-assisted segmentation was performed as described in Fig. 6. All en face OCTA slabs were formed via mean projection of the OCTA volume from the automatically segmented RPE to Bruch’s membrane, and therefore correspond to the type-I lesion component. (a) CNV OCTA slab generated by using a fully-automatic segmentation. (b) CNV OCTA slab of the same CNV eye generated by using the user-assisted correction. (c, d) Enlargements of panel-a and panel-b, respectively. When comparing panel-c and panel-d, several regions that appear vessel-free with the fully-automatic segmentation show vessels with the user-assisted segmentation (e.g., red arrows).


In this work, we opted to constrain the correction propagation to within a user-defined domain. This choice was largely motivated by our observation that, very often, the automatic segmentation was accurate except within regions of severe pathology. Thus, by constraining the correction, we eliminate the risk of erroneously adjusting the already correct regions outside the areas of pathology. The cost of using a constrained domain of correction is relatively small, since bounding boxes (or other regions) can be rapidly defined on en face images. In cases where segmentation errors are caused by a large number of distributed lesions, such as drusen, defining domains of correction that bound each lesion becomes impractical. However, in this case, the domain of correction can simply be taken as the entire field-of-view, and the algorithm reverts to the domain-free approach. Another potential situation in which the domain approach becomes ineffective is when the lesions disrupting the segmentation are not well visualized in a single en face projection without accurate segmentation, thereby complicating the process of defining the domain(s) of correction. Small drusen on a curved retina are one such example. One possible approach for such lesions is to draw the domains of correction using an ortho-plane viewer, though this increases the time and complexity of the analysis. Another approach is, as before, simply to take the domain of correction to be the entire field-of-view.

Our decision to use a constrained graph-cut method to propagate the user corrections results in limitations related to how the correction information is propagated to other regions of the volume. Specifically, the effect that correcting one B-scan has on the re-segmentation of B-scans far away from that corrected B-scan can be quite weak. This is particularly true when correctly segmented, pathology-free regions separate two lesions with incorrect automatic segmentations. Consequently, if this strategy were used to correct segmentation errors caused by distributed, discrete lesions, such as drusen, it would likely be necessary to have at least one corrected B-scan passing through each druse. In cases where there are tens-to-hundreds of drusen, this can become unwieldy, requiring almost every B-scan to be corrected. Adaptive approaches, for example those that learn graph-cut weights or other parameters from the user corrections, may help avoid or reduce such drawbacks.

For each volume, manual segmentation of one layer (e.g., Bruch’s membrane) took $\sim 150$ minutes ($\sim 18$ seconds per B-scan), whereas automatic segmentation of three layers (ILM, RPE, and Bruch’s membrane) took $\sim 3$ minutes. When considering the time-savings of our user-assisted approach, it is important to note that the savings are inversely proportional to $\Delta k$. For example, with $\Delta k = 12$ µm, every B-scan would require manual correction, resulting in no time-savings; for $\Delta k = 384$ µm, every 32nd B-scan would require manual correction, thus resulting in $\sim 1/32$ of the fully-manual segmentation time (plus 3 minutes for the initial automatic segmentation).

This study has several limitations. First, the number of evaluated cases was small, thereby limiting the extrapolation of our results to larger, more varied patient cohorts. Second, our OCT data were volumetrically motion-corrected, thereby resulting in a continuous Bruch’s membrane in both of the transverse directions. OCT data generated with other motion-correction strategies, such as eye-tracking, may contain axial discontinuities along the $\mathbf {e}_B$ direction. Such discontinuities require more relaxed graph restrictions (i.e., a larger $\Delta _{\mathcal {B}}$), thus potentially necessitating denser B-scan corrections. Similarly, for OCT data that are not motion-corrected, eye motion, either in the axial or transverse directions, would result in discontinuities in Bruch’s membrane and would require a more relaxed graph restriction. Fortunately, all commercial OCTA instruments use some form of motion-correction. Relaxation of the graph restriction would also be required in other contexts. In particular, in this study we only evaluated the performance of our correction propagation algorithm in segmenting Bruch’s membrane. While Bruch’s membrane segmentation can be challenging, in that it is not always well visualized on OCT, it has the advantage that it is relatively continuous, even in regions of pathology. This contrasts with other layers, such as the RPE, which can undergo sharp deformations. Further studies are needed to understand how our proposed methods would extend to such layers.

7. Conclusion and outlook

This work presents a general user-assisted segmentation scheme that utilizes correction propagation to reduce the labor of correcting automatic segmentations of volumetric OCT data. A particular instantiation of this framework is provided in the form of a graph-cut-based correction propagation algorithm for correcting segmentations of Bruch’s membrane. The efficacy of this algorithm is evaluated in a small case series of eyes with GA and CNV. Examples of the impact of our proposed algorithm on OCTA-based assessment of the choriocapillaris are also provided.

Appendix I.


Table 2. Mathematical notation sorted by occurrence in the manuscript.

Appendix II.

Please note that the parameters used in this study were determined by a grid-search on our data and may need to be adapted for different OCT devices and/or volume resolutions. Note also that the OCT volumes of this study had a digital resolution of 12 µm$\times$12 µm$\times$4.5 µm. Parameters that are dependent on the digital resolutions of the OCT volumes are marked with a (*).

$\sigma _{f_i}$(*)= 4 pixels (${18}$ µm)
$\sigma _{f_j}$(*)= 2 pixels (${24}$ µm)
$\sigma _s$(*)= 5 pixels (${22.5}$ µm axial; ${60}$ µm transverse)
$\sigma _r$ = 5
$d$ $= 3$
$Q$ $= [[1,-1]^T, [1,-1]^T,[1,-1]^T]$
$\epsilon$ $= 0.001$
$\textrm {MAXITER}$ $= 50$
$h_m$(*)= 50 pixels (${600}$ µm)
$\textrm {RELAXSTEP}$ $= 1$
$\Delta _{\mathbf {e}_{A}}$(*)= 2 pixels ($9$ µm)
$\Delta _{\textrm {BM}}$(*)= 4 pixels ($18$ µm)
$\Delta _{\textrm {min}}$ $= 0.05$
$\Delta _{\mathcal {B}}$(*) $= \begin {cases} 3 \; \textrm {pixels} \; (13.5\,\mathrm{\mu} \rm{m}), & \textrm{if }\Delta k \geq 15 \\ 1 \; \textrm {pixel} \; (4.5\, \mathrm{\mu}\rm{m}), & \textrm{otherwise}\end {cases}$

Appendix III.

[Algorithm 1: pseudo-code for the iterative relaxation of the RPE segmentation described in Section 3.1.3; rendered as an image (boe-11-5-2830-i001) in the published version.]
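
Because Alg. 1 appears only as an image in the published version, the following is a minimal Python sketch of the relaxation loop described in Section 3.1.3; the direction of the RPE shift, the 1-D mean filtering along $\mathbf{e}_A$, and the stopping rule are our assumptions about the pseudo-code, not a transcription of it.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def relax_rpe_to_bm(rpe, h_m=50, max_iter=50, relax_step=1, delta_min=0.05, delta_bm=4):
    """Estimate Bruch's membrane from an RPE segmentation by iterative relaxation (sketch).

    rpe : (N_A, N_B) array of RPE row indices (larger row index = more posterior).
    Each iteration mean-filters the current estimate (flattening anterior elevations) and
    clamps it so it never moves anterior to the slightly relaxed RPE; the converged
    estimate is finally shifted posteriorly by delta_bm pixels.
    """
    bm = rpe.astype(float).copy()
    for _ in range(max_iter):
        prev = bm
        bm = uniform_filter(bm, size=(h_m, 1), mode="nearest")  # mean filter of width h_m along e_A
        bm = np.maximum(bm, rpe - relax_step)                   # do not move anterior to the shifted RPE
        if np.mean(np.abs(bm - prev)) < delta_min:              # steady state
            break
    return np.round(bm + delta_bm).astype(int)                  # RPE center -> basement membrane offset
```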

Funding

National Eye Institute (R01-EY011289); Research to Prevent Blindness; Champalimaud Vision Award; Beckman-Argyros Award in Vision Research; Retina Research Foundation Awards.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. A. H. Kashani, C.-L. Chen, J. K. Gahm, F. Zheng, G. M. Richter, P. J. Rosenfeld, Y. Shi, and R. K. Wang, “Optical coherence tomography angiography: A comprehensive review of current methods and clinical applications,” Prog. Retinal Eye Res. 60, 66–100 (2017). [CrossRef]  

2. R. F. Spaide, J. G. Fujimoto, N. K. Waheed, S. R. Sadda, and G. Staurenghi, “Optical coherence tomography angiography,” Prog. Retinal Eye Res. 64, 1–55 (2018). [CrossRef]  

3. J. Tian, B. Varga, G. M. Somfai, W.-H. Lee, W. E. Smiddy, and D. C. DeBuc, “Real-time automatic segmentation of optical coherence tomography volume data of the macular region,” PLoS One 10(8), e0133908 (2015). [CrossRef]  

4. S. J. Chiu, X. T. Li, P. Nicholas, C. A. Toth, J. A. Izatt, and S. Farsiu, “Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation,” Opt. Express 18(18), 19413–19428 (2010). [CrossRef]  

5. K. Li, X. Wu, D. Z. Chen, and M. Sonka, “Optimal surface segmentation in volumetric images-a graph-theoretic approach,” IEEE Trans. Pattern Anal. Mach. Intell. 28(1), 119–134 (2006). [CrossRef]  

6. P. A. Dufour, L. Ceklic, H. Abdillahi, S. Schroder, S. De Dzanet, U. Wolf-Schnurrbusch, and J. Kowal, “Graph-based multi-surface segmentation of OCT data using trained hard and soft constraints,” IEEE Trans. Med. Imaging 32(3), 531–543 (2013). [CrossRef]  

7. A. Yazdanpanah, G. Hamarneh, B. Smith, and M. Sarunic, “Intra-retinal layer segmentation in optical coherence tomography using an active contour approach,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, (Springer, 2009), pp. 649–656.

8. S. Niu, L. de Sisternes, Q. Chen, T. Leng, and D. L. Rubin, “Automated geographic atrophy segmentation for SD-OCT images using region-based cv model via local similarity factor,” Biomed. Opt. Express 7(2), 581–600 (2016). [CrossRef]  

9. L. de Sisternes, G. Jonna, J. Moss, M. F. Marmor, T. Leng, and D. L. Rubin, “Automated intraretinal segmentation of SD-OCT images in normal and age-related macular degeneration eyes,” Biomed. Opt. Express 8(3), 1926–1949 (2017). [CrossRef]  

10. A. Lang, A. Carass, M. Hauser, E. S. Sotirchos, P. A. Calabresi, H. S. Ying, and J. L. Prince, “Retinal layer segmentation of macular OCT images using boundary classification,” Biomed. Opt. Express 4(7), 1133–1152 (2013). [CrossRef]  

11. B. J. Antony, M. D. Abràmoff, M. M. Harper, W. Jeong, E. H. Sohn, Y. H. Kwon, R. Kardon, and M. K. Garvin, “A combined machine-learning and graph-based framework for the segmentation of retinal surfaces in SD-OCT volumes,” Biomed. Opt. Express 4(12), 2712–2728 (2013). [CrossRef]  

12. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521(7553), 436–444 (2015). [CrossRef]  

13. L. Fang, D. Cunefare, C. Wang, R. H. Guymer, S. Li, and S. Farsiu, “Automatic segmentation of nine retinal layer boundaries in OCT images of non-exudative amd patients using deep learning and graph search,” Biomed. Opt. Express 8(5), 2732–2744 (2017). [CrossRef]  

14. M. Chen, J. Wang, I. Oguz, B. L. VanderBeek, and J. C. Gee, “Automated segmentation of the choroid in EDI-OCT images with retinal pathology using convolution neural networks,” in Fetal, Infant and Ophthalmic Medical Image Analysis, (Springer, 2017), pp. 177–184.

15. X. Sui, Y. Zheng, B. Wei, H. Bi, J. Wu, X. Pan, Y. Yin, and S. Zhang, “Choroid segmentation from optical coherence tomography with graph-edge weights learned from deep convolutional neural networks,” Neurocomputing 237, 332–341 (2017). [CrossRef]  

16. F. G. Venhuizen, B. van Ginneken, B. Liefers, M. J. van Grinsven, S. Fauser, C. Hoyng, T. Theelen, and C. I. Sánchez, “Robust total retina thickness segmentation in optical coherence tomography images using convolutional neural networks,” Biomed. Opt. Express 8(7), 3292–3316 (2017). [CrossRef]  

17. A. G. Roy, S. Conjeti, S. P. K. Karri, D. Sheet, A. Katouzian, C. Wachinger, and N. Navab, “ReLayNet: retinal layer and fluid segmentation of macular optical coherence tomography using fully convolutional networks,” Biomed. Opt. Express 8(8), 3627–3642 (2017). [CrossRef]  

18. M. Pekala, N. Joshi, T. A. Liu, N. M. Bressler, D. C. DeBuc, and P. Burlina, “Deep learning based retinal OCT segmentation,” Comput. Biol. Med. 114, 103445 (2019). [CrossRef]  

19. A. Shah, L. Zhou, M. D. Abrámoff, and X. Wu, “Multiple surface segmentation using convolution neural nets: application to retinal layer segmentation in OCT images,” Biomed. Opt. Express 9(9), 4509–4526 (2018). [CrossRef]  

20. D. Stromer, “Non-invasive imaging methods for digital humanities, medicine, and quality assessment,” Ph.D. thesis, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) (2019).

21. H. Zhou, Y. Dai, G. Gregori, P. R. Rosenfeld, J. L. Duncan, D. M. Schwartz, and R. K. Wang, “Automated morphometric measurement of the retinal pigment epithelium complex and choriocapillaris using swept source OCT,” Biomed. Opt. Express 11(4), 1834–1850 (2020). [CrossRef]  

22. R. S. Ramrattan, T. L. van der Schaft, C. M. Mooy, W. De Bruijn, P. Mulder, and P. De Jong, “Morphometric analysis of Bruch’s membrane, the choriocapillaris, and the choroid in aging,” Invest. Ophthalmol. Visual Sci. 35, 2857–2864 (1994).

23. M. A. Fischler and R. C. Bolles, “Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography,” Commun. ACM 24(6), 381–395 (1981). [CrossRef]  

24. S. J. Chiu, J. A. Izatt, R. V. O’Connell, K. P. Winter, C. A. Toth, and S. Farsiu, “Validated automatic segmentation of AMD pathology including drusen and geographic atrophy in SD-OCT images,” Invest. Ophthalmol. Visual Sci. 53(1), 53–61 (2012). [CrossRef]  

25. M. K. Garvin, M. D. Abràmoff, R. Kardon, S. R. Russell, X. Wu, and M. Sonka, “Intraretinal layer segmentation of macular optical coherence tomography images using optimal 3-D graph search,” IEEE Trans. Med. Imaging 27(10), 1495–1505 (2008). [CrossRef]  

26. C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images,” in Proceedings of the Sixth International Conference on Computer Vision, (IEEE Computer Society, Washington, DC, USA, 1998), ICCV ’98, p. 839.

27. Z. Sun, H. Chen, F. Shi, L. Wang, W. Zhu, D. Xiang, C. Yan, L. Li, and X. Chen, “An automated framework for 3D serous pigment epithelium detachment segmentation in SD-OCT images,” Sci. Rep. 6(1), 21739 (2016). [CrossRef]  

28. W. Choi, B. Potsaid, V. Jayaraman, B. Baumann, I. Grulkowski, J. J. Liu, C. D. Lu, A. E. Cable, D. Huang, J. S. Duker, and J. G. Fujimoto, “Phase-sensitive swept-source optical coherence tomography imaging of the human retina with a vertical cavity surface-emitting laser light source,” Opt. Lett. 38(3), 338–340 (2013). [CrossRef]  

29. Y. Jia, O. Tan, J. Tokayer, B. Potsaid, Y. Wang, J. J. Liu, M. F. Kraus, H. Subhash, J. G. Fujimoto, J. Hornegger, and D. Huang, “Split-spectrum amplitude-decorrelation angiography with optical coherence tomography,” Opt. Express 20(4), 4710–4725 (2012). [CrossRef]  

30. M. F. Kraus, B. Potsaid, M. A. Mayer, R. Bock, B. Baumann, J. J. Liu, J. Hornegger, and J. G. Fujimoto, “Motion correction in optical coherence tomography volumes on a per A-scan basis using orthogonal scan patterns,” Biomed. Opt. Express 3(6), 1182–1199 (2012). [CrossRef]  

31. M. F. Kraus, J. J. Liu, J. Schottenhamml, C.-L. Chen, A. Budai, L. Branchini, T. Ko, H. Ishikawa, G. Wollstein, J. Schuman, J. S. Duker, J. G. Fujimoto, and J. Hornegger, “Quantitative 3D-OCT motion correction with tilt and illumination correction, robust similarity measure and regularization,” Biomed. Opt. Express 5(8), 2591–2613 (2014). [CrossRef]  

32. J. R. de Oliveira Dias, Q. Zhang, J. M. Garcia, F. Zheng, E. H. Motulsky, L. Roisman, A. Miller, C.-L. Chen, S. Kubach, L. de Sisternes, M. K. Durbin, W. Feuer, R. K. Wang, G. Gregori, and P. J. Rosenfeld, “Natural history of subclinical neovascularization in nonexudative age-related macular degeneration using swept-source OCT angiography,” Ophthalmology 125(2), 255–266 (2018). [CrossRef]  

33. L. Roisman, Q. Zhang, R. K. Wang, G. Gregori, A. Zhang, C.-L. Chen, M. K. Durbin, L. An, P. F. Stetson, G. Robbins, A. Miller, F. Zheng, and P. J. Rosenfeld, “Optical coherence tomography angiography of asymptomatic neovascularization in intermediate age-related macular degeneration,” Ophthalmology 123(6), 1309–1319 (2016). [CrossRef]  



Figures (8)

Fig. 1.
Fig. 1. Our user-assisted segmentation framework is comprised of three modules: (1) initial automatic segmentation; (2) manual correction of segmentation errors over a subset of the volume; and (3) propagation of the segmentation corrections to other regions of the volume. The framework is designed so that each module is relatively independent of the particular implementations of the other modules.
Fig. 2. Example of fully-automatic segmentation of the RPE and Bruch’s membrane in an OCT B-scan that intersects a CNV lesion. Note that, for clarity, the segmentation lines and B-scan are shown in their natural (unflattened) coordinate frames. In B-scans such as this one, where the RPE becomes separated from Bruch’s membrane, the automatic Bruch’s membrane segmentation can be anteriorly shifted from the true Bruch’s membrane position (turquoise arrows).
Fig. 3. Example of user-assisted segmentation correction within a restricted domain, for the same case as in Fig. 2. (Left panel) En face OCT plane, cropped from a larger field-of-view, that intersects a region of CNV. (Right panel) OCT B-scan, extracted from the en face OCT plane, also intersecting the region of CNV. Note that, for clarity, the segmentation lines and B-scan are shown in their natural (unflattened) coordinate frames. The teal contour corresponds to the automatically segmented internal limiting membrane; the orange contour corresponds to the automatically segmented RPE; the red contour corresponds to the automatically segmented Bruch’s membrane; and the dashed, light-green contour corresponds to the corrected Bruch’s membrane segmentation. The domain of correction, $\mathcal {D}^{\tilde {R}}$, is indicated by the dark-green dashed box in the left panel, the sides of which correspond to the vertical dark-green dashed lines in the OCT B-scan of the right panel. In this example, three OCT B-scans are corrected. Note that the B-scans are only corrected within $\mathcal {D}^{\tilde {R}}$.
Fig. 4. Illustration of Bruch’s membrane restriction. The manually segmented points ($k_1$ and $k_2$) and the border points ($n_b$ and $n_t$) act as ‘gates’ through which the shortest path must pass. Between these points, the graph is restricted to the band $\mathcal {B}_j$ with thickness $\Delta _{\mathcal {B}}$ (here: $\Delta _{\mathcal {B}}=2$). $\mathcal {B}_j$ is constructed via linear interpolation between the manual inputs and sub-domain borders. White cells denote unconnected vertices.
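As a rough code illustration of the band restriction sketched in Fig. 4, the snippet below builds a boolean vertex mask from the gate positions. It is a minimal sketch under assumed conventions (gates sorted by B-scan index and including the sub-domain borders); all names are hypothetical rather than taken from the authors' implementation.

```python
import numpy as np

def build_band_mask(n_rows, n_bscans, gate_rows, gate_cols, delta_b=2):
    """Boolean mask of connected vertices for one slice (cf. Fig. 4).

    gate_cols : sorted B-scan indices of the 'gates' (manual corrections plus
                the two sub-domain borders n_b and n_t).
    gate_rows : axial (row) positions of those gates.
    delta_b   : band half-thickness (Delta_B in the figure).
    """
    mask = np.zeros((n_rows, n_bscans), dtype=bool)
    # Piecewise-linear interpolation of the gate positions across B-scans
    # (the L(k) construction between manual inputs and sub-domain borders).
    centerline = np.interp(np.arange(n_bscans), gate_cols, gate_rows)
    gate_set = {int(k) for k in gate_cols}
    for k in range(n_bscans):
        row = int(round(centerline[k]))
        if k in gate_set:
            mask[row, k] = True                 # gates are single-point columns
        else:
            lo = max(row - delta_b, 0)
            hi = min(row + delta_b, n_rows - 1)
            mask[lo:hi + 1, k] = True           # band of thickness Delta_B elsewhere
    return mask
```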
Fig. 5. Bar chart showing the user-assisted segmentation results for the GA and CNV datasets evaluated over $\mathcal {D}^{\tilde {R}}$; the dashed horizontal line indicates the axial digital (pixel) resolution, which is half of the axial optical resolution. MAD and $\sigma _{\textrm {MAD}}$ exhibit a monotonic decrease with an increasing number of corrected B-scans. The first bar of each dataset illustrates the MAD for the fully-automatic segmentation.
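For reference, the MAD metric plotted in Fig. 5 can be computed along the following lines. This is a sketch only; in particular, treating $\sigma_{\textrm{MAD}}$ as the standard deviation of per-eye MADs is an assumption, not necessarily the paper's exact definition.

```python
import numpy as np

def mad_um(seg, truth, mask, axial_px_um):
    """Mean absolute difference (MAD), in micrometers, between two surfaces.

    seg, truth : (n_bscans, n_ascans) axial surface positions, in pixels.
    mask       : boolean array selecting the evaluation domain (e.g., the correction domain).
    """
    return float(np.mean(np.abs(seg - truth)[mask]) * axial_px_um)

# Hypothetical dataset summary; sigma_MAD is assumed here to be the standard
# deviation of the per-eye MADs.
# mads = [mad_um(s, t, m, axial_px_um) for (s, t, m) in eyes]
# mad_mean, sigma_mad = np.mean(mads), np.std(mads)
```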
Fig. 6. Illustration of user-assisted Bruch’s membrane segmentation in an eye with GA (top row) and an eye with CNV (bottom row). For all panels, propagation correction was performed with a correction density corresponding to $\Delta k = 180$ µm. Note that, for clarity, the segmentation lines and B-scans are shown in their natural (unflattened) coordinate frames. (a) OCT B-scans extracted from the locations of the dashed lines in column-b and column-c. For each panel, the teal contour corresponds to the automatically segmented internal limiting membrane; the orange contour corresponds to the automatically segmented RPE; and the green lines correspond to the manually segmented (ground truth) Bruch’s membrane. In the panels labelled “Fully-Automatic,” the red contours correspond to the fully-automatic Bruch’s membrane segmentation. In panels labelled “User-Assisted,” the red contours correspond to the user-assisted segmentation achieved via correction propagation. (b) Signed-difference maps between the fully-automatic segmentations and fully-manual segmentations. (c) Signed-difference maps between the user-assisted segmentations and the fully-manual segmentations. The dark-green contours in the panels of column-b and column-c correspond to lesion boundaries. The black rectangles in the panels of column-b and column-c correspond to the domains of correction, $\mathcal {D}^{\tilde {R}}$.
Fig. 7. Effect of user-assisted segmentation on choriocapillaris OCTA slabs. The panels of the top row correspond to the full 6 mm $\times$ 6 mm fields-of-view, and the panels of the bottom row correspond to enlargements of those of the top row. All en face OCTA slabs were formed via median projection of the OCTA volume from Bruch’s membrane to ∼25 µm immediately posterior to Bruch’s membrane. Lesion boundaries are outlined in teal. User-assisted segmentation was performed as described in Fig. 6. (a) Choriocapillaris OCTA slab of an eye with GA (same as in Fig. 6) generated by using a fully-automatic segmentation. (b) Choriocapillaris OCTA slab of the same GA eye generated by using the user-assisted segmentation. (c) Choriocapillaris OCTA slab of an eye with CNV (same as in Fig. 6) generated by using a fully-automatic segmentation. (d) Choriocapillaris OCTA slab of the same CNV eye generated by using the user-assisted segmentation. Comparing panel-a and panel-b, the dominant effect of the user-assisted segmentation is a reduced appearance of larger choroidal vasculature within the region of atrophy (e.g., yellow arrow within teal outline). This reduction is a consequence of the fully-automatic segmentation being posteriorly shifted relative to the fully-manual (and user-assisted) segmentation, as illustrated in Fig. 6. Even beyond the GA margin, the choriocapillaris OCTA signal appears lower with the user-assisted segmentation than with the fully-automatic segmentation (e.g., yellow arrow beyond teal outline). Comparing panel-c and panel-d, the user-assisted segmentation results in a finer CC/CNV patterning, which, again, is particularly noticeable within the lesion margins. In some regions, the user-assisted segmentation results in higher choriocapillaris OCTA signals (e.g., red arrows); in others, the user-assisted segmentation results in lower choriocapillaris OCTA signals (e.g., yellow arrow).
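The slab construction described in the Fig. 7 caption (median projection from Bruch’s membrane to about 25 µm posterior) could be sketched as follows. The array layout, variable names, and axial pixel size are assumptions for illustration only.

```python
import numpy as np

def choriocapillaris_slab(octa, bm_rows, axial_px_um, thickness_um=25.0):
    """Median-projection en face slab posterior to Bruch's membrane (cf. Fig. 7).

    octa    : (z, y, x) OCTA volume (assumed layout).
    bm_rows : (y, x) axial (z) index of Bruch's membrane for each A-scan.
    """
    n_z, n_y, n_x = octa.shape
    depth_px = max(int(round(thickness_um / axial_px_um)), 1)
    slab = np.zeros((n_y, n_x), dtype=float)
    for y in range(n_y):
        for x in range(n_x):
            z0 = min(int(bm_rows[y, x]), n_z - 1)      # clamp to the volume
            z1 = min(z0 + depth_px, n_z)
            slab[y, x] = np.median(octa[z0:z1, y, x])  # median projection over the slab depth
    return slab
```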
Fig. 8. Effect of user-assisted segmentation on visualization of a CNV lesion. User-assisted segmentation was performed as described in Fig. 6. All en face OCTA slabs were formed via mean projection of the OCTA volume from the automatically segmented RPE to Bruch’s membrane, and therefore correspond to the type-I lesion component. (a) CNV OCTA slab generated by using a fully-automatic segmentation. (b) CNV OCTA slab of the same CNV eye generated by using the user-assisted segmentation. (c, d) Enlargements of panel-a and panel-b, respectively. When comparing panel-c and panel-d, several regions that appear vessel-free with the fully-automatic segmentation show vessels with the user-assisted segmentation (e.g., red arrows).

Tables (2)


Table 1. Description of the datasets used to evaluate our proposed user-assisted Bruch’s membrane segmentation algorithm. All cases are 6 mm × 6 mm volumes. Superscripts * and § indicate eyes of the same patient (i.e., OD/OS).


Table 2. Mathematical notation sorted by occurrence in the manuscript.

Equations (20)


$\mathcal{P}_k^{\textrm{RPE}} = \left\{\, p_{i,j} \mid (i,j) \in \mathcal{I} \times \mathcal{J} \,\right\}$
$\mathcal{E}_k^{\textrm{RPE}} = \bigcup_{(i,j) \in \mathcal{I} \times \mathcal{J}} \mathcal{N}^{\textrm{RPE}}(p_{i,j})$
$\mathcal{N}^{\textrm{RPE}}(p_{i,j}) = \{(p_{i,j}, p_{i-1,j+1})\} \cup \{(p_{i,j}, p_{i,j+1})\} \cup \{(p_{i,j}, p_{i+1,j+1})\}$
$w_k^{\textrm{RPE}}(e_{i,j,m,n}) = \exp\!\left[\, 2 - \left( V^{\textrm{RPE}}(i,j,k) + V^{\textrm{RPE}}(m,n,k) \right) + \epsilon \,\right]$
$\mathcal{P}_j^{\textrm{BM},\tilde{R}} = \left\{\, p_{i,k} \mid (i,k) \in \mathcal{I} \times \mathcal{K}^{R} \,\right\}$
$\mathcal{E}_j^{\textrm{BM},\tilde{R}} = \bigcup_{(i,k) \in \mathcal{I} \times \mathcal{K}^{R}} \mathcal{N}^{\textrm{BM},\tilde{R}}(p_{i,k})$
$\mathcal{N}^{\textrm{BM},\tilde{R}}(p_{i,k}) = \{(p_{i,k}, p_{i-3,k+1})\} \cup \{(p_{i,k}, p_{i-2,k+1})\} \cup \{(p_{i,k}, p_{i-1,k+1})\} \cup \{(p_{i,k}, p_{i,k+1})\} \cup \{(p_{i,k}, p_{i+1,k+1})\} \cup \{(p_{i,k}, p_{i+2,k+1})\} \cup \{(p_{i,k}, p_{i+3,k+1})\}$
$V^{\textrm{BM}}\big|_{\mathcal{K}=\{k\}} = \tfrac{1}{2}\left( \operatorname{sgn}\!\left( V\big|_{\mathcal{K}=\{k\}} - Q \right) + 1 \right)\left( V\big|_{\mathcal{K}=\{k\}} - Q \right)$
$w_j^{\textrm{BM}}(e_{i,k,m,o}) = 2 - \left( V^{\textrm{BM}}(i,j,k) + V^{\textrm{BM}}(m,j,o) \right) + \epsilon$
$\mathcal{P}_j^{\textrm{BM},R} = \mathcal{P}_j^{\textrm{BM},\tilde{R}} \cap \mathcal{B}_j$
$\mathcal{B}_j = \bigcup_{k \in \mathcal{K}^{R}} T\!\left( p_{L(k),k} \right)$
$L(k) = \left( k - k^-(k) \right) \dfrac{g\!\left(k^+(k)\right) - g\!\left(k^-(k)\right)}{k^+(k) - k^-(k)} + g\!\left(k^-(k)\right)$
$k^-(k) = \operatorname*{arg\,min}_{k' \in \mathcal{K}_k^{M-}} \left| k - k' \right|$
$k^+(k) = \operatorname*{arg\,min}_{k' \in \mathcal{K}_k^{M+}} \left| k - k' \right|$
$\mathcal{K}_k^{M-} = \left\{\, k' \in \mathcal{K}^{M} \mid k' \leq k \,\right\} \cup \{ n_b \}$
$\mathcal{K}_k^{M+} = \left\{\, k' \in \mathcal{K}^{M} \mid k' \geq k \,\right\} \cup \{ n_t \}$
$T(p_{i,k}) = \begin{cases} \{ p_{i,k} \}, & k \in \mathcal{K}^{M} \\ \mathcal{I}_k^{\Delta_{\mathcal{B}}}(p_{i,k}), & \textrm{else} \end{cases}$
$\mathcal{I}_k^{\Delta_{\mathcal{B}}}(p_{i,k}) = \bigcup_{i' \in \mathcal{I},\, |i - i'| \leq \Delta_{\mathcal{B}}} \{ p_{i',k} \}$
$\mathcal{E}_j^{\textrm{BM},R} = \mathcal{E}_j^{\textrm{BM},\tilde{R}} \big|_{\mathcal{P}_j^{\textrm{BM},R}}$
$w_j^{\textrm{BM},R} = w_j^{\textrm{BM},\tilde{R}} \big|_{\mathcal{E}_j^{\textrm{BM},R}}$
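To make the listed graph construction concrete, a dynamic-programming sketch of the restricted Bruch’s membrane search for a single slice follows. Because all edges run from B-scan $k$ to $k+1$, the shortest path can be computed column by column. This is an illustrative implementation under assumptions ($V^{\textrm{BM}}$ roughly in $[0,1]$, a precomputed band mask $\mathcal{B}_j$, hypothetical names), not the authors' code.

```python
import numpy as np

def restricted_bm_path(v_bm, band_mask, max_step=3, eps=1e-5):
    """Shortest path across B-scans for one slice j, restricted to the band B_j.

    v_bm      : (n_rows, n_bscans) cost image V^BM for this slice (assumed in [0, 1]).
    band_mask : boolean (n_rows, n_bscans) mask of connected vertices.
    Edges run from column k-1 to column k with axial moves of at most max_step rows,
    weighted as 2 - (V_a + V_b) + eps, mirroring the listed edge-weight equation.
    Returns the axial row index of the path at each B-scan.
    """
    n_rows, n_cols = v_bm.shape
    cost = np.full((n_rows, n_cols), np.inf)
    back = np.zeros((n_rows, n_cols), dtype=int)
    cost[band_mask[:, 0], 0] = 0.0          # any connected vertex in the first column may start
    for k in range(1, n_cols):
        for i in np.flatnonzero(band_mask[:, k]):
            lo, hi = max(i - max_step, 0), min(i + max_step, n_rows - 1)
            for m in range(lo, hi + 1):
                if not band_mask[m, k - 1] or not np.isfinite(cost[m, k - 1]):
                    continue
                w = 2.0 - (v_bm[m, k - 1] + v_bm[i, k]) + eps
                if cost[m, k - 1] + w < cost[i, k]:
                    cost[i, k] = cost[m, k - 1] + w
                    back[i, k] = m
    # Backtrack from the cheapest reachable vertex in the last column.
    path = np.zeros(n_cols, dtype=int)
    path[-1] = int(np.argmin(cost[:, -1]))
    for k in range(n_cols - 1, 0, -1):
        path[k - 1] = back[path[k], k]
    return path
```

Because the gates reduce each manually corrected column to a single connected row, the recovered path is forced through the corrections, as illustrated in Fig. 4.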