Optica Publishing Group

Automated detection of retinal layer structures on optical coherence tomography images

Open Access

Abstract

Segmentation of retinal layers from OCT images is fundamental for diagnosing the progression of retinal diseases. In this study we show that the retinal layers can be located automatically and/or interactively with good accuracy with the aid of local coherence information of the retinal structure. OCT images are processed using ideas from texture analysis by means of the structure tensor combined with complex diffusion filtering. Experimental results indicate that our proposed novel approach performs well in speckle noise removal, enhancement, and segmentation of the various cellular layers of the retina imaged with the STRATUSOCT system.

©2005 Optical Society of America

1. Introduction

Denoising of Optical Coherence Tomography (OCT) image data [1, 2] has been an active area of research, with several different approaches being proposed using techniques such as median filtering [3, 4], wavelet-based filtering with nonlinear thresholds [5], phase-domain processing [6], anisotropic diffusion filtering [7, 8], and nonlinear anisotropic filtering [9, 10]. On the other hand, feature detection is an indispensable tool in image processing. Features such as edges and corners have to be classified in a stable way to enable edge-preserving image denoising [11, 12, 13, 14] and robust segmentation of image subdomains bounded by edges [15]. The local classification of edge and corner features is therefore an important ingredient of many image processing applications.

In image processing, a straightforward identification of edges can be based on an evaluation of image gradients: a sufficiently large gradient is taken to indicate an edge. Another commonly used edge indicator is the Canny edge detector, which locates edges at the extrema of the first derivative (i.e. the zero-crossings of the second derivative) in the gradient direction [16]. In addition, the structure tensor enables a robust classification of edges and edge directions in images [17].

OCT images can be used either to qualitatively assess retinal features and pathologies or to objectively make quantitative measurements. Retinal thickness measurements are clinically important because they may be used to diagnose diseases and determine the appropriate treatment. For instance, glaucoma is characterized by the progressive loss of ganglion cells and axons in the retinal nerve fiber layer (RNFL). The thickness of the RNFL is thus reduced, causing peripheral and subsequently central vision loss. Ishikawa et al. have recently shown that OCT can be used effectively to detect quantitative differences in glaucomatous eyes [18]. However, we have encountered many instances of erroneous boundary detection on diseased retinas with the commercially available algorithm. In fact, only the inner and outer boundaries of the retina and the boundaries of the RNFL are detected by the current STRATUSOCT system (Carl Zeiss Meditec, Inc., Dublin, CA). Thus, there is a need for improved, precise quantification and data analysis of the various cellular layers of the retina. For example, detection of the ganglion cell layer (GCL) and inner plexiform layer (IPL), along with the boundaries of the RNFL, could better aid in diagnosing the progression of glaucoma. It could also be potentially useful for determining which layers are involved in macular edema formation and whether there is any rule concerning which layers thicken first.

In this paper we examine the application of complex diffusion filtering [19] together with coherence-enhancing diffusion filtering [20] as a tool for noise reduction, segmentation, and structural analysis in retinal OCT images. We propose a model-based enhancement-segmentation approach that combines complex diffusion and coherence-enhancing diffusion filtering in three consecutive steps. Specifically, the approach starts with a complex diffusion process, which is shown to be advantageous for speckle denoising and edge preservation. A coherence-enhancing diffusion filter is then applied to bridge discontinuities in the retinal structure (e.g. gaps created by intraretinal blood vessels) and to obtain the structural coherence information in the raw data. The approach ends with the application of a boundary detection algorithm based on the local coherence information of the structure.

The remainder of this paper is organized as follows. In section 2, we describe the proposed method. In section 3, we present experimental results on image enhancement and segmentation together with a discussion. Section 4 offers concluding remarks.

2. Methods

2.1 Nonlinear complex diffusion approach

The non-homogeneous nonlinear isotropic diffusion process (also known as the Perona-Malik filter) limits smoothing near pixels with a large gradient magnitude (i.e. at edge pixels). As a result, noise at edges cannot be removed; doing so requires permitting more diffusion along an edge than across it, which calls for anisotropic models [15, 21–22]. Furthermore, the Perona-Malik formulation is known to have some sharpening effects, but it serves mainly as an edge-preserving denoising process. On the other hand, it has been shown that important features of the image can be extracted by extending the analysis from the real axis to the complex domain [19]. Gilboa et al. generalized the linear scale spaces to the complex domain by combining the diffusion equation with the free Schrödinger equation [19]. In this formulation, nonlinear complex diffusion is a process with a complex-valued diffusion coefficient. An important observation is that the imaginary part serves as a robust edge detector (i.e. it is a smoothed second derivative of the initial signal, scaled by time) with increasing confidence over time; it therefore handles noise well and may serve as a controller for nonlinear processes [19]. Numerical evidence showed that the qualitative characteristics of the imaginary part in nonlinear processes are similar to the linear case, especially at the zero-crossing locations [19]. In fact, the nonlinear complex diffusion approach does not enhance edges as the Perona-Malik process does, avoids staircasing (a known byproduct of Perona-Malik), and removes more noise.
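For context, the classical Perona-Malik process discussed above can be sketched in a few lines of NumPy. This is an illustrative discretization, not code from this study; the diffusivity g(s) = 1/(1 + (s/k)²) is one of the two functions proposed by Perona and Malik [7], and the parameter values are arbitrary:

```python
import numpy as np

def perona_malik_step(I, k=10.0, dt=0.2):
    """One explicit step of Perona-Malik diffusion with the diffusivity
    g(s) = 1 / (1 + (s/k)^2): smoothing is suppressed wherever the local
    grey-value differences are large compared with the threshold k."""
    Ip = np.pad(I, 1, mode="edge")        # replicate (Neumann) boundaries
    dN = Ip[:-2, 1:-1] - I                # differences to the 4 neighbours
    dS = Ip[2:, 1:-1] - I
    dW = Ip[1:-1, :-2] - I
    dE = Ip[1:-1, 2:] - I
    g = lambda d: 1.0 / (1.0 + (d / k) ** 2)
    return I + dt * (g(dN) * dN + g(dS) * dS + g(dW) * dW + g(dE) * dE)
```

Applied repeatedly to a noisy step image, this filter smooths the flat regions while leaving a high-contrast edge essentially untouched, which is the edge-preserving (but not edge-enhancing) behavior the text refers to.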

The equation for the nonlinear complex diffusion approach is:

$$\frac{\partial I}{\partial t} = \nabla \cdot \big( d(\operatorname{Im}(I))\, \nabla I \big) \qquad (1)$$

where Im(·) denotes the imaginary part, and the diffusivity is defined as:

$$d(\operatorname{Im}(I)) = \frac{e^{i\theta}}{1 + \left( \dfrac{\operatorname{Im}(I)}{\kappa\,\theta} \right)^{2}} \qquad (2)$$

where κ is a threshold parameter and θ ∈ (-π/2, π/2) is the phase angle. A complete derivation of the nonlinear complex scheme has been provided by Gilboa et al. [19].
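To make the scheme concrete, the two equations above can be implemented with an explicit finite-difference update. The sketch below is an illustrative NumPy version under the parameter values used later in the paper, not the authors' MATLAB implementation; averaging the diffusivity at half-grid points is one common discretization choice:

```python
import numpy as np

def complex_diffusion_step(I, kappa=10.0, theta=np.pi / 30, dt=0.24):
    """One explicit step of nonlinear complex diffusion (Gilboa et al.).
    The imaginary part of I acts as an edge estimate; the diffusivity
    d = exp(i*theta) / (1 + (Im(I) / (kappa*theta))^2) slows diffusion there."""
    d = np.exp(1j * theta) / (1.0 + (I.imag / (kappa * theta)) ** 2)
    Ip = np.pad(I, 1, mode="edge")        # replicate (Neumann) boundaries
    dp = np.pad(d, 1, mode="edge")
    # divergence of d*grad(I): fluxes with d averaged at half-grid points
    fx = 0.5 * (dp[1:-1, 2:] + dp[1:-1, 1:-1]) * (Ip[1:-1, 2:] - Ip[1:-1, 1:-1]) \
       - 0.5 * (dp[1:-1, 1:-1] + dp[1:-1, :-2]) * (Ip[1:-1, 1:-1] - Ip[1:-1, :-2])
    fy = 0.5 * (dp[2:, 1:-1] + dp[1:-1, 1:-1]) * (Ip[2:, 1:-1] - Ip[1:-1, 1:-1]) \
       - 0.5 * (dp[1:-1, 1:-1] + dp[:-2, 1:-1]) * (Ip[1:-1, 1:-1] - Ip[:-2, 1:-1])
    return I + dt * (fx + fy)

def complex_denoise(I0, n_iter=50, **kw):
    """Iterate the step: the real part is the denoised image, and the
    imaginary part is the time-scaled edge map described in the text."""
    I = I0.astype(complex)
    for _ in range(n_iter):
        I = complex_diffusion_step(I, **kw)
    return I.real, I.imag
```

Because the boundary fluxes vanish under replicate padding, the scheme conserves the image mean, which is a quick sanity check for any implementation.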

2.2 Coherence-enhanced diffusion filtering

In this approach the diffusion process is guided by a diffusion tensor instead of a spatially varying scalar diffusivity [20]. The diffusivity thus varies with both the location of an edge and its orientation, enabling truly anisotropic behavior of the diffusion process (i.e. different smoothing in different directions). The key idea is to find the prominent local orientation and to smooth along that direction but not across it. Coherence-enhanced diffusion is governed by the following equation:

$$\frac{\partial I}{\partial t} = \nabla \cdot \big( \overleftrightarrow{d}\, \nabla I \big) \qquad (3)$$

where the diffusion tensor d⃡ is chosen so as to enhance coherence. To this end, the diffusion tensor should be a function of the local image structure, so a more sophisticated structure descriptor than ∇I is needed. For measuring local coherence structures, it is natural to adapt the diffusion tensor to the structure tensor (also called the second-moment matrix) [20]:

$$J_{\rho}(\nabla I_{\sigma}) = G_{\rho} * \big( \nabla I_{\sigma} \otimes \nabla I_{\sigma} \big) = G_{\rho} * \big( \nabla I_{\sigma}\, \nabla I_{\sigma}^{T} \big) \qquad (4)$$

where * denotes convolution, ⊗ denotes the tensor product, Gρ is a Gaussian with standard deviation ρ, and Iσ = Gσ * I is a regularized version of I. The normalized eigenvectors νi of Jρ give the preferred local orientations, and its eigenvalues μi give the local contrast along these directions. Thus, the eigenvalues measure the average contrast (grey-value variation) in the eigendirections within an integration scale ρ. Consequently, ν1 is the orientation with the highest grey-value fluctuations, and ν2 gives the preferred local orientation (i.e. it points in the direction of highest coherence, or lowest average contrast, within the integration scale ρ) [17]. Note that the diffusion process should act preferentially along coherent structures. Moreover, noise is uncorrelated and thus has no preferred orientation. Therefore, fast damping of uncorrelated structures while preserving oriented structures results in efficient denoising. This goal can be attained with large diffusion along oriented structures and almost no diffusion perpendicular to them. Using a diagonal matrix M with Mii = μi, Jρ can be written as:

$$J_{\rho} = \begin{pmatrix} \nu_{1} & \nu_{2} \end{pmatrix} M \begin{pmatrix} \nu_{1} & \nu_{2} \end{pmatrix}^{T} = \begin{pmatrix} \nu_{1} & \nu_{2} \end{pmatrix} \begin{pmatrix} \mu_{1} & 0 \\ 0 & \mu_{2} \end{pmatrix} \begin{pmatrix} \nu_{1} & \nu_{2} \end{pmatrix}^{T} \qquad (5)$$

In order to adapt the diffusion tensor to the local structure, d⃡ should possess the same eigenvectors ν1, ν2 as the structure tensor Jρ, and its corresponding eigenvalues λ1, λ2 should be chosen as:

$$\lambda_{1} = \alpha, \qquad \lambda_{2} = \begin{cases} \alpha & \text{if } \mu_{1} = \mu_{2}, \\[4pt] \alpha + (1 - \alpha)\exp\!\left( \dfrac{-C}{(\mu_{1} - \mu_{2})^{2m}} \right) & \text{otherwise,} \end{cases} \qquad (6)$$

where C > 0 is the coherence parameter, m defines how quickly the diffusivity parallel to the structure changes with the coherence (m ∈ ℕ; large values of m make the diffusivity change quickly), and α ∈ (0, 1) is a small positive parameter that keeps the diffusion tensor positive definite.

Consequently, to enhance coherent structures one should smooth mainly along the coherence direction ν2 with a diffusivity λ2 that increases with the local coherence (μ1 - μ2)². Note that (μ1 - μ2)² is a measure of the local coherence and that the directional diffusivities λ1, λ2 should be high for low values of μi and vice versa. Thus, if the coherence is smaller than C the flux increases with the coherence, and if the coherence is larger than C the flux decreases as the coherence grows. The main idea in this approach is to allow large diffusion along oriented structures and almost none perpendicular to them. A complete derivation of the coherence-enhanced scheme has been provided by Weickert et al. [20].
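As an illustration, the structure tensor of Eq. (4) and the eigenvalue choice of Eq. (6) can be sketched with NumPy and SciPy. This is a sketch, not the authors' MATLAB implementation; the closed-form eigenvalues of the 2x2 symmetric matrix avoid a per-pixel eigendecomposition, and the default parameter values are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor(I, sigma=1.0, rho=2.0):
    """Entries of J_rho: Gaussian-smoothed (scale rho) outer product of
    the gradient of the regularized image I_sigma = G_sigma * I."""
    Is = gaussian_filter(I, sigma)
    Iy, Ix = np.gradient(Is)              # axis 0 is y, axis 1 is x
    J11 = gaussian_filter(Ix * Ix, rho)
    J12 = gaussian_filter(Ix * Iy, rho)
    J22 = gaussian_filter(Iy * Iy, rho)
    return J11, J12, J22

def diffusion_eigenvalues(J11, J12, J22, alpha=1e-3, C=1.0, m=1):
    """lambda1, lambda2 of the diffusion tensor from the closed-form
    eigenvalues mu1 >= mu2 of the 2x2 symmetric structure tensor."""
    tr = J11 + J22
    disc = np.sqrt((J11 - J22) ** 2 + 4.0 * J12 ** 2)
    mu1, mu2 = 0.5 * (tr + disc), 0.5 * (tr - disc)
    coh = (mu1 - mu2) ** 2                 # local coherence measure
    lam1 = np.full_like(J11, alpha)
    lam2 = np.where(coh > 0.0,
                    alpha + (1.0 - alpha) * np.exp(-C / np.maximum(coh, 1e-300) ** m),
                    alpha)
    return lam1, lam2, coh
```

On a striped (coherent) image, λ2 rises well above α, whereas on a flat region both diffusivities stay at the small isotropic floor α, reproducing the behavior described above.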

2.3 Algorithm development

A custom automatic and/or interactive boundary detection algorithm was implemented on the MATLAB software platform (The MathWorks, Natick, MA). Raw data files of OCT images were exported to a compatible PC. The new algorithm searches for peaks on each sampling line instead of applying conventional thresholding techniques, and the structure coherence matrix is used in this peak-finding process instead of the original data. Two characteristics of a local maximum of a continuous function are usually exploited in peak-searching algorithms:

  1. Near the maximum the curve is concave, so its second derivative becomes negative, attaining a minimum value around the peak.
  2. While passing a peak the first derivative changes sign.

In our peak-finding procedure, an extremum is identified at each point where the first derivative changes sign: from positive to negative for a peak, or from negative to positive for a valley. A total of 7 boundaries are automatically extracted using the approach presented herein.
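The sign-change criterion can be written down directly. The following is an illustrative NumPy version of the rule just described, applied to one column (sampling line) of the structure coherence matrix; plateaus where the derivative is exactly zero are simply skipped:

```python
import numpy as np

def sign_change_extrema(profile):
    """Indices where the first derivative of a 1-D profile changes sign:
    positive-to-negative marks a peak, negative-to-positive a valley."""
    d = np.diff(np.asarray(profile, dtype=float))
    s = np.sign(d)
    idx = np.where(s[:-1] * s[1:] < 0)[0] + 1   # sign flips between samples
    peaks = idx[d[idx - 1] > 0]                 # rising then falling
    valleys = idx[d[idx - 1] < 0]               # falling then rising
    return peaks, valleys
```

For example, the profile [0, 1, 2, 1, 0, 1, 3, 1] yields peaks at indices 2 and 6 and a valley at index 4.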

The summarized flow of the algorithm is as follows (see also flowchart shown on Fig. 1):

  1. Remove the DC bias and the background noise level, computed respectively from the mean and standard deviation of the first 50 rows of the OCT image, which are assumed to contain only noise.
  2. Apply a nonlinear complex diffusion filter to suppress the speckle noise. The resulting denoised image (real part) is then used in the coherence-enhancing scheme to obtain the structure coherence matrix.
  3. Seek the internal limiting membrane (ILM) on each sampling line by determining the first peak from the inner side of the retinal structure. The automatic peak-finding procedure is applied to each image column individually once the structure coherence matrix (1024x512) is obtained. We note that the ILM is usually defined as the first highly reflective rise from the inner side on each sampling line. Then, starting from the ILM, the next peak below is detected; it corresponds to the outer boundary of the RNFL.
  4. Seek the outer side of the retinal pigment epithelium (RPE) layer by detecting the maximum intensity level (i.e. the absolute highest peak) on each sampling line. Define a starting point at the RPE contour in order to detect the next peak above this layer, which corresponds to the outer side of the outer nuclear layer (ONL). From there, the subsequent peaks above the ONL are determined to outline the remaining layers (i.e. the ganglion cell layer (GCL) along with the inner plexiform layer (IPL), the inner nuclear layer (INL), and the outer plexiform layer (OPL)). The inner side of the RPE layer is determined by looking for peaks below the ONL and above the outer side of the RPE layer.
  5. Correct dropouts. Discontinuous segments with spurious edges occur along the retina (e.g. intraretinal blood vessels cause shadowing, and regions of high intraretinal reflectivity may create a strong intraretinal edge), so a boundary may go undetected over a number of consecutive scans, creating a discontinuity (i.e. a vertical dislocation) in the outlined boundary. These discontinuous segments, or dropouts, are corrected by approximating the boundary in the dropout region with linear interpolation (i.e. a straight line is drawn between the discontinuities).
  6. Since retinal reflections are minimally visible in the fovea, a control is defined for some layers in a 0.5 mm diameter zone enclosing this region. For example, the ILM and the inner side of the GCL+IPL complex are forced to be coincident in this region. Moreover, the outer sides of the GCL+IPL complex, the INL, and the OPL are also forced to be coincident in this zone.
  7. Once the 7 boundaries are automatically outlined, other specific boundaries could be interactively extracted using the corresponding information from retinal anatomy and histology. Specifically, the boundary between the GCL and IPL, the outer border of the junction between the inner and outer segments (IS/OS), the outer edge of the choriocapillaries (ChCap) and a predefined choroidal segment can be manually extracted using the semi-automated processing software.
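As a small illustration of step 1 above, the DC bias and background noise level can be estimated from the assumed signal-free rows and subtracted. The text does not state how the mean and standard deviation are combined, so the factor k below is our assumption:

```python
import numpy as np

def remove_background(img, noise_rows=50, k=1.0):
    """Estimate the DC bias (mean) and noise level (std) from the first
    `noise_rows` rows, assumed to contain only noise, then subtract
    bias + k*std and clip at zero. k is a hypothetical combining factor."""
    noise = img[:noise_rows, :]
    bias, sigma = noise.mean(), noise.std()
    return np.clip(img - (bias + k * sigma), 0.0, None)
```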

Fig. 1. Flowchart of the methodology illustrating the corresponding main processing steps that need to be performed to automatically extract the cellular layers of the retina on OCT images.


3. Experimental results and discussions

We tested the algorithm on seventy-two OCT scans from normal subjects; some results are shown in this section. Normal subjects were healthy volunteers with normal ocular examination findings. All participants gave their informed consent to participate in the study. Subjects were scanned with a STRATUSOCT unit using the radial lines protocol (spoke pattern: 6 lines of 6 mm length). It is worth mentioning that most image denoising processes are quite sensitive to the choice and fine tuning of various parameters. In our particular case, the output of the nonlinear complex diffusion process is affected mainly by the threshold value (κ) and the number of iterations (N). Since the imaginary part is normalized by θ (see Eq. (2) in section 2.1), the denoising task is not affected much by changes in θ as long as it stays small [19]. Thus, the crucial questions are when to stop the filtering and which value of κ yields the optimal restoration result. This problem motivated us to develop several tests to determine the optimal choice of these parameters [23, 24]. Our criterion is to maximize the signal-to-mean-square-error (S/MSE) ratio obtained after applying complex diffusion filtering to real OCT images.
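The selection criterion can be computed as below. We assume the common definition of S/MSE in decibels; for real OCT images, where no noise-free reference exists, the 'reference' would itself have to be a surrogate (e.g. a heavily filtered scan), which is an assumption of this sketch rather than a statement from the paper:

```python
import numpy as np

def s_mse_db(reference, estimate):
    """Signal-to-mean-square-error ratio in dB:
    10*log10( sum(reference^2) / sum((reference - estimate)^2) )."""
    ref = np.asarray(reference, dtype=float)
    err = ref - np.asarray(estimate, dtype=float)
    return 10.0 * np.log10(np.sum(ref ** 2) / np.sum(err ** 2))
```

A useful property for checking parameter sweeps: reducing the residual error tenfold raises S/MSE by exactly 20 dB.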

Figure 2 shows the optimal parameter results obtained after varying the threshold value (κ) and the number of iterations (N) to obtain the optimal image restoration result [23, 24]. About 50 iterations suffice for most denoising tasks in the nonlinear complex diffusion approach (κ=10, θ=π/30, σ=1, and Δt=0.24), consuming a total computation time of about 24 seconds for a 1024x512-pixel OCT image on a personal computer (Pentium 4 CPU, 2.26 GHz) running Windows XP. Figure 3 shows the denoising results for a sample OCT scan obtained from a normal subject using the STRATUSOCT system. The parameters for the nonlinear complex diffusion filter (κ=10, θ=π/30, σ=1, N=50, and Δt=0.24) were selected as previously described by Cabrera Fernández et al. [23, 24]. Note that the nonlinear complex diffusion filter effectively reduces noise, preserves edges, and enhances fine signal details [see Fig. 3(b)]. Although the result obtained with a diffusion coefficient κ=10 handles speckle noise well [23, 24], a value of κ=60 allows a more robust extraction of important features in the image [e.g. compare the results shown in Figs. 3(b), 3(c), and 3(d) with those in Figs. 3(e), 3(f), and 3(g)]. One may notice that the intraretinal layers are much better delineated and visualized in the OCT image obtained after complex diffusion filtering with κ=60 [see Fig. 3(e)]. In particular, the benefit of using a higher diffusion coefficient can also be demonstrated by noting that the imaginary part shown in Fig. 3(f) serves as a better edge detector than the one shown in Fig. 3(c). Moreover, the enhancing diffusion process (α=4, σ=5, ρ=2, m=8, and Δt=0.24) is capable of closing interrupted lines [23, 24] without destroying the relevant singularities in the image [e.g. compare the regions containing the characteristic shadowing of the OCT signal below the level of vessels in Figs. 3(a), 3(b), 3(d), 3(e), and 3(g)].
A sample edge map, where the intensity at each point indicates the value of the first derivative of the structure coherence matrix, is also presented in Fig. 3(h). This map justifies the usage of the structure tensor as a robust detector for retinal structure features.


Fig. 2. Optimal choice of the threshold value (κ) and iteration number (N) using small values of θ to obtain the optimal image restoration result. (A) Selection criteria of optimal stopping time. The iteration should be stopped after 40-50 iterations to avoid redundancy computation (κ= 10, with θ = π/30, σ= 1 and Δt=0.24). (B) Measured S/MSE improvement of a typical OCT image as κ is varied from 1 to 20 (N= 50, with θ = π/30,σ= 1 and Δt= 0.24).



Fig. 3. Denoising results for a sample OCT scan. (A) Original OCT image. (B) Image denoised (real part) using the nonlinear complex diffusion filter (κ=10, θ=π/30, σ=1, N=50, and Δt=0.24). (C) Imaginary part of the original OCT image after nonlinear complex diffusion filtering (κ=10, θ=π/30, σ=1, N=50, and Δt=0.24). (D) Image obtained after coherence-enhanced diffusion filtering (α=4, σ=5, ρ=2, m=8, and Δt=0.24) is applied to the denoised image (real part shown in B). (E) Image denoised (real part) using the nonlinear complex diffusion filter (κ=60, θ=π/30, σ=1, N=50, and Δt=0.24). (F) Imaginary part of the original OCT image after nonlinear complex diffusion filtering (κ=60, θ=π/30, σ=1, N=50, and Δt=0.24). (G) Image obtained after coherence-enhanced diffusion filtering is applied to the denoised image (real part shown in E). (H) Edge map obtained by calculating the first derivative of the structure coherence matrix of the denoised image shown in E. The OCT images displayed are grayscale representations of the actual interference signal intensities.


Figure 4 shows the coherence structure and orientation images obtained after applying the coherence-enhancing scheme. Note that the coherent structures are enhanced and visualized in brighter colors [see Fig. 4(a)]. Also note the definition and orientation of the layers in the structure tensor orientation image [see Fig. 4(b)]. As can be seen, there is no well-oriented structure in the vitreous region, where only speckle noise (i.e. a non-coherent structure) is assumed to predominate. However, a prominent layer orientation appears where the retinal structure is located, showing the preferred local orientation (coherence direction) along which diffusion takes place. Thus, the enhancing diffusion filter encourages smoothing along the coherence direction and is therefore well suited for closing interrupted lines.
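An orientation image like that of Fig. 4 can be obtained from the structure tensor without an explicit eigendecomposition, using the standard half-angle formula for 2x2 symmetric matrices. This is an illustrative Python sketch, not the original MATLAB code; sigma and rho are the scales defined in section 2.2:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tensor_orientation_deg(I, sigma=1.0, rho=2.0):
    """Angle (degrees) of the dominant eigenvector of the structure
    tensor, via 0.5 * atan2(2*J12, J11 - J22)."""
    Is = gaussian_filter(I, sigma)
    Iy, Ix = np.gradient(Is)              # axis 0 is y (depth), axis 1 is x
    J11 = gaussian_filter(Ix * Ix, rho)
    J12 = gaussian_filter(Ix * Iy, rho)
    J22 = gaussian_filter(Iy * Iy, rho)
    return np.degrees(0.5 * np.arctan2(2.0 * J12, J11 - J22))
```

For horizontal layering (grey values varying only with depth), the dominant gradient direction is vertical and the returned angle is 90 degrees, matching the uniform orientation visible inside the retinal band of Fig. 4(b).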


Fig. 4. Local coherence and orientation in a sample OCT scan. Top: Structure tensor coherence. Bottom: Structure tensor orientation (in degrees).


Segmentation of an A-scan line based on the coherence structure information extracted from the OCT signal intensity is shown in Fig. 5. The reflectivity measurements of the cellular layers are indicated. Note that the raw A-scan line is outlined in green and the corresponding scan line after applying the coherence-enhancing scheme is outlined in red. The peaks on the red curve mark the locations of the boundaries of the cellular layers of the retina.


Fig. 5. Segmentation of an A-scan line based on the coherence structure information extracted from the OCT signal intensity after enhancing diffusion filtering.


Figure 6 shows the automated segmentation results obtained for a normal subject. The extracted edges are superimposed on the original OCT image; the strongly reflective layers appear white. Note that a total of 7 edges are extracted. Also observe that, since there is no significant luminance transition between the GCL and the IPL, the outer boundary of the GCL is difficult to visualize in the image shown; a combined GCL+IPL layer is therefore shown instead. Moreover, since the STRATUSOCT image does not have the resolution to resolve the fine features of some intraretinal layers, a semi-automated approach was implemented to extract those particular layers that cannot be found by the automated algorithm. In order to create a user-friendly automated and semi-automated segmentation platform, we designed a graphical user interface (GUI) in MATLAB (The MathWorks, Natick, MA). We also assumed a constant thickness for the layers that were manually extracted. Accordingly, manual segmentation was carried out for the outer border of the photoreceptor inner/outer segment junction (IS/OS) and the choriocapillaries. The result of the automated and semi-automated boundary detection process is shown in Fig. 7.


Fig. 6. Automated segmentation results. Top: Original OCT image with overlaid retinal boundaries. The segmented retinal layers are, from top to bottom, retinal nerve fiber layer (RNFL), ganglion cell layer (GCL) along with the inner plexiform layer (IPL), inner nuclear layer (INL), outer plexiform layer (OPL), outer nuclear layer (ONL) and the photoreceptor inner/outer segment junction (IS/OS). The retinal pigment epithelium (RPE) along with the choriocapillaries (ChCap) and choroid layer appear below the bottom boundary line. Bottom: A small section of the original OCT image containing 100 A-scans and the single axial scan shown on Fig. 3. The tomogram is composed of 512 A-scans. We note that the sublayer labeled as the ONL is actually enclosing the external limiting membrane (ELM) but in the standard 10-15μm resolution OCT image this thin intraretinal layer cannot be visualized clearly. Thus this layer classification is our assumption and does not reflect the actual anatomic structure.



Fig. 7. Automated and semi-automated segmentation results. The boundaries detected are superimposed on the original OCT image shown in Fig. 3(a). Note that we have assumed a constant thickness for the layers obtained with the semi-automated approach (i.e. for the IS/OS junction and ChCap layer).


A potential clinical application of this methodology is the measurement of the retinal thickness and reflectivity of the various cellular layers extracted. In order to obtain the quantitative data per layer, the macula was divided into nine regions consisting of a central disk of 1 mm diameter and inner and outer rings, each divided into four quadrants, with outer diameters of 3 mm and 6 mm, respectively. Bilinear interpolation was performed to estimate the retinal thickness in the wedges between the OCT scans [1, 8]. Retinal thickness was extracted from all points in each radial scan by means of a computer algorithm [8] and then averaged within the nine regions indicated. Quantitative data were then calculated using the known diameters and the optically measured information (i.e. thickness or reflectivity) of each quadrant [8]. Figure 8 shows the thickness segmentation mapping obtained for the normal subject shown in Fig. 3 after automatic segmentation of the retinal structures (also shown in Fig. 6).
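Once a fovea-centered thickness map is available (the paper first interpolates bilinearly between the six radial scans), the per-region averages can be sketched as below. The region partition follows the text (1 mm central disk, 3 mm and 6 mm diameter rings split into quadrants); the regular pixel grid, the pixel pitch parameter, and the quadrant labels are our assumptions for illustration:

```python
import numpy as np

def macular_region_means(thickness, mm_per_px):
    """Average a fovea-centered thickness map over nine regions: a 1 mm
    diameter central disk plus inner (1-3 mm) and outer (3-6 mm) diameter
    rings, each ring split into four 90-degree quadrants."""
    h, w = thickness.shape
    y, x = np.mgrid[:h, :w]
    dx = (x - w / 2.0) * mm_per_px        # offsets from the fovea in mm
    dy = (y - h / 2.0) * mm_per_px
    r = np.hypot(dx, dy)
    quad = (np.degrees(np.arctan2(dy, dx)) % 360 // 90).astype(int)
    out = {"center": thickness[r <= 0.5].mean()}
    for ring, (r0, r1) in {"inner": (0.5, 1.5), "outer": (1.5, 3.0)}.items():
        for q in range(4):
            mask = (r > r0) & (r <= r1) & (quad == q)
            out[f"{ring}_q{q}"] = thickness[mask].mean()
    return out
```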


Fig. 8. Thickness segmentation mapping obtained for the same normal subject shown in Fig. 3 after automatic segmentation of the retinal layers (see Fig. 6). (a) Whole macular thickness map. (b) RNFL thickness map. (c) GCL + IPL thickness map. (d) INL thickness map. (e) OPL thickness map. (f) ONL thickness map. Note that the scale of the color scheme in the maps is adjusted to the thickness range of each extracted layer. Thickness values are in microns.


Although it would be interesting to build a normative database of normal control eyes for comparing macular data such as the thickness and reflectivity of the various cellular layers of the retina, the more attractive application of our methodology is to retinas showing pathologic features. Four different cases were therefore considered to test the performance of the automated segmentation algorithm on OCT images of retinal pathologies. Figure 9 shows the segmentation results for two pathologic human eyes. Specifically, the top image in Fig. 9 shows a sample scan centered at the fovea from a glaucomatous subject. In most glaucoma and neuro-ophthalmological optic nerve disorders, tracking RNFL thickness changes over time is important in determining the appropriate treatment. Previous work has shown that quantitative measurements of RNFL thickness in the macula are nearly as sensitive as measurements of RNFL thickness around the optic nerve [25]. Although glaucomatous eyes may tend to have worse image quality than normal eyes, the automated segmentation algorithm was able to extract the 7 cellular layers previously described. An OCT tomogram showing a small subfoveal cyst is also shown in Fig. 9 (bottom picture). The algorithm again extracts the cellular layers fairly well over almost the whole macula, except in the region enclosing the cyst, where the layer structure is totally corrupted by the presence of abnormal fluid. Two more difficult cases were also used in this preliminary test. Figure 10 shows the segmentation result for a subject with chorioretinitis (top picture). Note that the layers are not well extracted in the region of the lesion, where the layer structure is totally corrupted. Figure 10 (bottom image) shows another OCT tomogram obtained from a subject with AMD.


Fig. 9. Automatic segmentation results for two pathologic human eyes. Top: Glaucomatous subject. Bottom: Subject with a small subfoveal cyst.



Fig. 10. Automatic segmentation results for two more pathologic human eyes showing a higher perturbation in the retinal structure. Top: Chorioretinitis. Bottom: Macular edema.


We note that our algorithm required modifications for these particular retinal pathologies, since their images contain structures that violate our current assumptions. For example, the foveal depression is not present in the two images shown in Fig. 10; thus, the control around the foveal pit that was previously defined in the automated algorithm was not used to segment these images. All failures of the new algorithm occurred in regions with extremely low reflectivity or almost no structural information. For example, the algorithm failed to trace the outer border of the INL (the first layer outlined in green from the vitreous) in Fig. 9 (bottom picture, right section of the image). A similar failure can be observed for the outer border of the GCL+IPL complex (first layer outlined in yellow from the vitreous in the top image of Fig. 10, left section) and the outer border of the INL (first line outlined in green from the vitreous in the bottom image of Fig. 10, middle and right sections). Despite the erroneous registration of these particular edges, the new algorithm still provided a fairly good extraction of the cellular layers in such critical cases.

4. Conclusions

A preliminary evaluation of a novel approach to automatically and/or interactively locate the cellular layers of the retina is presented. The new algorithm looks for peaks on each sampling line instead of applying conventional thresholding techniques. The structure coherence matrix is used in the peak finding process instead of the original data. The evaluated approach leads to efficient despeckling [8, 23, 24] with well-preserved image details for better segmentation of the retinal layers.

It is well known that the commercially available STRATUSOCT algorithm not only tends to fail on damaged retinas but is also unable to detect all the retinal layers. The preliminary experimental results shown here indicate that our proposed approach performs well in noise removal and enhancement of retinal layers. This image enhancement not only serves as a preprocessing step for the subsequent segmentation of the retinal layers but also facilitates the extraction of local reflectance properties and structural information of the retina. Thus, the methodology presented could play an important role in the future development of computer-assisted OCT quantification techniques. For example, the thickness and volume of the retinal layers, and each layer's reflectance profile as a function of depth, could be obtained to provide a more complete OCT biometry for diagnosis. Moreover, we have also shown that it is possible to segment the retinal layers without the need for an ultrahigh-resolution system. The flexibility of the new algorithm allowed detection of retinal layers in common situations in which the commercially available algorithm tends to fail, such as low internal reflectivity and eye movement during scanning. We note that our algorithm must be further refined to detect the correct edge responses specific to different pathologies in order to improve the overall segmentation of the various cellular layers of damaged retinas. These are matters for future study.

In addition, the new approach has the potential to quantify damage in ocular pathologies by focusing only on measurements from those retinal layers that are directly involved in the pathologic process. Our methodology would also allow the exploration of other interesting issues. For example, it is assumed that a higher RPE reflectance is associated with a lower choroidal signal [26]. Segmentation of the retinal layers would help to validate this assumption and would also make it easier to determine whether the RPE reflectance varies among individuals, using the reflectance of any other specific layer as a reference. The possibility of exploring the directional reflectance of the RNFL with respect to the RPE or IPL is also of interest [26]. Future work will test the error rate and reproducibility of the methodology presented. Further development of our methodology may yield a powerful, quantitative imaging tool for the current STRATUSOCT system.

Acknowledgments

This study is supported in part by NIH grant R01 EY008684-10S1, a Stanley J. Glaser Foundation Biomedical Research Support Grant, NIH center grant P30-EY014801, and by an unrestricted grant to the University of Miami from Research to Prevent Blindness, Inc. The authors would like to thank Dr. R. W. Knighton for his support of this work and for inspiring many of our current investigations with his challenging questions and motivating discussions. Recognition is also due to Dr. Xiang-Run Huang for helpful discussions.

References and links

1. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, "Optical coherence tomography," Science 254, 1178–1181 (1991).

2. J. M. Schmitt, S. H. Xiang, and K. M. Yung, "Speckle in optical coherence tomography," J. Biomed. Opt. 4, 95 (1999).

3. E. R. Ritenour, T. R. Nelson, and U. Raff, "Application of median filtering to digital radiographic images," in Proc. 7th Int. Conf. Acoust., Speech, Signal Processing (1984), pp. 23.1.1–23.1.4.

4. A. Ioannidis, D. Kazakos, and D. D. Watson, "Application of median filtering on nuclear medicine scintigram images," in Proc. 7th Conf. Pattern Recognition (1984), pp. 33–36.

5. S. H. Xiang, L. Zhou, and J. M. Schmitt, "Speckle noise reduction for optical coherence tomography," in Optical and Imaging Techniques for Biomonitoring III, H.-J. Foth, R. Marchesini, and H. Podbielska, eds., Proc. SPIE 3196, 79 (1997).

6. K. M. Yung, S. L. Lee, and J. M. Schmitt, "Phase-domain processing of optical coherence tomography images," J. Biomed. Opt. 4, 125 (1999).

7. P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE Trans. Pattern Anal. Mach. Intell. 12, 629–639 (1990).

8. D. Cabrera Fernández and H. M. Salinas, "Evaluation of a non-linear diffusion process for segmentation and quantification of lesions in optical coherence tomography images," Proc. SPIE 5747, 1834 (2005).

9. G. Gregori and R. W. Knighton, "A robust algorithm for retinal thickness measurements using optical coherence tomography (Stratus OCT)," Invest. Ophthalmol. Vis. Sci. 45: E-Abstract 3007 (2004).

10. D. Cabrera Fernández, "Delineating fluid-filled region boundaries in optical coherence tomography images of the retina," IEEE Trans. Med. Imaging 24, 929–945 (2005).

11. L. Alvarez, F. Guichard, P. L. Lions, and J. M. Morel, "Axioms and fundamental equations of image processing," Arch. Ration. Mech. Anal. 123, 199–257 (1993).

12. G. Aubert and P. Kornprobst, Mathematical Problems in Image Processing, Applied Mathematical Sciences 147 (Springer-Verlag, New York, 2002).

13. D. Cabrera Fernández and R. W. Knighton, "Active contour models for assessing lesion shape and area in OCT images of the retina," Invest. Ophthalmol. Vis. Sci. 44: E-Abstract 1770 (2003).

14. D. Koozekanani, K. Boyer, and C. Roberts, "Retinal thickness measurements from optical coherence tomography using a Markov boundary model," IEEE Trans. Med. Imaging 20, 900–916 (2001).

15. J. Weickert, "Anisotropic diffusion filters for image processing based quality control," in Proc. Seventh European Conf. on Mathematics in Industry, A. Fasano and M. Primicerio, eds. (Teubner, Stuttgart, 1994), pp. 355–362.

16. R. Deriche, "Using Canny's criteria to derive a recursively implemented optimal edge detector," Int. J. Comput. Vision 1, 167–187 (1987).

17. J. Weickert, "Foundations and applications of nonlinear anisotropic diffusion filtering," Z. Angew. Math. Mech. 76, 283–286 (1996).

18. H. Ishikawa, D. M. Stein, G. Wollstein, S. Beaton, J. G. Fujimoto, and J. S. Schuman, "Macular segmentation with optical coherence tomography," Invest. Ophthalmol. Vis. Sci. 46, 2012–2017 (2005).

19. G. Gilboa, N. Sochen, and Y. Y. Zeevi, "Image enhancement and denoising by complex diffusion processes," IEEE Trans. Pattern Anal. Mach. Intell. 26, 1020–1036 (2004).

20. J. Weickert, "Coherence-enhancing diffusion filtering," Int. J. Comput. Vision 31, 111–127 (1999).

21. G. H. Cottet and L. Germain, "Image processing through reaction combined with nonlinear diffusion," Math. Comp. 61, 659–673 (1993).

22. M. Nitzberg and T. Shiota, "Nonlinear image filtering with edge and corner enhancement," IEEE Trans. Pattern Anal. Mach. Intell. 14, 826–833 (1992).

23. D. Cabrera Fernández and H. M. Salinas, "Extracting subretinal layers on Stratus OCT images via a structure tensor approach combined with a nonlinear diffusion process," Invest. Ophthalmol. Vis. Sci. 46: E-Abstract 2575 (2005).

24. H. M. Salinas and D. Cabrera Fernández, "Comparison of PDE-based nonlinear anisotropic diffusion approaches for image enhancement and denoising in optical coherence tomography," submitted to IEEE Trans. Med. Imaging (2005).

25. O. Tan, Y. Li, and D. Huang, "Measurement of ganglion cell layer and inner plexiform layer thickness with optical coherence tomography," Invest. Ophthalmol. Vis. Sci. 44: E-Abstract 4926 (2003).

26. R. W. Knighton, Bascom Palmer Eye Institute, School of Medicine, University of Miami, 1638 NW 10th Ave., Miami, FL 33136 (personal communication, 2005).

Supplementary Material (1)

Media 1: JPG (33 KB)     



Figures (10)

Fig. 1. Flowchart of the methodology illustrating the main processing steps performed to automatically extract the cellular layers of the retina on OCT images.

Fig. 2. Optimal choice of the threshold value (κ) and iteration number (N) using small values of θ to obtain the optimal image restoration result. (A) Selection criteria of optimal stopping time. The iteration should be stopped after 40–50 iterations to avoid redundant computation (κ=10, with θ=π/30, σ=1 and Δt=0.24). (B) Measured S/MSE improvement of a typical OCT image as κ is varied from 1 to 20 (N=50, with θ=π/30, σ=1 and Δt=0.24).

Fig. 3. Denoising results for a sample OCT scan. (A) Original OCT image [Media 1]. (B) Image denoised (real part) using the nonlinear complex diffusion filter (κ=10, θ=π/30, σ=1, N=50, and Δt=0.24). (C) Imaginary part of the original OCT image after nonlinear complex diffusion filtering (κ=10, θ=π/30, σ=1, N=50, and Δt=0.24). (D) Image obtained after coherence-enhanced diffusion filtering (α=4, σ=5, ρ=2, m=8, and Δt=0.24) is applied to the denoised image (real part shown in B) obtained after nonlinear complex diffusion filtering. (E) Image denoised (real part) using the nonlinear complex diffusion filter (κ=60, θ=π/30, σ=1, N=50, and Δt=0.24). (F) Imaginary part of the original OCT image after nonlinear complex diffusion filtering (κ=60, θ=π/30, σ=1, N=50, and Δt=0.24). (G) Image obtained after coherence-enhanced diffusion filtering is applied to the denoised image (real part shown in E) obtained after nonlinear complex diffusion filtering. (H) Edge map obtained by calculating the first derivative of the structure coherence matrix of the denoised image shown in E. The OCT images displayed are grayscale representations of the actual interference signal intensities.

Fig. 4. Local coherence and orientation in a sample OCT scan. Top: structure tensor coherence. Bottom: structure tensor orientation (in degrees).

Fig. 5. Segmentation of an A-scan line based on the coherence structure information extracted from the OCT signal intensity after enhancing diffusion filtering.

Fig. 6. Automated segmentation results. Top: original OCT image with overlaid retinal boundaries. The segmented retinal layers are, from top to bottom: retinal nerve fiber layer (RNFL), ganglion cell layer (GCL) along with the inner plexiform layer (IPL), inner nuclear layer (INL), outer plexiform layer (OPL), outer nuclear layer (ONL), and the photoreceptor inner/outer segment junction (IS/OS). The retinal pigment epithelium (RPE) along with the choriocapillaris (ChCap) and choroid layer appear below the bottom boundary line. Bottom: a small section of the original OCT image containing 100 A-scans and the single axial scan shown in Fig. 3. The tomogram is composed of 512 A-scans. We note that the sublayer labeled as the ONL actually encloses the external limiting membrane (ELM), but in the standard 10–15 μm resolution OCT image this thin intraretinal layer cannot be visualized clearly. Thus this layer classification is our assumption and does not reflect the actual anatomic structure.

Fig. 7. Automated and semi-automated segmentation results. The boundaries detected are superimposed on the original OCT image shown in Fig. 3(A). Note that we have assumed a constant thickness for the layers obtained with the semi-automated approach (i.e., for the IS/OS junction and ChCap layer).

Fig. 8. Thickness segmentation mapping obtained for the same normal subject shown in Fig. 1 after automatic segmentation of the retinal layers (see Fig. 4). (a) Whole macular thickness map. (b) RNFL thickness map. (c) GCL + IPL thickness map. (d) INL thickness map. (e) OPL thickness map. (f) ONL thickness map. Note that the scale of the color scheme in each map is adjusted for the thickness range of the extracted layer. Thickness values are in microns.

Fig. 9. Automatic segmentation results for two pathologic human eyes. Top: glaucomatous subject. Bottom: subject with a small subfoveal cyst.

Fig. 10. Automatic segmentation results for two more pathologic human eyes showing a greater perturbation of the retinal structure. Top: chorioretinitis. Bottom: macular edema.

Equations (7)

$$\frac{\partial I}{\partial t} = \nabla\cdot\left(d\big(\operatorname{Im}(I)\big)\,\nabla I\right) \tag{1}$$

$$d\big(\operatorname{Im}(I)\big) = \frac{e^{i\theta}}{1+\left(\dfrac{\operatorname{Im}(I)}{k\theta}\right)^{2}} \tag{2}$$

$$\frac{\partial I}{\partial t} = \nabla\cdot\big(D\,\nabla I\big) \tag{3}$$

$$J_\rho(\nabla I_\sigma) = G_\rho * \big(\nabla I_\sigma \otimes \nabla I_\sigma\big) = G_\rho * \big(\nabla I_\sigma\,\nabla I_\sigma^{T}\big) \tag{4}$$

$$J_\rho = \begin{pmatrix}\nu_1 & \nu_2\end{pmatrix} M \begin{pmatrix}\nu_1 & \nu_2\end{pmatrix}^{T} = \begin{pmatrix}\nu_1 & \nu_2\end{pmatrix}\begin{pmatrix}\mu_1 & 0\\ 0 & \mu_2\end{pmatrix}\begin{pmatrix}\nu_1 & \nu_2\end{pmatrix}^{T} \tag{5}$$

$$\lambda_1 = \alpha \tag{6}$$

$$\lambda_2 = \begin{cases}\alpha, & \text{if } \mu_1 = \mu_2\\[4pt] \alpha + (1-\alpha)\exp\!\left[\dfrac{-C}{(\mu_1-\mu_2)^{2m}}\right], & \text{otherwise}\end{cases} \tag{7}$$
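As a concrete illustration, the nonlinear complex diffusion of Eqs. (1)–(2) can be sketched in a few lines of NumPy/SciPy. This is a minimal explicit-scheme sketch, not the implementation used in the paper: the central-difference discretization and the use of σ to smooth the imaginary part before evaluating the diffusivity are our assumptions, while the default parameter values follow those reported in Fig. 3.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def complex_diffusion(image, kappa=10.0, theta=np.pi / 30, sigma=1.0,
                      n_iter=50, dt=0.24):
    """Sketch of nonlinear complex diffusion, Eqs. (1)-(2) [19].
    Defaults (kappa, theta, sigma, N, dt) follow the paper's reported
    settings; the explicit scheme itself is an assumption."""
    I = image.astype(np.complex128)
    for _ in range(n_iter):
        # Eq. (2): complex diffusivity driven by the (smoothed) imaginary
        # part, which acts as a scaled edge detector for small theta
        im = gaussian_filter(I.imag, sigma)
        d = np.exp(1j * theta) / (1.0 + (im / (kappa * theta)) ** 2)
        # Eq. (1): dI/dt = div(d * grad I), discretized with central
        # differences and an explicit time step
        gy, gx = np.gradient(I)
        div = np.gradient(d * gy, axis=0) + np.gradient(d * gx, axis=1)
        I = I + dt * div
    # Real part: denoised image; imaginary part: edge information
    return I
```

Running this on a speckled B-scan progressively smooths the real part while the imaginary part develops the edge response used downstream for the structure-tensor analysis.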