Optica Publishing Group

Automated segmentation of the macula by optical coherence tomography

Open Access

Abstract

This paper presents segmentation algorithms for retinal layer identification based on intensity variations in the optical coherence tomography (OCT) signal. Its main ambition is to reduce the calculation time required by layer identification algorithms. Two algorithms, one for the identification of the internal limiting membrane (ILM) and the other for the identification of the retinal pigment epithelium (RPE), are implemented to evaluate structural features of the retina. Using an 830 nm spectral domain OCT device, this paper demonstrates a segmentation method for the study of healthy and diseased eyes.

©2009 Optical Society of America

1. Introduction

An established method in ophthalmic imaging, optical coherence tomography (OCT) has the great advantage that it provides high-resolution three-dimensional (3D) images of the human eye noninvasively. As technical development continues, its resolution and imaging speed will be further improved. One problem is that high-resolution 3D imaging produces large quantities of data, and processing the measured raw data requires substantial computation. That is time-consuming and degrades the clinical usability of OCT. Improving the practicability of OCT technology in ophthalmology therefore depends on effective data processing.

One essential part of OCT data processing is segmentation, in which different tissue layers are identified and separated from each other. A number of severe eye diseases (age-related macular degeneration (ARMD), choroidal neovascularisation (CNV), glaucoma, etc.) cause structural changes in the retina and the choroid. Evaluating these changes quantitatively requires a segmentation-based determination of the thicknesses of the different tissue layers. One of the most important layers to be identified is the retinal pigment epithelium (RPE), which is often considered a limiting membrane between the retina and the choroid. In a normal eye, the RPE appears as the most hyper-scattering layer in an OCT image. Another hyper-scattering layer, lying just above the RPE, is formed by the outer tips of the photoreceptors. In a standard-resolution OCT image, the RPE and the outer tips are too close to be resolved and often appear as a single hyper-scattering line. This combined hyper-scattering line is clinically recognized as the RPE complex and regarded as the indicator of the RPE. In pathologic cases, the RPE is often strongly deformed and, in rare cases, disconnected or even absent. The identification of the internal limiting membrane (ILM) is also important, because it constitutes a limiting membrane between the vitreous and the retina. The distance between the ILM and the RPE is regarded as a measure of retinal thickness. That information is used to evaluate both retinal diseases and the effectiveness of surgery and other treatments.

Several different methods are available for identifying the internal layers of the posterior human eye [1, 2, 3, 4, 5, 6, 7]. Most of these are based on intensity variations in the backscattered signal [1, 2, 3, 4, 5, 6]. ILM segmentation offers a straightforward approach, because the contrast between the vitreous and the retina is typically very good, but RPE segmentation has proven a challenging undertaking, especially in pathologic cases. One novel approach identifies the RPE on the basis of the polarization scrambling property of RPE tissue [7].

However, regardless of whether tissue identification is based on intensity variations in OCT signals or on polarization sensitive OCT data, the calculation time required for 3D data processing with currently available methods is very long, making these methods impractical for clinical use. To substantiate that statement, we reviewed published segmentation methods. The articles published by Koozekanani et al. [1], Ishikawa et al. [2] and Baroni et al. [6] do not mention a required calculation time. Mujat et al. published a method for retinal nerve fiber layer thickness determination in which the processing of a single image (with 1000 A-scans) took 62 seconds [3]; segmenting a data volume of 138 such images would thus take more than 2 hours. The corresponding processing time for the semi-automatic segmentation method published by Szkulmowski et al. [4] was about 5 minutes for a data volume containing 200 images with 600 A-scans/image. That time includes the additional processing time caused by the necessary manual intervention. The computation time of the segmentation method published by Fernandez et al. was 24 seconds for a 1024×512 image, entailing a total data processing time of 55 minutes for a volume of 138 images [5]. A simpler version of the RPE segmentation method published by Götzinger et al. took 8.3 minutes for a volume of 60 images (1000 A-scans/image) [7]. Although the segmentation method based on the polarization scrambling property of RPE tissue seems more specific for RPE identification than intensity-based algorithms, it requires polarization sensitive OCT (PS-OCT), which is not commercially available so far and not yet common in clinics. Consequently, we have decided to concentrate on intensity-based RPE segmentation.
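The per-volume totals quoted above follow directly from the cited per-image figures; as a quick arithmetic check (Python, with the numbers taken from [3] and [5]):

```python
# Scaling published per-image processing times to full 3D volumes.
mujat_per_image_s = 62        # one 1000-A-scan image, Mujat et al. [3]
fernandez_per_image_s = 24    # one 1024x512 image, Fernandez et al. [5]
n_images = 138                # volume size used for comparison

mujat_total_h = mujat_per_image_s * n_images / 3600
fernandez_total_min = fernandez_per_image_s * n_images / 60

print(f"Mujat et al.: {mujat_total_h:.1f} h")              # more than 2 hours
print(f"Fernandez et al.: {fernandez_total_min:.0f} min")  # about 55 minutes
```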

This work demonstrates an alternative method for identifying the ILM and RPE based on intensity variations in OCT signals. The main ambition is to decrease the necessary calculation time, while still obtaining reliable segmentation results.

2. Methods

2.1. Retinal pigment epithelium identification

 figure: Fig. 1.

Fig. 1. Steps in RPE identification. (i) The positions of maximum intensity pixels are determined and a 2D RPE position matrix is obtained; (ii) automated binarization is performed to obtain a mask that identifies erroneous pixels in the RPE position matrix; (iii) erroneous pixels are removed and replaced with new numerical values based on information from neighbouring pixels; (iv) 30 pixels around the RPE estimate are extracted from the original volume data; (v) the position of the RPE is redetermined on the basis of maximum intensity determination. The RPE position map thus obtained can be further improved by repeating steps (iv-vi) with a reduced number of pixels around the estimated RPE.


Actual layer identification can be started after normal SD-OCT pre-processing, including depth motion compensation. RPE and ILM identification can be performed independently. Moreover, the presented RPE segmentation algorithm does not require any denoising, allowing us to avoid unnecessary computational complexity. The principle of the algorithm is based on the fact that the intensity of backscattered light is largest at the RPE complex. Fig. 1 depicts the sequence of steps in the RPE identification algorithm.

The logarithmically scaled OCT signal intensity of each A-scan in the measured data volume is described by {I_x,y(z), x∊[1,M], y∊[1,N]}, where z refers to the depth position of a pixel from the beginning of the depth scan, x is the position of the A-scan in each B-scan and y is the position of the B-scan. Thus M×N is the total number of A-scans in the processed data volume. Initially (step (i) in Fig. 1), the maximum intensity value of each A-scan in the 3D data is determined, {max(I_x,y(z))}, and the depth positions of these pixels are identified, {z_max(x,y)}. The obtained matrix can be considered the first estimate of the RPE position, {z_max(x,y) = z_rpe1(x,y)}. Owing to speckle noise and signal distortion caused by retinal vessels, this first RPE position estimate, based on determining the position of the maximum intensity pixels, is not perfect. However, the majority of erroneous pixels in the 2D RPE position matrix {z_rpe1(x,y)} are located in the retinal nerve fiber layer (RNFL). Because of the large distance between the RPE and the RNFL, erroneous pixels can be determined by intensity-based thresholding. To facilitate the identification of erroneous pixels, the obtained RPE position matrix is processed by top-hat filtering, which computes the morphological opening of the image and then subtracts the result from the original image [8]. The size of the structuring element used was 5(x)×5(y) pixels. To obtain a mask {B_rpe1(x,y)} for identifying the position of erroneous pixels in the RPE position matrix, the top-hat filtered RPE position matrix is thresholded by an automatic binarization algorithm based on Otsu's method (step (ii) in Fig. 1) [9]. All pixels expected to be erroneous in the RPE position approximation matrix are first set to a so-called NaN value, which contains no numerical value (if B_rpe1(x,y)=0 then z_rpe1(x,y)=NaN).
All matrix elements with a NaN value are then replaced by a numerical value based on the nearest neighbouring pixel values (step (iii) in Fig. 1) and smoothed by a moving window median filter (size 30(x)×2(y) pixels) to obtain the matrix {Z_rpe1(x,y)}. To ensure that even minor RPE position variations, such as small drusen, can be detected, 30 pixels (~129 µm) in depth around the estimated RPE are extracted, {I_x,y(z), z∊[Z_rpe1(x,y)-10, Z_rpe1(x,y)+20]}, and reprocessed (step (iv) in Fig. 1). The IS/OS junction, located just above the RPE complex, is also a relatively highly scattering tissue layer and may cause problems for RPE position identification based on maximum intensity evaluation. However, the probability of the RPE being identified erroneously at the IS/OS junction can be minimized, because the IS/OS junction is thinner than the RPE. Thus, if the positions of the six highest-intensity pixels in each A-scan are determined, the median of each six-pixel group can be calculated to find the position of the RPE along each A-scan. Next, the obtained RPE position matrix is smoothed by a 40(x)×2(y) moving window median filter (step (v) in Fig. 1) and a third iteration round is performed. This time, processing extends only to 20 pixels in depth (~86 µm) around the estimated RPE of each B-scan, {I_x,y(z), z∊[Z_rpe2(x,y)-10, Z_rpe2(x,y)+10]}, and the obtained matrix is smoothed by a moving window median filter with a window size of 1(x)×30(y). Since the maximum number of iterations in this work was four, the final iteration involved 10 pixels (~43 µm) around the RPE, {I_x,y(z), z∊[Z_rpe3(x,y)-5, Z_rpe3(x,y)+5]}. Z_rpe2(x,y) and Z_rpe3(x,y) refer to the estimated RPE position matrices after the second and third iteration, respectively. Finally, the obtained RPE position matrix is smoothed by a 20(x)×1(y) moving window median filter.
It should be noted that the sizes of the filters used are tied to the geometrical dimensions of the sample and must therefore be adjusted if the scanning protocol is changed.
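The sequence above can be sketched compactly in code. The fragment below is a minimal NumPy/SciPy translation of the described procedure (the authors implemented their algorithms in Matlab); the function name, window sizes and the nearest-neighbour fill are illustrative simplifications, not the paper's exact implementation:

```python
import numpy as np
from scipy.ndimage import white_tophat, median_filter

def otsu_threshold(values, nbins=256):
    """Otsu's method [9]: threshold maximizing between-class variance."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)
    w1 = 1.0 - w0
    cum_mean = np.cumsum(p * centers)
    mu0 = cum_mean / np.maximum(w0, 1e-12)
    mu1 = (cum_mean[-1] - cum_mean) / np.maximum(w1, 1e-12)
    return centers[np.argmax(w0 * w1 * (mu0 - mu1) ** 2)]

def segment_rpe(volume):
    """volume: log-intensity OCT data I[z, x, y].
    Returns a 2D RPE depth-position map, following steps (i)-(v)."""
    nz = volume.shape[0]
    # (i) first estimate: depth of the brightest pixel of each A-scan
    z_rpe = volume.argmax(axis=0).astype(float)
    # (ii) top-hat filter the position matrix, mask outliers via Otsu
    tophat = white_tophat(z_rpe, size=(5, 5))
    z_rpe[tophat > otsu_threshold(tophat.ravel())] = np.nan
    # (iii) fill masked pixels from the nearest valid neighbours
    for col in z_rpe.T:                      # columns are views
        bad = np.isnan(col)
        if bad.any() and not bad.all():
            col[bad] = np.interp(np.flatnonzero(bad),
                                 np.flatnonzero(~bad), col[~bad])
    z_rpe = median_filter(z_rpe, size=(30, 2))
    # (iv)-(v) iterative refinement in a shrinking depth band; the
    # median of the six brightest pixels rejects the thinner IS/OS line
    for up, down in ((10, 20), (10, 10), (5, 5)):
        lo = np.clip(z_rpe.astype(int) - up, 0, nz - 1)
        refined = np.empty_like(z_rpe)
        for i in range(z_rpe.shape[0]):
            for j in range(z_rpe.shape[1]):
                band = volume[lo[i, j]:min(lo[i, j] + up + down, nz), i, j]
                top6 = np.argsort(band)[-6:]          # six brightest pixels
                refined[i, j] = lo[i, j] + np.median(top6)
        z_rpe = median_filter(refined, size=(20, 2))
    return z_rpe
```

On a synthetic volume with a bright, several-pixel-thick layer, this sketch recovers the layer depth to within a pixel or two; real data would, of course, require the pre-processing described above.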

2.2. Internal limiting membrane identification

Another important layer that needs to be identified, particularly when estimating retinal thickness, is the ILM, which can be considered as a limiting membrane between the retina and the vitreous. Owing to the very different optical properties of the retinal nerve fiber layer (RNFL) and the vitreous, the contrast between the ILM and the vitreous is typically very good in OCT images. This is because there are no highly scattering or absorbing tissues before the ILM. Consequently, ILM identification can be performed efficiently using automatic intensity-based binarization.

 figure: Fig. 2.

Fig. 2. Steps in ILM identification. (i) A threshold value is calculated automatically and each slice of volume data is binarized; (ii) the depth position of the first zero value of each binarized A-scan is determined to get a first estimation of the ILM position; (iii) to remove erroneous ILM position pixels, 45 pixels around the estimated ILM are extracted and reprocessed; (iv) intensity-based binarization is performed again with same threshold value as the first time; (v) the depth position of the ILM is re-estimated based on determining the position of the first zero value pixels of A-scans; (vi) to improve the reliability of the obtained ILM position matrix, steps (iii–v) are repeated with a smaller amount of pixels around the estimated ILM.


First, a suitable intensity threshold value is determined for each B-scan in the data cube: the first 5 pixels in depth of each B-scan, {Noise_y(x,z), y∊[1,N], x∊[1,M], z∊[1,5]}, are extracted. Assuming that these pixels contain only a noise signal, the threshold determination for each B-scan is based on evaluating 5×M pixels. The threshold is selected such that 0.5% of these pixels are set to zero after binarization, {∑_x=1..M ∑_z=1..5 Bnoise_y(x,z) ≤ 0.005×5×M}, where Bnoise_y(x,z) refers to the binarized form of Noise_y(x,z). Because the noise varies between B-scans, each of them is binarized with its own threshold (see step (i) in Fig. 2), and the estimate of the ILM position is obtained by determining the depth position of the first zero value of each A-scan (see step (ii) in Fig. 2). Due to the noise signal mentioned above, the ILM position matrix contains erroneous pixels. To remove them, the 2D ILM position matrix is smoothed by moving window median filters (sizes 1(x)×25(y) and 25(x)×1(y)). In addition, 45 pixels (~194 µm) around it, {S_y(x,z), y∊[1,N], x∊[1,M], z∊[Z_ilm(x,y)-15, Z_ilm(x,y)+30]}, are extracted from the original data (see step (iii) in Fig. 2). In this expression, Z_ilm refers to the first depth position estimate of the ILM. To obtain more reliable segmentation results, steps (iii-v) can be repeated with a smaller number of pixels around the ILM (see step (vi) in Fig. 2). In this work, only three iterations were performed. During the last iteration, 30 pixels (~129 µm) around the estimated ILM, {z∊[Z_ilm2(x,y)-3, Z_ilm2(x,y)+27]}, of each B-scan were reprocessed. Here, Z_ilm2 refers to the second depth position estimate of the ILM. Finally, the obtained position matrix is smoothed by a moving window median filter with a size of 10(x)×1(y) pixels. Because the number of reprocessed pixels is reduced dramatically between adjacent iterations, the calculation time does not increase significantly.
As in the RPE segmentation method, the filter sizes must be adjusted if the scanning protocol is changed.
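A compact sketch of the first pass of this procedure, again in Python rather than the authors' Matlab: the per-B-scan threshold is taken as the 99.5th percentile of the assumed pure-noise pixels (equivalent to letting 0.5% of noise pixels cross the threshold), and the ILM estimate is the first above-threshold pixel of each A-scan. The function name and the single-pass smoothing are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import median_filter

def segment_ilm(volume, noise_rows=5, pct=99.5):
    """ILM search on a volume I[z, x, y]: per-B-scan noise-based
    threshold, first above-threshold depth per A-scan, then the
    1x25 / 25x1 median filtering of the 2D position matrix."""
    nz, nx, ny = volume.shape
    z_ilm = np.empty((nx, ny))
    for y in range(ny):
        bscan = volume[:, :, y]
        # threshold from the first few depth pixels, assumed pure noise
        thr = np.percentile(bscan[:noise_rows], pct)
        above = bscan > thr
        # first above-threshold depth of each A-scan (last row if none)
        z_ilm[:, y] = np.where(above.any(axis=0),
                               above.argmax(axis=0), nz - 1)
    # remove speckle-induced outliers from the 2D position matrix
    z_ilm = median_filter(z_ilm, size=(1, 25))
    z_ilm = median_filter(z_ilm, size=(25, 1))
    return z_ilm
```

The iterative re-extraction of a shrinking band around the estimate (steps (iii)-(vi)) would proceed as in the RPE case and is omitted here.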

3. Results

We employed a spectral domain OCT (SD-OCT) system to obtain three-dimensional OCT images. As light source, the system used a superluminescent diode (SLD) with a centre wavelength of 840 nm and a FWHM spectral bandwidth of 50 nm. The measured optical power of the beam on the cornea was 700 µW (below the ANSI limit). A transmission-type diffraction grating with 1200 lines/mm was used. The scanning rate of the camera (Basler, L103k-2k) was 18.7 kHz and the exposure time of each A-line was 53.3 µs. With a measured maximum sensitivity of 99.3 dB, the system had a measured axial resolution of 8.8 µm in air. A more detailed description of the measurement setup can be found in Reference [10]. All segmentation algorithms were implemented in Matlab, and all data processing was performed on a normal personal computer (2.4 GHz CPU, 2.93 GB RAM).

3.1. Layer segmentation and investigation of a healthy macula

Performed on a healthy volunteer, the first experimental measurements of the macula involved 1024 depth scans (with 320 pixels) per frame, with the entire data set containing 138 frames. The imaging area was about 5×5 mm2 and the measurement time was about 7.6 s. Using the segmentation methods described above, the positions of the RPE and the ILM were identified with calculation times of 21 s and 16 s, respectively. To evaluate the quality of the segmentation process, Fig. 3 shows an en-face projection image and 10 cross-section images with the RPE and ILM superimposed on them. Only very small ILM and RPE segmentation errors can be seen.

 figure: Fig. 3.

Fig. 3. RPE and ILM layer segmentation results in a healthy volunteer's macula. En-face projection image and ten cross-section images with the identified RPE (green line) and ILM (red line). The position of each cross-section image is indicated by a number and a line in the en-face image. The projection image covers an area of 5×5 mm2 and the vertical dimension of each cross-section image is 1.37 mm (in air).


We analysed the accuracy of the obtained RPE and ILM segmentation results by evaluating all B-scans manually. We assumed that a segmentation error of less than 10% of the average thickness of the normal retina (255±16 µm) can be considered small [1]; all unclear results were regarded as erroneous. In our evaluation, that limit was set to 5 pixels (~22 µm). In the case of RPE segmentation, 99.7% of depth scans had an error smaller than 5 pixels, while the corresponding value for ILM segmentation was 99.2%.

Because layer segmentation is performed over the whole measured area, 2D position maps of the RPE and ILM can be obtained, and calculating the distance between the two boundaries yields the thickness of the retina. Figure 4 presents the obtained RPE and ILM position maps and a retinal thickness map. The positions of the RPE and ILM are given by the depth from the top of the cross-section image, while the scale bars indicate the optical distance.

 figure: Fig. 4.

Fig. 4. Healthy volunteer's macula. (a) Position map of the ILM; (b) position map of the RPE; (c) retinal thickness map. The positions of the ILM and RPE are given by the depth from the top of the cross-section image. The effect of the refractive index of the tissue is not taken into account.


One potential clinical application of our method involves the quantitative determination of retinal thickness. Assuming that the average refractive index of retinal tissue is n=1.38, the physical thickness of the measured retina varies between 156 µm and 305 µm. Central retinal thickness (area with a 1 mm diameter) was measured to be 202 µm which is comparable with previously published studies on the normal eye [11, 12].
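Converting the segmented position maps into a physical thickness map is then a matter of scaling. A short sketch (the ~4.3 µm axial pixel pitch is inferred from the 30 px ≈ 129 µm figure quoted in Section 2.1; the transverse pitches and the disc-averaging helper are illustrative assumptions):

```python
import numpy as np

PX_UM = 129.0 / 30.0   # optical axial pixel pitch (~4.3 um, in air)
N_TISSUE = 1.38        # assumed average refractive index of the retina

def retinal_thickness_um(z_ilm, z_rpe):
    """Physical thickness map from ILM and RPE position maps (pixels)."""
    return (z_rpe - z_ilm) * PX_UM / N_TISSUE

def central_thickness_um(thickness, pitch_mm=(5.0 / 1024, 5.0 / 138),
                         diameter_mm=1.0):
    """Mean thickness inside a central disc (1 mm diameter by default)."""
    nx, ny = thickness.shape
    x = (np.arange(nx) - (nx - 1) / 2) * pitch_mm[0]
    y = (np.arange(ny) - (ny - 1) / 2) * pitch_mm[1]
    disc = x[:, None] ** 2 + y[None, :] ** 2 <= (diameter_mm / 2) ** 2
    return thickness[disc].mean()
```

With these numbers, an ILM-to-RPE separation of 60 pixels corresponds to a physical thickness of about 187 µm, within the 156-305 µm range reported above.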

3.2. Layer segmentation and investigation of a macula with disease

Macular imaging of a patient with ARMD was performed to evaluate the applicability of the presented segmentation method to a diseased eye. The measured data set contained 140 frames with 1022 depth scans (with 380 pixels) per frame. Covering an imaging area of about 5×5 mm2, the required RPE and ILM segmentation times were 21 s and 16 s, respectively. Results of the segmentation are shown in Fig. 5. As can be seen, only a few small segmentation errors can be found, demonstrating that ILM segmentation was successful. On the other hand, evaluating the quality of RPE segmentation is less straightforward, due to the distortion of the RPE. Nonetheless, assuming that the RPE is not totally destroyed in the area of elevation, the RPE segmentation also appears very reliable. Manually performed segmentation accuracy analysis showed that ILM segmentation was performed without errors larger than 5 pixels. In the case of RPE segmentation, at least 96.7% of the depth scans were segmented without significant error; uncertain segmentation results accounted for 3.0% of the analysed depth scans.

Figure 6 shows the obtained RPE and ILM position maps together with a retinal thickness map. The position of the RPE and ILM is given by the depth from the top of the cross-section image, while the scale bars indicate optical distances. Figure 6(c) shows that retinal thickness increases significantly around the elevation.

 figure: Fig. 5.

Fig. 5. Macula of the left eye with ARMD. An en-face projection image and ten cross-section images showing the identified RPE (green line) and ILM (red line). The position of each cross-section image is indicated by a number and a line in the en-face image. The projection image covers an area of 5×5 mm2 and the vertical dimension of each cross-section image is 1.72 mm.


 figure: Fig. 6.

Fig. 6. Macula of a patient with ARMD. (a) Position map of the ILM; (b) position map of the RPE; (c) retinal thickness map. The positions of the ILM and RPE are given by the depth from the top of cross-section image. The effect of refractive index is ignored.


Macular imaging of a patient with polypoidal choroidal vasculopathy (PCV) was also performed during the experiments. Here, too, the measured data set contained 140 frames with 1022 depth scans (with 450 pixels) each, and the imaging area was about 5×5 mm2. The required RPE and ILM segmentation times were 21 s and 17 s, respectively. Displayed in Fig. 7, the obtained results demonstrate that ILM segmentation again worked very reliably, with only some minor segmentation errors present, even though the measured OCT signal from the ILM was strongly attenuated on the fringes of the measured area. However, evaluating the RPE segmentation quality is problematic, due to the severely distorted RPE. Cross-section image 4 in Fig. 7 shows a possible RPE segmentation error (indicated by a white arrow): a part of the RPE complex seems to have broken off, producing a strongly backscattering layer which is detected by the segmentation algorithm. Cross-section image 7 shows a clearly erroneous segmentation result; the method is incapable of detecting a deep gap between two adjacent elevations. Manual evaluation showed that ILM segmentation was performed with an error smaller than 5 pixels for 98.6% of depth scans, and the corresponding value for RPE segmentation was 97.0%.

The obtained RPE and ILM position maps and a retinal thickness map are shown in Fig. 8, with the positions of the RPE and ILM indicated by the depth from the top of the cross-section image, while the scale bars denote optical distance. Fig. 8(c) shows that retinal thickness increases significantly around the elevation of the RPE.

 figure: Fig. 7.

Fig. 7. Macula of the right eye with PCV. An en-face projection image and ten cross-section images showing the identified RPE (green line) and ILM (red line). The position of each cross-section image is indicated by a number and a line in the en-face image. The projection image covers an area of 5×5 mm2, and the vertical dimension of each cross-section image is 1.64 mm. Images 4 and 7 are magnified to show the segmentation results in more detail. White arrows indicate the location of errors.


 figure: Fig. 8.

Fig. 8. Macula of a patient with PCV. (a) Position map of the ILM; (b) position map of the RPE;(c) retinal thickness map. The position of the ILM and the RPE are given by the depth from the top of the cross-section image. The effect of refractive index is ignored.


4. Discussion

As the presented ILM and RPE identification results suggest, the proposed method can be successfully applied to the study of the macular area. Even though the focus here was on macular segmentation, it should be possible to use the presented methods to identify the ILM and the RPE in the optic nerve head (ONH) area as well. The advantages of these methods stem from at least three facts. Firstly, the ILM and RPE layers can be segmented directly from the measured OCT data without massive denoising. Secondly, 3D information from pixels belonging to the ILM or the RPE is used for identification. And thirdly, rather simple tools are used iteratively. This makes the segmentation process very effective and reduces the required calculation time to about 17 s and 21 s for the ILM and the RPE, respectively. Contrast this with the corresponding calculation times for other published methods, which are in the range of several minutes. We may thus conclude that the presented method is tens of times faster than the other methods. Moreover, ILM segmentation seems even more reliable and efficient than RPE segmentation.

 figure: Fig. 9.

Fig. 9. The effect of iterations on the segmentation result. (a) RPE estimate based on maximum intensity search (red dots); the green line stands for the final segmentation result; (b) RPE position estimate after masking the erroneous pixels (yellow dots); (c) representative cross-section image (healthy eye) with the red line showing the result of the first iteration, while the yellow and green lines stand for the second and third iteration, respectively; (d) magnified image of the region of interest with the white arrows showing the position where the second iteration fails; (e) representative cross-section image (ARMD eye) with the red line showing the result of the first iteration, while the yellow, blue and green lines stand for the second, third and fourth iteration, respectively; (f) magnified image of the region of interest with the white arrows showing the position where the first two iterations fail.


As the developed algorithms approach the layer of interest iteratively, more accurate RPE and ILM identification results can be obtained by performing additional iterations. Fig. 9 shows the effect of the different steps on the segmentation results. The first RPE estimate is based on maximum intensity determination. Due to other highly back-scattering layers, such as the RNFL and the IS/OS junction, the maximum intensity search often produces artefacts in the segmentation result. Fig. 9(a) shows the results of the maximum intensity search (red dots); many dots far away from the RPE can be seen. After the top-hat filtering and the erroneous pixel masking (step (iii) in Fig. 1), the number of erroneous pixels is decreased significantly (Fig. 9(b)). Figs. 9(c) and (d) show the advantage of iterative segmentation. Due to the relatively strong smoothing used in the first iteration, the estimate of the ILM position is erroneous, especially in areas such as the fovea, where the ILM is curved in shape. Although the results of the second and third iterations do not differ much, a detailed evaluation shows that the second iteration slightly oversmooths the ILM position map. This suggests that the artefacts caused by filtering can be decreased by the iterative approach. Figs. 9(e) and (f) show the difference between adjacent iterations in RPE segmentation: the erroneous segmentation results of iterations 1 and 2 can be corrected by additional iterations. On the other hand, additional iteration rounds obviously increase the required calculation time. It is clear that the optimum number of iterations depends on the quality of the processed data and the required identification accuracy. The presented segmentation results suggest that two iteration rounds suffice to give satisfactory identification results. In RPE segmentation, the first iteration takes 14% of the total RPE processing time, while the second, third and fourth iterations take 23%, 33% and 30%, respectively. In the case of ILM segmentation, the first iteration requires 27%, the second 47% and the third 36% of the total processing time.

It is a well-known fact that intensity variation based OCT data segmentation methods tend to give erroneous segmentation results, especially in pathologic cases, and this is also true for our approach. The algorithm assumes that the ILM and RPE layers are continuous. However, that is not true in all cases, as the layers can also be strongly distorted or even destroyed by a disease. The misalignment of frames in the OCT volume might also be problematic, because information from neighbouring pixels is used for estimating the positions of the ILM and the RPE. As a result, the presented method may give erroneous segmentation results. Although the presented segmentation method seems to work quite reliably, the number of evaluated cases is very limited, and further investigation of segmentation accuracy is needed.

5. Conclusion

An alternative intensity variation-based ILM and RPE segmentation method is presented. Since its algorithms, which can be utilized independently, do not require massive pre-processing, it is very effective in terms of calculation time. In successful tests with a normal and a diseased macula, the entire data processing took less than 40 seconds for both layers, demonstrating that the method offers a highly promising tool for ophthalmological studies and enhances the usability of OCT technology in clinical applications.

References and links

1. D. Koozekanani, K. Boyer, and C. Roberts, “Retinal thickness measurements from optical coherence tomography using a Markov boundary model,” IEEE Trans. Med. Imaging 20, 900–916 (2001). [PubMed]  

2. H. Ishikawa, D. M. Stein, G. Wollstein, S. Beaton, J. G. Fujimoto, and J. S. Schuman, “Macular segmentation with optical coherence tomography,” Invest. Ophthalmol. Vis. Sci. 46, 2012–2017 (2005).

3. M. Mujat, R. C. Chan, B. Cense, B. H. Park, C. Joo, T. Akkin, T. C. Chen, and J. F. de Boer, “Retinal nerve fiber layer thickness map determined from optical coherence tomography images,” Opt. Express 13, 9480–9491 (2005). [PubMed]  

4. M. Szkulmowski, M. Wojtkowski, B. Sikorski, T. Bajraszewski, V. J. Srinivasan, A. Szkulmowska, J. J. Kaluzny, J. G. Fujimoto, and A. Kowalczyk, “Analysis of posterior retinal layers in spectral optical coherence tomography images of the normal retina and retinal pathologies,” J. Biomed. Opt. 12 (2007). [PubMed]

5. D. C. Fernandez, H. M. Salinas, and C. A. Puliafito, “Automated detection of retinal layer structures on optical coherence tomography images,” Opt. Express 13, 200–216 (2005).

6. M. Baroni, P. Fortunato, and A. L. Torre, “Towards quantitative analysis of retinal features in optical coherence tomography,” Med. Eng. Phys. 29, 432–441 (2007).

7. E. Götzinger, M. Pircher, W. Geitzenauer, C. Ahlers, B. Baumann, S. Michels, U. Schmidt-Erfurth, and C. K. Hitzenberger, “Retinal pigment epithelium segmentation by polarization sensitive optical coherence tomography,” Opt. Express 16, 16410–16422 (2008). [PubMed]  

8. M. Zeng, J. Li, and P. Zhang, “The design of Top-Hat morphological filter and application to infrared target detection,” Infrared Phys. Technol. 48, 67–76 (2006).

9. N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Trans. Syst. Man Cybern. 9, 62–66 (1979).

10. S. Makita, Y. J. Hong, M. Yamanari, T. Yatagai, and Y. Yasuno, “Optical coherence angiography,” Opt. Express 14, 7821–7840 (2006).

11. A. Misota, T. Sakuma, O. Miyauchi, M. Honda, and M. Tanaka, “Measurement of retinal thickness from the three-dimensional images obtained from C scan images from the optical coherence tomography ophthalmoscope,” Clinical and Experimental Ophthalmology 35, 220–224 (2007).

12. S. H. M. Liew, C. E. Gilbert, T. D. Spector, J. Mellerio, F. J. Van Kuijk, S. Beatty, F. Fitzke, J. Marshall, and C. J. Hammond, “Central retinal thickness is positively correlated with macular pigment optical density,” Experimental Eye Research 82, 915–920 (2006).



Figures (9)

Fig. 1.
Fig. 1. Steps in RPE identification. (i) The positions of maximum intensity pixels are determined and a 2D RPE position matrix is obtained, (ii) Automated binarization is performed to obtain a mask to identify erroneous pixels in the RPE position matrix (iii) Erroneous pixels are removed and replaced with new numerical values based on information on neighbouring pixels, (iv) 30 pixels around the RPE estimation are extracted from the original volume data, (v) the position of the RPE is redetermined on the basis of maximum intensity determination. The thus obtained RPE position map can be further improved by repeating steps (iv-vi) using a reduced amount of pixels around the estimated RPE.
Fig. 2. Steps in ILM identification. (i) A threshold value is calculated automatically and each slice of the volume data is binarized; (ii) the depth position of the first zero-value pixel of each binarized A-scan is determined to obtain a first estimate of the ILM position; (iii) to remove erroneous ILM position pixels, 45 pixels around the estimated ILM are extracted and reprocessed; (iv) intensity-based binarization is performed again with the same threshold value as the first time; (v) the depth position of the ILM is re-estimated from the position of the first zero-value pixel of each A-scan; (vi) to improve the reliability of the obtained ILM position matrix, steps (iii)–(v) are repeated with a smaller number of pixels around the estimated ILM.
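The ILM search can be sketched in the same style. The sketch below assumes the 45-pixel band from the caption (half_window = 22), uses the mean intensity as a stand-in for the automatically computed threshold, detects the surface as the first supra-threshold pixel of each A-scan, and keeps the band width fixed across repetitions (the paper shrinks it in step (vi)).

```python
import numpy as np

def estimate_ilm(volume, half_window=22, n_iter=2):
    """Sketch of the ILM search of Fig. 2.

    volume: 3D OCT intensity array shaped (x, y, z), with z = 0
    at the vitreous side. Threshold, band width and iteration
    count are illustrative stand-ins for the automated choices.
    """
    # (i) global intensity threshold and binarization
    thr = volume.mean()
    binary = volume > thr

    # (ii) the first pixel above threshold marks the retinal surface
    # (the caption phrases this as the first zero of the binarized
    # A-scan, i.e. the transition out of the dark vitreous)
    ilm = np.argmax(binary, axis=2).astype(float)

    # (iii)-(vi) repeat the search in a band around the estimate,
    # re-using the same threshold
    depth = volume.shape[2]
    for _ in range(n_iter):
        for ix in range(volume.shape[0]):
            for iy in range(volume.shape[1]):
                lo = int(max(ilm[ix, iy] - half_window, 0))
                hi = int(min(ilm[ix, iy] + half_window + 1, depth))
                ilm[ix, iy] = lo + np.argmax(binary[ix, iy, lo:hi])
    return ilm.astype(int)
```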
Fig. 3. RPE and ILM segmentation results in a healthy volunteer's macula. En-face projection image and ten cross-section images with the identified RPE (green line) and ILM (red line). The position of each cross-section image is indicated by a number and a line in the en-face image. The projection image covers an area of 5×5 mm², and the vertical dimension of each cross-section image is 1.37 mm (in air).
Fig. 4. Healthy volunteer's macula. (a) Position map of the ILM; (b) position map of the RPE; (c) retinal thickness map. The positions of the ILM and RPE are given as the depth from the top of the cross-section image. The effect of the refractive index of the tissue is not taken into account.
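The thickness map of panel (c) is simply the pixel-wise difference of the two position maps, scaled by the axial pixel size. A minimal sketch, where the 3 µm axial pixel size is an illustrative value rather than one taken from the paper:

```python
import numpy as np

def thickness_map(ilm, rpe, pixel_um=3.0):
    """Retinal thickness from the two position maps of Fig. 4.

    ilm, rpe: 2D depth-index maps of equal shape; pixel_um is the
    axial pixel size in air (illustrative value).
    """
    # geometric thickness in air; dividing by the tissue group
    # refractive index (~1.38) would give physical thickness, a
    # correction the maps in Fig. 4 deliberately omit
    return (rpe - ilm) * pixel_um
```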
Fig. 5. Macula of a left eye with ARMD. An en-face projection image and ten cross-section images showing the identified RPE (green line) and ILM (red line). The position of each cross-section image is indicated by a number and a line in the en-face image. The projection image covers an area of 5×5 mm², and the vertical dimension of each cross-section image is 1.72 mm.
Fig. 6. Macula of a patient with ARMD. (a) Position map of the ILM; (b) position map of the RPE; (c) retinal thickness map. The positions of the ILM and RPE are given as the depth from the top of the cross-section image. The effect of the refractive index is ignored.
Fig. 7. Macula of a right eye with PCV. An en-face projection image and ten cross-section images showing the identified RPE (green line) and ILM (red line). The position of each cross-section image is indicated by a number and a line in the en-face image. The projection image covers an area of 5×5 mm², and the vertical dimension of each cross-section image is 1.64 mm. Images 4 and 7 are magnified to show the segmentation results in more detail; white arrows indicate the locations of errors.
Fig. 8. Macula of a patient with PCV. (a) Position map of the ILM; (b) position map of the RPE; (c) retinal thickness map. The positions of the ILM and RPE are given as the depth from the top of the cross-section image. The effect of the refractive index is ignored.
Fig. 9. The effect of iteration on the segmentation result. (a) RPE estimate based on the maximum-intensity search (red dots); the green line shows the final segmentation result. (b) RPE position estimate after masking the erroneous pixels (yellow dots). (c) Representative cross-section image (healthy eye): the red line shows the result of the first iteration, while the yellow and green lines show the second and third iterations, respectively. (d) Magnified image of the region of interest; white arrows show the positions where the second iteration fails. (e) Representative cross-section image (ARMD eye): the red line shows the result of the first iteration, while the yellow, blue and green lines show the second, third and fourth iterations, respectively. (f) Magnified image of the region of interest; white arrows show the positions where the first two iterations fail.