Registration of high-density cross sectional images to the fundus image in spectral-domain ophthalmic optical coherence tomography

Abstract

We previously developed a technique to acquire an SLO (scanning laser ophthalmoscope)-like fundus intensity image from the raw spectra measured with spectral-domain optical coherence tomography (OCT), the same spectra used to generate a 3D OCT data set. The technique provides simultaneous fundus and OCT images and therefore solves the problem of registering a cross sectional OCT image to fundus features. However, the registration of high density OCT images remains an unsolved problem because no useful fundus image can be generated from the high density scans. High density OCT images can significantly improve image quality and enhance the visualization of retinal structure, especially the structure of small lesions. We have developed a feature-based algorithm that can register a high density OCT image on the fundus image generated from normal density scans. The algorithm was successfully tested on both normal and diseased eyes.

©2006 Optical Society of America

1. Introduction

Optical coherence tomography (OCT) [1] has been used in a variety of medical research and diagnostic applications, the most successful being retinal cross sectional imaging in ophthalmology. OCT has become one of the standard non-contact, noninvasive ophthalmic imaging modalities for routine clinical diagnosis. The recent development of spectral-domain OCT [2–7] has made it possible to collect 3-dimensional (3D) image data. From the measured data, an en face fundus image (OCT fundus image) similar to the fundus images acquired with a scanning laser ophthalmoscope (SLO) can be generated [8–10]. The generated OCT fundus image is used to register a cross sectional OCT image to fundus landmarks, which solves a major problem in ophthalmic OCT practice.

The fast image acquisition speed of spectral-domain OCT also makes it possible to generate high pixel density OCT images through lateral over-sampling [7]. High density OCT images can significantly improve the image quality and enhance the visualization of retinal structure, especially the structure of small lesions. However, for a given imaging speed, the total number of axial scans (A-scans) acquired in a certain period of time is fixed. The longest acceptable image acquisition time is determined by how long the patient can effectively focus the eye on a fixation target. As a result, for a raster scan pattern, increasing the line density of an OCT image in the horizontal direction requires a decrease of the line density in the vertical direction. For example, with a total of 65536 A-scans, the raster scan can be 512×128 (horizontal × vertical) for the normal density acquisition or 2048×32 for the high density scan. If the two scan patterns cover the same area of the retina, the high density scan pattern cannot generate an effective fundus image for registration purposes. One solution to this registration problem is to register the high density OCT image on the fundus image generated from the normal density scans.
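As a simple check of this trade-off (using the 24 kHz A-line rate quoted in Section 3), both raster patterns consume the same A-scan budget and therefore require the same acquisition time:

$$512\times 128 = 2048\times 32 = 65{,}536\ \text{A-scans}, \qquad t_{\text{acq}} = \frac{65{,}536}{24{,}000\ \mathrm{A\text{-}scans/s}} \approx 2.7\ \mathrm{s}.$$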

We developed a technique that correlates the contours of the retinal surface in the high density and normal density OCT images to find the normal density OCT image that matches a selected high density image. The position of the matched normal density image on the OCT fundus image then provides the spatial registration for the high density OCT image.

2. Theory

2.1 OCT fundus image

In spectral-domain OCT, the en face SLO-like fundus image can be generated either from the measured raw spectral data or from the processed OCT data set. We discuss here the method using the measured raw spectral data. The light intensity incident on each element of the line scan camera in the spectrometer of a spectral-domain OCT system is proportional to the spectral density Gd(ν) of the combined reference and sample light, which can be expressed as

$$G_d(\nu)=G_s(\nu)\left\{1+\sum_n R_n+2\sum_n\sum_{m\neq n}\sqrt{R_n R_m}\cos\!\left[2\pi\nu(\tau_n-\tau_m)\right]+2\sum_n\sqrt{R_n}\cos\!\left[2\pi\nu(\tau_n-\tau_r)\right]\right\}, \tag{1}$$

where ν is the light frequency; Rn is the normalized intensity reflection representing the contribution of the nth scatterer to the collected sample light; Gs(ν) is the spectral density of the light source; the reflectivity of the reference arm is assumed to be unity; distances are represented by the propagation times τn and τm of the light reflected by the nth and mth scatterers in the sample and τr of the light reflected by the reference mirror in the reference arm; and the summations run over all axial depths in the sample beam. The fourth term of Eq. (1) represents the interference between the sample arm and the reference arm and is the term that contributes to the OCT image.

To construct the fundus intensity image from the raw spectral data set, we first extract the interference terms in Eq. (1) by high-pass filtering each detected spectrum and then square and sum the result over the spectrum. Because the high-pass filter acts on the signal in the spectral domain, its frequency response lies in the time (depth) domain and can be expressed as H(τ). The result of the high-pass filtering can therefore be expressed in the time domain as

$$I_f(\tau)=I(\tau)\,H(\tau), \tag{2}$$

where I(τ) = FT⁻¹[Gd(ν)] and FT⁻¹ denotes the inverse Fourier transform. We can see from Eq. (2) that, by tuning the frequency response of the high-pass filter, we can generate the fundus image with different weights placed on the reflections from different depths in the sample. The quality of the fundus image can thus be adjusted by tuning the frequency response of the high-pass filter. For example, specular reflections from certain points on the surface of the retina can be suppressed by tuning the high-pass filter so that, in the constructed fundus image, more weight is placed on contributions from deeper points.
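A minimal numerical sketch of this construction is given below. It assumes the raw spectra are stored as a 3D array with one spectrum per A-scan position and uses a first-order elliptic high-pass filter, as in Section 4.1; the array layout, the stopband attenuation, and the logarithmic display scaling are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np
from scipy.signal import ellip, filtfilt

def oct_fundus_image(raw_spectra, cutoff=0.3, rp=1.0, rs=20.0):
    """Sketch of the OCT fundus image construction of Section 2.1.

    raw_spectra : ndarray, shape (n_slow, n_fast, n_camera_pixels)
        Raw spectrometer readouts, one spectrum per A-scan position
        (the array layout is an assumption).
    cutoff : normalized cutoff frequency of the high-pass filter;
        0.1-0.5 are the values explored in Fig. 3.
    """
    # First-order elliptic high-pass filter (rp and rs are assumed values).
    b, a = ellip(N=1, rp=rp, rs=rs, Wn=cutoff, btype='highpass')
    # High-pass filtering along the spectral axis removes the slowly
    # varying terms of Eq. (1) and keeps the interference fringes.
    filtered = filtfilt(b, a, raw_spectra, axis=-1)
    # Square and sum over the spectrum: one intensity value per A-scan.
    fundus = np.sum(filtered ** 2, axis=-1)
    # Logarithmic compression for display (an assumption).
    return 10.0 * np.log10(fundus + np.finfo(float).eps)
```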

2.2. Spatial registration of OCT images on the fundus image

The spatial location of each OCT image is registered on the fundus image automatically if the fundus image is generated from the same measured OCT data set. However, we cannot generate a fundus image with clear landmarks, for example a blood vessel pattern, for registration purposes from the high density scans [see Fig. 1(a)]. Therefore, we wish to locate a high density OCT image on the fundus image generated from normal density scans that cover the area of interest. To find the location of a high density OCT image we can use the location of the corresponding OCT image in the normal density data set. The task is then converted to finding the OCT image (reference image) among the normal density scans that corresponds to the high density image (target image). This is equivalent to registering a 2D image (high density) to a 3D data set (normal density).

Fig. 1. OCT fundus image of the fovea of a normal human eye with different scan densities. The images covered an area of 6 mm × 6 mm on the retina. (a) 2048 × 32 pixels (horizontal × vertical); (b) 512 × 128 pixels.

To find the corresponding high density and normal density cross sectional OCT images, we first detect specific features in the high density image and in all the normal density images. We then match the features of the high density image against those of each normal density image. The normal density image whose features best match those of the high density image is the corresponding one. Different kinds of salient and distinctive features, such as closed-boundary regions, edges, and contours, can be used for the registration [11]. The features in the reference image (A) and the target image (B) can be expressed as

$$A=\{x_A : x_A\in\Omega_A\}, \qquad B=\{x_B : x_B\in\Omega_B\}, \tag{3}$$

where ΩA and ΩB are the domains of the reference and target images.

Once the features are detected, a transformation T is used to relate the positions of features in ΩA to the positions of the corresponding features in ΩB. A rigid-body transformation can be used for OCT images when the deformation between the high-density and normal-density OCT images can be neglected. A rigid-body transformation consists of a combination of rotation (θ), translation (tx, tz), and scale change (s). For a 2D rigid-body transformation, T can be expressed as [11]

$$\begin{bmatrix} x \\ z \end{bmatrix}=\begin{bmatrix} t_x \\ t_z \end{bmatrix}+s\begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix}\begin{bmatrix} x' \\ z' \end{bmatrix}, \tag{4}$$

where (x, z) and (x', z') represent the horizontal and depth positions in the reference and target coordinate systems, respectively.
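A short sketch of Eq. (4) applied to contour points is given below; the function name and the choice to keep the scale factor at unity (both scan types cover the same retinal area) are illustrative assumptions.

```python
import numpy as np

def rigid_transform(x, z, theta_deg, tx, tz, s=1.0):
    """Apply the 2D rigid-body transform of Eq. (4).

    (x, z) are arrays of horizontal and depth coordinates in the target
    (high density) frame; the returned coordinates are in the reference
    (normal density) frame.
    """
    theta = np.deg2rad(theta_deg)
    c, sn = np.cos(theta), np.sin(theta)
    x_new = tx + s * (c * x + sn * z)
    z_new = tz + s * (-sn * x + c * z)
    return x_new, z_new
```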

The next step is to measure the similarity of the features between the target and reference images [12] by calculating the correlation coefficient (C):

$$C=\frac{\sum_{x_A}\bigl[A(x_A)-\bar{A}\bigr]\bigl\{B[T(x_B)]-\bar{B}\bigr\}}{\sqrt{\sum_{x_A}\bigl[A(x_A)-\bar{A}\bigr]^2\sum_{x_B}\bigl\{B[T(x_B)]-\bar{B}\bigr\}^2}}, \tag{5}$$

where A and B are the characteristic values of the features in the two images, which can be the intensity value of each pixel or the coordinate values; A̅ and B̅ represent the corresponding mean values. The pair of images with the maximal C is the matched pair.
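For reference, a minimal implementation of Eq. (5) for two equally long, already-aligned feature vectors (for example, two overlapping contour segments) might look like the following sketch.

```python
import numpy as np

def correlation_coefficient(a, b):
    """Normalized correlation coefficient of Eq. (5) between two
    equally long feature vectors (e.g., overlapping contour segments)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    a0 = a - a.mean()
    b0 = b - b.mean()
    denom = np.sqrt(np.sum(a0 ** 2) * np.sum(b0 ** 2))
    return np.sum(a0 * b0) / denom if denom > 0 else 0.0
```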

3. Methods

A schematic of the experimental system is shown in Fig. 2. A two-module superluminescent diode light source (D830-HP2, Superlumdiodes Ltd, Moscow, Russia) with a center wavelength of 830 nm and a FWHM bandwidth of 70 nm was used as the low coherence light source. The output power exiting the single-mode optical fiber pigtail was 12 mW. After passing through a fiber-based isolator, the source light was coupled into a fiber-based Michelson interferometer. The light was split into the sample and reference arms by a 2 × 2 3-dB fiber coupler. The sample light was coupled into the modified optical head of an OCT 2 system (Carl Zeiss Meditec Inc., Dublin, CA), which consists of an x-y scanner and optics for delivering the sample light into the eye and collecting the backreflected sample light. A raster scan pattern was used with the fast scan in the x direction. The power of the sample light was lowered to 750 μW by adjusting the source power to ensure that the light intensity delivered to the eye was within the ANSI standard.

Fig. 2. Schematic of the experimental system. SLD: superluminescent diode; PC: polarization controller. The spectrometer consists of an f = 50 mm collimating lens, a 1200 line/mm transmission grating, and an f = 200 mm imaging lens. The CCD camera is a 2048-element linear array.

In the detection arm, a spectrometer consisting of a collimating lens (f = 50 mm), a 1200 line/mm transmission grating, a doublet imaging lens (f = 200 mm), and a line scan CCD camera (Aviiva-M2-CL-2014, 2048 pixels with 14 μm pixel size, operated in 12-bit mode) was used to detect the combined reference and sample light. With a CCD element size of 14 μm × 14 μm and a center diffraction angle of about 29.87°, the calculated spectral resolution is 0.051 nm, which corresponds to a detectable depth range of 3.3 mm in air [13]. An image acquisition board (NI IMAQ PCI-1428) acquired the image captured by the camera and transferred it to a workstation (IBM IntelliStation Z Pro, dual 3.6 GHz processors, 3 GB memory) for signal processing and image display. A complete raster scan consisting of 128 × 512 scanning steps took about 2.7 seconds when the A-line rate of the OCT system was set to 24 kHz.
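For orientation, the quoted depth range can be checked with the standard relation between the spectral sampling interval and the maximum imaging depth [13]; with the numbers above (the small difference from the quoted 3.3 mm comes from rounding the spectral resolution),

$$z_{\max}=\frac{\lambda_0^2}{4\,\delta\lambda}=\frac{(830\ \mathrm{nm})^2}{4\times 0.051\ \mathrm{nm}}\approx 3.4\ \mathrm{mm}\ \text{in air}.$$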

Among all the features in a retina, the most distinctive and easiest to detect is the front surface of the retina. For each OCT image, whether high density or normal density, the contour of the retinal surface was extracted and converted to a 1D curve. The problem is then simplified to finding the similarity between the contours of the retinal surface in the high density and normal density OCT images.
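The paper does not specify how the surface contour is extracted; the sketch below shows one common approach under assumed details (a noise-floor threshold estimated from the top rows of the B-scan, the first supra-threshold pixel in each A-scan, and median filtering to suppress speckle outliers).

```python
import numpy as np
from scipy.ndimage import median_filter

def extract_surface_contour(bscan, k_sigma=4.0, smooth=15):
    """Extract the retinal surface of a B-scan as a 1D depth profile.

    bscan : ndarray, shape (n_depth, n_lateral), linear intensity.
    Returns, for each A-scan, the first depth index whose intensity
    exceeds a noise-based threshold.  This detection scheme is an
    assumption; the paper only states that the contour of the retinal
    surface was extracted and converted to a 1D curve.
    """
    # Estimate the noise floor from the (assumed signal-free) top rows.
    noise = bscan[:20, :]
    threshold = noise.mean() + k_sigma * noise.std()
    above = bscan > threshold
    # First supra-threshold pixel along depth; A-scans that never cross
    # the threshold fall back to the bottom of the image.
    contour = np.where(above.any(axis=0), above.argmax(axis=0), bscan.shape[0] - 1)
    # Median filtering suppresses isolated speckle-induced outliers.
    return median_filter(contour.astype(float), size=smooth)
```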

Equation (4) was used to transform the high density contour into the coordinate systems of the normal density contours. Since only 1D correlation coefficients were calculated for the contours, we set tz = 0. For each pair of high density and normal density contours, a correlation coefficient was calculated using Eq. (5) for each θ and tx. The correlation coefficients then formed a 2D array C(θ, tx). The maximal correlation coefficient max[C(θ, tx)] is the measure of the similarity of the two contours. The corresponding pair of high density and normal density contours was determined by selecting the pair with the largest max[C(θ, tx)].
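A sketch of this exhaustive search is shown below. It reuses the rigid_transform and correlation_coefficient helpers from the sketches in Section 2 and assumes the high density contour has already been resampled to the lateral pixel spacing of the normal density frames; the nearest-neighbour overlap handling and the parameter ranges (taken from Section 4.2) are illustrative.

```python
import numpy as np

def max_correlation(ref_contour, tgt_contour,
                    thetas=np.arange(-10.0, 11.0, 1.0),
                    shifts=np.arange(-200, 201)):
    """max[C(theta, tx)] between one normal density surface contour
    (reference) and the high density contour (target); tz is fixed to 0
    because only the 1D contours are correlated."""
    x_t = np.arange(tgt_contour.size, dtype=float)
    best = -1.0
    for theta in thetas:
        # Rotate the target contour into the reference frame, Eq. (4)
        # (rigid_transform is the sketch given after Eq. (4)).
        x_r, z_r = rigid_transform(x_t, tgt_contour, theta, tx=0.0, tz=0.0)
        for tx in shifts:
            # Nearest-neighbour overlap of the shifted target with the
            # reference samples (an assumed discretization).
            idx = np.round(x_r + tx).astype(int)
            valid = (idx >= 0) & (idx < ref_contour.size)
            if valid.sum() < 2:
                continue
            c = correlation_coefficient(ref_contour[idx[valid]], z_r[valid])
            best = max(best, c)
    return best

def best_matching_frame(normal_contours, high_contour):
    """Index of the normal density frame whose contour best matches
    the selected high density contour."""
    scores = [max_correlation(ref, high_contour) for ref in normal_contours]
    return int(np.argmax(scores)), scores
```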

4. Results and Discussion

4.1. OCT fundus image

Figure 3 shows the fundus images generated from a normal density scan of the optic disc of a normal human eye with various normalized cutoff frequencies for the high-pass filtering of the raw measured spectra. The high-pass filter used is an elliptic filter of order 1. We can see that the bright spots in Fig. 3(a) are suppressed in Fig. 3(d), which has a higher cutoff frequency (0.5 compared with 0.1).

Fig. 3. Fundus images of the optic disc of a normal human eye obtained with an elliptic high-pass filter of order 1 at different normalized cutoff frequencies: (a) 0.1; (b) 0.2; (c) 0.3; (d) 0.5.

The images cover a 6 mm × 6 mm fundus area. From the images we can see that the generated OCT fundus image has good quality, similar to an SLO image, and that by simply tuning the cutoff frequency the quality of the OCT fundus image can be greatly improved. Both large and small blood vessels around the optic disc are imaged with good contrast. Clearly, the blood vessel pattern on the fundus image can serve as a set of landmarks for registering OCT images to other imaging modalities such as fundus photographs and fluorescein angiograms.

4.2. Image registration

For each eye two types of scan were performed separately: 512×128 (normal density) and 2048×32 (high density). The requirements are that the two scanned areas share an overlapping region (the region of interest) and that no topological changes occur in the retina between the two scans; for example, no treatment is applied to the eye between the two image acquisitions. A fundus image was generated from the normal density scans. The contours of the retinal surface of one of the high density images and of all the normal density images were extracted. Figure 4 shows the selected frame of the high density OCT image of the fovea of a normal human eye and the extracted contours of the retinal surface for the high density image and 31 normal density images (frame No. 50 to frame No. 80). To find the similarities between the high density frame and the normal density images, the correlation coefficients between the high density contour and each of the normal density contours were calculated. For each pair of contours, the coordinate system of the high density contour was rotated in 1° steps between -10° and 10° (θ∈[-10°, 10°]) and tx was shifted between -200 and 200 pixels to find the maximal correlation coefficient. The pair with the largest max[C(θ, tx)] was then selected, and the position of the corresponding normal density image was determined and marked on the fundus image.

Fig. 4. (a) High density OCT image of the fovea of a normal human eye consisting of 2048 A-scans; (b) contour of the retinal surface in the high density image; (c) contours of the retinal surface in the normal density OCT images (31 frames, No. 50 to No. 80); (d) the calculated maximal correlation coefficients vs. frame number; (e) the enhanced correlation coefficients vs. frame number.

Fig. 4(d) shows the maximal correlation coefficients between the high density contour and the normal density contours. We can see that, because of the small variation between adjacent contours of a normal eye, the correlation coefficients are very close to each other in the vicinity of the corresponding image. To increase the sensitivity and reliability of the algorithm, the calculated maximal coefficients are converted to what we call the enhanced correlation coefficients (Ce) by incorporating another similarity measure, the sum of squared differences:

$$C_e=\frac{\max\bigl[C(\theta,t_x)\bigr]}{\sum_m\bigl\{z_A(m)-T[z_B(m)]\bigr\}^2}, \tag{6}$$

where zA and zB are the normalized coordinate values of the normal density and high density contours, and m indexes the pixels in the overlap between the two contours at the (θ, tx) that gives the maximal correlation coefficient max[C(θ, tx)], i.e., with the pair of contours shifted by tx pixels relative to each other. Fig. 4(e) shows the enhanced maximal correlation coefficients. We can see that the sensitivity is greatly increased by the enhancement.
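A direct transcription of Eq. (6) is sketched below; aligning the two contours at the best (θ, tx) and normalizing their coordinate values are assumed to have been done by the caller.

```python
import numpy as np

def enhanced_coefficient(z_ref_overlap, z_tgt_overlap, c_max):
    """Enhanced correlation coefficient of Eq. (6).

    z_ref_overlap, z_tgt_overlap : normalized depth values of the two
        contours over the m overlapping pixels at the best (theta, tx).
    c_max : the corresponding max[C(theta, tx)].
    """
    diff = np.asarray(z_ref_overlap, dtype=float) - np.asarray(z_tgt_overlap, dtype=float)
    ssd = np.sum(diff ** 2)
    # Dividing by the sum of squared differences sharpens the peak at
    # the truly corresponding frame, as in Fig. 4(e).
    return c_max / (ssd + np.finfo(float).eps)
```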

Fig. 5. (a) The selected high density OCT image; (b) the fundus image generated from the normal density scans; (c) the corresponding frame of the normal density scans (frame No. 64 of the 128 OCT images). The white line on the fundus image marks the determined location of the high density image.

Fig. 5 shows the selected high density OCT image, the determined corresponding normal density OCT image, and the fundus image generated for the normal human eye of Fig. 4. The location of the high density OCT image is marked as a white line on the fundus image.

Fig. 6. (a) The selected high density OCT image; (b) the fundus image generated from the normal density scans; (c) the corresponding frame of the normal density OCT images. The white line on the fundus image marks the determined location of the high density OCT image.

We tested the algorithm on 20 different normal and diseased eyes to check its effectiveness for different types of contour. The algorithm was successful in all the tests. Fig. 6 shows the registration result for the eye of a patient with idiopathic central serous chorioretinopathy (ICSC). Fig. 7 shows the registration result for an eye with a lamellar hole and epiretinal membrane (ERM). The more distinctive the contour of the selected high density OCT image, the more sensitive the algorithm is. For diseased eyes like the one shown in Fig. 7, the algorithm works well even without using the enhanced correlation coefficients.

Fig. 7. (a) The selected high density OCT image; (b) the fundus image generated from the normal density scans; (c) the corresponding frame of the normal density OCT images. The white line on the fundus image marks the determined location of the high density OCT image.

When selecting the frame from the high density OCT images, the contour of the retinal surface in the frame should contain distinctive features, for example the center of the fovea or the location where the retinal surface is most distorted by the pathology. In our tests on normal and diseased eyes we found that the error of this feature-based method is about ±1 frame of the normal density scan. If necessary, other features can be used in combination with the contour of the retinal surface, which may further increase the reliability of the registration. In our tests we assumed that the high density scans are parallel to the normal density scans in the x-y plane. When they are clearly not parallel and more accurate registration is needed, interpolation among adjacent OCT images may be necessary.

5. Conclusion

We have developed a technique for the simultaneous generation of OCT images and an SLO-like fundus intensity image from the raw spectra measured with spectral-domain OCT. This technique solves the problem of registering a cross sectional OCT image to fundus features when the fundus image is generated from the same spectra used to generate the 3D OCT data set. We have also developed a feature-based algorithm that can register a high density OCT image on the fundus image generated from normal density OCT data. This algorithm solves the registration problem for high density OCT images and therefore can potentially fully exploit the advantages of high-speed spectral-domain OCT. The algorithm was successfully tested on both normal and diseased eyes.

Acknowledgments

This work was supported in part by Carl Zeiss Meditec Inc. and a generous gift from The Wollowick Family Foundation.

References and links

1. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254, 1178–1181 (1991).

2. A. F. Fercher, C. K. Hitzenberger, G. Kamp, and S. Y. El-Zaiat, “Measurement of intraocular distances by backscattering spectral interferometry,” Opt. Commun. 117, 43–48 (1995).

3. M. Wojtkowski, R. Leitgeb, A. Kowalczyk, T. Bajraszewski, and A. F. Fercher, “In vivo human retinal imaging by Fourier domain optical coherence tomography,” J. Biomed. Opt. 7, 457–463 (2002).

4. N. A. Nassif, B. Cense, B. H. Park, M. C. Pierce, S. H. Yun, B. E. Bouma, G. J. Tearney, T. C. Chen, and J. F. de Boer, “In vivo high-resolution video-rate spectral-domain optical coherence tomography of the human retina and optic nerve,” Opt. Express 12, 367–376 (2004), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-12-3-367

5. B. Cense, N. A. Nassif, T. C. Chen, M. C. Pierce, S. Yun, B. H. Park, B. E. Bouma, G. J. Tearney, and J. F. de Boer, “Ultrahigh-resolution high-speed retinal imaging using spectral-domain optical coherence tomography,” Opt. Express 12, 2435–2447 (2004), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-12-11-2435

6. R. A. Leitgeb, W. Drexler, A. Unterhuber, B. Hermann, T. Bajraszewski, T. Le, A. Stingl, and A. F. Fercher, “Ultrahigh resolution Fourier domain optical coherence tomography,” Opt. Express 12, 2156–2165 (2004), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-12-10-2156

7. M. Wojtkowski, V. J. Srinivasan, T. H. Ko, J. G. Fujimoto, A. Kowalczyk, and J. S. Duker, “Ultrahigh-resolution, high-speed, Fourier domain optical coherence tomography and methods for dispersion compensation,” Opt. Express 12, 2404–2422 (2004), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-12-11-2404

8. S. Jiao, R. Knighton, X. Huang, G. Gregori, and C. A. Puliafito, “Simultaneous acquisition of sectional and fundus ophthalmic images with spectral-domain optical coherence tomography,” Opt. Express 13, 444–452 (2005), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-13-2-444

9. M. Wojtkowski, V. Srinivasan, J. G. Fujimoto, T. Ko, J. S. Schuman, A. Kowalczyk, and J. S. Duker, “Three-dimensional retinal imaging with high-speed ultrahigh-resolution optical coherence tomography,” Ophthalmology 112, 1734–1746 (2005).

10. C. K. Hitzenberger, P. Trost, P. Lo, and Q. Zhou, “Three-dimensional imaging of the human retina by high-speed optical coherence tomography,” Opt. Express 11, 2753–2761 (2003), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-11-21-2753

11. B. Zitová and J. Flusser, “Image registration methods: a survey,” Image and Vision Computing 21, 977–1000 (2003).

12. D. L. G. Hill, P. G. Batchelor, M. Holden, and D. J. Hawkes, “Medical image registration,” Phys. Med. Biol. 46, R1–R45 (2001).

13. G. Häusler and M. W. Lindner, “Coherence radar and spectral radar—new tools for dermatological diagnosis,” J. Biomed. Opt. 3, 21–31 (1998).
