A high-speed spectral-domain optical coherence tomography (OCT) system was built to image the human retina in vivo. A fundus image similar to the intensity image produced by a scanning laser ophthalmoscope (SLO) was generated, immediately after the spectra were collected, from the same spectra used to generate the OCT sectional images. This function offers perfect spatial registration between the sectional OCT images and the fundus image, which is desired in ophthalmology for monitoring data quality, locating pathology, and increasing reproducibility. It also offers a practical way to detect eye movements that occur during acquisition of the OCT image. The system was successfully applied to imaging the human retina in vivo.
© 2005 Optical Society of America
1. Introduction

Optical coherence tomography (OCT) [1] is a noninvasive medical imaging modality, based on low-coherence interferometry, that can provide high-resolution sectional images of biological tissues. Since it was first reported more than a decade ago, OCT has been used in a variety of medical research and diagnostic applications, the most successful being retinal sectional imaging in ophthalmology. Although ophthalmic OCT has been commercialized for several years and successive generations have been developed, satisfactory spatial registration of an OCT image to fundus landmarks has not been achieved. Precise spatial registration of OCT sections to tissue location is also important in other medical applications of OCT [2,3]. Another problem for ophthalmic OCT is unavoidable eye movement during image acquisition, which can distort the image [4].
The scanning laser ophthalmoscope (SLO) provides en face fundus images that are familiar to the ophthalmologist. Combining OCT with SLO (SLO/OCT) provides one possible means for precise spatial registration of the OCT image [5,6]. Previous SLO/OCT systems use a C-mode configuration (2-D transverse scans) to provide sectional images in planes perpendicular to the depth axis of the sample. In these systems the fundus image can be acquired either by splitting the reflected sample light during the transverse scan into two detection channels (one for OCT and one for the intensity image) [5,6] or by summing the sectional images along the depth [7]. The first approach needs a more complicated setup, and the signal-to-noise ratio of the OCT may be reduced because part of the back-reflected sample light is sacrificed. The second approach may not be accurate when the eye moves between different sections.
We developed a technique based on the recent development of spectral-domain OCT [8–14] that provides high-speed OCT images and a fundus image simultaneously. The alignment of OCT and fundus images that results provides precise spatial registration of the OCT image.
2. Theory

2.1 OCT signal
In spectral-domain OCT, the combined back-reflected sample and reference light in a Michelson interferometer is detected by a spectrometer with an array detector (usually a CCD camera). The following analysis assumes a linear array detector whose elements are aligned along the direction in which the spectrum is spread by the spectrometer. We also ignore polarization effects in the interference between the reference and sample light, without loss of generality. The signal detected by the array detector is called a spectral-domain signal to distinguish it from the time-varying signal detected in conventional time-domain OCT. The light intensity incident on each element of the line-scan camera is proportional to the spectral density G_d(ν) of the combined reference and sample light, which can be expressed as

G_d(ν) = G_s(ν)[1 + Σ_n R_n + 2Σ_{n≠m} √(R_n R_m) cos 2πν(τ_n − τ_m) + 2Σ_n √(R_n) cos 2πν(τ_n − τ_r)],    (1)

where ν is the optical frequency; R_n is the normalized intensity reflection representing the contribution of the nth scatterer to the collected sample light; G_s(ν) is the spectral density of the light source; the reflectivity of the reference arm is assumed to be unity; distances are represented by the propagation times τ_n and τ_m of the light reflected by the nth and mth scatterers in the sample and τ_r of the light reflected by the mirror in the reference arm; and the summations run over all axial depths in the sample beam.
The spectral-domain signal can be transformed to the time domain by using the Wiener–Khinchin theorem [15]:

Γ(τ) = ⟨u*(t) u(t + τ)⟩ = FT⁻¹[G_s(ν)],    (2)

where Γ(τ) is the autocorrelation function of the source light, u(t) is the amplitude of the electric field of the light, the angle brackets denote integration over time, and FT⁻¹ denotes the inverse Fourier transform. By taking the inverse Fourier transform of Eq. (1), we obtain the time-domain intensity signal:

Γ_d(τ) = Γ(τ) + Σ_n R_n Γ(τ) + Σ_{n≠m} √(R_n R_m) Γ[τ ± (τ_n − τ_m)] + Σ_n √(R_n) Γ[τ ± (τ_n − τ_r)],    (3)

where Γ[τ ± Δτ] is shorthand for Γ(τ − Δτ) + Γ(τ + Δτ).
In Eqs. (1) and (3), the third terms are the mutual interference for all light scattered within the sample expressed in the frequency domain and the time domain, respectively, and the last terms contain the interference between the scattered sample light and the reference light from which an OCT A-scan is calculated [8–10].
The discrete Fourier transformation that yields Eq. (3) requires even sampling in ν. In a spectrometer, however, the spectrum is spread evenly in wavelength, so the acquired raw spectrum must be interpolated to obtain the correct OCT signal [10,11].
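To make the resampling step concrete, here is a minimal NumPy sketch (not the authors' code; the function name, the calibration input, and the choice of linear interpolation are all assumptions for illustration). It maps a spectrum sampled evenly in wavelength onto an even frequency grid and then inverse-Fourier-transforms it into an A-scan:

```python
import numpy as np

def spectrum_to_ascan(raw_spectrum, wavelengths):
    """Resample one spectrometer readout (even in wavelength) onto an even
    optical-frequency grid, then inverse-FFT to obtain the depth profile.

    raw_spectrum : 1-D array, one camera readout
    wavelengths  : calibrated wavelength of each pixel (same length)
    """
    # Optical frequency is proportional to 1/wavelength; build an evenly
    # spaced frequency grid spanning the measured band.
    freq = 1.0 / wavelengths
    freq_even = np.linspace(freq.min(), freq.max(), freq.size)
    # np.interp requires the sample points in increasing order.
    order = np.argsort(freq)
    resampled = np.interp(freq_even, freq[order], raw_spectrum[order])
    # The inverse Fourier transform of the spectral-domain signal gives the
    # time-domain (depth) signal; its magnitude is the A-scan envelope.
    return np.abs(np.fft.ifft(resampled))
```

Linear interpolation is the simplest choice; higher-order or spline interpolation reduces resampling artifacts at larger imaging depths.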
2.2 Intensity image
The contrast in an SLO image is provided by the lateral distribution of Σ_n R_n across the retina. To construct a fundus intensity image from the OCT data set, we need to extract the intensity term Σ_n R_n, which is contained in both the second and fourth terms of Eqs. (1) and (3). There are multiple methods for extracting the intensity; the method selected for a particular application will depend on the desired speed and accuracy.
One method is to use the non-interference terms in the frequency domain to construct the intensity image. The cosine terms in Eq. (1) have many cycles across the spectrum and sum to approximately zero, leaving only the constant terms. As a result, when we sum Eq. (1) across ν, we have

F_ν1(x, y) = Σ_ν G_d(ν) ≈ Ḡ_s(1 + Σ_n R_n),    (4)

where F_ν1(x, y) is the output of the processing method for an A-line at lateral scan point (x, y) on the fundus and Ḡ_s is the total source power.
Another method to derive the fundus intensity is to use the fourth term of Eq. (1), by separating the oscillatory component of Eq. (1) from the slow variation and recognizing that, for the retina and other low-reflectance samples, the third term is small relative to the fourth term. One way to achieve this is first to remove the low-frequency component of Eq. (1) by high-pass filtering the detected spectrum, then squaring the remaining oscillatory component and summing over the spectrum. The result can be expressed as

F_ν2(x, y) = Σ_ν [G̃_d(ν)]²,    (5)

where G̃_d(ν) is the high-pass-filtered spectrum and F_ν2(x, y) is the intensity calculated in the frequency domain, which can be displayed directly to produce an intensity image. According to Parseval's theorem, F_ν2(x, y) = F_t(x, y), where F_t(x, y) is the intensity calculated in the time domain. F_t(x, y) can be acquired from the calculated OCT signal in Eq. (3) by squaring and summing the values at all axial positions except those near τ = 0. We have

F_t(x, y) = Σ_{|τ| > τ₀} |Γ_d(τ)|²,    (6)

where τ₀ excludes the region near zero delay.
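The high-pass / square / sum recipe behind Eq. (5) can be sketched as follows (an illustration only; the moving-average low-pass estimate and the function name are assumptions, not the authors' implementation):

```python
import numpy as np

def fundus_intensity(spectra, dc_window=64):
    """One fundus pixel value per A-line, following the Eq. (5) recipe:
    high-pass filter the spectrum, square it, and sum across the spectrum.

    spectra   : 2-D array, shape (num_alines, num_pixels), raw camera rows
    dc_window : moving-average width used as a crude low-pass estimate
    """
    kernel = np.ones(dc_window) / dc_window
    out = np.empty(spectra.shape[0])
    for i, s in enumerate(spectra):
        # Estimate and remove the slowly varying (non-interference) part...
        low = np.convolve(s, kernel, mode='same')
        ac = s - low
        # ...then square and sum the remaining oscillatory component.
        out[i] = np.sum(ac ** 2)
    return out
```

Because this works directly on the raw camera rows, it can run while data are still streaming in, which is the point exploited in Section 4.2.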
3. Experimental setup

A schematic of the experimental system is shown in Fig. 1. A superluminescent diode (SLD 371 HP, Superlum Diodes Ltd, Moscow, Russia) with a center wavelength of 830 nm and a FWHM bandwidth of 50 nm was used as the low-coherence light source. The output power exiting the single-mode optical fiber pigtail was 6 mW. After passing through a fiber-based isolator, the source light was coupled into a fiber-based Michelson interferometer. The light was split into the sample and reference arms by a 2×2 3 dB fiber coupler. The sample light was coupled into the modified optical head of an OCT 2 system (Carl Zeiss Meditec Inc., Dublin, CA), which consists of an x-y scanner and optics for delivering the sample light into the eye and collecting the back-reflected sample light. The x-y scanner was driven by a triangular wave in the horizontal direction and a sawtooth wave in the vertical direction, generated through the two analog output channels of a data acquisition board (DAQ, NI PXI 6070E). A raster scan pattern was used with the fast scan in the x direction. The power of the sample light was lowered to 750 µW by adjusting the source power to ensure that the light intensity delivered to the eye was within the ANSI standard.
In the reference arm a variable neutral-density filter was used to adjust the reference intensity. Polarization controllers were used in both the reference and sample arms to fine-tune the polarization states and achieve maximal interference. In the detection arm, a spectrometer consisting of a collimating lens (f = 30 mm), a 1200 line/mm transmission grating, a doublet imaging lens (f = 100 mm), and a line-scan CCD camera (Basler 104K, 2048 pixels, operating in 10-bit mode) was used to detect the combined reference and sample light. With a CCD element size of 10 µm × 10 µm and a center diffraction angle of about 26°, the calculated spectral resolution is 0.075 nm, which corresponds to a detectable depth range of 2.3 mm in air. An image acquisition board (NI IMAQ PCI 1428) acquired the image captured by the camera and transferred it to a workstation (Dell Precision 650, dual 2.8 GHz processors, 2 GB memory) for signal processing and image display. The sampling rate of the analog output of the DAQ board was set to match the line rate of the CCD camera. When the line rate of the CCD camera was set to 29.3 kHz, a complete raster scan consisting of 512×128 scanning steps took about 2.2 seconds. Under these operating conditions, the measured sensitivity was about 95 dB, with about a 20 dB sensitivity drop at an imaging depth of 2 mm.
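As a quick check of the quoted numbers, the detectable depth range follows from the per-pixel spectral resolution via the common approximation z_max ≈ λ₀²/(4 δλ) (a standard Nyquist-limit estimate, not a formula stated in the text):

```python
# Maximum imaging depth of a spectrometer-based OCT system: with a sampling
# interval delta_lambda in wavelength, the Nyquist-limited depth range in
# air is approximately lambda0**2 / (4 * delta_lambda).
lambda0 = 830e-9         # center wavelength, m
delta_lambda = 0.075e-9  # spectral resolution per pixel, m
z_max = lambda0 ** 2 / (4 * delta_lambda)
print(f"depth range: {z_max * 1e3:.2f} mm")  # close to the 2.3 mm in the text
```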
4. Results and Discussion
4.1 Fundus intensity image
To test the algorithm, the eye of a volunteer was imaged. Figures 2(a) and 2(b) show fundus images of the optic disc and the fovea, respectively, generated with the algorithm of Eq. (5). The spectrum corresponding to each A-scan was first high-pass filtered to eliminate the first two terms of Eq. (1), then squared and summed. Figure 2(c) shows the corresponding fundus image acquired with a conventional near-infrared (785 nm) SLO, as implemented in the GDx-VCC™ (Laser Diagnostic Technologies, San Diego, CA).
The OCT intensity images generated by the proposed algorithm [Fig. 2(a), (b)] compare favorably with the SLO image of the same areas [Fig. 2(c)]. Both large blood vessels around the optic disc and much smaller vessels around the fovea are imaged with good contrast. Clearly, these OCT fundus images can serve the purposes of image registration and feature location.
Figures 3 and 4 are movies that emphasize the relation between the fundus images and the underlying 3-D OCT data sets. The reference plane was placed anterior to the retinal surface. The displayed dynamic range of the OCT images is 43 dB. The movies show the sequentially acquired B-scan images and the corresponding fundus images in the regions of the human fovea and optic disc, respectively. The white line crossing the fundus image indicates the position of the OCT B-scan. Because each fundus image was generated from the same spectra used to produce the 3-D OCT data set, it provides all the landmarks necessary for orienting the 3-D data to other fundus images, such as fundus photographs and fluorescein angiograms. In addition, transverse eye movements that occur during the OCT data acquisition can be visualized.
4.2 Comparison of algorithms
Although the expressions in Eqs. (4) and (5) are derived from a spectrum evenly sampled in frequency (requiring interpolation), it is desirable to use the raw spectrum (approximately evenly sampled in wavelength) to generate the fundus images quickly. To explore this issue, we calculated two fundus images from the same data set using the algorithm of Eq. (5): one from interpolated spectra sampled evenly in frequency, the other from raw spectra sampled evenly in wavelength. Figure 5 shows the frequency distribution of the difference between the two fundus images; the difference was negligible. This means that the fundus image can be generated directly from the spectrometer output by high-pass filtering, squaring, and summing, which makes it possible to display the fundus image as quickly as it is acquired. This advantage allows an operator to assess scan quality before time-consuming data analysis.
We also tested the algorithm for calculating the fundus image based on Eq. (4). The image produced was much noisier than the image from the algorithm based on Eq. (5). In Eq. (4), the reference light adds an offset that is much larger than the useful part Σ_n R_n; even a small variation in the reference light intensity can obscure the fundus image in the varying background. This problem might be solved by obtaining a measure of the reference intensity (in one of several ways), scaling it appropriately, and subtracting it from Eq. (4).
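One simple version of the proposed fix estimates the reference spectrum as the mean over all A-lines in a frame and subtracts it before summing (a sketch under that single assumption; the function name is hypothetical, and other reference estimates would work equally well):

```python
import numpy as np

def fundus_dc_method(spectra):
    """Fundus values from the non-interference (DC) terms of Eq. (4), with
    the reference contribution estimated and removed.

    spectra : 2-D array, shape (num_alines, num_pixels), raw camera rows
    """
    # Mean spectrum over all A-lines serves as the reference estimate.
    reference = spectra.mean(axis=0)
    # Per-A-line deviation from that estimate carries the sample intensity.
    residual = spectra - reference
    # Summation across the spectrum, as in Eq. (4).
    return residual.sum(axis=1)
```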
4.3 Blood vessel shadowgrams
The retinal blood vessel pattern is the most prominent feature of fundus images and lends itself naturally to multi-modality image registration. The vessels seen in the fundus intensity images derived here are adequate for subsequent registration algorithms, but higher contrast is always desirable. The contribution of the blood vessels to a fundus image is the superposition of two parts that can be seen in an OCT image: the back-reflection at the locations of the vessels and the shadows they cast on the deeper retinal layers. This superposition reduces the contrast of the blood vessels in the fundus image. If we peel off the inner layers of the retina, where the blood vessels are located, from the 3-D OCT data, we obtain a partial fundus image that contains only the contribution of the shadows. The contrast of the blood vessels in this image should be enhanced.
We have developed a means to capture these shadows from the 3-D OCT data set. To overcome the uncertain location of the retina within the 3-D data set and the tilt and curvature of the retina itself, we used image segmentation algorithms to establish a reference surface at the retinal pigment epithelium (RPE) [Fig. 6(a)]. A slab of retinal tissue was then defined by two surfaces parallel to and on either side of the reference surface [one cross section of the slab is shown as the green region in Fig. 6(a)]. Equation (6) was used, with the summation restricted to values of τ that fell within the slab, to calculate the intensity reflected from the slab:

F_slab(x, y) = Σ_{τ ∈ slab} |Γ_d(τ)|².
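The slab-restricted summation described above can be sketched as follows (an illustrative implementation; the array layout, function name, and per-pixel depth indices are assumptions, with the segmented RPE surface taken as given):

```python
import numpy as np

def shadowgram(volume, rpe_depth, half_thickness):
    """Partial fundus image ('shadowgram') from a 3-D OCT intensity volume.

    volume         : array (nx, ny, nz) of A-scan intensities |OCT|^2
    rpe_depth      : array (nx, ny) of segmented RPE depth indices
    half_thickness : slab half-width, in depth samples, around the RPE
    """
    nx, ny, nz = volume.shape
    out = np.zeros((nx, ny))
    for x in range(nx):
        for y in range(ny):
            # Restrict the depth summation to the slab straddling the
            # segmented reference surface (Eq. (6) over a limited tau range).
            lo = max(0, rpe_depth[x, y] - half_thickness)
            hi = min(nz, rpe_depth[x, y] + half_thickness + 1)
            out[x, y] = volume[x, y, lo:hi].sum()
    return out
```

Following the slab in depth per (x, y) position is what makes the result insensitive to retinal tilt and curvature.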
The resulting intensity image, which can be called a “shadowgram”, shows the blood vessel shadows projected onto the tissue in the slab (primarily the RPE) at very high contrast [Fig. 6(b)].
5. Conclusion

We have developed a technique to acquire a fundus intensity image of SLO quality from the raw spectra measured with spectral-domain OCT, the same spectra used to generate a 3-D OCT data set. This technique offers simultaneous fundus and OCT images and therefore solves the problem of registering a cross-sectional OCT image to fundus features. Because the fundus image is generated from the measured raw spectra, it can be displayed as quickly as it is acquired. The concepts used to generate fundus images were extended to produce high-contrast shadowgrams of retinal blood vessels that will facilitate the registration of OCT data to other imaging modalities. The techniques presented here were successfully demonstrated with a high-speed spectral-domain OCT system on the human retina in vivo.
Acknowledgments

We thank Yoshiaki Yasuno of the University of Tsukuba, Japan, for valuable advice and information. This work was supported in part by a generous gift from The Wollowick Family Foundation.
References and links
1. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254, 1178–1181 (1991).
2. S. A. Boppart, J. Herrmann, C. Pitris, D. L. Stamper, M. E. Brezinski, and J. G. Fujimoto, “High-resolution optical coherence tomography-guided laser ablation of surgical tissue,” J. Surg. Res. 82, 275–284 (1999).
3. A. V. D’Amico, M. Weinstein, X. Li, J. P. Richie, and J. G. Fujimoto, “Optical coherence tomography as a method for identifying benign and malignant microscopic structures in the prostate gland,” Urology 55, 783–787 (2000).
4. D. X. Hammer, R. D. Ferguson, J. C. Magill, M. A. White, A. E. Elsner, and R. H. Webb, “Image stabilization for scanning laser ophthalmoscopy,” Opt. Express 10, 1542–1549 (2002), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-10-26-1542
5. A. G. Podoleanu, G. M. Dobre, R. G. Cucu, R. Rosen, P. Garcia, J. Nieto, D. Will, R. Gentile, T. Muldoon, J. Walsh, L. A. Yannuzzi, Y. Fisher, D. Orlock, R. Weitz, J. A. Rogers, S. Dune, and A. Boxer, “Combined multiplanar optical coherence tomography and confocal scanning ophthalmoscopy,” J. Biomed. Opt. 9, 86–93 (2004).
6. A. G. Podoleanu and D. A. Jackson, “Noise analysis of a combined optical coherence tomography and a confocal scanning laser ophthalmoscope,” Appl. Opt. 38, 2116–2127 (1999).
7. C. K. Hitzenberger, P. Trost, P. Lo, and Q. Zhou, “Three-dimensional imaging of the human retina by high-speed optical coherence tomography,” Opt. Express 11, 2753–2761 (2003), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-11-21-2753
8. A. F. Fercher, C. K. Hitzenberger, G. Kamp, and S. Y. El-Zaiat, “Measurement of intraocular distances by backscattering spectral interferometry,” Opt. Commun. 117, 43–48 (1995).
9. G. Häusler and M. W. Lindner, “Coherence radar and spectral radar—new tools for dermatological diagnosis,” J. Biomed. Opt. 3, 21–31 (1998).
10. M. Wojtkowski, R. Leitgeb, A. Kowalczyk, T. Bajraszewski, and A. F. Fercher, “In vivo human retinal imaging by Fourier domain optical coherence tomography,” J. Biomed. Opt. 7, 457–463 (2002).
11. N. A. Nassif, B. Cense, B. H. Park, M. C. Pierce, S. H. Yun, B. E. Bouma, G. J. Tearney, T. C. Chen, and J. F. de Boer, “In vivo high-resolution video-rate spectral-domain optical coherence tomography of the human retina and optic nerve,” Opt. Express 12, 367–376 (2004), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-12-3-367
12. B. Cense, N. A. Nassif, T. C. Chen, M. C. Pierce, S. Yun, B. H. Park, B. E. Bouma, G. J. Tearney, and J. F. de Boer, “Ultrahigh-resolution high-speed retinal imaging using spectral-domain optical coherence tomography,” Opt. Express 12, 2435–2447 (2004), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-12-11-2435
13. R. A. Leitgeb, W. Drexler, A. Unterhuber, B. Hermann, T. Bajraszewski, T. Le, A. Stingl, and A. F. Fercher, “Ultrahigh resolution Fourier domain optical coherence tomography,” Opt. Express 12, 2156–2165 (2004), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-12-10-2156
14. M. Wojtkowski, V. J. Srinivasan, T. H. Ko, J. G. Fujimoto, A. Kowalczyk, and J. S. Duker, “Ultrahigh-resolution, high-speed, Fourier domain optical coherence tomography and methods for dispersion compensation,” Opt. Express 12, 2404–2422 (2004), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-12-11-2404
15. J. W. Goodman, Statistical Optics (Wiley, New York, 1985).