## Abstract

Swept-source optical coherence tomography (SS-OCT) is sensitive to sample motion during the wavelength sweep, which leads to image blurring and image artifacts. In line-field and full-field SS-OCT, parallelization is achieved by using a line or area detector, respectively, and approximately 1000 lines or images at different wavenumbers are acquired. The sweep duration is therefore identical to the acquisition time of a complete B-scan or volume, rendering parallel SS-OCT more sensitive to motion artifacts than scanning OCT. The effect of axial motion on the measured spectra is similar to the effect of non-balanced group velocity dispersion (GVD) in the interferometer arms: it causes the apparent optical path lengths in the sample arm to vary with the wavenumber. Here we propose the cross-correlation of sub-bandwidth reconstructions (CCSBR) as a new algorithm that is capable of detecting and correcting the artifacts induced by axial motion in line-field or full-field SS-OCT as well as by GVD mismatch in any Fourier-domain OCT (FD-OCT) setup. By cross-correlating images which were reconstructed from a limited spectral range of the interference signal, a phase error is determined which is used to correct the spectral modulation prior to the calculation of the A-scans. The performance of the algorithm is demonstrated on *in vivo* full-field SS-OCT images of skin and on scanning FD-OCT of skin and retina.

© 2012 Optical Society of America

## 1. Introduction

Fourier-domain optical coherence tomography (FD-OCT) is an imaging technique that is based on the interference signal between light reflected by a reference mirror and light scattered or reflected by a sample. It uses multiple wavelengths to encode differences in the optical path lengths of these two arms and allows cross-sectional images to be obtained. FD-OCT is nowadays well established in ophthalmology and shows great potential in other medical and industrial applications. The non-contact optical imaging technique can measure the anterior and the posterior part of the eye with a depth resolution of a few micrometers. While in most cases a scanning confocal imaging scheme is used, OCT imaging utilizing a line [1] or area [2] camera to parallelize acquisition [3] over multiple lateral positions has recently been shown. Although this so-called line-field or full-field swept-source OCT (FF-SS-OCT), based on a rapidly wavelength-swept laser, has been demonstrated to work for *in vivo* measurements of the human retina [4], the requirements on the acquisition speed of the digital camera are enormous. High-speed CMOS cameras are needed, acquiring data with up to 200,000 fps at a reduced area of interest. However, at these acquisition rates the data cannot be transferred continuously to a computer. Additionally, even at these high speeds motion artifacts remain a problem for measurements of unstabilized tissue, as demonstrated in Fig. 1. Compared to scanning OCT, the A-scan acquisition time increases dramatically and becomes identical to the acquisition time of a complete volume, as a complete camera image needs to be digitized for each wavenumber. During the sweep time the sample motion can no longer be neglected. The most general motion a sample can perform during a measurement is composed of rotations and axial and transverse movements.
As in scanning SS-OCT, the Doppler effect increases the impact of axial motion by a factor of *λ*_{stop}/(*λ*_{start} – *λ*_{stop}), where *λ*_{start} and *λ*_{stop} denote the start and stop wavelengths of the sweep, respectively [5]. This effect renders the axial direction the most vulnerable axis of a full-field SS-OCT system and causes axial blurring in the resulting image. For example, for a system with a center wavelength of 860 nm and a sweep range of 50 nm, the axial direction is approximately 17 times more sensitive to motion artifacts than the transverse direction.
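This sensitivity factor is essentially the ratio of the wavelength to the sweep range, as a quick calculation illustrates (a sketch; the wavelengths are illustrative values matching the 860 nm center and 50 nm sweep quoted above):

```python
# Doppler magnification of axial motion in SS-OCT: approximately the ratio of
# the stop wavelength to the sweep range, lambda_stop / (lambda_start - lambda_stop).
lam_start = 885e-9   # start wavelength in m (illustrative)
lam_stop = 835e-9    # stop wavelength in m (illustrative)

factor = lam_stop / (lam_start - lam_stop)
print(round(factor, 1))  # approximately 17
```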

The motion causes a chirp of the momentary frequency of the interference signal; thus motion introduces additional time-dependent phase factors. As will be shown, these phase factors are comparable to the effect of unmatched group velocity dispersion (GVD) between sample and reference arm. This mismatch is usually caused by different optical components, but can also result from the sample itself. Especially for high-resolution FD-OCT its correction is crucial, but matching the interferometer arms alone often does not yield optimal results [6, 7, 8]. For retinal imaging, the GVD introduced by the ocular bulb, whose length varies between individuals by 20% [9], influences resolution and image quality, and only an individual correction of the GVD yields optimal image quality. In order to correct the mismatch, it first needs to be determined correctly. Afterwards, GVD correction is possible by optical means [10], but this is expensive and cumbersome. Alternatively, dispersion correction is most easily done numerically by a simple multiplication with phase factors prior to the fast Fourier transform (FFT).

There have been approaches to determine these factors by using calibration measurements [11], but these methods fail if dispersion is introduced by the sample itself or if the phase factors change for each wavelength scan, as in the case of motion-induced effects. Other methods rely on functional optimization techniques that iteratively change the coefficients of a polynomial approximation of the dispersion in order to optimize the axial resolution of the OCT data at hand [12]. These techniques will also fail for phase factors induced by sample motion if the motion cannot be described by a polynomial of sufficiently low degree. Complexity and computation time increase significantly with each additional degree of freedom, while robustness decreases. Additionally, optimization algorithms rely on a correct determination of the axial resolution of the data and need a stopping criterion. Hence there is a need for a non-iterative algorithm that can determine the phase error in SS-OCT data introduced by axial sample motion or by GVD mismatch in the interferometer arms.

Here, we show that the phase error can be retrieved directly from the acquired OCT data and present an algorithm based on the cross-correlation of sub-bandwidth reconstructed images. The algorithm does not require any additional calibration measurements; it uses only the spectral raw data that was subjected to axial motion or dispersion. No image quality metrics or optimization techniques are required. It is demonstrated that the algorithm can determine the chirp in full-field SS-OCT data caused by sample motion in *in vivo* measurements and can correct the dispersion of the ocular media in retinal imaging.

## 2. Effects of sample motion and GVD mismatch on the OCT signal and its correction

The complex representation (analytical signal) of the cross-correlation term of an FD-OCT interference signal *I*_{D}(*k*) is given by

$$I_D(k) = S(k)\int f(z)\,e^{-\mathrm{i}2kz}\,\mathrm{d}z,\tag{1}$$

where *k* denotes the continuously changed wavenumber, *S*(*k*) describes the spectral shape of the irradiation, *z* is the depth of the structures in the sample relative to the reference and $f(z)=\sqrt{{R}_{\text{sample}}(z)}$ the OCT signal caused by the local reflectivity *R*_{sample}(*z*) at relative depth *z*. For simplicity we used the analytical signal, although in general the acquired signal is only the real part of *I*_{D}(*k*). As long as only positive or negative *z* exists, i. e. the reference plane is not within the sample volume, the analytical signal is equivalent to the real signal.

Axial movement of the sample during the wavenumber sweep or GVD mismatch between reference and sample arm both introduce an additional path length difference *z*_{disp}(*k*) that is dependent on the wavenumber *k*:

$$I_D^{\text{disp}}(k) = S(k)\int f(z)\,e^{-\mathrm{i}2k\left(z + z_{\text{disp}}(k)\right)}\,\mathrm{d}z = I_D(k)\exp(-\mathrm{i}\phi(k))\tag{2}$$

with a suitable *ϕ*(*k*) = 2*kz*_{disp}(*k*). Once *ϕ*(*k*) is known its effect can simply be reverted by multiplying ${I}_{D}^{\text{disp}}(k)$ in Eq. (2) with the complex conjugated phase term [exp(−i*ϕ*(*k*))]^{*} = exp(+i*ϕ*(*k*)) prior to calculating the A-scans by the Fourier transform.
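In code, this correction step is a single elementwise multiplication before the transform. The following is a minimal sketch (the function name and the choice of `ifft` as the A-scan transform are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def correct_phase_error(spectrum, phi):
    """Revert a wavenumber-dependent phase error before A-scan computation.

    spectrum -- complex analytical interference signal I_D^disp(k), shape (N,)
    phi      -- estimated phase error phi(k) in radians, shape (N,)
    Returns the complex A-scan of the corrected spectrum.
    """
    corrected = spectrum * np.exp(1j * phi)  # multiply with the conjugated phase term
    return np.fft.ifft(corrected)            # Fourier transform yields the A-scan
```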

The effect of *ϕ*(*k*) on the OCT signal is seen by a Taylor expansion around a central wavenumber *k*_{0}:

$$\phi(k) = \phi(k_0) + \left.\partial_k\phi(k)\right|_{k=k_0}\Delta k + O\!\left(\Delta k^2\right),\qquad \Delta k = k - k_0,\tag{3}$$

which, inserted into Eq. (2), yields

$$I_D^{\text{disp}}(k) \approx S(k)\,e^{-\mathrm{i}\phi(k_0)}\,e^{-\mathrm{i}\left.\partial_k\phi(k)\right|_{k=k_0}\Delta k}\int f(z)\,e^{-\mathrm{i}2kz}\,\mathrm{d}z.\tag{4}$$

ℱ^{−1}[*S*(*k*)] is the point spread function of the OCT signal and the constant term *ϕ*(*k*_{0}) adds an additional phase. The linear term *∂_{k}ϕ*(*k*)|_{k=k_0}Δ*k* will shift the complete image due to the relation ℱ^{−1}[*f*(*z* + Δ*z*)] = exp(−i*k*Δ*z*)ℱ^{−1}[*f*(*z*)], and only higher order terms will cause blurring of the image.
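The statement that a linear phase term merely shifts the A-scan can be checked numerically; the following is a sketch in discrete pixel units with a single reflector, not taken from the paper:

```python
import numpy as np

# A linear phase ramp on the spectrum shifts the reconstructed A-scan without
# blurring it (discrete Fourier shift theorem).
N = 1024
k = np.arange(N)
z0, dz = 200, 30                                 # reflector depth and shift (pixels)
spectrum = np.exp(1j * 2 * np.pi * k * z0 / N)   # single reflector at depth z0
shifted = spectrum * np.exp(1j * 2 * np.pi * k * dz / N)

a0 = np.abs(np.fft.fft(spectrum))
a1 = np.abs(np.fft.fft(shifted))
print(np.argmax(a0), np.argmax(a1))              # peak moves from 200 to 230
```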

## 3. Determination of the correcting phase function

#### 3.1. Theory

The basic idea for retrieving the phase term *ϕ*(*k*) from the image data is to apply a shifted windowing function *w*(*k*) to the dispersion-affected spectral data ${I}_{D}^{\text{disp}}(k)$ prior to the Fourier transform. The window has a *k*-bandwidth small enough that the linear approximation of *ϕ*(*k*) in Eq. (4) becomes valid, as demonstrated in Fig. 2. The center *k*′_{0} of the windowing function is then shifted along the whole spectrum. The resulting short-time Fourier transform (STFT) is given by

$$\mathrm{STFT}(z, k_0') = \mathcal{F}^{-1}\!\left[w(k - k_0')\,I_D^{\text{disp}}(k)\right]\!(z).\tag{5}$$

With dispersion, the apparent depth of the image structures, which depends on the local frequencies, will depend on *k*′_{0}, as seen in Fig. 2. From the displacement of the structures calculated by the STFT at different *k*′_{0}, the phase *ϕ*(*k*) can be extracted. For reflections at interfaces this can be done by simple peak detection. For more complicated samples, a cross-correlation of the signal amplitudes at the different *k*′_{0} with the STFT at a fixed wavenumber *k*″_{0}, e. g. the center wavenumber of the OCT source, can determine the shift. The cross-correlation can be efficiently implemented by using fast Fourier transforms (FFTs), and the displacement Δ*z*(*k*′_{0}) = *∂_{k}ϕ*(*k*)|_{k=k′_0} can be determined by finding the maximum of the cross-correlation

$$\Delta z(k_0') = \underset{z}{\operatorname{argmax}}\left[\,\left|\mathrm{STFT}(z, k_0'')\right| \star \left|\mathrm{STFT}(z, k_0')\right|\,\right],$$

where argmax_{z} denotes the *z*-value at the maximum of its argument and ★ denotes the cross-correlation. This procedure to obtain the derivative of the phases is illustrated in Fig. 3. The required phase function *ϕ*(*k*) can then be calculated, up to a linear term, by integration of Δ*z*(*k*):

$$\phi(k) = \int_{k_0''}^{k} \Delta z(\kappa)\,\mathrm{d}\kappa + C.\tag{6}$$

The arbitrary integration constant *C* defines a constant phase factor of the A-scans, which has no influence on the resulting image intensity. The choice of *k*″_{0} determines the linear term of *ϕ*(*k*) and therefore the axial position at which the dispersion-corrected image will appear. In the case of object motion or GVD mismatch this is usually not the exact physical position. Likewise, the apparent displacement Δ*z*(*k*) is not identical to the physical motion of the sample. The cross-correlation of sub-bandwidth reconstructions (CCSBR) of the recorded images thus leads directly to a correction phase function that eliminates the dispersion-related image blurring.
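The complete procedure — windowed sub-bandwidth reconstruction, cross-correlation against a fixed reference window, and integration of the resulting displacements — can be sketched for a single spectrum as follows. This is a simplified illustration with assumed parameter names (`win_len`, `step`) and integer-pixel shifts; it omits the sub-pixel refinement and the lateral dimensions discussed in Sec. 3.2:

```python
import numpy as np

def ccsbr_phase(spectrum, win_len=128, step=16):
    """Sketch of CCSBR for a single complex spectrum (pixel units).

    Sub-bandwidth images are reconstructed with a sliding Hann window, their
    axial displacement relative to the center-window reconstruction is found
    by cross-correlation, and the displacements are integrated to phi(k).
    """
    N = len(spectrum)
    win = np.hanning(win_len)
    centers = np.arange(win_len // 2, N - win_len // 2, step)

    def sub_image(c0):
        w = np.zeros(N)
        w[c0 - win_len // 2 : c0 + win_len // 2] = win
        return np.abs(np.fft.ifft(spectrum * w))

    ref = sub_image(centers[len(centers) // 2])      # fixed reference window k''_0
    shifts = []
    for c in centers:
        # circular cross-correlation via FFTs; the peak position gives the shift
        xc = np.fft.ifft(np.fft.fft(sub_image(c)) * np.conj(np.fft.fft(ref)))
        s = int(np.argmax(np.abs(xc)))
        shifts.append(s - N if s > N // 2 else s)    # map to a signed shift

    # interpolate Delta z(k) to all wavenumbers and integrate, cf. Eq. (6)
    dz = np.interp(np.arange(N), centers, shifts)
    return 2 * np.pi * np.cumsum(dz) / N             # phi(k) in pixel-unit convention
```

Multiplying the spectrum with `np.exp(1j * phi)` before the inverse FFT then removes the blurring, up to an overall axial shift caused by the unknown linear term.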

The proposed algorithm is limited by a trade-off between the accuracy and the spectral resolution with which Δ*z*(*k*′_{0}) = *∂_{k}ϕ*(*k*)|_{k=k′_0} can be determined. A large window size *δk* of the STFT increases the depth resolution of the displacement, but decreases the spectral resolution with which dispersion or motion is calculated. The uncertainty relation states that the standard deviation of a signal cannot be arbitrarily small in time and frequency at the same time, i. e. *δz* · *δk* ≳ ½. The displacement can, however, be determined more accurately than predicted by the uncertainty relation: *δz* determines only the width of the maximum of the cross-correlation, and the position of this maximum can be determined more precisely by means of zero-padding prior to the STFT and local fits to the maximum (see 3.2.2).

#### 3.2. Implementation

##### 3.2.1. Preprocessing

Often FD-OCT signals contain fixed pattern noise from the detection electronics or from self-interference of reflecting surfaces in the optical setup, inducing image structures with fixed positions in the B-scans or volume data. These imaging artifacts are independent of dispersion and sample motion. If sufficiently strong, they obscure the detection of the axial shifts.

For spectrometer-based FD-OCT, fixed-pattern artifacts are effectively removed by dividing the spectral data by a reference spectrum and apodizing the spectral shape to a Hann window. The phase in SS-OCT data is not stable enough for this approach. Removing such artifacts in the final intensity B-scans by subtracting a reference image destroys the phase information that is required for a successful phase detection. Subtracting these patterns in the STFT data destroys large parts of the image due to the reduced spatial resolution of the STFT data, and a (negative) fixed pattern is introduced into the images.

Instead, the fixed-pattern noise was efficiently removed by lateral high-pass filtering. A similar approach was successfully used as lateral filtering when creating complex spectral data in full-range OCT [14]; here its only purpose is to remove low-frequency components. The measured spectra of a complete 3D volume or a series of B-scans were Fourier transformed with respect to *k* and the *x*, *y* directions. Fixed patterns show up at the respective depths as bright spots close to the lateral zero frequencies of the Fourier-transformed spectra. High-pass filtering with respect to the *x* and *y* directions effectively removes the fixed patterns (see Fig. 4), while maintaining most of the image information including the phases, which due to the speckle are located in high-frequency regions.
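A minimal sketch of such a lateral high-pass for a single B-scan (one spectrum per lateral position) might look as follows; the function name and the `cutoff` parameter are illustrative assumptions, not values from the paper:

```python
import numpy as np

def remove_fixed_pattern(spectra, cutoff=3):
    """Suppress fixed-pattern noise by lateral high-pass filtering (sketch).

    spectra -- raw data of shape (Nx, Nk): one spectrum per lateral position.
    cutoff  -- number of low lateral-frequency components to zero out.
    """
    f = np.fft.fft(spectra, axis=0)   # Fourier transform along the lateral axis
    f[:cutoff] = 0                     # zero the lateral DC and low frequencies ...
    f[-cutoff + 1:] = 0                # ... and their negative counterparts
    return np.fft.ifft(f, axis=0)
```

Structures that are identical at every lateral position (the fixed pattern) live at the lateral zero frequency and are removed, while speckle-carrying image content at higher lateral frequencies survives.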

##### 3.2.2. Detection algorithm

In our experiments a Hann window, which effectively suppresses side lobes, was used for the STFT. However, the Hann window reduces the spatial resolution *δz* compared to a rectangular window. Good results were achieved using a window length between 64 and 128 pixels for spectra with 1024 data points. Outside the window all data points were set to zero, and after the Fourier transform, data sets with 513 pixels, i. e. only the positive-frequency part of the FFT, were cross-correlated. This effectively provides an interpolation of the correlation of the windowed data. To increase the signal-to-noise ratio (SNR), the cross-correlation can be restricted to regions of interest that contain image information with high contrast and good SNR.

The cross-correlation was efficiently implemented using FFTs, which calculate a circular cross-correlation if no additional zero-padding is applied. However, as long as the Δ*z* shifts remain smaller than half of the measurement depth, the displacement can still be determined uniquely. As the effect of lateral shift was neglected in the motion-detection scenario, and as no lateral shift is present in the GVD-mismatch case, the robustness of the algorithm can be increased by searching for the maximum of the cross-correlation only at zero lateral shift, i. e. Δ*x* = 0 and Δ*y* = 0. For sub-pixel accuracy of the displacement detection, a parabola through the logarithm of the maximum intensity value and its two neighboring data points was computed. The position of the maximum of this parabola was taken as the displacement between the cross-correlated STFTs. A spline interpolation was used to obtain the intermediate data points, and the result was numerically integrated by cumulative summation in order to calculate *ϕ*(*k*) according to Eq. (6).
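The sub-pixel refinement can be sketched as follows (a minimal illustration; the function name is an assumption and wrap-around at the array ends is ignored for brevity):

```python
import numpy as np

def subpixel_peak(xcorr):
    """Sub-pixel maximum of a cross-correlation by a parabolic fit (sketch).

    Fits a parabola through the logarithm of the peak value and its two
    neighbors, as described in the text, and returns the fractional position.
    """
    mag = np.abs(xcorr)
    i = int(np.argmax(mag))
    ym, y0, yp = np.log(mag[i - 1]), np.log(mag[i]), np.log(mag[i + 1])
    # vertex of the parabola through (-1, ym), (0, y0), (+1, yp)
    delta = 0.5 * (ym - yp) / (ym - 2.0 * y0 + yp)
    return i + delta
```

For a correlation peak whose logarithm is locally quadratic, e.g. a Gaussian, this vertex formula is exact.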

## 4. Materials and methods

The new CCSBR algorithm was tested for motion correction with a newly developed full-field SS-OCT and for GVD mismatch correction in two different spectrometer based OCT devices.

#### 4.1. Full-field SS-OCT setup for axial motion experiments

The purpose-built setup for full-field SS-OCT, which is described in more detail elsewhere [4], used a tunable laser (Superlum Broadsweeper BS840-1) with a tuning range from 873.5 nm to 823.5 nm, whose output was split into reference and sample arm (Fig. 5). The light in the sample arm illuminated the object with a collimated beam, and the backscattered and reflected light was imaged onto a monochrome high-speed CMOS camera (A503k, Basler AG, Germany or FASTCAM SA5, Photron Inc., USA). The reference light was reflected from a mirror and then collimated onto the camera. Camera and light source were synchronized by a start trigger of the tunable laser, which started the acquisition of 1024 images by the camera. The maximal frame rate of the Photron SA5 camera depended on the number of pixels used for imaging. With a field of 512 × 192 pixels, a frame rate of up to 36,000 frames/s was reached and all 1024 images were acquired in a total acquisition time of 28 ms.

#### 4.2. FD-OCT setup for GVD mismatch experiments

Correction of GVD mismatch was demonstrated with a commercially available 1300 nm spectrometer-based FD-OCT (Telesto, Thorlabs GmbH, Germany) with an axial resolution of 10 *μ*m in air, operated at an A-scan rate of 5.5 kHz. The GVD mismatch between reference and sample arm was created by a 16 mm rod of SF57 glass in the reference arm.

For experiments at 900 nm and *in vivo* measurements of the human retina, a commercial high-resolution spectrometer-based FD-OCT (Ganymede, Thorlabs GmbH, Germany) was used. It provides an axial resolution of 4 *μ*m in air at an acquisition speed of 29 kHz. For measurements of the posterior segment of the human eye, it was coupled to a slit lamp by using the scanning unit and reference arm of a commercial FD-OCT (SL SCAN-1, Topcon Europe, Netherlands).

## 5. Results and discussion

#### 5.1. Axial motion in full-field SS-OCT

The proposed CCSBR algorithm was able to correct for dynamically changing dispersion in full-field SS-OCT, which was introduced by sample motion during the wavelength sweep. Retinal imaging requires a few hundred sweeps per second for *in vivo* measurements [4]. At lower imaging speeds, tissue movements lead to blurring of the images. Imaging of the finger tip at 36 volumes/second showed considerable broadening of the surface reflection and a wash-out of the sweat glands, even when the finger was stabilized in a ring (Fig. 6a). For motion correction, windows of 64 pixels at every 16^{th} pixel were used. The apparent change of the axial position Δ*z* over the sweep, which was determined by cross-correlation of the STFTs, was more than 14 pixels, corresponding to 100 *μ*m (Fig. 6c). This is about 17 times the real axial movement. After calculation of *ϕ*(*k*) and correction, no motion artifacts were visible in the reconstructed image (Fig. 6b). The CCSBR algorithm improved the FWHM from 4.3 pixels to 2.1 pixels, which is close to the theoretically possible optimum of 2.0 pixels for a Hann-windowed signal of a flat reflecting surface [15]. However, the improvement of the image quality appears to be even larger. According to the extracted shift Δ*z* (Fig. 6c), only an approximately 5–10× higher imaging speed could have achieved comparable images without motion correction.

However, motion correction using CCSBR cannot restore the resolution if the motion is too severe. When the imaging speed was reduced to nearly 4 volumes/second, the corrected images did not attain the full resolution and contained severe image artifacts (Fig. 7). During the sweep, the apparent position of structures in the image moved over 100 pixels, which corresponds to 0.74 mm (≈ 1000 × *λ*) axial shift. Not only was the amplitude of the motion higher, the movement was also not monotonic and changed direction five times over the wavelength sweep, which took about 260 ms. The STFT was no longer able to resolve the temporal changes of the frequency chirp, and CCSBR could not recover *ϕ*(*k*) with sufficient accuracy. Though the full image quality was not restored, the FWHM of the finger surface reflection was 2.1 pixels after the correction, again close to the optimal FWHM of 2.0 pixels. Additionally, lateral blurring is seen in the corrected image. This indicates that lateral motion may also contribute to the image degradation here.

The presented examples demonstrate the dominant influence of axial motion on the image quality, owing to the magnification by the Doppler effect. As the axial and lateral resolutions increase this will change, because the factor *λ*_{stop}/(*λ*_{start} – *λ*_{stop}) is reduced and lateral motion becomes more visible. As soon as lateral motion can no longer be neglected, the correction fails. The proposed algorithm to correct motion-induced phase errors is computationally expensive compared to full-field SS-OCT processing without compensation, but it allows the acquisition speed, and thereby the costs, to be reduced. On a dual 2.13 GHz quad-core Intel Xeon machine, the algorithm takes approximately 3 minutes for a complete measurement of *ϕ*(*k*) of a full-field raw volume of size 512 × 192 × 1024. Performance could probably be improved by restricting the measurement to a local subregion of the volume or by porting it to a graphics processing unit (GPU), which has been shown to increase OCT processing speed [16, 17].

#### 5.2. GVD mismatch in FD-OCT

CCSBR was also tested for correction of GVD mismatch. Windows of 128 pixels, centered at every 8^{th} pixel, were used for the STFT, which was applied to B-scan data with 2048 lateral positions. Each scan position had a spectrum with 1024 pixels for the 1300 nm FD-OCT and 2048 pixels for the 900 nm high-resolution FD-OCT, respectively. The extracted data were fitted by a 4^{th}-degree polynomial to recover the phase function.

*In vivo* measurements of a finger tip were performed with a GVD mismatch of 1.9 fs/nm using the 1300 nm spectrometer-based OCT. When the conjugate of the recovered phase error was multiplied with the analytical interference signal prior to the FFT, the FWHM of the surface of the finger tip, shown in Fig. 8, was reduced from about 36 *μ*m to about 13 *μ*m. The improved FWHM corresponds to about 2.6 pixels, which is close to the theoretically possible optimum of 2.0 pixels for a Hann-windowed spectrum [15].

Additionally, *in vivo* measurements of the human retina were performed, where the dispersion was only due to residual non-compensated dispersion of the human eye itself. We measured the retina of a normal-sighted and of a myopic eye (−15 diopters), which differ in GVD due to the different lengths of the eye bulbs. Although the GVD in the setup was matched for normal-sighted eyes, the imaging quality could still be improved in practice, even for the normal-sighted eye, as can be seen from Fig. 9a and 9b (Media 1). For the myopic eye, the axial blurring of the point spread function due to dispersion is even more clearly visible (Fig. 9c). Correcting the myopic eye using the phase error function *ϕ*(*k*) that was obtained for the normal-sighted eye does not give optimal results, as can be seen from Fig. 9d. This shows that the difference in GVD between the normal-sighted and the myopic eye is sufficient to decrease the imaging quality, and an individual correction is required. Using the individual phase error function *ϕ*(*k*), the B-scan shows the maximum improvement in axial resolution and imaging quality, as can be seen from Fig. 9e (Media 2).

Finally, we directly measured the improvement of the point spread function (PSF) by covering a mirror with 20 mm of water and compared the results to a dispersion correction method using an additional calibration measurement [11]. In both cases the PSF improved significantly, with hardly any difference between the two methods, as shown in Fig. 10a. Likewise, the phase errors *ϕ*(*k*) obtained by the two methods show only minor differences (Fig. 10b).

In general, dispersion correction using the proposed algorithm is quite robust. The phase error *ϕ*(*k*) directly depends on the refractive index function *n*(*k*), which, according to the Kramers-Kronig relation, varies only slowly with the wavenumber in non-absorbing media. Therefore, in the optical window of tissue, *n*(*k*) and the GVD are continuous and smooth functions that can be well approximated by a low-degree polynomial.

The proposed algorithm shows good performance. A comparison with two alternatives, the calculation of the phase error from previously measured PSFs and the iterative improvement of image sharpness, is shown in Table 1. The proposed CCSBR algorithm does not need a priori calibration measurements and is therefore especially suitable for correcting sample-induced GVD. Compared to approaches that parametrize the phase error function *ϕ*(*k*) and optimize the image quality, CCSBR is deterministic and should be faster. This makes the algorithm simpler and more effective. The described filtering of fixed-pattern noise is not necessary for spectrometer-based FD-OCT. On a dual 2.13 GHz quad-core Intel Xeon machine, the phase-error detection algorithm took approximately 4–5 seconds for FD-OCT spectral raw data of size 2048 × 1024 and approximately 10 seconds for raw data of size 2048 × 2048. Using GPUs [16, 17], the computation time for the phase-error function might be reduced to allow almost real-time determination. If the GVD does not depend on the sample, the function *ϕ*(*k*) does not need to be determined for every measurement, as the obtained phase data remain valid for the following measurements.

## 6. Conclusion and outlook

We presented the cross-correlation of sub-bandwidth reconstructions (CCSBR) as a new algorithm for determining wavenumber-dependent phase errors in FD-OCT that induce axial blurring of the images. These phase errors are caused either by axial motion in full-field SS-OCT or by GVD mismatch between reference and sample arm. The presented algorithm does not depend on functional optimization or a polynomial representation of the phases. This is important when it is applied to motion-induced phase errors, as these can in general not be described by a polynomial of low degree. The algorithm is not iterative and thus does not require a stopping criterion. For GVD mismatch correction no calibration is required, rendering it suitable for sample-induced phase errors and making it a valuable extension to scanning FD-OCT. If acquired data were subject to an unknown dispersion mismatch between the interferometer arms, the original data can be restored. However, the importance of the algorithm for full-field SS-OCT may be even greater. As full-field SS-OCT is very sensitive to motion artifacts, *in vivo* measurements are currently only possible with very expensive high-speed cameras and a reduced lateral area of interest. CCSBR thus reduces costs and makes it possible to acquire larger lateral areas of interest.

## References and links

**1. **S.-W. Lee and B.-M. Kim, “Line-field optical coherence tomography using frequency-sweeping source,” IEEE J. Sel. Top. Quantum Electron. **14**, 50–55 (2008). [CrossRef]

**2. **B. Považay, A. Unterhuber, B. Hermann, H. Sattmann, H. Arthaber, and W. Drexler, “Full-field time-encoded frequency-domain optical coherence tomography,” Opt. Express **14**, 7661–7669 (2006). [CrossRef]

**3. **M. Mujat, N. V. Iftimia, R. D. Ferguson, and D. X. Hammer, “Swept-source parallel OCT,” Proc. SPIE **7168**, 71681E (2009). [CrossRef]

**4. **T. Bonin, G. Franke, M. Hagen-Eggert, P. Koch, and G. Hüttmann, “In vivo Fourier-domain full-field OCT of the human retina with 1.5 million A-lines/s,” Opt. Lett. **35**, 3432–3434 (2010). [CrossRef] [PubMed]

**5. **S. H. Yun, G. Tearney, J. de Boer, and B. Bouma, “Motion artifacts in optical coherence tomography with frequency-domain ranging,” Opt. Express **12**, 2977–2998 (2004). [CrossRef] [PubMed]

**6. **R. Yadav, K.-S. Lee, J. P. Rolland, J. M. Zavislan, J. V. Aquavella, and G. Yoon, “Micrometer axial resolution OCT for corneal imaging,” Biomed. Opt. Express **2**, 3037–3046 (2011). [CrossRef] [PubMed]

**7. **T. Hillman and D. Sampson, “The effect of water dispersion and absorption on axial resolution in ultrahigh-resolution optical coherence tomography,” Opt. Express **13**, 1860–1874 (2005). [CrossRef] [PubMed]

**8. **P. Puvanathasan, P. Forbes, Z. Ren, D. Malchow, S. Boyd, and K. Bizheva, “High-speed, high-resolution Fourier-domain optical coherence tomography system for retinal imaging in the 1060 nm wavelength region,” Opt. Lett. **33**, 2479–2481 (2008). [PubMed]

**9. **W. Benjamin and I. Borish, *Borish’s Clinical Refraction* (Butterworth-Heinemann/Elsevier, 2006).

**10. **A. Yang, F. Vanholsbeeck, S. Coen, and J. Schroeder, “Chromatic dispersion compensation of an OCT system with a programmable spectral filter,” in Optical Coherence Tomography and Coherence Techniques V, R. Leitgeb and B. Bouma, eds., Vol. 8091 of *Proceedings of SPIE-OSA Biomedical Optics* (Optical Society of America, 2011), paper 809125.

**11. **S. Makita, T. Fabritius, and Y. Yasuno, “Full-range, high-speed, high-resolution 1-*μ*m spectral-domain optical coherence tomography using BM-scan for volumetric imaging of the human posterior eye,” Opt. Express **16**, 8406–8420 (2008). [CrossRef] [PubMed]

**12. **M. Wojtkowski, V. Srinivasan, T. Ko, J. Fujimoto, A. Kowalczyk, and J. Duker, “Ultrahigh-resolution, high-speed, Fourier domain optical coherence tomography and methods for dispersion compensation,” Opt. Express **12**, 2404–2422 (2004). [CrossRef] [PubMed]

**13. **A. F. Fercher, W. Drexler, C. K. Hitzenberger, and T. Lasser, “Optical coherence tomography—principles and applications,” Rep. Prog. Phys. **66**, 239 (2003). [CrossRef]

**14. **Y. Yasuno, S. Makita, T. Endo, G. Aoki, M. Itoh, and T. Yatagai, “Simultaneous B-M-mode scanning method for real-time full-range Fourier domain optical coherence tomography,” Appl. Opt. **45**, 1861–1865 (2006). [CrossRef] [PubMed]

**15. **F. J. Harris, “On the use of windows for harmonic analysis with the discrete Fourier transform,” Proc. IEEE **66**(1), 51–83 (1978). [CrossRef]

**16. **K. Zhang and J. U. Kang, “Graphics processing unit accelerated non-uniform fast Fourier transform for ultrahigh-speed, real-time Fourier-domain OCT,” Opt. Express **18**, 23472–23487 (2010). [CrossRef] [PubMed]

**17. **K. Zhang and J. U. Kang, “Real-time intraoperative 4D full-range FD-OCT based on the dual graphics processing units architecture for microsurgery guidance,” Biomed. Opt. Express **2**, 764–770 (2011). [CrossRef] [PubMed]