## Abstract

This paper describes a digital method that automatically focuses optical coherence
tomography (OCT) *en face* images without prior knowledge of the point spread function
of the imaging system. The method uses a scalar diffraction model to simulate wave
propagation from out-of-focus scatterers to the focal plane, and the propagation distance
between the out-of-focus plane and the focal plane is determined automatically via an
image-definition-evaluation criterion based on information entropy theory. Using the
proposed approach, we demonstrate that lateral resolution close to that at the focal plane
can be recovered from imaging planes outside the depth-of-field region. Fresh onion
tissues and mouse fat tissues are used in the experiments to show the performance of the
proposed method.

© 2012 OSA

## 1. Introduction

Optical coherence tomography (OCT) [1,2] is a relatively novel imaging modality that allows for non-invasive, cross-sectional imaging of turbid biological tissues with micrometer resolution and an imaging depth of up to 2 mm below the surface. In addition, OCT is capable of providing useful information about physiological processes within biological tissue, for example functional microcirculation [3,4]. When an OCT system scans a tissue, the probe beam is typically focused into the sample by an objective lens. Because an optical lens is used to focus the probe beam, only the part of the OCT image that falls within the depth of field (DOF) exhibits the desired lateral resolution, whereas the part that falls outside the DOF region is laterally blurred. Although the problem is somewhat mitigated by focusing the probe beam with a relatively low numerical aperture (NA), this treatment unfortunately reduces the attainable lateral resolution at the focal plane. In general, high lateral resolution and long DOF are two of the most desirable parameters for most OCT imaging applications; however, they are reciprocally coupled. The relationship between the lateral resolution and the DOF is schematically illustrated in Fig. 1(a) for low and Fig. 1(b) for high NAs.

Considerable effort has been devoted in the community to overcoming the coupling problem
between the lateral resolution and the DOF. Ding et al. [5] incorporated an
axicon lens with a top angle of 160° into the sample arm of an interferometer and
maintained 10 μm lateral resolution over a focusing depth of 6 mm. Xie et al. [6] developed a probe based on a gradient-index (GRIN) lens rod for
endoscopic spectral domain OCT with a capability of dynamically tracking the beam focus, for
which a dynamic focusing range from 0 to 7.5 mm was demonstrated without moving the probe
beam itself. Divetia et al. [7] used a 1 mm liquid-filled
polymer lens for endoscopic OCT applications that scanned the depth of focus
by changing the hydraulic pressure within the lens, enabling dynamic control of the focal
depth without the need for articulated parts. This configuration was shown to have a resolving
power of 5 μm over an object depth of 2.5 mm. Merino et al. [8] presented a combined *en face* OCT/SLO system equipped with
an adaptive optics closed loop for high resolution imaging of the in-vivo human retina. The
correction of aberrations produced by the adaptive optics closed-loop system increased the
signal-to-noise ratio in the images, and a slight improvement in the lateral resolution was also
obtained. Dynamic focusing or focus tracking [9,10] has also been used to maintain high lateral resolution over a
large imaging depth. Holmes et al. [11] presented a
multi-beam OCT system that overcomes the problem of limited lateral resolution inherent in
single-beam Fourier domain OCT and reduces speckle noise. However, the above-mentioned
hardware-based methods all require a special hardware configuration in the
system design to achieve their objectives, which inevitably limits the flexibility of
the OCT system by sacrificing scanning speed and increasing system cost.

Several digital methods also exist in the literature to compensate for image degradation outside the DOF. Several research groups (Yasuno et al. [12], Kulkarni et al. [13], Ralston et al. [14], Wang et al. [15], and Liu et al. [16]) proposed system models or point spread functions (PSFs) to describe coherent light and tissue interactions in OCT images, upon which deconvolution algorithms were applied to reduce transverse blurring and thereby improve the transverse resolution. Rolland et al. [17] developed a Gabor-based fusion technique, based on the concept of the inverse local Fourier transform and Gabor's signal expansion, to produce a high lateral resolution image throughout the depth of imaging. The digital methods above [12–17] require prior knowledge of the optical parameters or the PSF of the optical system. A vector-field model of optical coherence microscopy [18] and interferometric synthetic aperture microscopy (ISAM) [19] were proposed to solve the inverse scattering problem for interference microscopy. They can achieve reconstructed volumes with a resolution in all planes equivalent to that achieved only at the focal plane in conventional high-resolution microscopy. Yu et al. [20] adopted an angular spectrum diffraction model to simulate the wave propagation from out-of-focus scatterers, so that high-resolution details could be recovered from outside the depth-of-field region. The vector-field model [18], ISAM [19], and the angular spectrum diffraction method [20] do not require prior knowledge of the optical parameters; however, they do require knowing the position of the beam focus before image reconstruction. Although it is not ideal, the PSF may be measured with a specialized phantom in a well-controlled separate experiment. Based on a Gaussian-function OCT model, Liu et al. [21] proposed an automatic PSF estimation method that searches for the discontinuity point of the information entropy over a series of recovered images. However, the adopted OCT model is not sufficiently accurate because the phase information of the OCT signal is neglected. In addition, the PSF estimation method is only applicable to the Gaussian-function OCT model.

In this paper, we describe an alternative, software-based approach that uses a scalar diffraction model to simulate the wave propagation from out-of-focus scatterers to the focal plane. We use an information entropy method to automatically determine the propagation distance between the out-of-focus plane and the focal plane. We then demonstrate the effectiveness of the proposed method using fresh onion samples and fat tissues excised from mice.

## 2. Principle

#### 2.1. Scalar diffraction model

The scalar diffraction theory is accurate provided that the diffracting structures are large compared with the wavelength of light. For a monochromatic wave, the scalar field may be written as

$$u({P}_{0},t)=A({P}_{0})\mathrm{cos}\left[2\pi \nu t-\phi ({P}_{0})\right],\tag{1}$$

where $A({P}_{0})$ and $\phi ({P}_{0})$ are the amplitude and phase of the wave at position $P_{0}$,
and *ν* is the optical frequency. The scalar field may also be described
as a complex function (Eq. (2)) and obeys the time-independent Helmholtz equation (Eq. (3)):

$$U({P}_{0})=A({P}_{0})\mathrm{exp}\left[j\phi ({P}_{0})\right],\tag{2}$$

$$\left({\nabla }^{2}+{k}^{2}\right)U=0,\tag{3}$$

where ${\nabla }^{2}$ is the Laplacian operator, *k* is the wave number given by

$$k=\frac{2\pi }{\lambda },\tag{4}$$

and *λ* is the wavelength in the medium. According to the
Rayleigh-Sommerfeld diffraction theory [22], the wave
disturbance at observation point P at the focal plane Σ is the superposition of waves
emanating from all points at the de-focal plane Σ_{0} (Fig. 2). So, the optical field over the focal plane Σ can be represented as

$$U(P)=\frac{1}{j\lambda }\underset{{\Sigma }_{0}}{\iint }U({P}_{0})\frac{\mathrm{exp}(jkr)}{r}\mathrm{cos}\theta \,ds.\tag{5}$$

Based on the coordinate system displayed in Fig. 2, we get

$$U(x,y)=\frac{z}{j\lambda }\underset{{\Sigma }_{0}}{\iint }U({x}_{0},{y}_{0})\frac{\mathrm{exp}(jkr)}{{r}^{2}}\,d{x}_{0}d{y}_{0},\tag{6}$$

$$r=\sqrt{{z}^{2}+{(x-{x}_{0})}^{2}+{(y-{y}_{0})}^{2}}.\tag{7}$$

For the paraxial approximation, it can be re-written as

$$U(x,y)=\frac{\mathrm{exp}(jkz)}{j\lambda z}\underset{{\Sigma }_{0}}{\iint }U({x}_{0},{y}_{0})\mathrm{exp}\left\{\frac{jk}{2z}\left[{(x-{x}_{0})}^{2}+{(y-{y}_{0})}^{2}\right]\right\}d{x}_{0}d{y}_{0},\tag{8}$$

which is a convolution of $U({x}_{0},{y}_{0})$ with the Fresnel kernel and can equivalently be evaluated in the spatial-frequency domain as

$$U(x,y)={F}^{-1}\left\{F\left[U({x}_{0},{y}_{0})\right]H({k}_{x},{k}_{y})\right\},\tag{9}$$

$$H({k}_{x},{k}_{y})=\mathrm{exp}(jkz)\mathrm{exp}\left[-\frac{jz\left({k}_{x}^{2}+{k}_{y}^{2}\right)}{2k}\right].\tag{10}$$

Here ${k}_{x},{k}_{y}$ are spatial frequencies and $F[U({x}_{0},{y}_{0})]$ is the Fourier transform of $U({x}_{0},{y}_{0})$ with respect to the spatial frequencies (${k}_{x},{k}_{y}$). The distance *z* between each de-focal plane and
the focal plane is determined automatically by searching for the clearest recovered image
$\left|U(x,y)\right|$ via an image-definition-evaluation criterion based on
information entropy theory (see Subsection 2.2).

There are a number of digital diffraction methods that may be used for focusing OCT
*en face* images. For the scalar diffraction integral method [Eq. (6) and Eq.
(7)], the pixel resolution of the reconstructed images could vary with the
reconstruction distance because the method is based on the propagation of spherical wavefronts.
However, the image resolution does not change for the Fresnel-approximation convolution method
[Eq. (8)] and the Fresnel approximation FFT
method [Eq. (9) and Eq. (10)] [23,24]. The numerical implementation of the scalar diffraction [Eq. (6) and Eq.
(7)] requires a diffraction distance large enough to avoid aliasing. The Fresnel
approximation condition in Eq. (7) to Eq. (10) also requires a diffraction distance large
enough to guarantee precise reconstruction. The integral method [Eq. (6) and Eq. (7)] and the
convolution method [Eq. (8)] do not work if the
reconstruction plane is close to the initial plane. However, the Fresnel approximation FFT
method [Eq. (9) and Eq. (10)] and the angular spectrum method [20,23,24] do not have any distance limitations because they are based on the propagation of
plane waves. Although the angular spectrum method does not assume the paraxial approximation, in this
paper the Fresnel approximation FFT method was used to focus the OCT *en face*
images, since the diffraction distance of the defocused images is large enough to satisfy the
requirement of the Fresnel approximation and the method has the advantages of simplicity and
computational efficiency.
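As a minimal numerical sketch of this transfer-function Fresnel propagation (our own illustration, not the authors' implementation; the function name and grid parameters are assumptions), the frequency-domain form can be written in a few lines of NumPy:

```python
import numpy as np

def fresnel_propagate(field, z, wavelength, dx):
    """Propagate a complex field by distance z using the Fresnel transfer
    function H = exp(jkz) * exp(-j*z*(kx^2 + ky^2) / (2k)).
    Being a plane-wave (frequency-domain) method, it keeps the pixel pitch
    fixed and has no minimum-distance restriction."""
    k = 2 * np.pi / wavelength
    ny, nx = field.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)   # spatial angular frequencies
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    H = np.exp(1j * k * z) * np.exp(-1j * z * (KX**2 + KY**2) / (2 * k))
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Since H(z) · H(−z) = 1, propagating forward and then backward by the same distance is an identity, which is what allows a defocused *en face* frame to be back-propagated to its focal plane.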

#### 2.2. Image-definition-evaluation criterion based on information entropy

In 1948, Shannon used probability theory to model information sources, i.e., the data
produced by a source is treated as a random variable. The information content (entropy) of a
discrete random variable *X* that has a probability
distribution ${P}_{X}=({p}_{1},{p}_{2},\mathrm{...},{p}_{n})$ is then defined as

$$H(X)=\sum _{i=1}^{n}{p}_{i}\mathrm{log}\left(\frac{1}{{p}_{i}}\right)=-\sum _{i=1}^{n}{p}_{i}\mathrm{log}\,{p}_{i},\tag{11}$$

where the term $\mathrm{log}(1/{p}_{i})$ indicates the amount of uncertainty (information) associated with
the corresponding outcome. Thus, entropy is a statistical average of the uncertainty in a random
event, or of the information obtained by observing a data source, and measures the dispersion of a
probability distribution. Based on this definition, we can analyze OCT images as realizations of random
variables, and the information entropy of the grayscale image
O′(*x*, *y*) can be expressed as

$$H({O}^{\prime})=-\sum _{i=1}^{n}p({I}_{i})\mathrm{log}\,p({I}_{i}),\tag{12}$$

where $p({I}_{i})$ is the marginal probability of the image O′, which is the
probability that the intensity value of the image equals ${I}_{i}$
(the possible values of *I* are ${I}_{1},{I}_{2},\dots ,{I}_{n}$):

$$p({I}_{i})=\frac{{N}_{i}}{N},\tag{13}$$

where ${N}_{i}$ is the number of pixels with intensity ${I}_{i}$ and *N* is the total number of pixels in the image.
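As an illustration (our own sketch; the histogram bin count is an assumed parameter, not one specified in the paper), the entropy of a gray-scale image can be computed from its normalized intensity histogram:

```python
import numpy as np

def image_entropy(image, bins=256):
    """Shannon entropy H = -sum_i p(I_i) * log2 p(I_i) of an image's
    gray-level histogram, where p(I_i) is the fraction of pixels whose
    intensity falls in bin i."""
    hist, _ = np.histogram(image, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                 # 0*log(0) terms contribute nothing
    return float(-np.sum(p * np.log2(p)))
```

A sharply focused frame concentrates its histogram into a few bins and gives a low entropy; defocus disperses the histogram and raises the entropy, which is the behavior the IDE criterion exploits.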

The Shannon entropy is a measure of the dispersion of a probability distribution. A focused image
corresponds to a low entropy value since it has a probability distribution with a sharp structure
appearance, whereas a defocused image yields a high entropy value since it has a dispersed
structure distribution. Therefore, information entropy can be used as the
image-definition-evaluation (IDE) criterion in our evaluation, meaning that the lower the
information entropy of image O′(*x*, *y*) is, the
clearer the image O′(*x*, *y*) becomes. It follows that
the diffraction distance *z* for each *en face*
image in Eq. (10) can be determined by searching for
the minimum value of the information entropy of the recovered images:

$$z=\mathrm{arg}\underset{z\in \left[{z}_{\mathrm{min}},{z}_{\mathrm{max}}\right]}{\mathrm{min}}\,H\left(\left|U(x,y;z)\right|\right),\tag{14}$$

where *z*_{min} and *z*_{max} are the minimal and
maximal distances, respectively, between the de-focal plane and the focal plane. Theoretically,
the recovered image is the clearest when the variable *z* in the diffraction Eq. (10) equals the actual distance between the
de-focal plane and the focal plane. Therefore, the focused complex OCT signal can be
calculated via Eq. (9).
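Putting the two pieces together, a minimal sketch of the entropy-minimizing distance search might look as follows (our own illustration under assumed parameters; the function names, grid size, and scan range are hypothetical, and a simple grid scan stands in for whatever search the authors used):

```python
import numpy as np

def fresnel(field, z, wavelength, dx):
    # Fresnel transfer-function (plane-wave) propagation, Eq. (9)-style
    k = 2 * np.pi / wavelength
    ny, nx = field.shape
    KX, KY = np.meshgrid(2 * np.pi * np.fft.fftfreq(nx, dx),
                         2 * np.pi * np.fft.fftfreq(ny, dx))
    H = np.exp(1j * k * z) * np.exp(-1j * z * (KX**2 + KY**2) / (2 * k))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def entropy(img, bins=256):
    # Shannon entropy of the gray-level histogram
    p, _ = np.histogram(img, bins=bins)
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def autofocus(field, wavelength, dx, z_min, z_max, step):
    """Scan candidate distances z and keep the one whose recovered
    amplitude image |U(x,y;z)| has minimal entropy."""
    zs = np.arange(z_min, z_max + step / 2, step)
    best_z = min(zs, key=lambda z: entropy(np.abs(fresnel(field, z, wavelength, dx))))
    return best_z, np.abs(fresnel(field, best_z, wavelength, dx))
```

For a synthetic field of point scatterers defocused by +200 μm, the search recovers a back-propagation distance of −200 μm, since the exactly refocused amplitude image has the most concentrated histogram.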

#### 2.3. Flow diagram of OCT 3D volume recovering

The flow diagram of the proposed method for recovering defocused FD-OCT images is given in Fig. 3.

The original 3D data set (spectrograms) of a sample is acquired by two-dimensional scanning
of the probe beam through a pair of X-Y galvanometer mirrors. Each non-linear spectral
interferogram that represents an A-scan is rescaled into linear k-space followed by Fast
Fourier Transform (FFT) to become A-line complex OCT signal (phase and amplitude) along the z
axis. Then, the 3D volume data is resampled along the depth (in the z direction) to obtain a
sequence of *en face* (x-y) frames at different depth locations. The complex signals
of each x-y/*en face* frame are treated as the optical sources in our model;
they are then digitally focused onto the focal plane using the diffraction Eq. (9), where the distance z between the de-focal plane
and the focal plane is determined by searching for the minimal information entropy of the recovered
images. This process is performed one-by-one for the *en face* OCT images at all
depths within the 3D volume.
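The preprocessing chain described above (spectrogram → linear k-space → FFT → *en face* slicing) can be sketched on synthetic data as follows; the array sizes, wavelength band, and the chosen depth index are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_y, n_x, n_lambda = 8, 8, 256
# synthetic raw spectrograms, sampled uniformly in wavelength
spectra = rng.standard_normal((n_y, n_x, n_lambda))

# 1) rescale each spectral interferogram from lambda- to linear k-space
lam = np.linspace(1260e-9, 1340e-9, n_lambda)
k = 2 * np.pi / lam                              # non-uniform, decreasing in lambda
k_lin = np.linspace(k.min(), k.max(), n_lambda)  # uniform k grid
resampled = np.apply_along_axis(
    lambda s: np.interp(k_lin, k[::-1], s[::-1]), -1, spectra)

# 2) FFT along k yields the complex A-line (amplitude and phase) vs depth z
volume = np.fft.fft(resampled, axis=-1)

# 3) resampling along depth: each fixed z index is one complex en face frame,
#    which becomes the optical source field for the diffraction step
en_face = volume[:, :, 40]
```

Each such `en_face` frame would then be passed through the digital focusing step of Eq. (9) with its own entropy-selected distance z.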

## 3. Experimental results

The OCT system used to acquire the 3D volumetric data set from a tissue sample is shown in Fig. 4, which is similar to the one previously reported [21]. The system used a superluminescent diode as the illumination light source, with a central wavelength of 1300 nm and a bandwidth of 80 nm that provided a ~10 µm axial resolution in air. The light was split into two paths in a fiber-based Michelson interferometer. One beam was coupled onto a stationary reference mirror and the second beam was delivered to the sample by a collimating lens and a focusing lens that gave a theoretical lateral resolution of ~16 μm and a depth of field of ~350 μm. The spectral interferogram between the reference light from the reference mirror and the light backscattered from the sample was sent to a home-built spectrometer via an optical circulator. The spectrometer consisted of a collimating lens, a transmission grating, a camera lens with a focal length of 100 mm, and a 1024-element linear-array InGaAs detector. A pilot laser was also coupled into the interferometer for scanning beam guidance. The focal beam on the sample was scanned using a pair of galvanometer mirrors mounted in the sample arm.

To show the performance of the proposed method, experiments were performed on fresh onion tissues and on fat tissues excised from a euthanized mouse. In the experiments, 3D OCT images were obtained with 512 A-scans in each B-scan and 512 B-scans in each 3D scan. We first acquired one such 3D data set with the focal position of the sample beam placed 200 μm below the tissue surface. Because of the limited OCT imaging depth, this 3D data set is treated as a standard image with the desirable imaging resolution and minimal blurring, because most of the imaged content is within the system DOF region. We then acquired another 3D data set with the focal position 0.3 mm above the tissue surface. In this case, the imaged content is outside of the DOF region, leading to progressive lateral blurring of the OCT images over the imaging depth.

Figure 5 illustrates the recovered images at different stages during the recovery process
using the information entropy theory described in the previous section, where the entropy curve as a
function of the diffraction distance *z* over the entire process is shown in the
middle of the figure. This example used the OCT images captured from a fresh onion tissue. The
original (defocused) *en face* OCT image is given in Fig. 5C. In this case, the entropy reached the minimal value of 1.8 when the recovered
image (Fig. 5D) was achieved with a diffraction distance z =
+240 μm. This distance can be considered the diffraction distance between this
de-focal image plane and the actual focal plane. Note that the originally blurred image (Fig. 5C) in the out-of-focus region is now focused
automatically by digital means using the proposed algorithm. The computational time
required to compute the entropy of one recovered image (512×512) at a selected depth z was
0.33 sec using an HP dv6 laptop computer (processor: Intel i3 2.4 GHz; RAM: 6 GB). The total
processing time for one *en face* image is (0.33 × *N*_{z}) sec, where *N*_{z} is the selected number of *z* values from *z*_{min} to *z*_{max}.

Figure 6(a)
shows one typical defocused *en face* image of a fresh onion tissue from
the 3D data set acquired when the focal position of the probe beam was 0.3 mm above the sample
surface. Figure 6(b) shows the automatically recovered
image from Fig. 6(a) by using the diffraction focusing
method and entropy based IDE function. Figure 6(c) is the
*en face* image of the onion tissue that was acquired when the focal position
was placed about 200 μm below the tissue surface, i.e. the lateral blurring is minimal.
Comparing Fig. 6(b) and Fig. 6(c), we find that the recovered image from the de-focal area has similar
quality to the image acquired when the sample was placed at the focal region. Entropy values for
images in Fig. 6(a), Fig.
6(b), and Fig. 6(c) are 2.6776, 1.7971, 2.1282,
respectively. We also plotted in Fig. 7 the signal profiles along selected locations for comparison, where it is clear that the
lateral resolution of the OCT image is significantly improved (Fig. 7(b)) after the defocused image (Fig. 7(a))
was recovered by the proposed method. The recovered image (Fig. 7(b)) has a lateral resolution similar to that captured when the sample
was at the focal plane (Fig. 7(c)). Note that Fig. 7(a) and Fig. 7(c)
should ideally be from the same sample locations. However, the images of Fig. 6(a) and Fig. 6(c) were captured
under two different conditions, one with the sample defocused and the other with the sample focused,
obtained by translating the sample by 450 μm. Thus, it is difficult to find a perfect match
between the locations for comparison.

To further scrutinize the effectiveness of the proposed method in detail, we provide here the
*en face* movie (Media 1) that is played along the imaging depth. In this
movie, the image on the left is the de-focused image acquired when the sample was outside the
DOF, the middle image is recovered from the left de-focused image after using the proposed
approach, and the image on the right is the physically focused image acquired when the sample
was placed within the DOF. Note that the images on the left and right are the raw OCT images
without any post-processing.

The effectiveness of the proposed method to de-blur the OCT images is further substantiated by
the experimental results from real tissue samples, i.e., fat tissues excised from the euthanized
mice. The typical *en face* results are displayed in Fig. 8
. Entropy values for images in Figs. 8(a), 8(b), and 8(c) are
4.3137, 3.4148, and 2.6509, respectively. Again, a 3D movie (Media 2) is also provided for detailed comparison, where the
convention for placing each images in the movie is the same as the previous one.

However, we notice that although the recovered image meets the desired expectations, the structure, such as the onion cells, is not as smooth as that captured exactly at the focal plane. This phenomenon is most likely due to light attenuation within the sample, which makes the light energy impinging onto the same tissue plane slightly different between the images acquired under the de-focused and focused conditions. Our next step would be to test the proposed algorithm for in-vivo imaging applications, where the obvious challenge is the inevitable subject movement that introduces deleterious motion artifacts into the OCT images.

## 4. Conclusion

In this paper, we have demonstrated a scalar diffraction model that simulates the wave propagation from out-of-focus scatterers to the focal plane and thereby provides the ability to digitally focus de-focused OCT images without prior knowledge of the system PSF. We used information entropy theory as the basis for the IDE criterion to automatically find the diffraction distance between the out-of-focus plane and the focal plane, which is required to digitally propagate the defocused image to the focal plane. We have shown that structural details can be recovered with a minimal loss of resolution from blurred images that lie outside the DOF region. This method can be used for the automatic recovery of defocused OCT images when the system parameters or the refractive index of the sample are unknown. Although a spectrometer-based OCT system is considered in this paper, the proposed method is also applicable to swept-source OCT systems, full-field OCT systems, and other confocal optical systems.

## Acknowledgments

This work was supported in part by research grants from the National Institutes of Health (NIH) (R01HL093140 and R01HL093140S).

## References and links

**1. **A. F. Fercher, W. Drexler, C. K. Hitzenberger, and T. Lasser, “Optical coherence tomography—principles and
applications,” Rep. Prog. Phys. **66**(2), 239–303
(2003). [CrossRef]

**2. **P. H. Tomlins and R. K. Wang, “Theory, developments and applications of optical
coherence tomography,” J. Phys. D Appl. Phys. **38**(15), 2519–2535
(2005). [CrossRef]

**3. **R. K. Wang, S. L. Jacques, Z. Ma, S. Hurst, S. R. Hanson, and A. Gruber, “Three dimensional optical
angiography,” Opt. Express **15**(7), 4083–4097
(2007). [CrossRef]

**4. **R. K. Wang and Z. Ma, “Real-time flow imaging by removing texture pattern
artifacts in spectral-domain optical Doppler tomography,” Opt.
Lett. **31**(20), 3001–3003
(2006). [CrossRef]

**5. **Z. H. Ding, H. W. Ren, Y. H. Zhao, J. S. Nelson, and Z. P. Chen, “High-resolution optical coherence tomography over a
large depth range with an axicon lens,” Opt. Lett. **27**(4), 243–245
(2002). [CrossRef]

**6. **T. Xie, S. Guo, Z. Chen, D. Mukai, and M. Brenner, “GRIN lens rod based probe for endoscopic spectral
domain optical coherence tomography with fast dynamic focus tracking,”
Opt. Express **14**(8), 3238–3246
(2006). [CrossRef]

**7. **A. Divetia, T. H. Hsieh, J. Zhang, Z. Chen, M. Bachman, and G. P. Li, “Dynamically focused optical coherence tomography for
endoscopic applications,” Appl. Phys. Lett. **86**(10), 103902 (2005). [CrossRef]

**8. **D. Merino, Ch. Dainty, A. Bradu, and A. G. Podoleanu, “Adaptive optics enhanced simultaneous
*en-face* optical coherence tomography and scanning laser
ophthalmoscopy,” Opt. Express **14**(8), 3345–3353
(2006). [CrossRef]

**9. **M. J. Cobb, X. Liu, and X. Li, “Continuous focus tracking for real-time optical
coherence tomography,” Opt. Lett. **30**(13), 1680–1682
(2005). [CrossRef]

**10. **B. Qi, A. P. Himmer, L. M. Gordon, X. D. V. Yang, L. D. Dickensheets, and I. A. Vitkin, “Dynamic focus control in high-speed optical coherence
tomography based on a micro-electromechanical mirror,” Opt.
Commun. **232**(1-6), 123–128
(2004). [CrossRef]

**11. **J. Holmes and S. Hattersley, “Image blending and speckle noise reduction in
multi-beam OCT,” Proc. SPIE **7168**, 71681N
(2009). [CrossRef]

**12. **Y. Yasuno, J. I. Sugisaka, Y. Sando, Y. Nakamura, S. Makita, M. Itoh, and T. Yatagai, “Non-iterative numerical method for laterally
superresolving Fourier domain optical coherence tomography,”
Opt. Express **14**(3), 1006–1020
(2006). [CrossRef]

**13. **M. D. Kulkarni, C. W. Thomas, and J. A. Izatt, “Image enhancement in optical coherence tomography using
deconvolution,” Electron. Lett. **33**(16), 1365–1367
(1997). [CrossRef]

**14. **T. S. Ralston, D. L. Marks, F. Kamalabadi, and S. A. Boppart, “Deconvolution methods for mitigation of transverse
blurring in optical coherence tomography,” IEEE Trans. Image
Process. **14**(9), 1254–1264
(2005). [CrossRef]

**15. **R. K. Wang, “Resolution improved optical coherence-gating tomography
for imaging biological tissue,” J. Mod. Opt. **46**, 1905–1913
(1999).

**16. **Y. Liu, Y. Liang, G. Mu, and X. Zhu, “Deconvolution methods for image deblurring in optical
coherence tomography,” J. Opt. Soc. Am. A **26**(1), 72–77
(2009). [CrossRef]

**17. **J. P. Rolland, P. Meemon, S. Murali, K. P. Thompson, and K. S. Lee, “Gabor-based fusion technique for optical coherence
microscopy,” Opt. Express **18**(4), 3632–3642
(2010). [CrossRef]

**18. **B. J. Davis, S. C. Schlachter, D. L. Marks, T. S. Ralston, S. A. Boppart, and P. S. Carney, “Nonparaxial vector-field modeling of optical coherence
tomography and interferometric synthetic aperture microscopy,”
J. Opt. Soc. Am. A **24**(9), 2527–2542
(2007). [CrossRef]

**19. **T. S. Ralston, D. L. Marks, P. S. Carney, and S. A. Boppart, “Interferometric synthetic aperture
microscopy,” Nat. Phys. **3**(2), 129–134
(2007). [CrossRef]

**20. **L. Yu, B. Rao, J. Zhang, J. Su, Q. Wang, S. Guo, and Z. Chen, “Improved lateral resolution in optical coherence
tomography by digital focusing using two-dimensional numerical diffraction
method,” Opt. Express **15**(12), 7634–7641
(2007). [CrossRef]

**21. **G. Liu, S. Yousefi, Z. Zhi, and R. K. Wang, “Automatic estimation of point-spread-function for
deconvoluting out-of-focus optical coherence tomographic images using information
entropy-based approach,” Opt. Express **19**(19), 18135–18148
(2011). [CrossRef]

**22. **J. W. Goodman, *Introduction to Fourier
Optics*, 2nd ed. (McGraw Hill, Boston, 1996).

**23. **L. Yu and M. K. Kim, “Wavelength-scanning digital interference holography for
tomographic three-dimensional imaging by use of the angular spectrum
method,” Opt. Lett. **30**(16), 2092–2094
(2005). [CrossRef]

**24. **L. Yu and M. K. Kim, “Pixel resolution control in numerical reconstruction of
digital holography,” Opt. Lett. **31**(7), 897–899
(2006). [CrossRef]