## Abstract

Holoscopy is a tomographic imaging technique that combines digital holography and Fourier-domain optical coherence tomography (OCT) to obtain tomograms with diffraction-limited resolution and uniform sensitivity over several Rayleigh lengths. The lateral image information is calculated from the spatial interference pattern formed by light scattered from the sample and a reference beam. The depth information is obtained from the spectral dependence of the recorded digital holograms. Numerous digital holograms are acquired at different wavelengths and then reconstructed for a common plane in the sample. Afterwards, standard Fourier-domain OCT signal processing achieves depth discrimination. Here we describe and demonstrate an optimized data reconstruction algorithm for holoscopy which is related to the inverse scattering reconstruction of wavelength-scanned full-field optical coherence tomography data. Instead of calculating a regularized pseudoinverse of the forward operator, the recorded optical fields are propagated back into the sample volume. In one processing step the high frequency components of the scattering potential are reconstructed on a non-equidistant grid in three-dimensional spatial frequency space. A Fourier transform yields an OCT-equivalent image of the object structure. In contrast to the original holoscopy reconstruction with backpropagation and Fourier transform with respect to the wavenumber, the required processing time depends neither on the confocal parameter nor on the depth of the volume. For an imaging NA of 0.14, the processing time was decreased by a factor of 15; at higher NA the gain in reconstruction speed may reach two orders of magnitude.

© 2012 Optical Society of America

## 1. Introduction

Holoscopy is a new imaging technique that combines principles of Fourier-domain optical coherence tomography (FD-OCT) with digital holography (DH) [1, 2]. It has promising advantages compared to other optical tomographic imaging techniques. All light backscattered from the sample is collected by an area camera and no confocal or coherence gating restricts the depth of field. Compared to FD-OCT, holoscopy is thus capable of obtaining more information in less time and with less light illuminating the sample. High-resolution imaging is not limited to the focus region, which is defined by the confocal parameter (twice the Rayleigh length). Holographic techniques are used to refocus the data that build the tomographic images. These techniques can bring any layer of the sample into the ‘virtual’ focus – even after the data has been acquired.

Several techniques have been demonstrated to achieve acquisition of tomographic images over several Rayleigh lengths with constant or almost constant lateral resolution. Using multiple foci [3] or non-diffracting beams [4–7], i. e. Bessel illumination, a more uniform lateral resolution over several Rayleigh lengths was demonstrated with an imaging quality that is comparable to conventional FD-OCT. Interferometric synthetic aperture microscopy (ISAM) extends the depth range of scanning FD-OCT by numerical post-processing routines that reconstruct the image structure outside the focus at increased resolution [8–10]. Similar effects can be achieved by applying refocusing techniques of digital holography to FD-OCT data [11, 12]. However, all techniques that use scanning with either Gaussian or Bessel beams suffer from reduced sensitivity. When Gaussian beams are used, the light intensity illuminating the scatterers drops rapidly in out-of-focus layers. With Bessel illumination, the intensity is reduced in all depth layers, as relatively large parts of the energy are deposited in the side-lobes of the beam. Thus, to achieve constantly high sensitivity over the full measurement depth, full-field approaches have to be applied where a collimated beam is used for illumination. For example, full-field Fourier-domain OCT has been demonstrated *ex-vivo* and also *in-vivo* by using high speed imaging cameras and tunable lasers [13,14]. Depth independent sensitivity was achieved by abandoning the confocal gating, but lateral resolution was optimal only in the focal plane of the imaging optics.

Digital holography (DH) captures the entire wave-field of the object and has a lateral resolution that is not restricted to a focal plane. By modeling the light propagation the optical field is traced back to the object to reconstruct a real image. Using multiple wavelengths, either simultaneously in low-coherence holography [15, 16], or sequentially in digital interference holography [17, 18] depth structures can be resolved and three-dimensional tomograms can be obtained. However, previous approaches using multiple wavelengths in DH to obtain tomographic images either show a non-uniform resolution over depth [19–21] or suffer from inefficient reconstruction algorithms and reduced imaging quality [18, 22, 23].

It was shown by Emil Wolf that the Fourier transform of the scattering potential can be obtained when the scattered field of an incident plane wave is observed holographically for multiple directions of the incident wave [24]. In backscattering geometry only a part of the Fourier coefficients can be recorded, and especially at limited NA a large portion of the low frequency information for the *z*-direction is missing. By using multiple wavelengths the sampled bandwidth is considerably enlarged [25]. Still, information on the low frequency components of the scattering potential is missing. Therefore the theory of inverse scattering was applied to full-field swept-source OCT (SS-OCT) to calculate an estimate of the scattering potential [26, 27]. From a forward model for the scattered light field the Tikhonov-regularized least-squares solution was calculated. Depth-invariant resolution and sensitivity were demonstrated with simulated data.

In contrast to this work, holoscopy records the scattered irradiation in the far-field. It combines swept-source FD-OCT with DH by using a tunable laser in a holography-like setup. Instead of using inverse scattering theory, a fusion of numerical reconstruction techniques of FD-OCT and digital holography is used to achieve uniform sensitivity and lateral resolution over the entire measurement depth even at high lateral resolution. First experimental results have demonstrated capabilities of holoscopy for *ex-vivo* and *in-vivo* imaging [1]. In that work, however, holoscopy suffered from a computationally expensive reconstruction. At first, all acquired holograms were reconstructed to the same numerical focus. Afterwards a Fourier transform for each lateral position over the recorded wavenumbers provided the depth resolution similar to full-field FD-OCT. Unfortunately, only a depth of about a Rayleigh length around the chosen layer in the tissue volume was reconstructed with optimal lateral resolution. For imaging over more than 3mm the reconstruction steps, which took about 22s, had to be repeated up to 15 times with different virtual foci. Though holoscopy allows image acquisition of the full volume without physically refocusing at any NA, computing the image needed serial reconstruction of all tissue layers with a respective depth of twice the Rayleigh length. This is neither very elegant nor very efficient.

Here we propose a highly efficient one-step reconstruction process of the complete volume. A resampling technique similar to the one used in ISAM [8–10, 26, 27] is applied, but without regularization and thus neglecting the low frequency components of the object structure. Depending on the relation between Rayleigh length and measurement depth it significantly decreases the required reconstruction time compared to the previously demonstrated holoscopy reconstruction. For large NAs improvements by two or more orders of magnitude are expected.

## 2. Setup

We used two setups to demonstrate the capabilities of the new reconstruction technique. The first lens-less setup (Fig. 1(a)) was previously used for holoscopy [1]. The light of a rapidly tunable laser (Broadsweeper BS-840-1, Superlum, Ireland) with a tuning range from 823.5nm to 873.5nm and 3mW output power was collimated to 2.2mm beam diameter and then split by a beam splitter cube into reference and sample light. The reference beam was brought onto a convex mirror (*f* = −10.34mm) and the sample beam onto the specimen. The two beams of reflected and scattered light were then superimposed on a high-speed CMOS camera (EoSens MC3010, Mikrotron GmbH, Germany) with 1696 × 1710 pixels measuring 8*μ*m × 8*μ*m each. Camera and laser were synchronized by a trigger from the camera which started the sweep of the laser with the first frame of the imaging sequence. During the sweep 1024 holograms with 1024 × 1024 pixels were acquired at a frame-rate of 440 frames per second resulting in a total measurement time of approximately 2.3s. The setup was adjusted in such a way that the path lengths of all light scattered or reflected by the sample were, in analogy to FD-OCT, slightly longer than the path length of the reference light. Certain restrictions applied to the optical layout, as the spatial frequencies of the interference fringes, which depend on the local angle between sample and reference light, have to be sampled by the camera (see e. g. [28]). The distance between reference mirror and camera was 81mm. The numerical aperture (NA) of the setup – determined by the opening angle between sample and camera – was 0.05. The setup thus provided axial and lateral resolutions of approximately 14*μ*m and 11*μ*m, respectively. The confocal parameter was 220*μ*m.

The second setup used a Mach-Zehnder type interferometer for off-axis holoscopy as shown in Fig. 1(b). It utilized a microscope objective (MO) to magnify the object in order to achieve higher NAs. The light of a newer version of the rapidly tunable laser (Broadsweeper BS-840-1, Superlum, Ireland) with an extended tuning range from 800nm to 882nm and 3mW output power was split by a fiber coupler into reference and sample light. The sample light was collimated to 2.2mm beam diameter and illuminated the sample through an *f* = 75mm lens and an MO of NA 0.14 (5× Plan Apo NIR, Mitutoyo, Japan). The sample was placed approximately in the focal plane of the MO. The width of the illumination beam limited the effective field of view, which was about 1mm in diameter. The light that was scattered back by the sample passed the MO again; light scattered at any point of the sample left the MO as an approximately parallel beam. In this far field a high-speed CMOS camera (ACE AC2040-180km, Basler, Germany) with 2048 × 2048 pixels measuring 5.5*μ*m × 5.5*μ*m each was placed. The reference light was collimated to a beam diameter of 16mm and brought onto the camera at an angle of approximately 3°. The reference light was adjusted such that its overall path length was slightly shorter than the path lengths of the sample scatterers. A total of 1024 holograms with 2048 × 2048 pixels were acquired at a frame-rate of 127fps, resulting in a measurement time of 8.1s. The NA of this setup was 0.14, as the full aperture of the MO was used. The confocal parameter, axial and lateral resolution of the second setup were 28*μ*m, 8.6*μ*m and 4*μ*m, respectively.

Reflections from the MO that reached the camera and disturbed the actual interference were suppressed by using linearly polarized light in the object arm and a quarter-wave plate between MO and object. The plate was adjusted to rotate the polarization of the light backscattered from the sample by 90° compared to the light reflected by the MO. An additional polarizer between camera and MO suppressed all reflections except those from the camera window and the quarter-wave plate.

## 3. Theory

### 3.1. The measured signal

#### 3.1.1. The intensity distribution in holography

In holography the information of the wave field *O*(*x*, *y*; *k*) as scattered by the sample is encoded in the interference fringes that occur on superposition with a reference wave field *R*(*x*, *y*; *k*). The intensity distribution which is acquired by the camera can be described by

$$I(x,y;k) = \gamma \left| R(x,y;k) + O(x,y;k) \right|^2, \qquad (1)$$

where *x* and *y* denote the lateral position on the camera, *k* is the wavenumber and *γ* is a conversion factor of the camera. If the intensity distribution is multiplied with the reference wave, four terms are obtained:

$$R\,I = \gamma \left( |R|^2 R + |O|^2 R + R^2 O^{*} + |R|^2 O \right). \qquad (2)$$

The multiplication with *R* corresponds to the illumination with the original reference wave when reconstructing a hologram optically.

#### 3.1.2. Numerical propagation in digital holography (DH)

In DH the lateral distribution of scatterers is reconstructed by numerically propagating the recorded light field *O* from the camera plane back into the sample volume. This is done by the propagator, an operator ${\mathcal{P}}_{k,\Delta z}[\cdot]$ that computes at a specific wavenumber *k* the diffracted wave field in a plane of distance Δ*z* from a known optical wave-field *U* in the plane *z* = *z*_{0}:

$$\mathcal{P}_{k,\Delta z}[U](x,y) = \mathcal{F}^{-1}\!\left[ P_{k,\Delta z}(k_x,k_y)\, \mathcal{F}[U](k_x,k_y) \right]\!(x,y). \qquad (3)$$

The Fourier transform $\mathcal{F}$ decomposes the field *U* with respect to *x* and *y* into plane waves. In the Fourier space the diffraction of the field *Ũ* is then computed by a simple multiplication with a phase factor *P*_{k,Δz}(*k _{x}*, *k _{y}*),

$$P_{k,\Delta z}(k_x,k_y) = \exp\left(-\mathrm{i}\, k_z \Delta z\right), \qquad (4)$$

i. e.

$$\mathcal{F}\!\left[\mathcal{P}_{k,\Delta z}[U]\right]\!(k_x,k_y) = P_{k,\Delta z}(k_x,k_y)\, \tilde{U}(k_x,k_y). \qquad (5)$$

Here *k _{x}*, *k _{y}* and *k _{z}* are the components of the wave vector, which are related to each other by

$$k_z = \left( k^2 - k_x^2 - k_y^2 \right)^{1/2}. \qquad (6)$$

Equation (5) allows one to calculate the Fourier transform of the optical field in any distance from the camera once it is known in the camera plane.

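The numerical propagation described above can be sketched in a few lines of NumPy. This is a minimal angular-spectrum illustration under our own conventions (sign of the propagation phase, pixel pitch `dx`, suppression of evanescent components), not the implementation used for the results in this paper:

```python
import numpy as np

def propagate(field, k, dz, dx):
    """Angular-spectrum propagation of a sampled complex field over dz.

    field: 2D complex array (camera plane), sampled at pixel pitch dx
    k:     wavenumber 2*pi/lambda (same length units as dx and dz)
    dz:    propagation distance (negative values propagate backwards)
    """
    ny, nx = field.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)   # lateral spatial frequencies
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    kz_sq = k**2 - KX**2 - KY**2                # k_z^2 = k^2 - k_x^2 - k_y^2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))
    phase = np.exp(1j * kz * dz)                # multiplication in Fourier space
    phase[kz_sq < 0] = 0.0                      # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * phase)
```

Propagating forward and then back over the same distance returns the original field, which provides a quick self-test of the implementation.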
#### 3.1.3. Object and reference field in holoscopy

In contrast to digital holography, which images surfaces or flat samples at a single or a few wavenumbers, holoscopy records scattering three-dimensional samples at a large number of different wavenumbers. The propagation times of sample and reference radiation are important as they are used to discriminate the light scattered from different depths. For the mathematical description we assume a common reference plane, the zero delay plane, at which the radiations in the sample and reference arm have the same initial phase *ϕ*_{0}(*k*) for all wavenumbers. The distance from the camera to this reference plane is *z*_{0}. Scatterers are located at different depths *z* from the reference plane. Figure 2 shows the coordinate systems and the variables that were used to derive the following formulas for sample and reference field.

At a certain *k* the object wave field is given by the coherent superposition of the propagated wave fields that are scattered in different depths *z* of the sample:

$$O(x,y;k) = A_O \sqrt{S(k)}\, \mathrm{e}^{\mathrm{i}\phi_0(k)} \int \mathcal{P}_{k,z_0+z}\!\left[ \eta(x,y,z)\, \mathrm{e}^{-\mathrm{i}kz} \right]\!(x,y)\, \mathrm{d}z. \qquad (7)$$

*A _{O}* is the overall amplitude and *S*(*k*) the normalized spectral intensity of the illuminating light. *η*(*x*, *y*, *z*) denotes the scattering potential, i. e. the relative amplitude of the scattered field compared to the incident field. The phase factor exp(−i*kz*) is created by the propagation of the incident plane wave from the reference plane to the point scatterer, before the scattered field is propagated to the camera by ${\mathcal{P}}_{k,{z}_{0}+z}[\cdot]$. The integral sums the interference signals of all point scatterers originating from different depths. Attenuation of incoming and scattered radiation as well as multiple scattering were neglected, i. e. the scattered wave was treated in the first order Born approximation.

The reference wave is a spherical wave, which has a (virtual) focus at a distance *z*_{Ref} from the reference plane, which in turn is located at a distance *z*_{0} in front of the camera. Using the same notation as for the object wave, the spherical reference wave can be described by

$$R(x,y;k) = A_R \sqrt{S(k)}\, \mathrm{e}^{\mathrm{i}\phi_0(k)}\, \mathcal{P}_{k,z_0+z_{\mathrm{Ref}}}\!\left[ \delta(x)\,\delta(y) \right]\!(x,y) \qquad (8)$$

with the overall amplitude *A _{R}*.

For the Mach-Zehnder type setup as shown in Fig. 1(b) the coordinates and parameters used in Eq. (7) and Eq. (8) need to be adjusted. Numerical reconstruction used a spherical reference wave, although the actual physical wave had been collimated. As the obtained object wave field is in the far field, the action of a lens is required to reconstruct the image. Except for its overall *z*-dependent phase, the spherical reference wave has the same effect as a “numerical lens” with a focal length corresponding to the radius of curvature of the reference wave, since both are represented by the same numerical expression (see e. g. [30]). With the introduction of the phase-corrected reference wave in the next section, the overall phase difference introduced by the reference is compensated. The following reconstruction can therefore be used for both setups.

#### 3.1.4. Phase-corrected propagator

For holoscopy the exact treatment of the phases of reference and sample radiation is important because their spectral dependence contains the depth information. In order to simplify further computations we introduce a modified, phase-corrected propagator and its corresponding phase factor in the Fourier-domain,

$$\mathcal{P}^{0}_{k,\Delta z}[\cdot] = \mathrm{e}^{\mathrm{i}k\Delta z}\, \mathcal{P}_{k,\Delta z}[\cdot], \qquad P^{0}_{k,\Delta z}(k_x,k_y) = \exp\left(-\mathrm{i}\left(k_z - k\right)\Delta z\right),$$

and accordingly *R* and *O* are replaced by the phase-corrected fields *R*_{0} and *O*_{0}, respectively. By adapting Eq. (7) one computes the explicit representation of the modified object wave field to

$$O_0(x,y;k) = A_O \sqrt{S(k)} \int \mathcal{P}^{0}_{k,z_0+z}\!\left[ \eta(x,y,z)\, \mathrm{e}^{-2\mathrm{i}kz} \right]\!(x,y)\, \mathrm{d}z. \qquad (9)$$

Apart from the propagator, the integrand now oscillates with exp(−2i*kz*), so that for each lateral position a depth discrimination of *η*(*x*, *y*, *z*) as in FD-OCT can be achieved. The depth distribution of the scatterers can be retrieved by a fast Fourier transform (FFT).

### 3.2. Reconstruction

#### 3.2.1. Obtaining the object wave

As in DH the first step in the reconstruction process is to obtain the phase-corrected object wave *O*_{0} from the recorded fringe pattern *I*. To achieve this, the twin-image, DC and auto-correlation terms of Eq. (2) need to be separated from the image term. This problem is well known from holography. The most common solution is to introduce a carrier frequency by using an off-axis reference, as in our Mach-Zehnder setup shown in Fig. 1(b). The lateral fringes, which encode object wave, conjugated object wave (twin-image), DC and autocorrelation components, will then be located in different spatial frequency regions. After acquisition of the fringe pattern the different terms can be separated by spatial filtering [28, 32, 33]. The resulting field is complex, i. e. no longer real-valued, and contains information on both amplitude and phase. It also allows for an increased axial measurement range as in full-range FD-OCT [34] without requiring any additional hardware or signal processing.

Alternatively, in holoscopy the different terms can be separated by positioning the object on only one side of the reference plane, as is commonly done in FD-OCT. Then all object-related path lengths are larger than the reference path length. After a Fourier transform with respect to *k*, the twin-images will be found only at negative frequencies. This axial filtering was used for the on-axis setup shown in Fig. 1(a). However, as in FD-OCT this approach reduces the measurement depth by a factor of two, and autocorrelation noise, i. e. self-interference signals of the scattered object waves, remains.

Once the laterally or axially filtered intensity distribution *I _{f}* is obtained and the phase-corrected reference wave *R*_{0} is known, the phase-corrected object wave *O*_{0} in the camera plane is calculated by

$$O_0(x,y;k) = \frac{R_0(x,y;k)\, I_f(x,y;k)}{\gamma \left| R_0(x,y;k) \right|^2},$$

which inverts the image term of Eq. (2).

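The off-axis separation by spatial filtering can be illustrated with NumPy FFTs. This is a generic sketch; the carrier position and window half-width are hypothetical parameters that would in practice be read off the hologram spectrum:

```python
import numpy as np

def extract_object_wave(hologram, carrier, halfwidth):
    """Isolate the off-axis image term of a real-valued hologram.

    hologram:  2D real array (one camera frame)
    carrier:   (row, col) pixel position of the sideband in the
               fftshifted 2D spectrum
    halfwidth: half-size of the rectangular spatial-frequency window
    """
    F = np.fft.fftshift(np.fft.fft2(hologram))
    ny, nx = F.shape
    r, c = carrier
    # cut out the sideband and move it to the spectrum centre (demodulation)
    win = np.zeros_like(F)
    win[ny//2-halfwidth:ny//2+halfwidth, nx//2-halfwidth:nx//2+halfwidth] = \
        F[r-halfwidth:r+halfwidth, c-halfwidth:c+halfwidth]
    return np.fft.ifft2(np.fft.ifftshift(win))
```

With a synthetic tilted reference and a smooth object field, the recovered complex field closely matches the object wave as long as the sideband is well separated from the DC and autocorrelation terms.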
#### 3.2.2. Reconstruction of one Rayleigh length

For the phase-corrected object waves, which are recorded at different wavenumbers, the effect of the phase-corrected propagator *𝒫*^{0} in Eq. (9) can be reverted for a chosen plane located at a distance of *z _{P}* from the reference plane by applying the propagator ${\mathcal{P}}_{k,-{z}_{0}-{z}_{P}}^{0}$ to *O*_{0}, i. e.

$$\mathcal{P}^{0}_{k,-z_0-z_P}\left[O_0\right](x,y;k) = A_O \sqrt{S(k)} \int \mathcal{P}^{0}_{k,z-z_P}\!\left[ \eta(x,y,z)\, \mathrm{e}^{-2\mathrm{i}kz} \right]\!(x,y)\, \mathrm{d}z.$$

For scatterers located in this plane (*z* = *z _{P}*) the propagators will cancel each other and the propagated light field is the same that would be achieved by imaging the plane at *z _{P}* onto the camera. For each point (*x*, *y*) the dependence of the remaining term on *k* is comparable to a complex FD-OCT signal, and an image of the scattering potential $\tilde{S}(z) \ast \eta_{z_P}(x,y,z)$ can be reconstructed for different depths *z* by a simple inverse Fourier transform along the *k* axis:

$$\tilde{S}(z) \ast \eta_{z_P}(x,y,z) \propto \int \mathcal{P}^{0}_{k,-z_0-z_P}\left[O_0\right](x,y;k)\, \mathrm{e}^{2\mathrm{i}kz}\, \mathrm{d}k. \qquad (10)$$

Here $\tilde{S}(z)$ is the Fourier transform of the spectrum and determines the axial point spread function of the reconstructed holoscopy image; the asterisk denotes a convolution along *z*. The demonstrated propagator inversion is only precise for the one selected depth to which *O*_{0} is propagated, and only here $\eta_{z_P}(x,y,z)$ will be equal to the scattering potential *η*(*x*, *y*, *z*). In neighboring planes, $\eta_{z_P}(x,y,z)$ will be defocused by a distance *z* − *z _{P}* and no sharp image of *η*(*x*, *y*, *z*) will be reconstructed. Thus the reconstruction only gives optimal images within a few Rayleigh lengths. The procedure can nevertheless be repeated for various *z _{P}* and the images can be stitched together afterwards.

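A direct transcription of this propagate-then-Fourier-transform recipe might look as follows. This is a sketch under our own sign conventions, assuming an equidistant wavenumber grid; the optimized C++/SSE/OpenMP implementation used for the timings below is not reproduced here:

```python
import numpy as np

def reconstruct_plane(O0, ks, z_total, dx):
    """Single-plane holoscopy reconstruction: back-propagate each
    phase-corrected hologram O0[i] (wavenumber ks[i]) over the distance
    z_total = z0 + zP, then Fourier transform along the k axis.

    O0: (Nk, Ny, Nx) complex stack, ks: (Nk,) equidistant wavenumbers
    Returns an (Nk, Ny, Nx) stack of depth slices, sharp only near zP.
    """
    Nk, ny, nx = O0.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    out = np.empty_like(O0)
    for i, k in enumerate(ks):
        kz = np.sqrt(np.maximum(k**2 - KX**2 - KY**2, 0.0))
        # phase-corrected back-propagation by z_total for this wavenumber
        out[i] = np.fft.ifft2(np.fft.fft2(O0[i]) * np.exp(1j * (kz - k) * z_total))
    # FD-OCT-like depth discrimination along the wavenumber axis
    return np.fft.ifft(out, axis=0)
```

As a sanity check, a laterally uniform signal oscillating as exp(−2i*kz _{s}*) should produce a peak at the depth bin corresponding to *z _{s}*.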
#### 3.2.3. Reconstruction of the complete volume in free space

Writing Eq. (10) in the Fourier-domain ({*k _{x}*, *k _{y}*}-space), the propagation operator is according to Eq. (5) replaced by a multiplication with the phase function $P^{0}_{k,-z_0-z_P}(k_x,k_y)$:

$$\tilde{S}(z) \ast \tilde{\eta}_{z_P}(k_x,k_y,z) \propto \int P^{0}_{k,-z_0-z_P}(k_x,k_y)\, \tilde{O}_0(k_x,k_y;k)\, \mathrm{e}^{2\mathrm{i}kz}\, \mathrm{d}k,$$

where $\tilde{\eta}_{z_P}$ and $\tilde{O}_0$ are the two-dimensional Fourier transforms of $\eta_{z_P}$ and *O*_{0} with respect to the *x* and *y* axes, respectively. By setting the reconstruction distance *z* equal to the chosen propagation plane *z _{P}*, the complete volume *η̃* is reconstructed. Shifting the terms independent of the reconstruction depth *z* to the right, *η̃* can be expressed by a simple integral transform that no longer contains the propagation plane *z _{P}*:

$$\tilde{S}(z) \ast \tilde{\eta}(k_x,k_y,z) \propto \int \mathrm{e}^{\mathrm{i}(k_z-k)z_0}\, \tilde{O}_0(k_x,k_y;k)\, \mathrm{e}^{\mathrm{i}(k_z+k)z}\, \mathrm{d}k. \qquad (11)$$

The reconstruction of the complete volume is thus possible by a multiplication of the object wave field with the phase factor exp(i(*k _{z}* − *k*)*z*_{0}) and an integral transform along the *k*-axis with the modified Fourier kernel exp(+i(*k _{z}* + *k*)*z*). Though according to Eq. (6) *k _{z}* is a nonlinear function of *k*, rescaling of the recorded data to a linear *k _{z}* + *k* scale changes the integral transform to a simple Fourier transform. In the analytic Eq. (11) this is achieved by introducing a new variable

$$\kappa = k + k_z = k + \left( k^2 - k_x^2 - k_y^2 \right)^{1/2} \qquad (12)$$

and a suitable variable substitution in the integral. This yields

$$\tilde{S}(z) \ast \tilde{\eta}(k_x,k_y,z) \propto \int \mathrm{e}^{\mathrm{i}\left(\kappa - 2k(\kappa)\right)z_0}\, \tilde{O}_0\!\left(k_x,k_y;k(\kappa)\right)\, \mathrm{e}^{\mathrm{i}\kappa z}\, \frac{\mathrm{d}k}{\mathrm{d}\kappa}\, \mathrm{d}\kappa,$$

where *k*(*κ*) is obtained by inverting Eq. (12).

The problem of non-equidistant data sampling in *k* is commonly found in FD-OCT, where the spectrometer disperses spectral interference not linearly in *k* but in the wavelength *λ*. Interpolation and resampling or a fast Fourier transform on non-equispaced data (NFFT) are used to preserve image quality in FD-OCT. These algorithms differ in imaging quality and processing speed. Comparisons of suitable algorithms have been done for best performance of FD-OCT (see e. g. [35–37]). Accordingly, the same algorithms can be applied here, although care needs to be taken as the resampling also depends on the lateral components *k _{x}* and *k _{y}* of the wave vector, since ${k}_{z}={\left({k}^{2}-{k}_{x}^{2}-{k}_{y}^{2}\right)}^{1/2}$ is a function of *k*, *k _{x}* and *k _{y}*.

Resampling is necessary because the wave field $\tilde{O}_0(k_x,k_y;k)$ is not recorded in the Cartesian coordinate system (*k _{x}*, *k _{y}*, *k _{z}*) of the Fourier space, which is directly converted by the inverse Fourier transform back to the spatial coordinates. By using a non-equispaced Fourier transform the modified coordinate system, in which the data were recorded, is taken into account. The phase factor arises as the reconstruction distances – the second index of the propagator *𝒫* – are measured relative to the camera, whereas phases and path lengths, which provide the depth information, are measured relative to the reference plane. The phase factor corrects for these different reference points by propagating the object wave fields to the reference plane.

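The one-step reconstruction can be sketched as follows. For illustration, plain linear interpolation with `np.interp` stands in for a proper NFFT, the free-space case is assumed, the sign conventions are our own, and values of κ outside the recorded range are clamped at the edges by `np.interp`:

```python
import numpy as np

def full_volume(O0, ks, z0, dx):
    """One-step holoscopy reconstruction (free space): apply the phase
    factor, resample each lateral frequency onto a grid linear in
    kappa = k + k_z, and Fourier transform.

    O0: (Nk, Ny, Nx) complex stack of phase-corrected object waves
    ks: (Nk,) equidistant wavenumbers; z0: reference-plane distance
    """
    Nk, ny, nx = O0.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    Ot = np.fft.fft2(O0, axes=(1, 2))               # to lateral Fourier space
    kap_lin = 2.0 * np.linspace(ks[0], ks[-1], Nk)  # uniform kappa grid
    vol = np.empty_like(Ot)
    for iy in range(ny):
        for ix in range(nx):
            kz = np.sqrt(np.maximum(ks**2 - KX[iy, ix]**2 - KY[iy, ix]**2, 0.0))
            data = Ot[:, iy, ix] * np.exp(1j * (kz - ks) * z0)  # phase factor
            kap = ks + kz                           # non-equidistant kappa axis
            # resample real and imaginary parts onto the uniform kappa grid
            vol[:, iy, ix] = np.interp(kap_lin, kap, data.real) \
                           + 1j * np.interp(kap_lin, kap, data.imag)
    vol = np.fft.ifft(vol, axis=0)                  # depth discrimination
    return np.fft.ifft2(vol, axes=(1, 2))           # back to (x, y)
```

For an on-axis, laterally uniform signal the resampling reduces to the identity and the transform behaves like plain FD-OCT depth discrimination, which gives a simple consistency check.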
#### 3.2.4. Reconstruction of the complete volume in a medium

The reconstruction by Eq. (11) requires the optical path lengths between scatterer and reference and the respective imaging distance to be identical, which is only the case in free space. For example, in a medium with constant refractive index *n* the effective wavenumber is given by *n* · *k* and the corresponding *z*-component is

$$k_z' = \left( n^2 k^2 - k_x^2 - k_y^2 \right)^{1/2}$$

if *k _{x}* and *k _{y}* remain unchanged. This is the case if *k _{x}* and *k _{y}* are parallel to the boundary surface of the medium. In the reconstruction by Eq. (11) the kernel thus needs to be modified by replacing *k* and *k _{z}* by *n* · *k* and *k*′_{z}, respectively. However, assuming that *z*_{0} describes the effective propagation distance to focus the reference plane when assuming *n* = 1, the phase factor does not need to be modified. The reconstruction is thus given by

$$\tilde{S}(z) \ast \tilde{\eta}(k_x,k_y,z) \propto \int \mathrm{e}^{\mathrm{i}(k_z-k)z_0}\, \tilde{O}_0(k_x,k_y;k)\, \mathrm{e}^{\mathrm{i}\left(k_z'+nk\right)z}\, \mathrm{d}k, \qquad (13)$$

where *S*̃ changed to compensate for the modification of the Fourier transform when replacing *k* by *n* · *k*. The integral transform by Eq. (13) with the modified kernel can also be calculated by the NFFT. For moderate NA a simplification is possible. In paraxial approximation the kernel can be rewritten as

$$\exp\left(\mathrm{i}\left(k_z'+nk\right)z\right) \approx \exp\left(\mathrm{i}\left(2nk - \frac{k_x^2+k_y^2}{2nk}\right)z\right). \qquad (14)$$

With *z*′ = *nz* the kernel can thus be modified to

$$\exp\left(\mathrm{i}\left(2k - \zeta\,\frac{k_x^2+k_y^2}{2k}\right)z'\right)$$

with *ζ* = 1/*n*^{2}. *ζ* describes the proportionality constant between imaging distance and optical path length as shown in Fig. 3. The complete reconstruction formula is given by

$$\tilde{S}(z') \ast \tilde{\eta}(k_x,k_y,z') \propto \int \mathrm{e}^{\mathrm{i}(k_z-k)z_0}\, \tilde{O}_0(k_x,k_y;k)\, \exp\left(\mathrm{i}\left(2k - \zeta\,\frac{k_x^2+k_y^2}{2k}\right)z'\right) \mathrm{d}k. \qquad (15)$$

For *ζ* = 0, Eq. (15) reduces to Eq. (10) with *z _{P}* = 0, i. e. the chosen reconstruction distance is the reference plane. For increasing *n* the parameter *ζ* tends to zero; thus the higher the refractive index of the sample, the better the approximation by Eq. (10). In fact, for *ζ* < 0.5 the reconstruction by Eq. (10) yields better results than the reconstruction by Eq. (11) that assumes free space.

In case the paraxial approximation of Eq. (14) is not valid, the more specific formula of Eq. (13) needs to be used. A fast implementation of this formula is possible as well.

An approximate solution of the reconstruction can be used for a fast determination of *ζ*. By first Fourier transforming the object waves *O*_{0}(*x*, *y*; *k*) with respect to the *k* axis, i. e. using FD-OCT depth discrimination on the unprocessed holograms, and only afterwards performing holographic refocusing with the center wavenumber for at least two different depths, the focus positions and the optical path lengths of these layers can be determined. A linear regression of these points gives *ζ* and the reference propagation length *z*_{0}.

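Assuming the linear relation between refocusing distance and optical path length suggested by this procedure (focus = *z*_{0} + *ζ* · path, our reading of the text), the regression itself is a one-liner; in practice the focus positions would come from an image-sharpness metric applied to the refocused layers:

```python
import numpy as np

def estimate_zeta(path_lengths, focus_positions):
    """Fit focus = z0 + zeta * path by linear least squares and return
    (zeta, z0). Inputs are matching sequences of optical path lengths
    (from the OCT depth axis) and numerically found focus distances."""
    zeta, z0 = np.polyfit(path_lengths, focus_positions, 1)
    return zeta, z0
```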
## 4. Results and discussion

The proposed algorithms have been implemented using C++ with the compiler of the GNU Compiler Collection (GCC). In order to increase performance, vectorization using Streaming SIMD Extensions (SSE) intrinsics and parallelization using OpenMP have been applied.

We demonstrated the effectiveness of both proposed reconstruction algorithms using a scattering sample [38] which contains 300 – 800nm sized iron oxide nanoparticles embedded in polyurethane resin. The simple reconstruction according to Eq. (10), which first propagates the object field from the camera plane to one depth in the sample and then applies the Fourier transform, yields sharp images only in the focal range around the reconstruction depth *z _{P}* (Fig. 4(a) and 4(b)). This algorithm can still be used to reconstruct the acquired holoscopic data if only a single layer is of interest. The one-step reconstruction of the complete volume by the NFFT in Eq. (15) reconstructed all layers over a depth of more than 30 Rayleigh lengths sharply (Fig. 4(c)). However, this only works if the index of refraction in the sample volume is correctly incorporated. A one-step reconstruction of the complete volume assuming free space by Eq. (11) reconstructs only a limited depth region sharply (Fig. 4(d)). In fact, with the phantom the simple reconstruction shows better performance than the one-step reconstruction with *n* = 1.0, as *ζ* = 1/*n*^{2} = 1/1.5^{2} = 0.44 < 0.5. Only the full reconstruction by Eq. (15) with the correct *ζ* reconstructs all depths sharply.

To demonstrate the abilities of the reconstruction process for more complex biological structures, tomograms of a bug are shown in Fig. 5. From the whole recorded volume en-face images at three different depths are shown. Media 1 shows a lower-resolution fly-through reconstruction of the bug. While the resolution inside the bug changes due to its non-homogeneous refractive index, the outer shell is sharp within all layers.

The speed advantage of holoscopy becomes more important for higher NA since the Rayleigh length drops quadratically with the NA. Images of a grape were acquired with 0.14 NA using a microscope objective and a Mach-Zehnder interferometer (Fig. 1(b)). B-scans from reconstructed volumes clearly demonstrate that also for more complex structures a one-step reconstruction by Eq. (15) is possible while maintaining lateral resolution over the full depth (Fig. 6). Some additional artifacts were introduced by reflections of the microscope objective.

### 4.1. Numerical complexity and execution speed

The one-step reconstruction of the complete volume reduces the computational complexity significantly. The simple reconstruction by Eq. (10) requires for one focal volume the propagation of each acquired hologram to a certain depth. For a single propagation of an image array of size *N _{X}* × *N _{Y}*, two 2D Fourier transforms of size 2*N _{X}* × 2*N _{Y}* are required if zero-padding is applied to prevent circular convolution artifacts: one transform to go from position space to Fourier space and one to go back. Consequently, for *N* holograms at different wavelengths a total of 2*N* two-dimensional Fourier transforms of size 2*N _{X}* × 2*N _{Y}* are required. For each lateral position a one-dimensional Fourier transform of size *N* needs to be performed afterwards to gain depth information, i. e. a total of *N _{X}* × *N _{Y}* one-dimensional Fourier transforms calculate the final data. In total, 2*N* Fourier transforms of 2*N _{X}* × 2*N _{Y}* arrays plus *N _{X}* × *N _{Y}* one-dimensional Fourier transforms of *N* data points are needed for each focal volume. The overall complexity *C*_{SL} of a simple reconstruction with a single focus layer (SL) is of the order of

$$C_{\mathrm{SL}} = \mathcal{O}\!\left( N\, N_X N_Y \log(N_X N_Y) + N_X N_Y\, N \log N \right) = \mathcal{O}\!\left( N\, N_X N_Y \log(N\, N_X N_Y) \right).$$

The one-step reconstruction of the complete volume by Eq. (15) also requires *N* two-dimensional Fourier transforms of size 2*N _{X}* × 2*N _{Y}* to bring the images to Fourier space, and 2*N _{X}* × 2*N _{Y}* one-dimensional Fourier transforms of size *N* to gain depth information, which is for the complete volume 4× more than for the reconstruction of only one focal volume by propagation and FFT. Because of the Hermitian symmetry, the back-transformation from 2D Fourier space to position space only requires *N*/2 + 1 two-dimensional Fourier transforms of size 2*N _{X}* × 2*N _{Y}*, which is about half the amount required for the propagation and FFT approach. The overall complexity *C*_{FV} of the one-step reconstruction for the full volume (FV) can thus be written as

$$C_{\mathrm{FV}} = \mathcal{O}\!\left( N\, N_X N_Y \log(N_X N_Y) + N_X N_Y\, N \log N \right) = \mathcal{O}\!\left( N\, N_X N_Y \log(N\, N_X N_Y) \right),$$

i. e. the same order as a single-layer reconstruction, with a roughly twofold larger constant.

Using these measurements, we can estimate the required time when stitching several images with different focus layers using the single-layer reconstruction by Eq. (10) compared to using one one-step reconstruction by Eq. (11) or Eq. (15): the confocal parameter (i. e. twice the Rayleigh length) for the lens-less setup was 2*z _{R}* = 220*μ*m and the measurement depth was 3.7mm, which would need 17 reconstructions of different focal volumes for an overall diffraction-limited resolution in air. In this case the one-step reconstruction of the complete volume offered an 8.5× speed-up. For a medium with refractive index *n* = 1.5 – as for the scattering sample in Fig. 4 – this speed-up is reduced by a factor *ζ* = 1/*n*^{2} ≈ 0.44, because the focal range is increased by a factor *n* and the effective total measurement depth is reduced by a factor 1/*n*, as it is determined by the optical path length. The full reconstruction is still about 3 to 4 times faster in this case.

For the high-NA setup in Fig. 6 the confocal parameter was reduced to about 2*z _{R}* = 30*μ*m and the measurement depth was about 2.2mm. Therefore, in air about 70 reconstructions by Eq. (10) would be required, corresponding to an approximately 30× total speed-up when using Eq. (11). With a refractive index of about *n* = 1.4 this speed-up is again reduced by a factor *ζ* ≈ 1/2, and thus the grape shown in Fig. 6 can be reconstructed about 15× faster using the complete reconstruction by Eq. (15) compared to the single-layer reconstructions by Eq. (10).

For higher lateral resolutions, this factor increases further. The actual speed-up of the reconstruction depends on the ratio of the confocal parameter 2*z _{R}* to the measurement depth *d* of the system and on the refractive index *n* of the sample. The expected improvement in reconstruction speed is shown in Fig. 7. At high NAs the one-step reconstruction will be up to three orders of magnitude faster.
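These back-of-the-envelope estimates are easy to reproduce. The sketch below is not from the paper: the function name and the rounding with `ceil` are assumptions, and it only counts the single-layer reconstructions needed to tile the measurement depth, not the full operation counts of Eq. (11) or Eq. (15).

```python
import math

def focal_volume_count(confocal_param, depth, n=1.0):
    """Number of single-layer reconstructions (Eq. 10) needed to cover a
    measurement depth with diffraction-limited resolution. In a medium of
    refractive index n the focal range grows by n while the geometric depth
    shrinks by 1/n, so the count is reduced by zeta = 1/n**2."""
    zeta = 1.0 / n**2
    return math.ceil(depth / confocal_param * zeta)

# Lens-less setup: 2*z_R = 220 um over 3.7 mm -> 17 reconstructions in air
print(focal_volume_count(220e-6, 3.7e-3))        # -> 17
# High-NA setup: 2*z_R = 30 um over 2.2 mm -> ~70 in air, about half at n = 1.4
print(focal_volume_count(30e-6, 2.2e-3))         # -> 74
print(focal_volume_count(30e-6, 2.2e-3, n=1.4))  # -> 38
```

The counts reproduce the numbers quoted above (17 focal volumes for the lens-less setup, roughly 70 for the high-NA setup in air, halved in tissue with *n* ≈ 1.4).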

## 5. Conclusion and outlook

Holoscopy is a promising imaging technique that combines digital holography with full-field swept-source OCT. It shares with inverse scattering for frequency-scanned full-field OCT [26] unique advantages over conventional OCT. The depth of field is not limited by the focusing optics, i.e., sharp images are obtained over several Rayleigh lengths with only a single data acquisition. All photons backscattered from the sample volume are used efficiently; sensitivity does not degrade due to confocal gating over the complete measurement depth. The remaining degradation in signal-to-noise ratio in scattering samples is caused only by the light attenuation of the sample and the limited instantaneous coherence length of the laser. Image reconstruction is done numerically, at a significantly increased computational complexity compared to confocal OCT.

Compared to scanning ISAM [8–10], sensitivity does not degrade outside the confocal range. In contrast to inverse scattering for full-field SS-OCT [26, 27], holoscopy does not image the object volume onto the camera, but samples the interference pattern of the scattered field with a reference field; setups similar to those used in digital holographic microscopy (DHM) are employed. This distributes the scattered irradiation over multiple pixels and should give an advantage in SNR and dynamic range for localized tissue structures. The off-axis reference beam in the presented version of holoscopy, which is also commonly used in DHM, allows full-range measurements, i.e., it doubles the measurement depth by avoiding the ambiguity of positive and negative path-length differences, and it suppresses auto-correlation signals and coherence noise. Consequently, holoscopy image reconstruction combines concepts of DH with FD-OCT to yield an image of the sample volume, which does not resemble the real scattering potential due to the missing low spatial frequencies [25]. Inverse scattering for full-field frequency-scanned OCT [26, 27] estimates the complete scattering potential by attempting to solve the inverse problem. Both algorithms rely on the same forward model for the scattered radiation and use the same resampling in frequency space. While the holoscopy reconstruction is completely deterministic and more intuitive, since it models the physics of light propagation, inverse scattering potentially yields more quantitative information on the refractive index distribution of the object. Physical constraints such as real imaging optics or boundaries between areas of different refractive index are easily incorporated into the holoscopy reconstruction algorithm presented here. So far, no images of real objects have been published using inverse scattering on full-field SS-OCT data, and therefore a comparison of image quality with holoscopy remains to be done.

Like full-field inverse scattering [26, 27] and full-field SS-OCT [13, 14], holoscopy is vulnerable to multiply scattered photons and cross-talk. In contrast to full-field OCT, a spatially incoherent light source cannot easily be applied to suppress this cross-talk, as spatial coherence is required to sample the light fields correctly.

In conclusion, we demonstrated that the numerical complexity of holoscopy is considerably reduced by a one-step reconstruction algorithm using an NFFT on the non-equispaced data of the angular spectra which are recorded at multiple wavenumbers.

The reconstruction process uses a resampling in frequency space that was previously demonstrated for ISAM, synthetic aperture radar, and other reconstruction algorithms for volumetric imaging based on propagation-time measurements [27, 39].
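This frequency-space resampling can be illustrated with a small numerical sketch. It is not the NFFT implementation used in the paper: the function name, the simple linear interpolation, and the uniform target grid are assumptions made for brevity.

```python
import numpy as np

def resample_to_kz(spectra, kx, ky, k, n_kz):
    """Regrid frequency-scanned angular spectra S(kx, ky, k) onto a
    uniform axial-frequency axis via the backscattering relation
    kz = sqrt((2k)^2 - kx^2 - ky^2). Linear interpolation stands in
    for the gridding kernel of an NFFT."""
    kz_grid = np.linspace(2 * k.min(), 2 * k.max(), n_kz)  # target kz axis
    out = np.zeros((len(kx), len(ky), n_kz), dtype=complex)
    for i, kxi in enumerate(kx):
        for j, kyj in enumerate(ky):
            kr2 = kxi**2 + kyj**2
            valid = (2 * k)**2 > kr2          # propagating components only
            if valid.sum() < 2:
                continue
            kz = np.sqrt((2 * k[valid])**2 - kr2)   # non-equispaced samples
            s = spectra[i, j, valid]
            out[i, j] = (np.interp(kz_grid, kz, s.real, left=0, right=0)
                         + 1j * np.interp(kz_grid, kz, s.imag, left=0, right=0))
    return kz_grid, out
```

A subsequent FFT along the resampled *k _{z}* axis then yields the depth profile for each lateral frequency; replacing `np.interp` with a proper NFFT gridding kernel [35–37] avoids the interpolation artifacts of this sketch.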

Essential for the reconstruction is a separation of holographic and OCT imaging distances. The one-step algorithm can be applied as long as the argument of the kernel depends linearly on the depth, as is the case in a medium of constant refractive index. We demonstrated up to 15× reduced processing time and good imaging performance on a phantom and two biological objects at NAs up to 0.14. The one-step reconstruction algorithm makes the numerical reconstruction in holoscopy significantly faster, especially for high-NA imaging, and is a significant step towards making real-time imaging with holoscopy possible.

## Acknowledgments

This work was funded by the “Zukunftsprogramm Wirtschaft (ZPW)” of Schleswig-Holstein, Germany (Competence Center for Technology and Engineering in Medicine (TANDEM), 122-09-024) and the European Union (EFRE 636 and FP7 HEALTH, Contract No. 201880 – FUN OCT).

## References and links

**1. **D. Hillmann, C. Lührs, T. Bonin, P. Koch, and G. Hüttmann, “Holoscopy—holographic optical coherence tomography,” Opt. Lett. **36**, 2390–2392 (2011). [CrossRef] [PubMed]

**2. **D. Hillmann, C. Lührs, T. Bonin, P. Koch, A. Vogel, and G. Hüttmann, “Holoscopy: holographic optical coherence tomography,” Proc. SPIE **8091**, 80911H (2011). [CrossRef]

**3. **J. Holmes, “Theory and applications of multi-beam OCT,” Proc. SPIE **7139**, 713908–713907 (2008). [CrossRef]

**4. **C. Blatter, B. Grajciar, C. M. Eigenwillig, W. Wieser, B. R. Biedermann, R. Huber, and R. A. Leitgeb, “Extended focus high-speed swept source OCT with self-reconstructive illumination,” Opt. Express **19**, 12141–12155 (2011). [CrossRef] [PubMed]

**5. **K.-S. Lee and J. P. Rolland, “Bessel beam spectral-domain high-resolution optical coherence tomography with micro-optic axicon providing extended focusing range,” Opt. Lett. **33**, 1696–1698 (2008). [CrossRef] [PubMed]

**6. **R. A. Leitgeb, M. Villiger, A. H. Bachmann, L. Steinmann, and T. Lasser, “Extended focus depth for Fourier domain optical coherence microscopy,” Opt. Lett. **31**, 2450–2452 (2006). [CrossRef] [PubMed]

**7. **L. Liu, C. Liu, W. C. Howe, C. J. R. Sheppard, and N. Chen, “Binary-phase spatial filter for real-time swept-source optical coherence microscopy,” Opt. Lett. **32**, 2375–2377 (2007). [CrossRef] [PubMed]

**8. **T. S. Ralston, D. L. Marks, P. S. Carney, and S. A. Boppart, “Interferometric synthetic aperture microscopy:inverse scattering for optical coherence tomography,” Opt. Photon. News **17**, 25–25 (2006).

**9. **T. S. Ralston, D. L. Marks, P. S. Carney, and S. A. Boppart, “Interferometric synthetic aperture microscopy,” Nat. Phys. **3**, 129–134 (2007). [CrossRef]

**10. **T. S. Ralston, D. L. Marks, P. S. Carney, and S. A. Boppart, “Real-time interferometric synthetic aperture microscopy,” Opt. Express **16**, 2555–2569 (2008). [CrossRef] [PubMed]

**11. **L. Yu, B. Rao, J. Zhang, J. Su, Q. Wang, S. Guo, and Z. Chen, “Improved lateral resolution in optical coherence tomography by digital focusing using two-dimensional numerical diffraction method,” Opt. Express **15**, 7634–7641 (2007). [CrossRef] [PubMed]

**12. **A. A. Moiseev, G. V. Gelikonov, P. A. Shilyagin, D. A. Terpelov, and V. M. Gelikonov, “Digital refocusing in optical coherence tomography,” Proc. SPIE **8213**, 82132C (2012). [CrossRef]

**13. **B. Považay, A. Unterhuber, B. Hermann, H. Sattmann, H. Arthaber, and W. Drexler, “Full-field time-encoded frequency-domain optical coherence tomography,” Opt. Express **14**, 7661–7669 (2006). [CrossRef]

**14. **T. Bonin, G. Franke, M. Hagen-Eggert, P. Koch, and G. Hüttmann, “In vivo Fourier-domain full-field OCT of the human retina with 1.5 million A-lines/s,” Opt. Lett. **35**, 3432–3434 (2010). [CrossRef] [PubMed]

**15. **J. Pomarico, U. Schnars, H. J. Hartmann, and W. Juptner, “Digital recording and numerical reconstruction of holograms: a new method for displaying light in flight,” Appl. Opt. **34**, 8095–8099 (1995). [CrossRef] [PubMed]

**16. **G. Pedrini and H. J. Tiziani, “Short-coherence digital microscopy by use of a lensless holographic imaging system,” Appl. Opt. **41**, 4489–4496 (2002). [CrossRef] [PubMed]

**17. **J. C. Marron and T. J. Schulz, “Three-dimensional, fine-resolution imaging using laser frequency diversity,” Opt. Lett. **17**, 285–287 (1992). [CrossRef] [PubMed]

**18. **M. C. Potcoava and M. K. Kim, “Optical tomography for biomedical applications by digital interference holography,” Meas. Sci. Technol. **19**, 074010 (2008). [CrossRef]

**19. **A. V. Zvyagin, “Fourier-domain optical coherence tomography: optimization of signal-to-noise ratio in full space,” Opt. Commun. **242**, 97–108 (2004). [CrossRef]

**20. **A. V. Zvyagin, P. Blazkiewicz, and J. Vintrou, “Image reconstruction in full-field Fourier-domain optical coherence tomography,” J. Opt. A, Pure Appl. Opt. **7**, 350 (2005). [CrossRef]

**21. **D. V. Shabanov, G. V. Geliknov, and V. M. Gelikonov, “Broadband digital holographic technique of optical coherence tomography for 3-dimensional biotissue visualization,” Laser Phys. Lett. **6**, 753–758 (2009). [CrossRef]

**22. **M. K. Kim, “Wavelength-scanning digital interference holography for optical section imaging,” Opt. Lett. **24**, 1693–1695 (1999). [CrossRef]

**23. **F. Montfort, T. Colomb, F. Charrière, J. Kühn, P. Marquet, E. Cuche, S. Herminjard, and C. Depeursinge, “Submicrometer optical tomography by multiple-wavelength digital holographic microscopy,” Appl. Opt. **45**, 8209–8217 (2006). [CrossRef] [PubMed]

**24. **E. Wolf, “Three-dimensional structure determination of semi-transparent objects from holographic data,” Opt. Commun. **1**, 153–156 (1969). [CrossRef]

**25. **A. F. Fercher, “Optical coherence tomography—development, principles, applications,” Z. Med. Phys. **20**, 251–276 (2010). [PubMed]

**26. **D. L. Marks, T. S. Ralston, S. A. Boppart, and P. S. Carney, “Inverse scattering for frequency-scanned full-field optical coherence tomography,” J. Opt. Soc. Am. A **24**, 1034–1041 (2007). [CrossRef]

**27. **B. J. Davis, D. L. Marks, T. S. Ralston, P. S. Carney, and S. A. Boppart, “Interferometric synthetic aperture microscopy: Computed imaging for scanned coherent microscopy,” Sensors **8**, 3903–3931 (2008). [CrossRef] [PubMed]

**28. **U. Schnars and W. Jueptner, *Digital Holography: Digital Hologram Recording, Numerical Reconstruction, and Related Techniques* (Springer, 2005).

**29. **M. Born, E. Wolf, A. Bhatia, P. Clemmow, D. Gabor, A. Stokes, A. Taylor, P. Wayman, and W. Wilcock, *Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light* (Cambridge University Press, 2000). [PubMed]

**30. **J. Goodman, *Introduction to Fourier Optics*, McGraw-Hill Physical and Quantum Electronics Series (Roberts & Co., 2005).

**31. **A. Fercher, C. Hitzenberger, G. Kamp, and S. El-Zaiat, “Measurement of intraocular distances by backscattering spectral interferometry,” Opt. Commun. **117**, 43–48 (1995). [CrossRef]

**32. **M. K. Kim, “Principles and techniques of digital holographic microscopy,” SPIE Rev. **1**, 018005 (2010). [CrossRef]

**33. **U. Schnars and W. P. O. Jüptner, “Digital recording and numerical reconstruction of holograms,” Meas. Sci. Technol. **13**, R85 (2002). [CrossRef]

**34. **Y. Yasuno, S. Makita, T. Endo, G. Aoki, M. Itoh, and T. Yatagai, “Simultaneous B-M-mode scanning method for real-time full-range Fourier domain optical coherence tomography,” Appl. Opt. **45**, 1861–1865 (2006). [CrossRef] [PubMed]

**35. **D. Hillmann, G. Hüttmann, and P. Koch, “Using nonequispaced fast Fourier transformation to process optical coherence tomography signals,” Proc. SPIE **7372**, 73720R (2009). [CrossRef]

**36. **S. Vergnole, D. Lévesque, and G. Lamouche, “Experimental validation of an optimized signal processing method to handle non-linearity in swept-source optical coherence tomography,” Opt. Express **18**, 10446–10461 (2010). [CrossRef] [PubMed]

**37. **K. K. Chan and S. Tang, “Selection of convolution kernel in non-uniform fast Fourier transform for Fourier domain optical coherence tomography,” Opt. Express **19**, 26891–26904 (2011). [CrossRef]

**38. **P. D. Woolliams, R. A. Ferguson, C. Hart, A. Grimwood, and P. H. Tomlins, “Spatially deconvolved optical coherence tomography,” Appl. Opt. **49**, 2014–2021 (2010). [CrossRef] [PubMed]

**39. **K. Langenberg, M. Berger, T. Kreutter, K. Mayer, and V. Schmitz, “Synthetic aperture focusing technique signal processing,” NDT International **19**, 177–189 (1986). [CrossRef]