## Abstract

Optical diffraction tomography (ODT) is an indispensable tool for studying objects in three dimensions. Until now, ODT has been limited to coherent light because spatial phase information is required to solve the inverse scattering problem. We introduce a method that enables ODT to be applied to imaging incoherent contrast mechanisms such as fluorescent emission. Our strategy mimics the coherent scattering process with two spatially coherent illumination beams. The interferometric illumination pattern encodes spatial phase in temporal variations of the fluorescent emission, thereby allowing incoherent fluorescent emission to mimic the behavior of coherent illumination. The temporal variations permit recovery of the spatial distribution of fluorescent emission with an inverse scattering model. Simulations and experiments demonstrate isotropic resolution in the 3D reconstruction of a fluorescent object.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

In fluorescence microscopy, light emitted from the specimen is spatially incoherent. Consequently, 3D imaging techniques require some form of spatial gating to map detected photons to the location from which they were emitted. This spatial gating is often achieved through some combination of confining the illumination and detection volumes. Examples of such strategies include selective plane illumination microscopy (SPIM) [1], where the illumination volume is restricted to a thin axial plane, and laser-scanning confocal microscopy, where both the illumination and detection volumes are restricted to a diffraction-limited spot in 3D [2]. These strategies allow each detected photon to be mapped to the 3D location from which it was emitted, and often involve high numerical aperture (NA) optics that tightly focus the illumination light or restrict the light detection volume.

Other strategies for 3D imaging rely on inverting a quantitative model of the illumination, emitted, and collected light to estimate the concentration of fluorescent emitters from recorded images. These computational imaging methods, such as optical projection tomography (OPT) and deconvolution imaging (DI), require axially scanning or rotating the object to collect a set of data to be reconstructed [3]. 3D computational imaging of a scattering object with partially coherent illumination is possible within the Born approximation using the weak optical transfer function [4]; however, these methods rely on scattered light and do not extend to fluorescent imaging.

Conventional fluorescent imaging methods suffer from limitations such as photobleaching and anisotropic spatial resolution between the axial and transverse directions [5]. While SPIM can partially mitigate the effects of photobleaching, anisotropic spatial resolution is a persistent problem [3]. In all of these methods, tissues must be optically cleared to reduce distortions from optical scattering to suitably low levels [3]. A more stringent restriction on SPIM and OPT microscopes is that the spatial resolution is coupled to the size of the object [3]—leading to decreased spatial resolution for an increased imaging region.

Coherent imaging strategies enable 3D imaging by making use of the direction of the scattered light. Emil Wolf recognized that the inverse scattering problem for coherent light propagating in an object can be solved by recording the complex, spatially coherent, scattered field [6]. Directional scattering allows the recording of spatial frequency components of an object by exploiting knowledge of the complex amplitude of light scattered in a particular direction when illuminated by a spatially coherent input wave. This concept is illustrated schematically in Fig. 1(a), where the illumination field, ${E_0}$, and scattered field, ${E_1}$, have corresponding wavevectors ${{\textbf{k}}_0}$ and ${{\textbf{k}}_1}$, respectively.

Interferometric techniques, e.g., holography, record the complex scattered field that, within the Born approximation, can be mapped to the arc of spatial frequency information defined by the Ewald sphere by applying the Fourier diffraction theorem [6], as shown in Fig. 1(b). The position in spatial frequency space is given by the wavevector difference, $\Delta {\textbf{k}} = {{\textbf{k}}_1} - {{\textbf{k}}_0}$. This sparse spatial frequency information is encoded in the complex scattered field obtained with coherent imaging. More complete object information can be acquired by introducing a relative rotation between the illumination and the object to fully sample the object spatial frequency distribution, yielding optical diffraction tomography (ODT).

Optical holography, and thus ODT, normally relies on spatially coherent light to interferometrically record the complex scattered field, allowing interior spatial frequency information to be acquired. Coherent scattering data can be inverted to solve the scattering problem for variations in refractive index of the specimen. Using coherent illumination allows object position to be encoded in the complex scattered field. The phase is critical since it encodes the axial location of the scatterer, and it is this phase that is required to enable diffraction tomography to be extended to incoherent fluorescent light. ODT uses a rotation of the object or illumination wave to capture a sequence of scattered fields that fill out the object spatial frequency information. Then computational imaging tools are applied to invert the information recorded in order to recover the object spatial frequency distribution, and thus the object spatial information. ODT has the advantage of not being constrained to imaging objects in the Rayleigh range of the illumination beam, as the light is allowed to diffract before encountering the optical detector.

ODT is conventionally thought to be impossible with fluorescent light because, in the case of incoherent emission, phase is lost due to the random emission of the molecular emitters, and this unstable phase obscures the relationship between the location of the emitter and the propagation direction and phase. If fluorescence is to be imaged in a manner similar to ODT, it is necessary to encode the coherent illumination propagation phase onto fluorescent light so that the phase can be recovered. While an incoherent emitter is coherent with itself allowing encoding of the emitter location [7,8], general adaptation of coherent-like imaging methods to incoherent light remains elusive.

In this Letter, we introduce the first method that uses fluorescent light emission for diffraction tomographic imaging. As fluorescent light is spatially incoherent, it is necessary to mimic the process of coherent scattering to enable ODT with fluorescence. We mimic spatially coherent scattering by transferring the phase difference from a pair of spatially coherent illumination beams [9] into a time variation of fluorescent emission brightness, enabling ODT with fluorescent light recorded on a single element optical detector. By mimicking the incident and scattered fields in the illumination of a fluorescent object, we are able to perform ODT using fluorescent light. We refer to this method as fluorescence diffraction tomography (FDT).

The FDT concept is illustrated in Figs. 1(c) and 1(d). A pair of illumination beams substitutes for the incident and scattered waves in coherent scattering. The reference wave, ${E_0}$, in Fig. 1(c), plays the role of the incident wave in coherent scattering and interferes with an illumination plane wave, ${E_1}$, that represents the scattered wave. To map out the equivalent information as in coherent scattering, the incident direction of ${E_1}(t)$ is scanned in time, producing a modulation of the illumination intensity that depends on the relative phase of the two illumination beams, $\Delta {\textbf{k}}(t) \cdot {\textbf{x}}$, where ${\textbf{x}} = (x,z)$ is the spatial coordinate vector in the $x - z$ plane. The difference wavevector, $\Delta {\textbf{k}}(t) = {{\textbf{k}}_1}(t) - {{\textbf{k}}_0}$, behaves as the scattering vector in coherent scattering, defined as the difference between the k-vector of the scattered field, ${{\textbf{k}}_1}(t) = k({\sin}[\theta (t)],{\cos}[\theta (t)])$, and that of the incident wave, ${{\textbf{k}}_0} = k(0,1)$, where $k = 2\pi /\lambda$ is the wavenumber of the illumination.

The collected fluorescence recorded with a single-pixel detector serves as the FDT time signal. These measurements imprint the relative phase of the two spatially coherent illumination beams into an intensity modulation in space and time that allows the detected incoherent fluorescent light power to be treated as if it came from a coherent source. Because fluorescent light is incoherent, the detected fluorescent light power is equivalent to the overlap integral between the spatial distribution of the fluorophore concentration—our object—and the illumination intensity. Each measurement at time $t$ samples the complex amplitude of the object spatial frequency distribution at the difference spatial frequency wavevector, $\Delta {\textbf{k}}(t)$. The result is that for each incident angle of ${E_1}(t)$, a spatial frequency projection is recorded that exactly mimics the complex spatial frequency information traditionally obtained through coherent scattering measurements; compare Figs. 1(b) and 1(d).

The key aspect of FDT is coherent transfer mediated by the modulated illumination intensity. The intensity arises from the interference of the reference and scanned fields, i.e., ${I_{{\rm{ill}}}} \propto |{E_0} + {E_1}[\theta (t)]{|^2}$ [9], where $\theta (t)$ denotes the incidence angle of ${E_1}$ with respect to the ${k_z}$ axis, or equivalently the angle between ${{\textbf{k}}_1}$ and ${{\textbf{k}}_0}$, at time $t$. The model for the interference is written as ${I_{{\rm{ill}}}}({\textbf{x}},t) = 1 + \mu (t)\cos [\Delta \Phi ({\textbf{x}},t)]$, where $\mu (t)$ is the fringe visibility, and $\Delta \Phi$ is the phase difference between the reference and scanned fields in Fig. 1(c) [10,11]. The phase difference between the illumination fields, $\Delta \Phi ({\textbf{x}},t) = {\omega _c}t + \Delta {\textbf{k}}(t) \cdot {\textbf{x}}$, imparts a temporal modulation pattern at each spatial position in the $x - z$ plane. Here, ${\omega _c}$ is a carrier frequency in the modulation, which is critical for isolating the complex phase information in the time signal [9,10], and $\Delta {\textbf{k}}(t) \cdot {\textbf{x}}$ is the spatial phase variation that encodes the location of the object in the $x - z$ plane. The stable relative phase difference between the pair of illumination beams is critical to produce the time-varying interference used to label fluorophores, precluding the use of partially or fully incoherent illumination in FDT.
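This interference model can be evaluated numerically. The following minimal sketch computes ${I_{{\rm{ill}}}}({\textbf{x}},t)$ for a reference beam along $z$ and a scanned beam at angle $\theta$; the carrier frequency and wavelength values are illustrative, not the experimental parameters.

```python
import numpy as np

def illumination_intensity(x, z, t, theta, mu=1.0, omega_c=2*np.pi*20.0, lam=532e-9):
    """I_ill(x, t) = 1 + mu cos(omega_c t + dk . x), where dk = k1 - k0
    for a reference beam along z and a scanned beam at angle theta."""
    k = 2*np.pi/lam                  # illumination wavenumber
    dkx = k*np.sin(theta)            # Delta k_x = k sin(theta)
    dkz = k*(np.cos(theta) - 1.0)    # Delta k_z = k (cos(theta) - 1)
    return 1.0 + mu*np.cos(omega_c*t + dkx*x + dkz*z)
```

With $\theta = 0$ the two beams copropagate, $\Delta {\textbf{k}} = 0$, and the pattern reduces to a spatially uniform temporal modulation $1 + \mu \cos ({\omega _c}t)$, carrying no positional information.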

The elements of $\Delta {\textbf{k}}(t) = (\Delta {k_x}(t),\Delta {k_z}(t))$ are difference frequencies. In our work, we set $\Delta {k_x}(t) = {k_c} t/T$, corresponding to $\sin \theta (t) = t {\rm{NA}}/T$, where ${k_c} = k {\rm{NA}}$ is the coherent imaging cutoff spatial frequency for the illumination optics, and $t \in [- T,T]$, with $2T$ denoting the total collection time. For this choice, we have $\Delta {k_z} = k (\sqrt {1 - {{(t {\rm{NA}}/T)}^2}} - 1)$. The spatiotemporal intensity modulation encodes the relative spatial phase of the illumination fields as a temporal modulation of the emitted fluorescent power from the object, thereby transferring coherent propagation behavior to fluorescent emission [10].
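As a consistency check on these expressions (using the wavelength and NA from the simulation in the text, and an arbitrary half-scan time $T$), every sampled $\Delta {\textbf{k}}(t)$ should lie on the Ewald circle of radius $k$ centered at $-{{\textbf{k}}_0}$, since $\Delta {\textbf{k}} + {{\textbf{k}}_0} = {{\textbf{k}}_1}$ and $|{{\textbf{k}}_1}| = k$:

```python
import numpy as np

lam, NA = 532e-9, 0.90
k = 2*np.pi/lam        # illumination wavenumber
kc = k*NA              # coherent cutoff spatial frequency, k_c = k NA
T = 1.0                # half of the total collection time (arbitrary units)

t = np.linspace(-T, T, 501)
dkx = kc*t/T                                    # Delta k_x(t) = k_c t / T
dkz = k*(np.sqrt(1.0 - (t*NA/T)**2) - 1.0)      # Delta k_z(t)

# each sample Delta k(t) lies on the Ewald circle centered at -k0 = (0, -k)
radius = np.sqrt(dkx**2 + (dkz + k)**2)
```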

The time trace is generated by detecting the collected fluorescent emission as the scanning field sweeps through the range of incident angles supported by the NA of the illumination objective [9–11]. The temporal signal $S(t,\phi)$ is the projection of the spatial distribution $c({\textbf{x}})$ of the fluorophore concentration onto the illumination intensity at incidence angle $\phi$:

$$S(t,\phi) = \int c({\textbf{x}})\,{I_{{\rm{ill}}}}({{\textbf{R}}_\phi}{\textbf{x}},t)\,{\rm{d}}{\textbf{x}},\quad (1)$$

where ${{\textbf{R}}_\phi}$ denotes rotation of the object coordinates by angle $\phi$.
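A discretized sketch of this projection is shown below for a hypothetical point-like object on a small grid; the object rotation ${{\textbf{R}}_\phi}$ is applied to the coordinates, time is normalized so that $T = 1$, and the carrier frequency is illustrative.

```python
import numpy as np

lam, NA = 532e-9, 0.90
k = 2*np.pi/lam
kc = k*NA
omega_c = 2*np.pi*20.0          # illustrative carrier frequency

# fluorophore concentration c(x) on an x-z grid: a single point emitter
n = 64
xs = np.linspace(-5e-6, 5e-6, n)
X, Z = np.meshgrid(xs, xs, indexing='ij')
c = np.zeros((n, n))
c[40, 20] = 1.0

def time_trace(c, phi, t):
    """S(t, phi): spatial overlap of c(x) with the illumination intensity,
    with the object rotated by phi (single-element detection)."""
    Xr = np.cos(phi)*X - np.sin(phi)*Z   # rotated object coordinates
    Zr = np.sin(phi)*X + np.cos(phi)*Z
    S = np.empty_like(t)
    for i, ti in enumerate(t):           # t normalized to [-1, 1] (T = 1)
        dkx = kc*ti
        dkz = k*(np.sqrt(1.0 - (ti*NA)**2) - 1.0)
        I = 1.0 + np.cos(omega_c*ti + dkx*Xr + dkz*Zr)
        S[i] = np.sum(c*I)               # single-pixel detector integrates in space
    return S

t = np.linspace(-1.0, 1.0, 512)
S = time_trace(c, 0.0, t)
```

Because the detector integrates the incoherent emission in space, the trace is bounded between zero and twice the total concentration, and its modulation encodes the emitter position.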

An equivalent representation of Eq. (1) is its complex-valued form, obtained via Euler’s identity by retaining only a single sideband of the sinusoidal term in the illumination pattern. This representation is given by

$${\tilde S^{(1)}}(t,\phi) = \frac{{\mu (t)}}{2}\,{e^{i{\omega _c}t}}\int c({\textbf{x}})\,{e^{i\Delta {\textbf{k}}(t) \cdot {{\textbf{R}}_\phi}{\textbf{x}}}}\,{\rm{d}}{\textbf{x}}.\quad (2)$$

The spatial distribution of the data collected for each angle is referred to as a sinogram, which is computed from a Fourier transform of the single-sideband signal, $s({{\textbf{R}}_\phi}{\textbf{x}}) = {\cal F}\{{\tilde S^{(1)}}(t,\phi)\}$. The least-squares estimate (minimum ${{\cal L}^2}$-norm error) of the fluorescent concentration, denoted by $\hat c({\textbf{x}})$, is then computed by applying the inverse operator ${{\cal D}^{- 1}}$:

$$\hat c({\textbf{x}}) = {{\cal D}^{- 1}}\,s({{\textbf{R}}_\phi}{\textbf{x}}).\quad (3)$$
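The single-sideband demodulation step can be sketched as follows for one point emitter in the paraxial limit (neglecting $\Delta {k_z}$): the positive-frequency sideband is isolated in the Fourier domain, and the offset of its spectral peak from the carrier recovers the emitter position. All parameters here are illustrative, not the experimental values.

```python
import numpy as np

lam, NA = 532e-9, 0.90
kc = 2*np.pi/lam*NA             # coherent cutoff spatial frequency
n = 4096
t = np.linspace(-1.0, 1.0, n, endpoint=False)
x0 = 2e-6                       # hypothetical emitter position
f_c = 200.0                     # carrier frequency (cycles per unit time)

# detected time trace: DC term plus modulation encoding x0 (paraxial, dk_z ~ 0)
S = 1.0 + np.cos(2*np.pi*f_c*t + kc*x0*t)

# isolate the positive-frequency sideband to form the complex signal S^(1)
F = np.fft.fftshift(np.fft.fft(S))
f = np.fft.fftshift(np.fft.fftfreq(n, d=t[1] - t[0]))
sideband = np.where(f > f_c/2, F, 0.0)

# the sideband peak sits at f_c + kc*x0/(2*pi); its offset from f_c gives x0
f_peak = f[np.argmax(np.abs(sideband))]
x_est = (f_peak - f_c)*2*np.pi/kc
```

The recovered position is accurate to roughly one spectral bin; longer scans (finer frequency resolution) localize the emitter more precisely, mirroring the resolution arguments in the text.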

A full simulation of the forward model, Eq. (2), and the reconstruction, Eq. (3), is shown in Fig. 2. The FDT microscope was simulated using an illumination wavelength of 532 nm, ${\rm{NA}} = 0.90$, and a field of view of 20 µm; the full time trace signal processing workflow used to generate the FDT sinogram is illustrated in Supplement 1, Fig. S4. Figure 2(a) shows an FDT sinogram in the spatial domain, where the scan angles range over $[- 180,180]$ deg. The colored lines and boxes mark the time traces used to generate the corresponding panels to the right of the sinogram. The first and third rows in Fig. 2 show the measured spatial frequency support mapped onto the Ewald sphere, constrained by optical diffraction, obtained by taking the Fourier transform of the second and fourth rows, respectively. Figures 2(b) and 2(d) show the frequency support measured by a single time trace at $\phi = 0$ and $\phi = 45$ deg, respectively. Figures 2(f) and 2(h) show the spatial frequency support when multiple projections are used in the reconstruction, $\phi = [0, 45]$ and $\phi = [- 180,180)$ deg, respectively. Figures 2(c), 2(e), 2(g), and 2(i) show the reconstructed object. Notice that the object localization improves as additional projection angles are used in the reconstruction.

The FDT microscope was implemented with a spinning modulation mask that acts as a grating with a time-varying spatial frequency [13]. The mask is illuminated by a line focus that is image relayed to the object region. At a snapshot in time, the mask appears as a static grating, creating a zero-order beam, ${E_0}$, as well as positive-order, ${E_1}(t)$, and negative-order diffracted beams. The negative diffracted order is blocked by a spatial filter, leaving the zero-order and positive-order beams to be image relayed to the object plane [10]. Interference between the beams produces a spatiotemporally modulated illumination intensity pattern. The object is rotated a full 360 deg, and a time trace is acquired at each rotation angle. Rotation in the spatial domain also rotates the spatial frequency arc, Fig. 1(d). Acquiring data from multiple angles produces spatial frequency samples across the $({k_x} - {k_z})$ plane, leading to isotropic spatial resolution; see Visualization 1, Visualization 2, Visualization 3, Visualization 4, Visualization 5, Visualization 6, and Visualization 7 for details.

FDT imaging was demonstrated experimentally using an object fabricated from cotton fibers stained with fluorescein. The stained fibers were mounted on an eight-axis stage [Supplement 1, Fig. S1(c)]. The mounting stage allowed full 360 deg rotation of the sample as well as precise positioning of the sample in the microscope focus. Figure 3 shows a 3D reconstruction of the fluorescein-stained fibers using alpha blending from Volume Viewer in ImageJ. The image was generated from 200 evenly spaced $x - z$ slices obtained by scanning along $y$. Due to mechanical instability of the $y$-axis stage, each $x - z$ slice was shifted to align adjacent slices and avoid object discontinuities in the 3D reconstruction. The sub-images in Fig. 3 are slices from the 3D reconstruction, and the colored frames correspond to the rectangular boxes in the 3D image. An absorption contrast image was acquired simultaneously with the fluorescence; for brevity, this image is not shown in the main text; see Supplement 1, Fig. S5.

The fluorescein-stained fibers imaged in air (Fig. 3) constitute a highly scattering sample, as the refractive index of cotton is 1.54. Although this sample violates the Born approximation, we recover a nearly exact image of the fluorophore concentration with FDT. Scattering by the fibers distorts the illumination patterns, introducing image artifacts such as those seen in the upper left panel of Fig. 3. The quantitative impact of illumination distortion in FDT will be explored in future work.

There are several differences between standard ODT and FDT that should be noted. While both ODT and FDT obtain complex spatial frequency values that follow the arc of spatial frequency information governed by diffraction, as shown in Figs. 1(b) and 1(d), the physical origin of these data is remarkably different. ODT relies on the spatial coherence of the light scattered by the spatial variation in the refractive index of the object. As a result, the ODT projection operation deviates from FDT by a complex scaling constant of ${-}2i\Delta {k_z}$. In contrast, FDT records information about the spatial variation of fluorophore concentration through the interference of two spatially coherent illumination beams. Each method samples complex amplitudes that lie on the Ewald sphere, which naturally leads to recording data in the ${k_x} - {k_z}$ spatial frequency plane by relative rotation of the object and illumination beams to spatially resolve the object in the $x - z$ plane. However, in the derivation of the Fourier diffraction theorem for FDT, the only assumption made is illumination by plane waves; there is no need to invoke the Born or Rytov approximations. Therefore, FDT does not share the object size or object variation limitations of standard ODT [14].

FDT mitigates the coupling between object size and spatial resolution typically seen in fluorescence imaging. By comparison, in optical projection tomography, where the fluorescent light is detected with a camera, the object is restricted to the region of good focus (the Rayleigh range) to avoid background blur from out-of-focus light [3,15]. This restriction produces the coupling between spatial resolution and object size conventionally seen with incoherent imaging modalities. In FDT, incoherent light emission may be treated as a coherent source, allowing the object to extend over a much larger region not constrained by the Rayleigh range. FDT therefore removes the need to reduce the illumination NA as the object size increases.

In summary, we introduced a new tomographic imaging technique, FDT, that extends ODT to incoherent contrast mechanisms, such as fluorescence and Raman scattering. We developed theory for both forward and inverse models. The forward model uses coherent holographic image reconstruction by phase transfer (CHIRPT) illumination and detection as a projection of spatial frequencies onto the sample [9–11,16]. The projection uses modulation transfer to encode the spatial phase of the illumination to allow phase transfer to incoherent sources. We demonstrated FDT reconstruction with dual functions that are biorthogonal to the intensity illumination of the rotated Fourier elements in the forward model. Additionally, we showed experimentally that FDT works for both coherent and incoherent contrast mechanisms. In principle, it can be used for any contrast mechanism including nonlinear mechanisms. We expect this technique will expand the range of samples that can be imaged and provide an easy method to co-register multiple contrast distributions simultaneously.

## Funding

National Science Foundation (1707287); National Institutes of Health (1R21EB025389, R21MH117786).

## Acknowledgment

J. Squier is supported by the National Science Foundation (NSF).

## Disclosures

The authors declare no conflicts of interest.

See Supplement 1 for supporting content.

## REFERENCES

**1. **J. Huisken, J. Swoger, J. Wittbrodt, and E. Stelzer, Science **305**, 1007 (2004).

**2. **T. Wilson and C. Sheppard, *Theory and Practice of Scanning Optical Microscopy* (Academic, 1984).

**3. **A. Liu, W. Xiao, R. Li, L. Liu, and L. Chen, J. Microsc. **275**, 3 (2019).

**4. **N. Streibl, J. Opt. Soc. Am. A **2**, 121 (1985).

**5. **P. Sarder and A. Nehorai, IEEE Signal Process. Mag. **23**(3), 32 (2006).

**6. **E. Wolf, Opt. Commun. **1**, 153 (1969).

**7. **N. Siegel, V. Lupashin, B. Storrie, and G. Brooker, Nat. Photonics **10**, 802 (2016).

**8. **J. Linarès-Loyez, P. Bon, J. S. Ferreira, O. Rossier, B. Lounis, G. Giannone, L. Groc, and L. Cognet, Front. Phys. **7**, 68 (2019).

**9. **J. J. Field, D. Winters, and R. Bartels, Optica **3**, 971 (2016).

**10. **J. J. Field, D. Winters, and R. Bartels, J. Opt. Soc. Am. A **32**, 2156 (2015).

**11. **J. J. Field, J. A. Squier, and R. A. Bartels, Opt. Express **27**, 13015 (2019).

**12. **A. Devaney, Ultrason. Imaging **4**, 336 (1982).

**13. **G. Futia, P. Schlup, D. G. Winters, and R. A. Bartels, Opt. Express **19**, 1626 (2011).

**14. **T. Kozacki, M. Kujawińska, and P. Kniażewski, Opto-Electron. Rev. **15**, 102 (2007).

**15. **U. Jochen Birk, A. Darrell, N. Konstantinides, A. Sarasa-Renedo, and J. Ripoll, Appl. Opt. **50**, 392 (2008).

**16. **P. Stockton, J. J. Field, and R. Bartels, Methods **136**, 24 (2018).