## Abstract

Fluorescence lifetime imaging (FLI) is a popular method for extracting useful information that is otherwise unavailable from a conventional intensity image. Usually, however, it requires expensive equipment, is often limited to either distinctly frequency- or time-domain modalities, and demands calibration measurements and precise knowledge of the illumination signal. Here, we present a generalized time-based, cost-effective method for estimating lifetimes by repurposing a consumer-grade time-of-flight sensor. By developing mathematical theory that unifies time- and frequency-domain approaches, we can interpret a time-based signal as a combination of multiple frequency measurements. We show that we can estimate lifetimes without knowledge of the illumination signal and without any calibration. We experimentally demonstrate this blind, reference-free method using a quantum dot solution and discuss the method’s implementation in FLI applications.

© 2015 Optical Society of America

## 1. INTRODUCTION

Fluorescence lifetime imaging (FLI) is a significant research area that spans many engineering applications. Knowledge of a sample’s fluorescence lifetime allows, for example, DNA sequencing [1], tumor detection [2,3], fluorescence tomography [4,5], *in vivo* imaging [6], and high-resolution microscopy [7]. Typically, FLI is categorized into two complementary modes [4,8]. In time-domain FLI [7,9] (TD-FLI), an impulse-like excitation pulse probes the fluorescent sample, and the time-resolved reflection is used to calculate lifetimes. In frequency-domain FLI [10–12] (FD-FLI), the sample is excited with (sinusoidal) intensity modulated light, and the measured phase shift of the reflected signal at the same modulation frequency encodes the lifetime. FD-FLI is theoretically appealing in that phase measurements at one given modulation frequency suffice to resolve the sample lifetime. From a practical standpoint, model mismatch [10] and sample contamination due to multiple lifetimes [13] often limit the accuracy of single-frequency-based systems. Frequency diversity [13,14] may be used for imaging multiple lifetimes, but since this approach requires sweeping frequencies over a given bandwidth, it is not well suited to wide-field imaging of dynamic samples. In either case (TD-FLI or FD-FLI), because of the costly equipment, system constraints are often strict, so that precise knowledge of the illumination signal and calibration measurements (to compensate for path length delays) are required [15].

Alternatively, time-of-flight (ToF) sensors, which are far more cost-effective and operate in real time, offer a great deal of flexibility in design. ToF sensors, such as the Microsoft Kinect, are essentially depth/range imaging devices and are at the heart of the entertainment industry. Recently, ToF sensing has found prodigious use in computer graphics, computer vision [16], and computational imaging [16–18], with applications in multipath imaging [19–22], ultrafast imaging [22,23], and imaging through scattering media [17,24].

Here, we demonstrate an extension of ToF sensing to FLI that is both blind and calibration-free. To do so, we generalize (from the distinct time or frequency domains) the active illumination signal, which is common to both modalities, to show that neither calibration measurements nor knowledge of the signal are necessary for estimation of fluorescent lifetimes. This blind, reference-free method is implemented in a consumer-grade ToF sensor to estimate simultaneously the range and lifetime of a CdSe–CdS quantum dot sample. This method suggests a cost-effective alternative to the usual FLI methods.

This paper is organized as follows: starting from first principles, we propose a ToF-sensor-based image formation model and discuss its role in depth/range estimation in Section 2. We then discuss the two complementary modes of operation of ToF sensors: time and frequency domains. Section 2.A deals with TD-ToF-sensor-based depth imaging, while Section 2.B discusses frequency-domain depth imaging. In Section 3, we provide a mathematical model for FLI with ToF sensors. We show that consumer ToF sensors can be used to estimate the lifetime of a fluorescent sample. We develop theoretical models for both time- and frequency-domain modes, and our theory is corroborated with experiments for both; these complementary approaches are discussed in Sections 3.A and 3.B. Section 4 discusses extensions to microscopy, nanosecond-range lifetime sensing, and experimental precision. Finally, we conclude this work with possible future directions in Section 5.

## 2. ToF IMAGE FORMATION MODEL

ToF sensors operate using the lock-in principle [25]: active illumination probes a sample, and the reflected light is cross-correlated electronically to calculate range and amplitude information. In this way, each ToF sensor exposure results in two images: the usual intensity image and a range image in which each pixel relates to the depth of the scene. Usually, the signal is sinusoidal, essentially classifying a ToF sensor as a homodyne detection system, but the illumination signal can be far more general. Both ToF sensors and current FLI technologies acquire scene information in either the time or frequency domains, so the present analysis for ToF sensing is compatible with both.

Regardless of the operating domain or physical implementation, the ToF imaging process can be understood as follows. A scene is illuminated with a time-dependent $\mathrm{\Delta}$-periodic probing intensity signal $p(t)$, such that $p(t+\mathrm{\Delta})=p(t)$, $\mathrm{\Delta}>0$. Similar to FLI, TD-ToF [21,22] systems use a time-localized pulse $p(t)$ (not necessarily an impulse); the FD-ToF counterpart uses a modulated probing function $p(t)=1+\mathrm{cos}({\omega}_{0}t)$, where ${\omega}_{0}$ (usually in the megahertz range) is the modulation frequency. Below, however, we do not use a specific form for $p(t)$.

This signal interacts with the scene, whose response is characterized by a time-dependent scene response function (SRF) $h(t,{t}^{\prime})$, where ${t}^{\prime}$ is a time-domain variable that models the time-variant SRF. This interaction results in the reflected signal $r(t)$:

$$r(t)=\int h(t,{t}^{\prime})p({t}^{\prime})\,\mathrm{d}{t}^{\prime}.$$

A ToF sensor detects $r(t)$ and electronically cross correlates it with $p(t)$:

$$m(t)=(\overline{p}*r)(t),$$

where $\overline{p}(t)=p(-t)$. The $K$ stored measurements are digital samples of this cross correlation, ${m}_{k}=m(k{T}_{s})$, $k=0,\dots ,K-1$ (${T}_{s}>0$ is the sampling interval). In many cases of interest, the SRF is shift-invariant, that is (see [26], Sec. 2.2.4, p. 50),

$$h(t,{t}^{\prime})={h}_{\mathrm{SI}}(t-{t}^{\prime}),$$

so that Eq. (1) represents the convolution operation defined by $r(t)=(p*{h}_{\mathrm{SI}})(t)$. In the case of shift-invariant SRFs, the measurements simplify to

$$m(t)=(\varphi *{h}_{\mathrm{SI}})(t),\qquad \varphi =\overline{p}*p,$$

implying linear filtering of the cross-correlation function $\varphi (t)$ with the scene response filter ${h}_{\mathrm{SI}}$.

Conventional ToF sensors [16] are designed for range imaging and depth estimation. For simple reflections from an object (with reflection coefficient $\rho $) that is at a depth $d$ from the sensor, the SRF becomes

$${h}_{\mathrm{SI}}(t)=\rho \delta (t-2d/c),$$

where $c$ is the speed of light, and $\delta $ is the Dirac distribution. The reflected signal in this case is simply a delayed version of the probing function, and the delay is proportional to the depth parameter $d$. More precisely, in view of the ToF sensor operation, the reflected signal in Eq. (1) reads $r(t)=\rho p(t-{t}_{0})$, where the delay is ${t}_{0}=2d/c$. In this case, the measurements amount to

$$m(t)=\rho \varphi (t-{t}_{0}).$$

Estimation of the scene parameters $\{\rho ,d\}$ by the ToF sensor results in range and intensity images. The estimation process depends on the choice of probing function $p$, which may be a time-localized pulse or an amplitude-modulated continuous-wave (AMCW) function leading to time-domain [Fig. 2(a)] and frequency-domain [Fig. 2(b)] modes of operation, respectively.
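The single-return model above can be sketched numerically: a delayed, scaled copy of the probe is cross correlated with the probe, and the correlation peak locates the round-trip delay. The pulse shape and scene parameters below are illustrative choices, not the paper's hardware values.

```python
import numpy as np

# Numerical sketch of the image formation model: a periodic probe p(t),
# a single-bounce scene h_SI(t) = rho*delta(t - 2d/c), and the
# cross-correlation measurements m(t).
c = 3e8                      # speed of light (m/s)
Delta = 100e-9               # signal period (s)
K = 1000                     # samples per period
Ts = Delta / K               # sampling interval (s)
t = np.arange(K) * Ts

rho, d = 0.8, 1.5            # reflectivity and depth (m), illustrative
t0 = 2 * d / c               # round-trip delay

# Time-localized (Gaussian) probing pulse and its delayed reflection
p = np.exp(-((t - 10e-9) ** 2) / (2 * (1e-9) ** 2))
r = rho * np.roll(p, int(round(t0 / Ts)))   # r(t) = rho * p(t - t0), cyclic

# Periodic cross-correlation m(t) = (p_bar * r)(t), computed via FFT
m = np.fft.ifft(np.conj(np.fft.fft(p)) * np.fft.fft(r)).real

# The correlation peak sits at the round-trip delay -> depth estimate
d_hat = c * t[np.argmax(m)] / 2
print(d_hat)   # close to 1.5 m (within one sample spacing)
```

The peak-picking step is the discrete analog of the arg-max depth estimator used in the time-domain mode discussed next.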

#### A. TD-ToF Imaging

For TD-ToF imaging, the probing function is ideally well localized in time, that is, a Dirac delta distribution. In this case, $p(t)\sim \delta (t)$, and the measurements in Eq. (4) simplify to $m(t)={h}_{\mathrm{SI}}(t)$. In practice, maximum length sequences (MLSs) [27] provide an optimal time-localized probing function. Such sequences are a function of the signal length or period $\mathrm{\Delta}$.
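As an illustration, a 31-chip MLS (matching the 31-bit length used in our experiments, though not the paper's specific code) can be generated with a 5-stage linear feedback shift register; the feedback taps and seed below are illustrative. Its periodic autocorrelation equals the sequence length at zero lag and $-1$ everywhere else, which is what makes an MLS a good impulse-like probing code.

```python
import numpy as np

# 31-chip maximum length sequence (MLS) from a 5-bit Fibonacci LFSR
# with feedback from stages 5 and 3 (a primitive configuration).
# Taps and seed are illustrative, not the paper's actual code.
def mls(nbits=5, taps=(4, 2), seed=1):
    state = [(seed >> i) & 1 for i in range(nbits)]
    seq = []
    for _ in range(2 ** nbits - 1):
        seq.append(state[-1])                 # output the last stage
        fb = state[taps[0]] ^ state[taps[1]]  # XOR feedback
        state = [fb] + state[:-1]             # shift register
    return np.array(seq)

s = 2.0 * mls() - 1.0        # map {0,1} -> {-1,+1}

# Periodic autocorrelation via FFT: N at zero lag, -1 at all other lags
acf = np.fft.ifft(np.abs(np.fft.fft(s)) ** 2).real.round()
print(acf[0], acf[1])        # 31.0 -1.0
```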

In view of the depth estimation problem, where $h(t,{t}^{\prime})=\rho \delta (t-{t}^{\prime}-2d/c)$, the depth is estimated by the operation

$$\tilde{d}=\frac{c}{2}\,\underset{t}{\mathrm{arg\,max}}\,m(t).$$

More sophisticated methods have been developed for the case of multiple reflections or multipath interference [21,22].

The exact characterization of the true TD-ToF probing function $p(t)$ is challenging due to the electronics of the sensor and the physical processes involved. We therefore use a Fourier series expansion [21] to represent the probing function,

$$p(t)=\sum _{n\in \mathbb{Z}}{\widehat{p}}_{n}{e}^{j(2\pi /\mathrm{\Delta})nt}.$$

The Fourier series coefficients of the cross-correlated signal and the probing signal are related by

$${\widehat{\varphi}}_{n}={|{\widehat{p}}_{n}|}^{2}.$$

Practically, a band-limited approximation, ${C}_{p,\overline{p}}(t)$, suffices to represent $\varphi (t)$:

$$\varphi (t)\approx {C}_{p,\overline{p}}(t)=\sum _{|n|\le {N}_{0}}{\widehat{\varphi}}_{n}{e}^{j(2\pi /\mathrm{\Delta})nt}.$$

#### B. FD-ToF Imaging

In FD-ToF imaging, the scene is probed with a continuous-wave, sinusoidal probing function with modulation frequency $\omega $:

$$p(t)=1+{p}_{0}\,\mathrm{cos}(\omega t),$$

where ${p}_{0}$ is the modulation amplitude. For pure depth estimation, with the SRF specified in Eq. (5), the reflected signal becomes $r(t)=\rho \left(1+{p}_{0}\,\mathrm{cos}(\omega t-2\omega d/c)\right)$. The ToF lock-in sensor acts as a homodyne detector array: each pixel cross correlates the reflected signal with the probing signal to produce the measurements

$$m(t)=\rho \left(1+\frac{{p}_{0}^{2}}{2}\,\mathrm{cos}\left(\omega t-\frac{2\omega d}{c}\right)\right).$$

The two quantities of interest, $\rho $ and $d$, are estimated with four digital samples of the measured signal in Eq. (11), that is, ${m}_{k}=m(\pi k/(2\omega ))$, $k=0,\dots ,3$. Based on these discrete measurements, we define a complex number $z\in \mathbb{C}$:

$$z=({m}_{0}-{m}_{2})+j({m}_{1}-{m}_{3}).$$

The scene parameters are thus estimated by

$$\tilde{\rho}=\frac{|z|}{{p}_{0}^{2}},\qquad \tilde{d}=\frac{c}{2\omega}\angle z.$$

Note that this technique for extracting phase is identical to conventional phase-shifting holography [28].
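A minimal numerical sketch of the four-bucket estimator follows. The quarter-period sampling times and the sign convention for $z$ are one common choice, assumed here because equivalent conventions exist; the scene values are illustrative.

```python
import numpy as np

# Four-bucket (homodyne) depth estimation from an AMCW correlation
# waveform m(t) = rho*(1 + 0.5*p0^2*cos(w*t - phi)), phi = 2*w*d/c.
c = 3e8
f = 40e6                          # modulation frequency (Hz)
omega = 2 * np.pi * f
rho, p0, d = 0.7, 1.0, 1.5        # reflectivity, mod. amplitude, depth (m)

phi = 2 * omega * d / c           # round-trip phase delay
tk = np.pi * np.arange(4) / (2 * omega)          # quarter-period samples
m = rho * (1 + 0.5 * p0 ** 2 * np.cos(omega * tk - phi))

z = (m[0] - m[2]) + 1j * (m[1] - m[3])           # z = rho*p0^2*exp(j*phi)
d_hat = c * np.angle(z) / (2 * omega)            # depth from phase
rho_hat = np.abs(z) / p0 ** 2                    # reflectivity from amplitude
print(round(d_hat, 3), round(rho_hat, 3))        # 1.5 0.7
```

Note that the phase is only unambiguous within $[0,2\pi )$, i.e., up to the unambiguous range $c/(2f)$ of the modulation frequency.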

To summarize, object range is estimated via time delays in TD-ToF [Fig. 2(a)], whereas FD-ToF [Fig. 2(b)] encodes range into the signal phase. A quantitative summary of the ToF-sensor-based depth estimation problem for the time-domain and frequency-domain approaches is presented in Table 1 and Table 2, respectively.

In most consumer ToF sensors, $m(t)$ [Eq. (2)] is a set of measurements that is used to estimate the scene parameters $\{\rho ,d\}$ [22,25]. The goal of this paper is to show that, by using exactly the same set of measurements but with a different SRF, one can recover fluorescent lifetimes in the context of FLI.

## 3. FLI WITH TOF SENSORS

The experimental setup for FLI using a ToF sensor is depicted in Fig. 2(e). A fluorescent sample is located in an *x*–*y* plane a distance $z=d$ from the ToF sensor. Pixels corresponding to a nonfluorescent background object $({x}_{b},{y}_{b})$ produce only a time delay proportional to $2d/c$, precisely the case of conventional ToF imaging with the SRF specified in Eq. (5).

The probing function that interacts with the pixels corresponding to a fluorescent sample at location $({x}_{f},{y}_{f})$ undergoes two transformations. The first transformation is attributed to the same depth contribution, $d$. The second transformation results from fluorescence: a fraction of the incident light excites the sample, which fluoresces with a characteristic decay time $\tau $. The total SRF at $({x}_{f},{y}_{f})$ is then

where the two contributions correspond to the propagation delay and to the exponential fluorescent decay, respectively. We emphasize that our ToF system is completely compatible with FLI: in TD-FLI, $p(t)=\delta (t)$, whereas in FD-FLI, $p(t)=1+\mathrm{cos}({\omega}_{0}t)$. Conventionally, separate calibration steps provide explicit knowledge of $d$, which is used for subsequent measurements. We make no such assumption; instead, we simultaneously compute $d$ and $\tau $ from our measurements [Eq. (16)], with no need for a separate measurement to obtain $d$ explicitly.

#### A. TD-ToF FLI: Theory and Experiments

### 1. Theoretical Modeling

As in TD-ToF, we utilize the same truncated cross-correlation function [Eq. (9)] for FLI. Importantly, explicit knowledge of $p(t)$ is not required. To show this, note that the eigenfunctions of Eq. (16) are precisely the complex exponentials of Eq. (9). By using the convolution theorem, we have

$$m(t)\approx \sum _{|n|\le {N}_{0}}{\widehat{\varphi}}_{n}\widehat{h}\left(\frac{2\pi n}{\mathrm{\Delta}}\right){e}^{j(2\pi /\mathrm{\Delta})nt}.$$

From here onward and for all practical purposes, we assume that Eq. (17) is an equality instead of an approximation. Expressing $\widehat{h}(\omega )$ in polar form, we have $\widehat{h}(\omega )=|\widehat{h}(\omega )|{e}^{j\angle \widehat{h}(\omega )}$, where

Note that the phase of the spectrum encodes both the depth and the lifetime parameters. We may write

$$\angle \widehat{h}(\omega )={\theta}_{\tau}(\omega )+{\theta}_{d}(\omega ),$$

where

$${\theta}_{\tau}(\omega )=-{\mathrm{tan}}^{-1}(\omega \tau )\quad \text{and}\quad {\theta}_{d}(\omega )=-\frac{2\omega d}{c}.$$

In vector-matrix notation, the discretized system of equations in Eq. (17) can be written as

$$\mathbf{m}=\mathbf{V}{\mathbf{D}}_{\widehat{\varphi}}\widehat{\mathbf{h}},$$

where

- $\mathbf{m}$ is a $K\times 1$ vector of discretized ToF sensor measurements ${m}_{k}=m(k{T}_{s})$, $k\in [0,K-1]$;
- $\mathbf{V}$ is a Vandermonde matrix of size $K\times (2{N}_{0}+1)$ with matrix element ${[V]}_{k,n}={e}^{j(2\pi /\mathrm{\Delta}){T}_{s}nk}$, $n\in [-{N}_{0},+{N}_{0}]$;
- ${\mathbf{D}}_{\widehat{\varphi}}$ is a $(2{N}_{0}+1)\times (2{N}_{0}+1)$ diagonal matrix with diagonal entries ${[D]}_{n,n}={\widehat{\varphi}}_{n}$; and
- $\widehat{\mathbf{h}}$ is a $(2{N}_{0}+1)\times 1$ vector of the discretized spectrum [Eq. (19)] with entries $\widehat{h}(2\pi n/\mathrm{\Delta})$, $n=-{N}_{0},\dots ,{N}_{0}$.

The estimation problem is thus: given $K$ measurements $\mathbf{m}$, estimate parameters $d$ and $\tau $.

Because $\varphi (t)$ is a real, time-symmetric function by construction, the matrix ${\mathbf{D}}_{\widehat{\varphi}}$ does not contribute to the phase of the vector $\widehat{\mathbf{h}}$ in Eq. (22). Hence, we can rely on the measurements $\mathbf{m}$ alone: $p$ (or $\varphi $) need not be known, and we thus eliminate the calibration requirements that are central to both FLI and ToF imaging [22].
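This phase-independence claim can be verified numerically. In the sketch below, the probe is simulated as random, real, symmetric Fourier coefficients (unknown to the estimator), and the single-lifetime spectrum $\widehat{h}(\omega )={e}^{-2j\omega d/c}/(1+j\omega \tau )$ is an assumed normalization ($\rho =0$, unit fluorescence amplitude), not the paper's exact Eq. (19).

```python
import numpy as np

# Numerical check: in m = V D_phi h_hat, a real symmetric phi(t) has
# real Fourier coefficients, so the unknown probe never touches the
# phase of the recovered spectrum.
rng = np.random.default_rng(0)
c = 3e8
Delta, N0, K = 310e-9, 15, 200
w0 = 2 * np.pi / Delta
n = np.arange(-N0, N0 + 1)
d, tau = 1.05, 32e-9

# Unknown-to-us probe: random positive, symmetric coefficients phi_n
phi_n = rng.uniform(0.5, 2.0, size=2 * N0 + 1)
phi_n = (phi_n + phi_n[::-1]) / 2              # enforce phi_{-n} = phi_n

# Assumed single-lifetime spectrum (rho = 0)
h_hat = np.exp(-2j * n * w0 * d / c) / (1 + 1j * n * w0 * tau)

# Forward model m = V D h_hat, then least-squares recovery of D h_hat
V = np.exp(1j * (2 * np.pi / K) * np.outer(np.arange(K), n))
m = (V @ (phi_n * h_hat)).real                 # measurements are real
coeffs = np.linalg.lstsq(V, m, rcond=None)[0]  # equals phi_n * h_hat

# Recovered phase matches the model phase, independent of phi_n
model_phase = -np.arctan(n * w0 * tau) - 2 * n * w0 * d / c
unit = coeffs[N0 + 1:] / np.abs(coeffs[N0 + 1:])
print(np.allclose(unit, np.exp(1j * model_phase[N0 + 1:]), atol=1e-6))  # True
```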

To see this, note that TD-FLI uses [8] $p(t)=\delta (t)=\varphi (t)$, so the corresponding measurements are

$${m}_{\text{TD-FLI}}(t)\propto {e}^{-(t-2d/c)/\tau},\qquad t\ge 2d/c.$$

Now, because $\mathrm{log}({m}_{\text{TD-FLI}}(t))$ is linear in $t$, a direct fit [8] can be used to estimate $\tau $; hence, calibration is implicitly avoided by the choice of $p=\delta $.
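The log-linear fit can be sketched numerically; the amplitude and 1% noise level below are illustrative.

```python
import numpy as np

# TD-FLI baseline: with p = delta, the measurement is a pure exponential
# decay, so log m(t) is linear in t and a straight-line fit recovers tau.
rng = np.random.default_rng(1)
tau = 32e-9
t = np.linspace(0, 150e-9, 200)
m = np.exp(-t / tau) * (1 + 0.01 * rng.standard_normal(t.size))

slope, _ = np.polyfit(t, np.log(m), 1)   # log m(t) = log A - t/tau
tau_hat = -1.0 / slope
print(tau_hat * 1e9)                      # ~32 (ns)
```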

On the other hand, for our proposed TD-ToF method for FLI, the illumination/probing function is composed of ${N}_{0}$ multiplexed frequencies, which yield measurements

### 2. Experimental Verification of TD-ToF FLI

The setup for the TD-ToF FLI method is shown in Fig. 2(e). A 405 nm laser diode illuminates the scene. We use a precomputed, quantized $p$ [Eq. (6)] based on a 31-bit MLS described by the code

with $\mathrm{\Delta}=309.9\text{\hspace{0.17em}}\mathrm{ns}$ and ${T}_{s}=7.8120\text{\hspace{0.17em}}\mathrm{ps}$. The sample consists of CdSe–CdS quantum dots dissolved in hexane with polymethyl methacrylate (PMMA) and deposited onto a glass slide. This sample has a lifetime of $\tau =32\text{\hspace{0.17em}}\mathrm{ns}$, and it is located 1.05 m from the sensor. The reflected light is passed through a dielectric interference filter with a cut-off wavelength of 450 nm, leading to $\rho =0$ [Eq. (21)]. A $160\times 120$ pixel PMD 19k-S3 lock-in sensor with custom field-programmable gate array (FPGA) programming cross correlates the reflected signal to produce the measurements $\mathbf{m}$. The sensor has an 80 MHz modulation bandwidth and operates at up to 90 frames per second [29]. The total cost of our system is $1200.

With ${N}_{0}=15$, we compute

Per-pixel fluorescent sample measurements are shown in Fig. 3, which shows both the time-domain measurement [Fig. 3(a)] and the raw phase measurement [Fig. 3(b)]. We also show the fitted phase measurement in Fig. 3(b). We assign a confidence level to each pixel to estimate the signal-to-noise ratio (SNR). We use ToF measurements from 14 different pixels imaging the same scene [Fig. 2(e)]. The results from our computation are tabulated in Table 3. In addition to the estimated lifetimes and distances, we tabulate two relevant metrics.

- 1. First, the *mean squared error* (MSE), a measure of distortion, is defined as follows: let $\nu $ be the oracle estimate and ${\{{\tilde{\nu}}_{n}\}}_{n=0}^{N-1}$ be $N$ estimated values of $\nu $. The MSE is
$$\mathrm{MSE}=\frac{1}{N}\sum _{n=0}^{N-1}{|{\tilde{\nu}}_{n}-\nu |}^{2}.$$
We compute $\mathrm{log}(\sqrt{\mathrm{MSE}})$ for $\tilde{\tau}$, $\tilde{d}$, and $\angle \tilde{\widehat{h}}$ in Table 3, where the last term is due to the fitted measurements
$$\angle \tilde{\widehat{h}}(n{\omega}_{0})\stackrel{(20)}{=}-{\mathrm{tan}}^{-1}(n{\omega}_{0}\tilde{\tau})-2\frac{n{\omega}_{0}\tilde{d}}{c},$$
which are synthesized after estimating $\{\tilde{\tau},\tilde{d}\}$.
- 2. Second, we use the observed SNR to estimate measurement fidelity. Including additive Gaussian noise ${\epsilon}_{n}$, we have $\angle {\widehat{h}}_{\mathrm{obs}}(n{\omega}_{0})=\angle \widehat{h}(n{\omega}_{0})+{\epsilon}_{n}$. We define
$$\mathrm{SNR}=20(\mathrm{log}\Vert \angle \widehat{h}\Vert -\mathrm{log}\Vert \angle {\widehat{h}}_{\mathrm{obs}}-\angle \widehat{h}\Vert ),$$
measured in decibels, where ${\Vert \angle \widehat{h}\Vert}^{2}=\sum _{n=0}^{N-1}{|\angle \widehat{h}(n{\omega}_{0})|}^{2}$. In Table 3, we use $\angle \widehat{h}=\angle \tilde{\widehat{h}}$, the fitted phase function computed using the estimates $\{\tilde{\tau},\tilde{d}\}$.
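As a concrete illustration, both metrics can be computed as follows. The per-pixel estimates and phase samples are made-up toy numbers (not those of Table 3), and the logarithms are taken base 10, consistent with expressing the SNR in decibels.

```python
import numpy as np

# Toy computation of the two reporting metrics defined above.
def log_rmse(estimates, oracle):
    # log(sqrt(MSE)) with MSE = mean |estimate - oracle|^2
    mse = np.mean(np.abs(np.asarray(estimates) - oracle) ** 2)
    return np.log10(np.sqrt(mse))

def snr_db(phase_model, phase_obs):
    # SNR = 20*(log||angle(h)|| - log||angle(h_obs) - angle(h)||), in dB
    num = np.linalg.norm(phase_model)
    den = np.linalg.norm(np.asarray(phase_obs) - phase_model)
    return 20 * (np.log10(num) - np.log10(den))

tau_est = np.array([31.2, 31.5, 31.1, 31.4]) * 1e-9    # toy per-pixel lifetimes
print(log_rmse(tau_est, 32e-9))                        # distortion metric

phase = np.array([-0.5, -1.0, -1.4, -1.7])             # toy fitted phase samples
print(snr_db(phase, phase + 0.01 * np.array([1, -1, 1, -1])))
```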

Based on the estimates in Table 3, the expected lifetime is $\tilde{\tau}=31.3142\text{\hspace{0.17em}}\text{\hspace{0.17em}}\mathrm{ns}$, and the estimated expected distance is $\tilde{d}=1.0799\text{\hspace{0.17em}}\text{\hspace{0.17em}}\mathrm{m}$, both consistent with the ground truth.

#### B. FD-ToF FLI: Theory and Experiments

As described in Section 2.B, in the FD-ToF mode of operation, the ToF sensor probes the scene with an AMCW signal of the form $p(t)=1+{p}_{0}\,\mathrm{cos}(\omega t)$. Following Eq. (3), the reflected signal is

The reflected signal is cross correlated [Eq. (10)] at the ToF sensor to produce measurements

To estimate the phase and amplitude, we utilize the method from Section 2.B. Noting, however, that the scene transfer function is not constant, we express $z$ explicitly as a function of the modulation frequency ${\omega}_{0}$: $z=z({\omega}_{0})$. The amplitude and phase estimates are

Note in passing that multiple frequency measurements can be used to estimate multiple depths [16,20,21,31], for which

Based on the image formation model for the SRF in Eq. (13), we next show how multiple frequency measurements of the form ${\{z(k{\omega}_{0})\}}_{k=1}^{K}$ can instead be used to estimate the parameters of interest, $\{d,\tau \}$, in the context of FLI.

### 1. Experimental Verification of FD-ToF FLI

Using the same physical setup as that in Section 3.A.2, we move the sample to $d=2.5\text{\hspace{0.17em}}\mathrm{m}$, set ${\omega}_{0}/2\pi \equiv {f}_{0}=1\text{\hspace{0.17em}}\mathrm{MHz}$, and acquire the equispaced ToF measurements ${\{z(k{f}_{0})\}}_{k=1}^{K}$, $K=40$.

The amplitudes and phases of $z$ for $k=20$, 30, and 40 are shown in Fig. 4(a). The effects of fluorescence are clearly visible in both amplitude and phase. The dielectric filter eliminates all nonfluorescent emission, leaving a noisy background signal. The fluorescing quantum dot causes an increase in the measured phase; this increase at the sample location is due to the presence of the ${\theta}_{\tau}$ term in Eq. (20). In the context of the experimental setup described in Fig. 2(e), we mark the background pixel $({x}_{b},{y}_{b})$ as well as the fluorescent pixel $({x}_{f},{y}_{f})$ in Fig. 4(a). The average phase of the background pixel is $\angle {z}_{({x}_{b},{y}_{b})}(40{f}_{0})=4.1625\text{\hspace{0.17em}}\mathrm{rad}$. This amounts to a depth of 2.4843 m, consistent with the experimental setup, where $d=2.5\text{\hspace{0.17em}}\mathrm{m}$. On the other hand, the average phase at the location of the quantum dot is $\angle {z}_{({x}_{f},{y}_{f})}(40{f}_{0})=5.5822\text{\hspace{0.17em}}\mathrm{rad}$. This corresponds to an erroneous depth of 3.3316 m, a result of multipath interference [20].

By using ToF phase measurements [Eq. (34)] for 10 different pixel locations corresponding to $({x}_{f},{y}_{f})$, we solve the nonlinear least-squares problem:

As before in Section 3.A.2, we use the trust-region-based algorithm [30] with the least-absolute-residual criterion. As a result of the fitting, we estimate $\tilde{d}$ and $\tilde{\tau}$ for each pixel. The estimated distance and lifetime values, together with $\mathrm{log}(\sqrt{\mathrm{MSE}})$ for $\tilde{\tau}$, $\tilde{d}$, and $\angle \tilde{\widehat{h}}$, are tabulated for all the pixels in Table 4. The average lifetime and distance are estimated to be

Both the measured phase ${\{\angle z(k{f}_{0})\}}_{k=1}^{K=40}$ and the fit obtained by Eq. (35) for four pixels are plotted in Fig. 4(b).
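Such a per-pixel fit can be sketched with SciPy's trust-region solver, using a robust (soft-L1) loss as a stand-in for the least-absolute-residual criterion. The measured-phase model $\theta (f)={\mathrm{tan}}^{-1}(2\pi f\tau )+4\pi fd/c$, the noise level, and the frequency grid are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Fit {d, tau} to multi-frequency phase measurements (simulated here).
c = 3e8
d_true, tau_true = 2.5, 32e-9
f = np.arange(1, 41) * 1e6                  # 1 MHz steps up to 40 MHz

def phase_model(params, f):
    d, tau = params
    w = 2 * np.pi * f
    return np.arctan(w * tau) + 2 * w * d / c   # measured-phase convention

rng = np.random.default_rng(2)
phase_obs = phase_model([d_true, tau_true], f) + 0.01 * rng.standard_normal(f.size)

# Trust-region solver with a robust loss; x_scale balances the two
# very differently scaled unknowns (meters vs. nanoseconds).
res = least_squares(lambda p: phase_model(p, f) - phase_obs,
                    x0=[1.0, 10e-9], x_scale=[1.0, 1e-8],
                    loss="soft_l1", method="trf")
d_hat, tau_hat = res.x
print(d_hat, tau_hat * 1e9)                 # close to 2.5 m and 32 ns
```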

## 4. DISCUSSION

#### A. ToF Sensors for Microscopy

The method generalizes to microscopy. A salient feature of the technique is that it relies on a local, per-pixel calculation, so the lateral scale of the problem does not affect it. Thus, our proof-of-principle demonstration can be extended to microscopy, both wide-field and point-scanning, provided there is no pixel crosstalk. In fact, our current system is not aberration-corrected, and the resulting model mismatch makes reconstruction more challenging. A well-calibrated microscopy setup should alleviate this mismatch and improve results.

#### B. Nanosecond Range Lifetime Sensing

For biological imaging, lifetimes are of the order of nanoseconds. Recent work suggests that current ToF sensors (with bandwidths of tens of megahertz) are optimal for recovering lifetimes under 5 ns [32]; for example, a 3 ns lifetime is optimally estimated by a 30 MHz signal ([32], p. 378). Further, a similar ToF setup has estimated lifetimes of the order of 4 ns [12] (though it did not simultaneously estimate distance).

The important difference here is that the present approach simultaneously estimates lifetime and distance based on the same measurement. Numerical simulations in Fig. 5 indicate that our method is suitable for recovering a 4 ns lifetime. We simulate a system bandwidth of 40 MHz (half the experimental bandwidth [29]). Numerically, we compare the estimation accuracy of ${\tau}_{1}=4\text{\hspace{0.17em}}\text{\hspace{0.17em}}\mathrm{ns}$ and ${\tau}_{2}=32\text{\hspace{0.17em}}\text{\hspace{0.17em}}\mathrm{ns}$ with $d=2.5\text{\hspace{0.17em}}\text{\hspace{0.17em}}\mathrm{m}$. We do so by studying the upper bounds on the $\sqrt{\mathrm{MSE}}$ linked with estimation of the set of parameters of interest, $\{{\tau}_{1},d\}$ and $\{{\tau}_{2},d\}$. For a fixed ToF sensor bandwidth, we vary the number of measurements [Eq. (33)] by sampling with equispaced frequencies from 0 to 40 MHz. We use four different step sizes, ${f}_{0}^{(k)}$ (corresponding to ${N}^{(k)}$ samples), specified by

${f}_{0}^{(k)}$ is measured in megahertz. For example, the step size in experiments in Section 3.B.1 was ${f}_{0}=1\text{\hspace{0.17em}}\text{\hspace{0.17em}}\mathrm{MHz}$. For each ${f}_{0}^{(k)}$, we generate the measurement vector,

In Fig. 5(b), we plot ${\sqrt{\mathrm{MSE}}}_{\tau}$ on a log scale as a function of the SNR. Note that ${\mathrm{MSE}}_{\tau}$ [Eq. (28)] is averaged over all 2000 trials. Above 15 dB, we note a consistent linear relationship of the form

As the number of measurements $N$ increases, $\mathrm{log}({\sqrt{\mathrm{MSE}}}_{\tau})$ drops consistently. This observation holds throughout all of our experiments. In fact, we note that ${\sqrt{\mathrm{MSE}}}_{{\tau}_{1}}<{\sqrt{\mathrm{MSE}}}_{{\tau}_{2}}$, implying that the 4 ns sample may be estimated with higher accuracy than the 32 ns sample. As noted in Table 4, the operational SNR of our system is around 45 dB, implying that the lifetime parameter can be estimated with sufficient accuracy, as plotted in Fig. 5(b).

Thus, although our demonstration here operates at a longer time scale (Fig. 1) than is typical in practice, this is not a fundamental limitation of the method: we compensate for the lower time resolution with a computationally different method of inversion. Indeed, appropriate modeling and prior information lend themselves naturally to ToF sensing and offer a path toward superresolution [21,33].

Of course, calculation of the theoretical lower bounds (on lifetimes and distances) requires calculation of the Cramér–Rao bound. Though beyond the scope of the present work, this limit is ultimately dictated by noise. For instance, the Cramér–Rao lower bound for distance estimation is derived in [21] and obeys a version of the law in Eq. (36). For typical systems, we expect from Fig. 5 that the method is suitable for current application needs.

#### C. Experimental and Computational Precision

Current optical ToF technology allows for range estimation with millimeter precision. The variation in the estimated lifetime is due to sample inhomogeneity, nonuniform lighting [34], and potential model mismatch from lens aberrations.

The measurement precision here is indicated in Fig. 6, which shows cross sections of the four phase plots in Fig. 4(a). The phase contribution from the first 50 pixels is due to the background. We mark the average phase value on the $y$ axis. Using ${\theta}_{d}(f)=4\pi fd/c$, we estimate the distance $d$ given ${\theta}_{d}(f)$ and $f=\{10,20,30,40\}\text{\hspace{0.17em}}\mathrm{MHz}$. At each modulation frequency, the estimated distance is $d=\{2.56,2.52,2.47,2.48\}\text{\hspace{0.17em}}\mathrm{m}$, whereas the actual distance is 2.5 m. The variability in the distance estimates arises mainly because, across modulation frequencies, the phase-frequency relation ${\theta}_{d}(f)=4\pi fd/c$ may not be strictly linear due to distortions.
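The phase-to-distance conversion is easy to check numerically; with $c=3\times {10}^{8}\text{\hspace{0.17em}}\mathrm{m/s}$ it reproduces the background-pixel example of Section 3.B.1 (4.1625 rad at 40 MHz).

```python
import math

# theta_d(f) = 4*pi*f*d/c, hence d = c*theta/(4*pi*f)
def depth_from_phase(theta, f, c=3e8):
    return c * theta / (4 * math.pi * f)

print(round(depth_from_phase(4.1625, 40e6), 4))   # 2.4843
```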

## 5. CONCLUSIONS

In conclusion, we have demonstrated an FLI alternative based on cost-effective ToF sensors. We simultaneously estimate the lifetime of the sample and its distance from the sensor. Unlike existing methods [8,12], our approach is calibration-free, requires no prior information on the experimental path length, and thus allows for faster acquisition. Furthermore, our technique is blind in that we do not assume knowledge of the illumination waveform. Overall, our system shows promise for two-dimensional imaging and can be generalized to volumes [35]. Because the technique is modular, it can be combined with other computational imaging techniques to create a new platform for wide-field FLI.

The method offers new possibilities for open questions. The case of multiple-lifetime imaging [10,13] is interesting and has yet to be explored in the context of ToF sensors, both theoretically and experimentally. Comparisons with other FLI techniques [36] and fundamental resolution limits for simultaneous estimation of lifetime and depth information will allow for a better understanding of the applicability of ToF sensors to bio-imaging tasks such as tumor detection [2,3] and fluorescence tomography [4,5].

## Funding

Massachusetts Institute of Technology Media Laboratory Consortia (2746038); U.S. Army Research Laboratory and U.S. Army Research Office (W911NF-13-D-0001).

## Acknowledgment

We thank R. Whyte and A. Das for discussions about the experimental setup at the initial stages of this collaboration.

## REFERENCES

**1. **H. He, B. K. Nunnally, L.-C. Li, and L. B. McGown, “On-the-fly fluorescence lifetime detection of dye-labeled DNA primers for multiplex analysis,” Anal. Chem. **70**, 3413–3418 (1998). [CrossRef]

**2. **Y. Sun, J. Phipps, D. S. Elson, H. Stoy, S. Tinling, J. Meier, B. Poirier, F. S. Chuang, D. G. Farwell, and L. Marcu, “Fluorescence lifetime imaging microscopy: in vivo application to diagnosis of oral carcinoma,” Opt. Lett. **34**, 2081–2083 (2009). [CrossRef]

**3. **R. Cicchi, D. Massi, S. Sestini, P. Carli, V. De Giorgi, T. Lotti, and F. Pavone, “Multidimensional non-linear laser imaging of basal cell carcinoma,” Opt. Express **15**, 10135–10148 (2007). [CrossRef]

**4. **A. T. Kumar, S. B. Raymond, B. J. Bacskai, and D. A. Boas, “Comparison of frequency-domain and time-domain fluorescence lifetime tomography,” Opt. Lett. **33**, 470–472 (2008). [CrossRef]

**5. **A. T. Kumar, S. B. Raymond, A. K. Dunn, B. J. Bacskai, and D. A. Boas, “A time domain fluorescence tomography system for small animal imaging,” IEEE Trans. Med. Imaging **27**, 1152–1163 (2008). [CrossRef]

**6. **J. Rao, A. Dragulescu-Andrasi, and H. Yao, “Fluorescence imaging in vivo: recent advances,” Curr. Opin. Biotechnol. **18**, 17–25 (2007). [CrossRef]

**7. **B. J. Bacskai, J. Skoch, G. A. Hickey, R. Allen, and B. T. Hyman, “Fluorescence resonance energy transfer determinations using multiphoton fluorescence lifetime imaging microscopy to characterize amyloid-beta plaques,” J. Biomed. Opt. **8**, 368–375 (2003). [CrossRef]

**8. **J. Lakowicz, *Principles of Fluorescence Microscopy* (Springer, 1982).

**9. **D. R. Yankelevich, D. Ma, J. Liu, Y. Sun, Y. Sun, J. Bec, D. S. Elson, and L. Marcu, “Design and evaluation of a device for fast multispectral time-resolved fluorescence spectroscopy and imaging,” Rev. Sci. Instrum. **85**, 034303 (2014). [CrossRef]

**10. **E. Gratton, M. Limkeman, J. R. Lakowicz, B. P. Maliwal, H. Cherek, and G. Laczko, “Resolution of mixtures of fluorophores using variable-frequency phase and modulation data,” Biophys. J. **46**, 479–486 (1984). [CrossRef]

**11. **G.-J. Kremers, E. B. Van Munster, J. Goedhart, and T. W. Gadella, “Quantitative lifetime unmixing of multiexponentially decaying fluorophores using single-frequency fluorescence lifetime imaging microscopy,” Biophys. J. **95**, 378–389 (2008). [CrossRef]

**12. **A. Esposito, T. Oggier, H. Gerritsen, F. Lustenberger, and F. Wouters, “All-solid-state lock-in imaging for wide-field fluorescence lifetime sensing,” Opt. Express **13**, 9812–9821 (2005). [CrossRef]

**13. **A. Squire, P. J. Verveer, and P. I. H. Bastiaens, “Multiple frequency fluorescence lifetime imaging microscopy,” J. Microsc. **197**, 136–149 (2000). [CrossRef]

**14. **J. C. K. Chan, E. D. Diebold, B. W. Buckley, S. Mao, N. Akbari, and B. Jalali, “Digitally synthesized beat frequency-multiplexed fluorescence lifetime spectroscopy,” Biomed. Opt. Express **5**, 4428–4436 (2014). [CrossRef]

**15. **Q. S. Hanley, V. Subramaniam, D. J. Arndt-Jovin, and T. M. Jovin, “Fluorescence lifetime imaging: multi-point calibration, minimum resolvable differences, and artifact suppression,” Cytometry **43**, 248–260 (2001). [CrossRef]

**16. **A. Bhandari, M. Feigin, S. Izadi, C. Rhemann, M. Schmidt, and R. Raskar, “Resolving multipath interference in Kinect: An inverse problem approach,” in *IEEE Sensors* (IEEE, 2014), pp. 614–617.

**17. **F. Heide, L. Xiao, W. Heidrich, and M. B. Hullin, “Diffuse mirrors: 3D reconstruction from diffuse indirect illumination using inexpensive time-of-flight sensors,” in *IEEE Conference on Computer Vision and Pattern Recognition (CVPR)* (2014), pp. 3222–3229.

**18. **A. Kadambi, A. Bhandari, R. Whyte, A. Dorrington, and R. Raskar, “Demultiplexing illumination via low cost sensing and nanosecond coding,” in *IEEE International Conference on Computational Photography (ICCP)* (2014).

**19. **H. Qiao, J. Lin, Y. Liu, M. B. Hullin, and Q. Dai, “Resolving transient time profile in ToF imaging via log-sum sparse regularization,” Opt. Lett. **40**, 918–921 (2015). [CrossRef]

**20. **A. Bhandari, A. Kadambi, R. Whyte, C. Barsi, M. Feigin, A. Dorrington, and R. Raskar, “Resolving multipath interference in time-of-flight imaging via modulation frequency diversity and sparse regularization,” Opt. Lett. **39**, 1705–1708 (2014). [CrossRef]

**21. **A. Bhandari, A. Kadambi, and R. Raskar, “Sparse linear operator identification without sparse regularization? Applications to mixed pixel problem in time-of-flight/range imaging,” in *IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)* (2014), pp. 365–369.

**22. **A. Kadambi, R. Whyte, A. Bhandari, L. Streeter, C. Barsi, A. Dorrington, and R. Raskar, “Coded time of flight cameras: sparse deconvolution to address multipath interference and recover time profiles,” ACM Trans. Graph. **32**, 167 (2013). [CrossRef]

**23. **F. Heide, M. B. Hullin, J. Gregson, and W. Heidrich, “Low-budget transient imaging using photonic mixer devices,” ACM Trans. Graph. **32**, 1 (2013).

**24. **F. Heide, L. Xiao, A. Kolb, M. B. Hullin, and W. Heidrich, “Imaging in scattering media using correlation image sensors and sparse convolutional coding,” Opt. Express **22**, 26338–26350 (2014). [CrossRef]

**25. **S. Foix, G. Alenya, and C. Torras, “Lock-in time-of-flight (ToF) cameras: a survey,” IEEE Sens. J. **11**, 1917–1926 (2011). [CrossRef]

**26. **S. Y. Kung, *Kernel Methods and Machine Learning* (Cambridge University, 2014).

**27. **S. W. Golomb and G. Gong, *Signal Design for Good Correlation: For Wireless Communication, Cryptography, and Radar* (Cambridge University, 2005).

**28. **I. Yamaguchi and T. Zhang, “Phase-shifting digital holography,” Opt. Lett. **22**, 1268–1270 (1997). [CrossRef]

**29. **PMD Technologies, “pmd PhotonICs 19k–S3: Specs and reference design,” (2014).

**30. **T. F. Coleman and Y. Li, “An interior trust region approach for nonlinear minimization subject to bounds,” SIAM J. Optim. **6**, 418–445 (1996). [CrossRef]

**31. **A. Bhandari, A. Bourquard, S. Izadi, and R. Raskar, “Blind transmitted and reflected image separation using depth diversity and time-of-flight sensors,” in *Computational Optical Sensing and Imaging* (Optical Society of America, 2015), paper CT4F-2.

**32. **J. Philip and K. Carlsson, “Theoretical investigation of the signal-to-noise ratio in fluorescence lifetime imaging,” J. Opt. Soc. Am. A **20**, 368–379 (2003). [CrossRef]

**33. **S. Schuon, C. Theobalt, J. Davis, and S. Thrun, “Lidarboost: Depth superresolution for ToF 3D shape scanning,” in *IEEE Conference on Computer Vision and Pattern Recognition (CVPR)* (2009), pp. 343–350.

**34. **D. Lefloch, R. Nair, F. Lenzen, H. Schäfer, L. Streeter, M. J. Cree, R. Koch, and A. Kolb, “Technical foundation and calibration methods for time-of-flight cameras,” in *Time-of-Flight and Depth Imaging: Sensors, Algorithms, and Applications* (Springer, 2013), pp. 3–24.

**35. **G. Satat, B. Heshmat, C. Barsi, D. Raviv, O. Chen, M. G. Bawendi, and R. Raskar, “Locating and classifying fluorescent tags behind turbid layers using time-resolved inversion,” Nat. Commun. **6**, 6796 (2015). [CrossRef]

**36. **M. A. Digman, V. R. Caiolfa, M. Zamai, and E. Gratton, “The phasor approach to fluorescence lifetime imaging analysis,” Biophys. J. **94**, L14–L16 (2008). [CrossRef]