Abstract

The ability to see around corners, i.e., recover details of a hidden scene from its reflections in the surrounding environment, is of considerable interest in a wide range of applications. However, the diffuse nature of light reflected from typical surfaces leads to mixing of spatial information in the collected light, precluding useful scene reconstruction. Here, we employ a computational imaging technique that opportunistically exploits the presence of occluding objects, which obstruct probe-light propagation in the hidden scene, to undo the mixing and greatly improve scene recovery. Importantly, our technique obviates the need for the ultrafast time-of-flight measurements employed by most previous approaches to hidden-scene imaging. Moreover, it does so in a photon-efficient manner (i.e., it only requires a small number of photon detections) based on an accurate forward model and a computational algorithm that, together, respect the physics of three-bounce light propagation and single-photon detection. Using our methodology, we demonstrate reconstruction of hidden-surface reflectivity patterns in a meter-scale environment from non-time-resolved measurements. Ultimately, our technique represents an instance of a rich and promising new imaging modality with important potential implications for imaging science.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

In recent years, remarkable advances have been achieved in computational imaging, image processing and computer vision [1–4]. Whereas conventional imaging involves direct line-of-sight transport from a light source to a scene, and from the scene back to a camera sensor, the problem of imaging scenes that are hidden from the camera’s direct line of sight, referred to as seeing around corners or non-line-of-sight (NLoS) imaging, has attracted growing interest. Indeed, the ability to reconstruct hidden scenes has the potential to be transformative in important and diverse applications, including, e.g., medicine, transportation, manufacturing, scientific imaging, public safety, and security.

Techniques for NLoS imaging that have been recently demonstrated include time-gated viewing from specular reflections [5–8], wavefront shaping [9,10], and transient imaging, in which time-of-flight (ToF) measurements are collected [11–18]. ToF active imaging using short-duration laser pulses (the most commonly used approach) provides only indirect access to scene information, through detection of photons that have been diffusely reflected by intervening surfaces, which mixes the spatial information they carry. Such systems have used picosecond-resolution ToF measurements—as obtained from a streak camera [12,13] or a single-photon avalanche diode (SPAD) detector [16–18]—to recover hidden scenes. However, collecting such measurements involves complicated and costly apparatus [18]. Klein et al. have reported tracking NLoS objects using intensity images [19]; however, their tracking problem is parametric in nature, allowing them to retrieve object translation and rotation only in the case of known objects. In contrast, our focus is on a non-parametric setting, with the goal of retrieving the unknown reflectivity pattern on a hidden surface.

Recently, in [20], we proposed a new NLoS imaging framework that opportunistically exploits the presence of opaque occluders in the light propagation path within the hidden space to distinguish light emanating from different parts of the hidden scene (see Visualization 1). This framework was shown to recover spatial information otherwise destroyed by diffuse reflection, without reliance on ultrafast ToF measurements. The approach is reminiscent of pin-speck (or anti-pinhole) imaging [21,22], in which an occluder in the scene serves as a de facto lens that facilitates imaging. The focus of [20] was a theoretical study of the framework. The model developed there assumes additive, signal-independent Gaussian noise; hence the reconstruction algorithm and the preliminary experiment reported in [20] are tailored to a Gaussian-likelihood method. This Gaussian-noise assumption, however, does not adequately represent shot-noise-limited operation, which prevails in the low-photon-count regime.

In this paper, we extend the applicability of occlusion-based NLoS imaging to operation in that low-photon-count regime. We experimentally demonstrate an imaging system with substantially higher photon efficiency than that reported in [20], performance that is crucial for fast and low-power NLoS imaging. To do so, we develop an accurate forward model and a photon-efficient computational algorithm based on a binomial-likelihood method that, together, respect the physics of three-bounce light propagation and SPAD-based photodetection. As a result, we achieve a 16× speedup in the data acquisition process, because information from 16× fewer photon detections than employed in [20] suffices to produce images of equal quality. Moreover, unlike [20], we report full details of our experiments that, in addition to the photon-efficiency demonstration, include investigations of issues—such as the effects of occluder size and the algorithm’s regularization parameter on scene reconstruction—that were only studied theoretically in [20].

2. Imaging scenario

Our system configuration is illustrated in Fig. 1(a) and a top view of the experimental setup is illustrated in Fig. 2. The objective is to reconstruct the unknown reflectivity pattern on the hidden wall. The visible wall is illuminated by a repetitively-pulsed laser that raster scans an m × m grid. The photons detected from illumination of a particular scan point have undergone three bounces: first, reflection off the visible wall in the direction of the hidden wall; second, reflection off the hidden wall in the direction of the visible wall, where the reflection is multiplicatively scaled by the reflectivity pattern of the hidden wall we seek to recover; and third, reflection off the visible wall in the direction of a SPAD. As shown in Fig. 2(a), the SPAD’s field of view covers the left side of the visible wall, to avoid the direct first bounce and to detect as many third-bounce photons as possible. We use a single-pixel SPAD instead of a conventional charge-coupled device (CCD) camera because of the SPAD’s single-photon sensitivity. This is necessary because the returned pulse energy after the three bounces is heavily attenuated (∼130–140 dB in our room-scale experiment), severely limiting the number of detected photons. Thus, the SPAD enables efficient NLoS imaging. We remark that although a SPAD is capable of providing time-stamped measurements, we discard the SPAD’s time-resolved information by integrating detections over a time-gating window to collect just an m × m matrix of the raw photon counts obtained from illumination of each laser grid point. To further clarify, we emphasize that a SPAD is not strictly necessary for our imaging method, as we show next that reconstruction is possible when we throw away the detected third-bounce photons’ time signatures. Although not demonstrated here, alternative high-sensitivity sensors with no or poor timing resolution—such as an intensified CCD or electron-multiplying CCD—can also be used in our experiment. We will investigate such modifications in future work.


Fig. 1 (a) Experimental configuration. The goal is to reconstruct the reflectivity pattern on the hidden wall. A repetitively-pulsed laser source raster scans a diffuse (nearly Lambertian) visible wall. Photons striking the visible wall reflect toward the hidden wall, reflect at the hidden wall back toward the visible wall, and finally reflect at the visible wall toward the single-photon avalanche diode (SPAD), whose optics are configured to detect backscattered photons from a large patch on the visible wall. The counts are recorded by a single-photon counting module and further computer processed. When present, an occluder (circular black patch) obstructs some light-propagation paths from the visible wall to the hidden wall (casting a subtle shadow), and from the hidden wall to the visible wall. (b) Raw photon counts in the absence of an occluder. (c) Raw photon counts in the presence of the occluder. (d) Reconstructed reflectivity from the counts in (b). (e) Reconstructed reflectivity from the counts in (c).



Fig. 2 Top view of experimental setup and a three-bounce light trajectory of the form Λ → ij → x → c → Ω. The laser (Λ) illuminates the visible wall (ij) and is diffusely reflected (first bounce) toward the hidden wall (x), where it reflects (second bounce) back toward the visible wall. The third-bounce reflection at the visible wall (c) returns light in the direction of the detector (Ω). A circular occluder is placed between the visible and hidden walls, and partially obstructs light propagating between the visible and hidden walls.


We performed this experiment twice, first with no obstruction between the visible and the hidden walls, and then with a black circular occluder inserted between those walls to block some of the light propagating from the visible wall toward the hidden wall, and some of the light propagating from the hidden wall back toward the visible wall, as illustrated in Fig. 1(a). The corresponding matrices of raw photon counts are shown in Figs. 1(b) and 1(c). We derive an accurate forward model and solve the resulting inverse problem using a photon-efficient reconstruction algorithm that is tailored to the low-photon-count regime associated with three-bounce propagation (see below). Figs. 1(d) and 1(e) show that reconstruction of the hidden-wall reflectivity pattern failed when photon counts were collected without an occluder being present, but succeeded when they were collected in the presence of the occluder. In Figs. 3(a)–3(c), we present experimental results in which different patterns are placed on the hidden wall. These results demonstrate that obstructions in the light propagation path enable imaging from non-time-resolved photon counts. Indeed, as will be explained, occluders do so by increasing the informativeness of measurements made in their presence. Note that we have assumed knowledge of the occluder’s location. This information is easily obtained if the occluder is visible from the detector’s vantage point. Moreover, we have initial indication that location information for the occluder can be gleaned from raw-count data (see below).


Fig. 3 Role of the occluder’s shadow in NLoS imaging. The red-dashed square in the ground-truth image indicates the hidden-wall area that is scanned by the occluder’s shadow as the laser raster scans the visible wall. The blue-dashed circle in the ground-truth image indicates the approximate occluder-shadow area for one ij (see Visualization 1). (a) The man-shaped pattern, placed in the upper-left quadrant of the hidden wall, is completely scanned by the occluder’s shadow pattern as the laser scans the visible wall; with the aid of the occluder, the hidden pattern is successfully reconstructed from the raw counts. (b) The T-shaped pattern, placed in the upper-right quadrant of the hidden wall that is outside of the shadow area, yields raw photon counts that fail to reconstruct the pattern owing to the occluder’s shadow not scanning that quadrant. (c) Both the man-shaped pattern and the T-shaped pattern are placed on the hidden wall, with only the man-shaped pattern being scanned by the occluder’s shadow, so the man-shaped pattern is reconstructed successfully while the T-shaped pattern is not.


3. Forward model

In this section, we present a ray-optics light propagation model (see details in Appendix A) that relates the unknown reflectivity on a Lambertian hidden wall to the raw photon counts for specified experimental parameters. The model accounts for: (i) third-bounce reflections involving the hidden wall; (ii) occlusions in the scene; (iii) a low photon-count operating regime; and (iv) wide field-of-view detection. For the derivation we assume that: the geometries of the hidden wall and occluder are known; the occluders are opaque (nonreflecting and nontransmitting); the visible wall is Lambertian with known reflectivity; and the background illumination reaching the detector is known.

The m × m illumination grid on the visible wall is indexed with (i, j). The hidden wall is discretized to n × n pixels indexed with (k, l). Let F be the hidden-wall’s reflectivity matrix, with entry 0 ≤ Fkl ≤ 1 being the reflectivity value of the (k, l)th hidden-wall pixel. We use Yij to denote the average number of photons arriving at the detector from single-pulse illumination at grid point (i, j), and Y to denote the m × m matrix whose ijth entry is Yij. In the absence of background light in three-bounce NLoS imaging, Yij is linearly related to F as follows (Appendix A):

$$Y_{ij} = K_p \sum_{k,l} A_{kl}^{(ij)} F_{kl}, \tag{1}$$
where Kp is the average number of photons per transmitted laser pulse, and Akl(ij), for fixed i, j, is the (k, l)th entry of an n × n matrix A(ij) that is determined by the physics of light propagation and the geometry of the surfaces involved. For 1 ≤ i, j ≤ m, Eq. (1) defines a linear system of m2 equations in the n2 unknowns F that we wish to retrieve.
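As an illustration, the discrete forward model of Eq. (1) can be sketched in a few lines of NumPy. The array shapes, the random stand-in for the propagation matrices A(ij), and all numerical values below are illustrative assumptions, not our calibrated experimental parameters:

```python
import numpy as np

def forward_model(A, F, Kp):
    """Evaluate Eq. (1): Y_ij = Kp * sum_{k,l} A_kl^(ij) F_kl.

    A  : (m, m, n, n) array; A[i, j] is the n-by-n matrix A^(ij)
    F  : (n, n) hidden-wall reflectivity, entries in [0, 1]
    Kp : average number of photons per transmitted laser pulse
    """
    m = A.shape[0]
    n = F.shape[0]
    # Flatten to an (m^2 x n^2) linear system: y = Kp * A_mat @ f
    A_mat = A.reshape(m * m, n * n)
    y = Kp * A_mat @ F.reshape(n * n)
    return y.reshape(m, m)

# Toy example with hypothetical shapes and magnitudes
m, n, Kp = 4, 3, 1e5
rng = np.random.default_rng(0)
A = rng.uniform(size=(m, m, n, n)) * 1e-9   # stand-in propagation kernels
F = rng.uniform(size=(n, n))
Y = forward_model(A, F, Kp)
```

Flattening to an (m² × n²) matrix makes explicit that each laser grid point contributes one linear projection of the unknown reflectivity.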

In practice, Y is not directly available. Even were it available, the robustness of estimating F from Y would depend on the matrices A(ij). Indeed, high-fidelity inversion of Eq. (1) with a finite-precision calculation requires that the A(ij) vary substantially with (i, j), i.e., that each laser illumination point retrieves a new informative projection of the unknowns. When the space between the visible and hidden walls is free of obstructions, however, the A(ij) matrices vary only slightly and smoothly across different grid points (i, j) [Fig. 1(b)]. Hence inverting Eq. (1) results in poor reconstruction of the unknown reflectivity because the inversion is ill conditioned [Fig. 1(d)].

In contrast, when an occluder is present in the space between the visible and hidden walls, the matrices A(ij) in Eq. (1) become much more diverse [Fig. 1(c)], enabling much better imaging of the hidden wall [Fig. 1(e)]. Intuitively, the occluder partially obstructs light propagation in the hidden space, precluding Y contributions from some hidden-scene patches, thus making some A(ij) entries vanish. Moreover, different laser positions (i, j) and (i′, j′) may be blocked from illuminating different portions of the hidden wall (see Visualization 1). Consequently, some A(ij) entries that are zeros correspond to A(i′j′) entries that are nonzero, and vice-versa, yielding the measurement diversity needed for a much better conditioned inversion of Eq. (1).
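The conditioning argument can be illustrated numerically. In the toy sketch below (our construction, not the paper's calibrated geometry), the unoccluded system has nearly identical rows, while random per-row shadow masks stand in for the geometry-determined zero patterns an occluder produces:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 10, 8

# Without an occluder: every laser position yields nearly the same projection,
# so the rows of the flattened (m^2 x n^2) system are almost identical.
base = rng.uniform(size=n * n) * 1e-9
A_free = np.tile(base, (m * m, 1)) * (1 + 1e-3 * rng.standard_normal((m * m, n * n)))

# With an occluder: each laser position is blocked from a different subset of
# hidden-wall pixels (random masks as a stand-in for the shadow geometry).
A_occ = A_free * (rng.random((m * m, n * n)) > 0.25)

print(np.linalg.cond(A_free), np.linalg.cond(A_occ))
```

The occluded system's condition number is orders of magnitude smaller, reflecting the measurement diversity introduced by the varying zero patterns.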

A photon-number-resolving SPAD [23] will produce a Poisson-distributed number of photon counts in response to an illumination pulse [24]. Currently available SPADs, however, are not photon-number resolving: after detecting one photon they suffer a dead time [23, 24] whose duration is longer, in our experiment, than the duration of light returned in a single illumination period. Furthermore, after three bounces, the probability of detecting a photon from a single pulse is very low. So, in this low-flux regime, the probability that the SPAD does not detect a photon from single-pulse illumination of the ijth grid point is:

$$P_0^{(ij)}(F) = \exp\!\left[-\eta\,(Y_{ij} + B_{ij})\right], \tag{2}$$
where η is the SPAD’s quantum efficiency, and Bij is the background contribution to the light illuminating the SPAD. Defining Rij to be the number of photons detected from illuminating that grid point with a sequence of N laser pulses, it follows that Rij has a binomial distribution with success probability 1 − P0(ij)(F), i.e., [25]
$$\Pr(R_{ij}; F) = \binom{N}{R_{ij}} \left[1 - P_0^{(ij)}(F)\right]^{R_{ij}} \left[P_0^{(ij)}(F)\right]^{N - R_{ij}}. \tag{3}$$
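This measurement model is simple to simulate. The sketch below draws gated SPAD counts per Eqs. (2)–(3); the flux levels are hypothetical, chosen only so that the mean count lands near the few-hundred-photons-per-pixel regime of our experiments:

```python
import numpy as np

def simulate_counts(Y, B, eta, N, rng):
    """Simulate gated SPAD counts under the binomial model of Eqs. (2)-(3).

    Y   : (m, m) mean three-bounce photon arrivals per pulse
    B   : (m, m) mean background arrivals per pulse
    eta : detector quantum efficiency (~0.35 at 640 nm for our SPAD)
    N   : number of laser pulses per raster-scan point
    """
    P0 = np.exp(-eta * (Y + B))        # Eq. (2): no detection from one pulse
    return rng.binomial(N, 1.0 - P0)   # Eq. (3): counts over N pulses

rng = np.random.default_rng(2)
Y = np.full((4, 4), 1e-3)              # hypothetical low-flux arrival rates
B = np.full((4, 4), 1e-4)
R = simulate_counts(Y, B, 0.35, 712_000, rng)
# In the low-flux limit the mean count approaches N * eta * (Y + B)
```

Note that in this low-flux regime the binomial success probability per pulse is ∼10⁻⁴, which is why many pulses per scan point are needed.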

4. Reconstruction algorithm

To reconstruct the hidden wall’s reflectivity matrix F from the m × m matrix, R, of photon counts, we make use of the forward model from Eqs. (1)–(3). In particular, we seek a matrix that maximizes the likelihood ℒ(R; F) ≡ ∏i,j Pr(Rij; F) of F being the true reflectivity matrix, given that R is the observed photon-count matrix. Significantly, the negative log-likelihood function can be shown to be convex in F, and is thus easy to minimize. The optimization program is still convex—and still easily solved—after we impose reflectivity’s nonnegativity constraint Fkl ≥ 0, and an additive penalty pen(F) chosen to ensure spatial correlation between the reflectivity values of neighboring pixels while allowing abrupt reflectivity changes at the boundaries between multipixel regions. In summary, we reconstruct the reflectivity matrix as the solution to the convex optimization program

$$\hat{F} = \mathop{\arg\min}_{F:\, F_{kl} \ge 0} \left\{ -\log[\mathcal{L}(R; F)] + \lambda\, \mathrm{pen}(F) \right\}, \tag{4}$$
for an appropriate choice of the regularization parameter λ. We used the total-variation (TV) semi-norm penalty function [26] and a specialized solver [27] to obtain F̂ from Eq. (4).

The regularization parameter λ determines the balance between the two optimization targets: decreasing the negative log-likelihood and promoting locally-smooth scenes with sharp boundaries. In Fig. 4, we demonstrate the effect of varying λ on the reconstructed reflectivity. In practice, we choose the regularization parameter to obtain reasonably smooth images that do not seem overly regularized.
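To make the structure of Eq. (4) concrete, the sketch below implements a projected-(sub)gradient iteration on the binomial negative log-likelihood plus an anisotropic TV subgradient. This is only an illustrative substitute for the specialized solver [27] used in our experiments; the step size, iteration count, and anisotropic TV form are simplifying assumptions:

```python
import numpy as np

def neg_log_lik_grad(f, A_mat, R, Kp, eta, B, N):
    """Gradient w.r.t. f of the binomial negative log-likelihood (Eqs. (2)-(3)).

    A_mat : (m^2, n^2) flattened propagation matrices; R : (m^2,) flattened counts.
    """
    Y = Kp * A_mat @ f + B                 # per-pulse mean arrivals
    P0 = np.exp(-eta * Y)
    gY = eta * ((N - R) - R * P0 / (1.0 - P0 + 1e-12))
    return Kp * A_mat.T @ gY

def tv_grad(F):
    """Subgradient of an anisotropic TV penalty (a simple stand-in for [26])."""
    g = np.zeros_like(F)
    dx = np.sign(np.diff(F, axis=0)); dy = np.sign(np.diff(F, axis=1))
    g[:-1, :] += dx; g[1:, :] -= dx
    g[:, :-1] += dy; g[:, 1:] -= dy
    return g

def reconstruct(A_mat, R, Kp, eta, B, N, n, lam=0.75, step=1e-4, iters=500):
    """Projected-gradient sketch of Eq. (4); `step` must be tuned to problem scale."""
    f = np.full(n * n, 0.5)
    for _ in range(iters):
        g = neg_log_lik_grad(f, A_mat, R, Kp, eta, B, N)
        g += lam * tv_grad(f.reshape(n, n)).ravel()
        f = np.clip(f - step * g, 0.0, 1.0)   # enforce 0 <= F_kl <= 1
    return f.reshape(n, n)
```

Because the negative log-likelihood is convex in F and the clipping step enforces the box constraint, the iteration cannot leave the feasible set, although a first-order scheme like this converges far more slowly than the solver of [27].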


Fig. 4 Reconstruction results with different values of the regularization parameter λ. We demonstrate reconstruction according to Eq. (4) with varying values for the regularization parameter λ, as indicated on the bottom of the figures. Higher λ values promote reconstructions with larger regions of near-uniform reflectivity values, whereas smaller λ values produce more detailed but noisier images. In our reconstructions, we chose a λ value that does not severely distort the image; here the preferred value is λ = 0.75.


5. Experiment

Figure 2 depicts the ∼1-m scale imaging scene in our experiment. For illumination, we used a repetitively-pulsed 640-nm laser (Picoquant LDH-640B), with sub-ns pulses, 40 MHz repetition rate, and an average power of ∼8 mW. A two-axis galvo (Thorlabs GVS012) was utilized to raster scan the laser’s output over a grid of points (first bounce in Fig. 2) on a nearly-Lambertian visible wall (white poster board, see its characterization in Appendix B). Light reflected from the visible wall propagates to the hidden wall, where some is reflected back (second bounce) to the visible wall. Finally, some of the second-bounce light that is reflected from the visible wall (third bounce) is collected by a SPAD detector (MPD-PDM with quantum efficiency ∼0.35 at 640 nm). We placed an interference filter (Andover) centered at 640 nm with a 2 nm bandwidth in front of the SPAD to suppress background light. The occluder is a nonreflecting black circular patch. In the experiment, the two side walls inside the room were covered with black curtains so that they too are nonreflecting. Note that our forward model can easily take the side walls into consideration were they reflecting. During measurements, we turned off all ambient room light to minimize the background level.

The focus of the experiment is to utilize the collected third-bounce light to reconstruct the hidden-wall’s reflectivity pattern without use of ToF information. Therefore, it is important to avoid detecting the first-bounce light, which will be much stronger than the third-bounce light. We took two initial steps to minimize first-bounce photon detections. First, as shown in Fig. 2, we oriented the SPAD such that its field of view did not overlap the part of the visible wall that was scanned by the laser. Second, we inserted an opaque screen (not shown in Fig. 2) to block the direct line of sight between the illuminated part of the visible wall and the SPAD. In testing, however, we found that there was still a substantial number of photon detections from residual first-bounce light, which we could identify from their time delays relative to the laser pulses’ emission times. These detections were mostly due to laser light scattered from the two galvo mirrors that illuminated part of the visible wall within the SPAD’s field of view. So, because the visible wall is in direct line of sight of the imaging equipment—and hence its location can be easily and accurately estimated—we further suppressed first-bounce photon detections by the following post-processing procedure. We used the time-resolved (TR) information that is automatically captured by the SPAD to set a gated timing window that excludes first-bounce detections but whose duration is long enough to encompass all possible third-bounce detections, as indicated in the measurements shown in Fig. 5. As a result, no TR information related to the third-bounce photons is used in our measurements and scene reconstructions. In the future, with better galvo mirrors and a single-photon-sensitive CCD detector, we should be able to perform occlusion-based NLoS imaging with neither the need for nor the possibility of ToF-enabled suppression of first-bounce photon detections.
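The gating step reduces to a simple count over a time window. The sketch below shows the idea with hypothetical detection timestamps and gate parameters (the ∼6 ns gate length follows Fig. 5; the specific times are invented for illustration):

```python
import numpy as np

def gated_count(timestamps_ns, gate_on_ns, gate_len_ns=6.0):
    """Count detections inside the gate-on window, discarding first-bounce
    photons and all remaining time-of-flight information (cf. Fig. 5)."""
    t = np.asarray(timestamps_ns)
    return int(np.sum((t >= gate_on_ns) & (t < gate_on_ns + gate_len_ns)))

# Hypothetical detection times relative to pulse emission (ns):
# first-bounce photons arrive early (~2 ns here); the gate opens at 9 ns.
stamps = [2.1, 2.3, 9.5, 10.2, 11.8, 14.9, 20.0]
print(gated_count(stamps, gate_on_ns=9.0))   # -> 4
```

Only the integer count per scan point survives this step, which is why the reconstruction uses no third-bounce timing information.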


Fig. 5 Time-resolved SPAD measurements showing the gated timing window used for post-selecting third-bounce photon detections while suppressing first-bounce photon detections. The gate-off period covers detection times of first-bounce photons and the ∼6 ns duration of the gate-on period is long enough to capture all third-bounce photon detections. In our experiments, only the number of detected photons in gate-on windows were recorded to form the raw-count images.


6. Experimental results

We report experimental results obtained from a meter-scale environment in which the distance between the detector and the visible wall is ∼1.5 m and a circular occluder of diameter ∼6.8 cm is positioned roughly midway between the visible and hidden walls, which are separated by ∼1 m. A ∼0.4 m × 0.4 m reflectivity pattern was mounted on the upper-left quadrant of the ∼1 m × 1 m hidden wall to ensure that the pattern is properly scanned by the occluder’s shadow as the laser raster scans the visible wall (see Fig. 3 and Visualization 2). We performed an initial calibration of the background levels {Bij} that was then used for all subsequent experiments. We note that the need for background calibration can be avoided with better experimental equipment (see Appendix B). The occluder’s shape and position are assumed to be known for the purpose of scene reconstruction. From the known geometry, the matrix A(ij) can be determined. Finally, we chose m = 100 and n = 100 for our measurements.

First, we validate our occluder-assisted NLoS imaging method by reconstructing different reflectivity patterns on the hidden wall. These results are summarized in Figs. 6(a)–6(d). Four reflectivity patterns were placed on the hidden wall, as shown in the first row of Figs. 6(a)–6(d). The laser’s dwell time at each raster-scanned point was set so that N = 7.12 × 105 pulses were sent, resulting in ∼276 detected photons per pixel (PPP) on average. For each reflectivity pattern, a matrix of 100 × 100 raw counts was collected, as given in the middle row of Figs. 6(a)–6(d). The reflectivity patterns on the hidden wall were then reconstructed using our algorithm for solving Eq. (4), successfully revealing their fine details, as seen in the bottom row of Figs. 6(a)–6(d).


Fig. 6 Experimental results on the recovery of different hidden-wall reflectivity patterns, (a)–(d). First row: ground truth patterns on the hidden wall; second row: raw photon counts for 100 × 100 raster-scanned laser positions; third row: reconstructions in the presence of the occluder, based on solving Eq. (4), showing that detailed scene features are successfully recovered.


To quantify the photon efficiency and fidelity of our method, we varied the dwell time per laser illumination point (which determines the overall acquisition time) and tracked reconstruction performance as a function of the empirical average PPP, as shown in Figs. 7 and 8. We measure the reconstruction fidelity by the root-mean-square error (RMSE) of the reconstructed reflectivity F̂,

$$\mathrm{RMSE}(\hat{F}, F) = \sqrt{\frac{1}{n^2} \sum_{k=1}^{n} \sum_{l=1}^{n} \left(F_{kl} - \hat{F}_{kl}\right)^2}, \tag{5}$$
where F is the true reflectivity pattern as determined from measurements in the high photon-count limit. It is evident from Fig. 7 that reconstruction fidelity for our binomial-distribution-based likelihood method does not degrade much (remains below 0.05) as the average PPP decreases from ∼1100 to ∼100. Figure 7 also shows RMSE for the Gaussian-distribution-based likelihood method employed in [20]. We see that the binomial-likelihood method’s photon efficiency is substantially better than that of the Gaussian-likelihood method: the latter requires at least ∼1100 detected PPP to achieve a fidelity similar to what the former realized with only ∼69 detected PPP. This behavior is mainly due to the mismatched noise model in the standard Gaussian-likelihood method, which presumes the noise to be additive, signal independent, and Gaussian distributed, whereas in the low photon-count regime without photon-number resolution it is really signal dependent and binomial distributed.
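The fidelity metric of Eq. (5) is a one-line computation; the small check below uses made-up 4 × 4 patterns:

```python
import numpy as np

def rmse(F_hat, F):
    """Eq. (5): root-mean-square error over the n-by-n hidden-wall pixels."""
    n = F.shape[0]
    return np.sqrt(np.sum((F - F_hat) ** 2) / n ** 2)

# A single pixel off by 0.4 on a 4x4 grid: sqrt(0.16 / 16) = 0.1
F = np.ones((4, 4))
F_hat = np.ones((4, 4)); F_hat[0, 0] = 0.6
print(round(float(rmse(F_hat, F)), 3))   # -> 0.1
```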


Fig. 7 Root-mean-square error (RMSE) and reconstruction results (insets) with different numbers of detected photons per pixel (PPP). The RMSE of our binomial-likelihood method remains below 0.05 with >69 detected PPP, whereas the Gaussian-likelihood method employed in [20] requires at least ∼1100 detected PPP to achieve similar performance.



Fig. 8 Reflectivity reconstructions with different numbers of detected photons per pixel (PPP). We compare the binomial-likelihood algorithm (Eq. (4)) and the Gaussian-likelihood algorithm [20] for different numbers of average detected PPP, ranging from 17 to 3438 as indicated on the bottom of each figure. The photon efficiency of the binomial-likelihood method is far superior to that of the Gaussian-likelihood method, with the latter requiring at least ∼1100 PPP to achieve reconstructions comparable to those of the former with ∼69 PPP. In the low-photon-detection regime, PPP < 276, the Gaussian-likelihood method fails to reconstruct the details of the reflectivity image. Here the regularization parameter is fixed at 0.75, which causes the slight difference between the binomial-likelihood and Gaussian-likelihood reconstructions at high PPP values.


Finally, we quantify the effect of occluder size [Figs. 9(a)–9(c)] and the limits of achievable spatial resolution [Figs. 9(d)–9(f)]. In Figs. 9(a)–9(c), we used our system to image the reflectivity pattern of Fig. 6(a) using circular occluders whose diameters ranged from 15.8 cm to 4.4 cm, while keeping other experimental parameters unchanged. The results show that a small (large) occluder sharpens (blurs) the image, similar to conventional pinhole imaging. In Figs. 9(d)–9(f), we fixed the diameter of the circular occluder at 6.8 cm and reconstructed a hidden-wall reflectivity pattern consisting of two bars with varying separation. With this occluder we see that our system provides ∼4 cm spatial resolution. Furthermore, the distance between the occluder and the hidden wall also affects the performance of the reconstruction. As shown in Figs. 3(a)–3(c), when the distance between the occluder and the hidden wall decreases (increases), the shadow’s size and field of view decrease (increase); hence the resulting reconstruction will have better (worse) spatial resolution but a smaller (larger) field of view. The distance between the visible and hidden walls also affects the reconstruction. Decreasing (increasing) the distance between the two walls improves (degrades) the conditioning of the forward model’s A matrix, which improves (degrades) scene reconstruction. See Sec. IV.B in Ref. [20] for more about this point.


Fig. 9 (a)–(c), Reconstructions of the Fig. 6(a) reflectivity pattern obtained using circular occluders with diameters of 15.8 cm, 6.8 cm and 4.4 cm. A small (large) occluder sharpens (blurs) the image. (d)–(f), Reconstructions of two-bar reflectivity patterns with bar separations of 2 cm, 4 cm and 8 cm that were obtained using a 6.8-cm-diameter circular occluder. Our system achieves 4 cm spatial resolution.


7. Discussion

We have assumed throughout that the occluder’s location was known and that it was nonreflecting. These assumptions may be relaxed. In particular, the location of a nonreflecting occluder may be obtained using a blind deconvolution method [20], in which the occluder and the scene hidden behind it are reconstructed jointly. The viability of this approach is suggested by the fact that the occluder can be localized from the raw counts, as shown in Fig. 10. Moreover, if the occluder has nonzero reflectivity, its contribution to the raw photon counts can be modeled using the principles employed in our forward model and incorporated into the blind deconvolution procedure.


Fig. 10 Raw detected-count measurements with a 15.8-cm-diameter occluder placed at different positions. The real location (X, Y, Z) (cm) of the occluder is indicated on the top of each figure. In (a)–(c), we fixed the position of the occluder on the Z axis and shifted it along the X and Y axes: the center of the rings reveals the (X, Y) position of the occluder. In (d)–(f), we fixed the position of the occluder on the X and Y axes and shifted it along the Z axis: the size of the rings reveals the Z-axis position of the occluder. These preliminary measurements suggest that occluder position may be localized from raw-count data.


Our work can be improved in the following directions. First, the raster-scanning system can be replaced by a non-scanning laser together with a SPAD camera [28] or an intensified CCD for measurement. Such a system should be capable of tracking the position of a moving target in the hidden space [16]. Second, the experiment can be operated at an appropriate wavelength outside of the visible range, such as 1550 nm, in order to perform NLoS imaging in the presence of ambient light. Finally, one can combine ToF measurements [18] with our approach to obviate the need for the prior information about the occluder, thus providing a full reconstruction of the hidden space.

In conclusion, we have demonstrated a framework for photon-efficient, occluder-facilitated NLoS imaging. Our results may ultimately lead to new imaging methodologies capable of opportunistically exploiting diverse features of the environment—including, but not limited to, simple occluders—and thus pave the way to NLoS imaging in a wide variety of applications.

Appendices

A. Theoretical details

Light propagation model

Here we provide details for the forward model in Eq. (1). The ray-optics propagation model we use for third-bounce light is that from [20], which we present in detail for ease of reference. Unlike [20], which assumes additive, signal-independent, Gaussian noise, our forward model accurately captures the noise statistics for SPAD detection in the low-photon-count regime.

Light propagates from the laser, located at position Λ, until it reaches the detector, located at position Ω, while accounting for a three-bounce propagation path. Our goal is to reconstruct the reflectivity function f(x), for x ∈ 𝒮, where 𝒮 is a two-dimensional parameterization of the hidden wall.

Figure 2 illustrates a three-bounce trajectory of the form Λ → ij → x → c → Ω, where ij is the ijth position in the laser’s illumination grid, x is a point on the hidden wall, and c is a point on the visible wall that is in the SPAD’s field of view. For single-pulse illumination of ij, the average number of photons following this trajectory that arrive at the SPAD is

Kpf(x)GΛ,ij,x,c,ΩdxdcdΩijx2xc2cΩ2,
where Kp is the average number of photons per pulse emitted by the laser, and dx, dc, dΩ are differential areas. This expression accounts for the inverse-square-law losses experienced in free-space light propagation from ij to x, from x to c, and from c to Ω, as well as the linear scaling by f(x) that results from reflection at x. The geometric factor GΛ,ij,x,c,Ω combines the Lambertian bidirectional reflectance distribution functions (BRDFs) associated with the diffuse reflections at the visible wall and the hidden wall, and is given by
GΛ,ij,x,c,Ωcos(Λij,nij)cos(xij,nij)×cos(xij,nx)cos(xc,nx)cos(xc,nc)cos(cΩ,nc),
where nij, nx, nc are the surface normals at ij, x, c, respectively, and cos(a, b) is the cosine of the angle between the vectors a and b.

For single-pulse illumination of ℓij, we use Yij to denote the average number of photons arriving at the detector from three-bounce trajectories. Deriving an expression for Yij entails summation over all such paths. In particular, this means summing over: (i) all x ∈ 𝒮(ℓij, c), where 𝒮(ℓij, c) is the section of the hidden wall 𝒮 that has an unoccluded line of sight to both ℓij and c; (ii) all c ∈ 𝒞, where 𝒞 is a parameterization of the section of the visible wall that is in the SPAD’s field of view; and (iii) all points Ω in 𝒟, the SPAD detector’s photosensitive region. With these definitions we then have:

$$Y_{ij}=K_p\int_{\mathcal{C}}dc\int_{\mathcal{D}}d\Omega\int_{\mathcal{S}(\ell_{ij},c)}dx\,\frac{f(x)\,G_{\Lambda,\ell_{ij},x,c,\Omega}}{\|\ell_{ij}-x\|^{2}\,\|x-c\|^{2}\,\|c-\Omega\|^{2}},\tag{8}$$
where Eq. (8)’s spatial integrations account for all possible three-bounce trajectories from the laser to the detector. This result can be simplified as follows:
$$Y_{ij}=K_p\int_{\mathcal{S}}dx\,f(x)\int_{\mathcal{C}}dc\int_{\mathcal{D}}d\Omega\,\frac{\mathbf{1}_{\mathcal{S}(\ell_{ij},c)}(x)\,G_{\Lambda,\ell_{ij},x,c,\Omega}}{\|\ell_{ij}-x\|^{2}\,\|x-c\|^{2}\,\|c-\Omega\|^{2}}=K_p\int_{\mathcal{S}}dx\,f(x)\,A^{(ij)}(x),\tag{9}$$
where $\mathbf{1}_{\{x'\}}(x)$ is the indicator function (i.e., it equals 1 if and only if x ∈ {x′} and is 0 otherwise), and for fixed i, j we have defined
$$A^{(ij)}(x)\equiv\int_{\mathcal{C}}dc\int_{\mathcal{D}}d\Omega\,\frac{\mathbf{1}_{\mathcal{S}(\ell_{ij},c)}(x)\,G_{\Lambda,\ell_{ij},x,c,\Omega}}{\|\ell_{ij}-x\|^{2}\,\|x-c\|^{2}\,\|c-\Omega\|^{2}}.\tag{10}$$
Equation (9) is specified in terms of the continuous variable x. In what follows, it will be convenient to discretize the coordinate system on the hidden wall 𝒮 by introducing an n × n grid indexed by (k, l). We then have that A(ij)(x) becomes Akl(ij) and f(x) becomes Fkl. Making these substitutions in (9), we obtain the discrete version of the forward model that appeared in Eq. (1):
$$Y_{ij}=K_p\sum_{k,l}A^{(ij)}_{kl}F_{kl}.\tag{11}$$
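The coefficients Akl(ij) can be approximated numerically. Below is a minimal illustrative sketch (all names, the single-point treatment of the detector region 𝒟, and the toy geometry are our own assumptions, not the paper's implementation) that approximates A(ij)(x) of Eq. (10) by a Riemann sum over a grid of visible-wall points c:

```python
import numpy as np

def cos_ang(v, n):
    """Cosine of the angle between vector v and unit normal n.
    The abs() assumes the geometry keeps all bounces on the lit side."""
    return abs(np.dot(v, n)) / np.linalg.norm(v)

def A_ij(x, l_ij, Lam, Omega, c_grid, dc, dOmega, n_ij, n_x, n_c, theta):
    """Riemann-sum approximation of A^{(ij)}(x) in Eq. (10).
    theta(a, b) plays the role of the shadow function: 1 if a and b
    have an unobstructed line of sight, 0 otherwise."""
    total = 0.0
    for c in c_grid:
        # Indicator 1_{S(l_ij, c)}(x): x must see both l_ij and c.
        if not (theta(x, l_ij) and theta(x, c)):
            continue
        # Lambertian geometric factor G (product of six cosines).
        G = (cos_ang(Lam - l_ij, n_ij) * cos_ang(x - l_ij, n_ij) *
             cos_ang(l_ij - x, n_x) * cos_ang(c - x, n_x) *
             cos_ang(x - c, n_c) * cos_ang(Omega - c, n_c))
        # Inverse-square-law denominators for the three hops.
        r2 = (np.sum((l_ij - x) ** 2) * np.sum((x - c) ** 2) *
              np.sum((c - Omega) ** 2))
        total += G / r2 * dc * dOmega
    return total

# Toy geometry: visible wall at z = 0, hidden wall at z = 2.
l_ij = np.array([0.0, 0.0, 0.0]); Lam = np.array([0.0, 0.0, -1.0])
Omega = np.array([0.3, 0.0, -1.0]); x = np.array([0.2, 0.0, 2.0])
n_ij = n_c = np.array([0.0, 0.0, 1.0]); n_x = np.array([0.0, 0.0, -1.0])
c_grid = [np.array([cx, cy, 0.0]) for cx in np.linspace(-0.5, 0.5, 5)
          for cy in np.linspace(-0.5, 0.5, 5)]
a_open = A_ij(x, l_ij, Lam, Omega, c_grid, 0.0625, 1e-4,
              n_ij, n_x, n_c, theta=lambda a, b: 1)
```

With full visibility (theta ≡ 1) the coefficient is strictly positive; a fully blocking theta ≡ 0 drives it to zero, which is exactly how the occluder imprints structure on A.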

Shadow function

Equations (10) and (11) show that the presence of an occluder affects Yij only through its impact on 𝒮(ℓij, c), i.e., the patch on the hidden wall that has unobstructed lines of sight to both ℓij and c. To better understand this connection between the occluder and 𝒮(ℓij, c), we introduce a binary shadow function Θ(x, y) that indicates whether point x on the hidden wall and point y on the visible wall are visible to each other:

$$\Theta(x,y)=\begin{cases}1,&\text{unobstructed line of sight between }x\text{ and }y,\\0,&\text{obstructed line of sight between }x\text{ and }y.\end{cases}\tag{12}$$
With this definition we have 𝒮(ℓij, c) = {x ∈ 𝒮 : Θ(x, ℓij)Θ(x, c) = 1}, i.e., it is the subset of hidden-wall positions 𝒮 that satisfy both Θ(x, ℓij) = 1 and Θ(x, c) = 1. Note that 𝒮(ℓij, c) and 𝒮(ℓi′j′, c) differ on hidden-wall patches for which the occluder blocks light from ℓij but not from ℓi′j′, or vice versa.
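For a flat circular occluder, such as the black circular patch used in our experiments, the shadow function admits a simple closed-form test: intersect the segment from x to y with the occluder's plane and check whether the hit point falls inside the disk. The sketch below is our own illustration (the function and parameter names are assumptions):

```python
import numpy as np

def shadow(x, y, center, radius, normal):
    """Theta(x, y) for a circular-disk occluder: returns 1 when the
    straight segment from x to y misses the disk, 0 when it is blocked."""
    x = np.asarray(x, float); y = np.asarray(y, float)
    center = np.asarray(center, float); normal = np.asarray(normal, float)
    d = y - x
    denom = d @ normal
    if abs(denom) < 1e-12:          # segment parallel to the occluder plane
        return 1
    t = ((center - x) @ normal) / denom
    if not 0.0 < t < 1.0:           # plane crossed outside the segment
        return 1
    hit = x + t * d                 # intersection with the occluder plane
    return 0 if np.linalg.norm(hit - center) <= radius else 1
```

For a disk of radius 0.5 centered at (0, 0, 1) with normal (0, 0, 1), the segment from (0, 0, 0) to (0, 0, 2) is blocked (returns 0), while the offset segment from (1, 0, 0) to (1, 0, 2) is not (returns 1).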

Informative measurements

Our experiment raster scans the grid points {ℓij} on the visible wall and detects third-bounce photons reflected from a large portion of that wall. The informativeness of these measurements stems from the diversity of the coefficients Akl(ij). In the absence of an occluder, 𝒮(ℓij, c) = 𝒮 for all ℓij and c. From (10) we then see that the dependence of Akl(ij) on i, j originates from the product of two smoothly varying functions, namely the inverse-square-law term 1/‖ℓij − xkl‖² and the geometric function GΛ,ℓij,xkl,c,Ω, so that Akl(ij) varies smoothly as (i, j) changes. In the presence of an occluder, however, the nontrivial shadow functions that determine 𝒮(ℓij, c) make Akl(ij) vary abruptly with (i, j), greatly increasing the informativeness of the measurements.

To demonstrate this effect, we rearrange {Yij} as an m²-dimensional column vector y, {Fkl} as an n²-dimensional column vector f, and the coefficients {Akl(ij)} as an m² × n² matrix A, so that Eq. (1) for 1 ≤ i, j ≤ m gets combined into

$$y=Af.\tag{13}$$
With this rearrangement we can evaluate the informativeness of the measurements by analyzing the spectral properties of A. Toward that end, Fig. 11 shows the singular values of A for two experimental setups. The first setup corresponds to an unoccluded scene, whereas the second setup corresponds to an occluded scene, in which a black circular patch has been inserted between the visible and hidden walls. It is evident from these singular values that the occluded measurements are substantially more informative, suggesting that the presence of the occluder will enable higher-fidelity reconstruction of the hidden wall’s reflectivity pattern.
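The spectral gap can be reproduced in a toy one-dimensional simulation. The sketch below is purely illustrative (the random binary mask is our crude stand-in for the occluder's shadow pattern, not the experimental geometry): a smooth unoccluded A has rapidly decaying singular values, while shadow-masking its entries flattens the spectrum.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 40, 40                            # hidden-wall pixels, laser positions
xs = np.linspace(0.0, 1.0, n)            # hidden-wall coordinates (1-D toy)
ls = np.linspace(0.0, 1.0, m)            # laser-spot coordinates

# Unoccluded A: smooth inverse-square-style kernel -> fast spectral decay.
A_free = 1.0 / (1.0 + (ls[:, None] - xs[None, :]) ** 2)

# Occluded A: each laser spot sees a different shadowed subset of the
# hidden wall, modeled here by a random binary visibility mask.
A_occ = A_free * (rng.random((m, n)) > 0.3)

s_free = np.linalg.svd(A_free, compute_uv=False)
s_occ = np.linalg.svd(A_occ, compute_uv=False)
print(s_occ[10] / s_free[10])   # ratio >> 1: occluded rows are far more diverse
```

The mid-spectrum singular values of the masked matrix dominate those of the smooth one, mirroring the behavior in Fig. 11.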


Fig. 11 Comparing the informativeness of occluded and unoccluded measurements. We numerically simulated the setup of Fig. 2 and evaluated the informativeness of the measurements with and without an occluder from the A matrix’s singular values {σ}. In our simulations, the laser illuminates a 50 × 50 grid on the visible wall, and the hidden wall is discretized to a 50 × 50 grid. The singular values of the corresponding 2500 × 2500 A matrix were calculated for an occluded setup (blue dashed curve) and an unoccluded setup (red solid curve). The singular values of the occluded A matrix are substantially higher than those of the unoccluded matrix, suggesting that measurements in the occluded setup will be much more informative.


Measurement statistics

The laser illuminates position ℓij with N pulses before it addresses the next grid point on the visible wall. Each pulse that illuminates ℓij results in an average of Yij third-bounce photons arriving at the detector’s location Ω. In this low-flux regime the number of detections registered by a photon-number-resolving detector from illumination of ℓij by a single pulse is Poisson distributed, with mean η(Yij + Bij), where η is the detector’s quantum efficiency, and Bij is the average number of background-light photons arriving during a single-pulse measurement interval (see details below). A SPAD detector, however, is not number resolving; it suffers a dead time after making a single detection that, for our experiment, precludes more than one detection in a single-pulse measurement interval. In this case, each optical pulse can yield either a 0 count or a 1 count, and these events occur with probabilities P0(ij)(F) and 1 − P0(ij)(F), respectively, where

$$P_0^{(ij)}(F)=\exp[-\eta(Y_{ij}+B_{ij})]\approx 1-\eta(Y_{ij}+B_{ij}).\tag{14}$$
The equality in Eq. (14) comes from the Poisson distribution. The approximation of that Poisson probability holds because the enormous attenuation incurred in three diffuse reflections makes ηYij ≪ 1, and the pre-detection optical filtering ensures that ηBij ≪ 1; together these prevent SPAD counts from occurring in every single-pulse measurement interval. The F dependence of P0(ij)(F) arises from the Yij term; see Eq. (1).

The statistical independence of the photon counts from different laser pulses now makes Rij, the total photon count from the N pulses that illuminate ℓij, a binomial random variable with success probability 1 − P0(ij)(F), i.e., [25]

$$\Pr(R_{ij};F)=\binom{N}{R_{ij}}\left[1-P_0^{(ij)}(F)\right]^{R_{ij}}\left[P_0^{(ij)}(F)\right]^{N-R_{ij}}.\tag{15}$$
Using this binomial distribution for the count statistics, and dropping terms that are independent of F, we get the following negative log-likelihood function for the raw count matrix R given the reflectivity matrix F:
$$-\log[\mathcal{L}(R;F)]=-\log\Big(\prod_{i,j}\Pr(R_{ij};F)\Big)=-\sum_{i,j}\log[\Pr(R_{ij};F)]=\sum_{i,j}\Big\{(N-R_{ij})\Big[\eta K_p\sum_{k,l}A^{(ij)}_{kl}F_{kl}\Big]-R_{ij}\log\Big[\eta K_p\sum_{k,l}A^{(ij)}_{kl}F_{kl}+\eta B_{ij}\Big]\Big\},\tag{16}$$
where the first equality in (16) follows from the statistical independence of the shot noises generated by different laser pulses.
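The objective in Eq. (16) is straightforward to evaluate numerically. The sketch below is our own illustrative implementation (names and the synthetic data are assumptions), with A flattened to an m² × n² matrix as in Eq. (13) and the same F-independent terms dropped:

```python
import numpy as np

def neg_log_likelihood(F, A, R, N, eta, Kp, B):
    """Eq. (16): negative log-likelihood of counts R given reflectivity F.
    A: (m^2, n^2) flattened forward matrix; R, B: (m^2,) vectors."""
    lam = eta * Kp * (A @ F)              # eta * Y_ij for every laser spot
    return np.sum((N - R) * lam - R * np.log(lam + eta * B))

# Sanity check with noise-free synthetic counts: the true reflectivity
# should score lower than scaled-up or scaled-down versions of itself.
rng = np.random.default_rng(1)
A = rng.random((100, 25)) * 1e-6          # heavy three-bounce attenuation
F_true = rng.random(25)
N, eta, Kp = 10_000, 0.3, 1e3
B = 1e-4 * np.ones(100)
P0 = np.exp(-eta * (Kp * (A @ F_true) + B))
R = N * (1.0 - P0)                        # expected counts over N pulses
nll_true = neg_log_likelihood(F_true, A, R, N, eta, Kp, B)
```

Minimizing this objective over F ≥ 0, e.g., with a SPIRAL-TAP-style solver [27], yields the reflectivity estimate used in our reconstructions.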

B. Experimental details

Visible wall characterization

We used a white poster board as a near-Lambertian reflecting surface to serve as the visible wall in our NLoS imaging experiment of Fig. 2. We used a 635-nm laser to illuminate the white board at two different incident angles and measured its reflected power at various viewing angles. The results are displayed in Fig. 12, showing that the white poster board is indeed nearly Lambertian.


Fig. 12 Near-Lambertian reflectance behavior of white poster-board visible wall. The blue (red) data points correspond to measurements made with the setup in the blue (red) inset: a laser illuminated the visible wall at normal incidence (20° offset from normal incidence), and a detector recorded the power reflected at different viewing angles. The green line is the theoretical cosine curve for a perfect Lambertian surface. We find that the visible wall has ∼80% reflectivity and is nearly Lambertian except for a small specular component when the viewing angle is perpendicular to the surface. We also performed this characterization for the patterns on the hidden wall, and found that the Lambertian property of those patterns was similar to that of the visible wall.


Background light

In Fig. 13, we show results of background-light detection over a long data-acquisition time that we used to calibrate Bij, the average number of background photons arriving at the detector in a single-pulse measurement interval. For this measurement, the reflectivity pattern on the hidden wall was replaced with a black surface, a total of N = 3.56 × 10⁷ laser pulses were transmitted at each laser point ℓij, and the third-bounce photons were detected by the SPAD. Note that, once performed, this calibration applies to all subsequent measurements: in post-processing, we scale these background counts according to the dwell time used. The nonuniformity of the background counts is mainly due to scattering from the raster-scan galvo mirrors and to SPAD afterpulsing originating from detections of first-bounce photons. Galvo-related background counts could be avoided with better scanning mirrors.
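Concretely, the calibration transfers to any later acquisition by a simple per-pulse rescaling. The snippet below uses toy numbers of our own choosing to illustrate the bookkeeping:

```python
import numpy as np

counts_cal = np.array([[12.0, 9.0],
                       [15.0, 11.0]])  # toy background counts on a 2x2 grid
N_cal = 3.56e7                         # pulses per point in the calibration scan
B = counts_cal / N_cal                 # per-pulse background rates B_ij
N_meas = 2.0e5                         # pulses per point in a later measurement
expected_bg = B * N_meas               # background counts expected in that scan
```

The same B matrix then feeds directly into the likelihood of Eq. (16) for every dwell time.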


Fig. 13 Results of long acquisition time background-light measurement used to calibrate Bij. The reflectivity pattern on the hidden wall was replaced with a black surface. A total of 35.6 million laser pulses were transmitted at each laser point ij on the 100 × 100 illumination grid, and the third-bounce counts were recorded by the SPAD. The nonuniformity is mainly due to scattering from the raster-scan galvo mirrors and SPAD afterpulsing that arises from detections of those first-bounce photons.


Funding

Defense Advanced Research Projects Agency (DARPA) REVEAL Program Contract No. HR0011-16-C-0030.

Acknowledgments

The authors thank Changchen Chen for assistance with figure preparation, Connor Henley for the measurements reported in Fig. 11, and William T. Freeman and Vivek K. Goyal for helpful discussions.

References and links

1. B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D computational imaging with single-pixel detectors,” Science 340, 844–847 (2013).

2. A. Kirmani, D. Venkatraman, D. Shin, A. Colaço, F. N. C. Wong, J. H. Shapiro, and V. K. Goyal, “First-photon imaging,” Science 343, 58–61 (2014).

3. L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature 516, 74–77 (2014).

4. A. M. Pawlikowska, A. Halimi, R. A. Lamb, and G. S. Buller, “Single-photon three-dimensional imaging at up to 10 kilometers range,” Opt. Express 25, 11919–11931 (2017).

5. E. Repasi, P. Lutzmann, O. Steinvall, M. Elmqvist, B. Göhler, and G. Anstett, “Advanced short-wavelength infrared range-gated imaging for ground applications in monostatic and bistatic configurations,” Appl. Opt. 48, 5956–5969 (2009).

6. A. Sume, M. Gustafsson, M. Herberthson, A. Janis, S. Nilsson, J. Rahm, and A. Orbom, “Radar detection of moving targets behind corners,” IEEE Trans. Geosci. Remote Sens. 49, 2259–2267 (2011).

7. B. Chakraborty, Y. Li, J. J. Zhang, T. Trueblood, A. Papandreou-Suppappola, and D. Morrell, “Multipath exploitation with adaptive waveform design for tracking in urban terrain,” in Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (IEEE, 2010), pp. 3894–3897.

8. O. Steinvall, M. Elmqvist, and H. Larsson, “See around the corner using active imaging,” Proc. SPIE 8186, 818605 (2011).

9. A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, “Controlling waves in space and time for imaging and focusing in complex media,” Nat. Photonics 6, 283–292 (2012).

10. O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6, 549–553 (2012).

11. A. Kirmani, T. Hutchison, J. Davis, and R. Raskar, “Looking around the corner using transient imaging,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2009), pp. 159–166.

12. A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012).

13. O. Gupta, T. Willwacher, A. Velten, A. Veeraraghavan, and R. Raskar, “Reconstruction of hidden 3D shapes using diffuse reflections,” Opt. Express 20, 19096–19108 (2012).

14. F. Heide, L. Xiao, W. Heidrich, and M. B. Hullin, “Diffuse mirrors: 3D reconstruction from diffuse indirect illumination using inexpensive time-of-flight sensors,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2014), pp. 3222–3229.

15. M. Laurenzis and A. Velten, “Nonline-of-sight laser gated viewing of scattered photons,” Opt. Eng. 53, 023102 (2014).

16. G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, “Detection and tracking of moving objects hidden from view,” Nat. Photonics 10, 23–26 (2016).

17. M. Laurenzis, J. Klein, E. Bacher, and N. Metzger, “Multiple-return single-photon counting of light in flight and sensing of non-line-of-sight objects at shortwave infrared wavelengths,” Opt. Lett. 40, 4815–4818 (2015).

18. M. Buttafava, J. Zeman, A. Tosi, K. Eliceiri, and A. Velten, “Non-line-of-sight imaging using a time-gated single photon avalanche diode,” Opt. Express 23, 20997–21011 (2015).

19. J. Klein, C. Peters, J. Martín, M. Laurenzis, and M. B. Hullin, “Tracking objects outside the line of sight using 2D intensity images,” Sci. Rep. 6, 32491 (2016).

20. C. Thrampoulidis, G. Shulkind, F. Xu, W. T. Freeman, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell, “Exploiting occlusion in non-line-of-sight active imaging,” arXiv:1711.06297 (2017).

21. A. L. Cohen, “Anti-pinhole imaging,” J. Mod. Opt. 29, 63–67 (1982).

22. A. Torralba and W. T. Freeman, “Accidental pinhole and pinspeck cameras: Revealing the scene outside the picture,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 374–381.

23. R. H. Hadfield, “Single-photon detectors for optical quantum information applications,” Nat. Photonics 3, 696–705 (2009).

24. G. Buller and R. J. Collins, “Single-photon generation and detection,” Meas. Sci. Technol. 21, 012002 (2010).

25. D. Shin, A. Kirmani, V. K. Goyal, and J. H. Shapiro, “Photon-efficient computational 3-D and reflectivity imaging with single-photon detectors,” IEEE Trans. Comput. Imaging 1, 112–125 (2015).

26. L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D 60, 259–268 (1992).

27. Z. T. Harmany, R. F. Marcia, and R. M. Willett, “This is SPIRAL-TAP: Sparse Poisson intensity reconstruction algorithms—theory and practice,” IEEE Trans. Image Process. 21, 1084–1096 (2012).

28. D. Shin, F. Xu, D. Venkatraman, R. Lussana, F. Villa, F. Zappa, V. K. Goyal, F. N. C. Wong, and J. H. Shapiro, “Photon-efficient imaging with a single-photon camera,” Nat. Commun. 7, 12046 (2016).


Liang, J.

L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature 516, 74–77 (2014).
[Crossref] [PubMed]

Lussana, R.

D. Shin, F. Xu, D. Venkatraman, R. Lussana, F. Villa, F. Zappa, V. K. Goyal, F. N. C. Wong, and J. H. Shapiro, “Photon-efficient imaging with a single-photon camera,” Nat. Commun. 7, 12046 (2016).
[Crossref] [PubMed]

Lutzmann, P.

Marcia, R. F.

Z. T. Harmany, R. F. Marcia, and R. M. Willett, “This is SPIRAL-TAP: Sparse Poisson intensity reconstruction algorithms—theory and practice,” IEEE Trans. Image Process. 21, 1084–1096 (2012).
[Crossref]

Martín, J.

J. Klein, C. Peters, J. Martín, M. Laurenzis, and M. B. Hullin, “Tracking objects outside the line of sight using 2D intensity images,” Sci. Rep. 6, 32491 (2016).
[Crossref] [PubMed]

Metzger, N.

Morrell, D.

B. Chakraborty, Y. Li, J. J. Zhang, T. Trueblood, A. Papandreou-Suppappola, and D. Morrell, “Multipath exploitation with adaptive waveform design for tracking in urban terrain,” in Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (IEEE2010), pp. 3894–3897.

Mosk, A. P.

A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, “Controlling waves in space and time for imaging and focusing in complex media,” Nat. Photonics 6, 283–292 (2012).
[Crossref]

Nilsson, S.

A. Sume, M. Gustafsson, M. Herberthson, A. Janis, S. Nilsson, J. Rahm, and A. Orbom, “Radar detection of moving targets behind corners,” IEEE Trans. Geosci. Remote Sens. 49, 2259–2267 (2011).
[Crossref]

Orbom, A.

A. Sume, M. Gustafsson, M. Herberthson, A. Janis, S. Nilsson, J. Rahm, and A. Orbom, “Radar detection of moving targets behind corners,” IEEE Trans. Geosci. Remote Sens. 49, 2259–2267 (2011).
[Crossref]

Osher, S.

L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D 60, 259–268 (1992).
[Crossref]

Padgett, M. J.

B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D computational imaging with single-pixel detectors,” Science 340, 844–847 (2013).
[Crossref] [PubMed]

Papandreou-Suppappola, A.

B. Chakraborty, Y. Li, J. J. Zhang, T. Trueblood, A. Papandreou-Suppappola, and D. Morrell, “Multipath exploitation with adaptive waveform design for tracking in urban terrain,” in Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (IEEE2010), pp. 3894–3897.

Pawlikowska, A. M.

Peters, C.

J. Klein, C. Peters, J. Martín, M. Laurenzis, and M. B. Hullin, “Tracking objects outside the line of sight using 2D intensity images,” Sci. Rep. 6, 32491 (2016).
[Crossref] [PubMed]

Rahm, J.

A. Sume, M. Gustafsson, M. Herberthson, A. Janis, S. Nilsson, J. Rahm, and A. Orbom, “Radar detection of moving targets behind corners,” IEEE Trans. Geosci. Remote Sens. 49, 2259–2267 (2011).
[Crossref]

Raskar, R.

A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012).
[Crossref] [PubMed]

O. Gupta, T. Willwacher, A. Velten, A. Veeraraghavan, and R. Raskar, “Reconstruction of hidden 3D shapes using diffuse reflections,” Opt. Express 20, 19096–19108 (2012).
[Crossref] [PubMed]

A. Kirmani, T. Hutchison, J. Davis, and R. Raskar, “Looking around the corner using transient imaging,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2009), pp. 159–166.

Repasi, E.

Rudin, L. I.

L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D 60, 259–268 (1992).
[Crossref]

Shapiro, J. H.

D. Shin, F. Xu, D. Venkatraman, R. Lussana, F. Villa, F. Zappa, V. K. Goyal, F. N. C. Wong, and J. H. Shapiro, “Photon-efficient imaging with a single-photon camera,” Nat. Commun. 7, 12046 (2016).
[Crossref] [PubMed]

D. Shin, A. Kirmani, V. K. Goyal, and J. H. Shapiro, “Photon-efficient computational 3-D and reflectivity imaging with single-photon detectors,” IEEE Trans. Comput. Imaging 1, 112–125 (2015).
[Crossref]

A. Kirmani, D. Venkatraman, D. Shin, A. Colaço, F. N. C. Wong, J. H. Shapiro, and V. K. Goyal, “First-photon imaging,” Science 343, 58–61 (2014).
[Crossref]

C. Thrampoulidis, G. Shulkind, F. Xu, W. T. Freeman, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell “Exploiting occlusion in non-line-of-sight active imaging,” arXiv:1711.06297 (2017).

Shin, D.

D. Shin, F. Xu, D. Venkatraman, R. Lussana, F. Villa, F. Zappa, V. K. Goyal, F. N. C. Wong, and J. H. Shapiro, “Photon-efficient imaging with a single-photon camera,” Nat. Commun. 7, 12046 (2016).
[Crossref] [PubMed]

D. Shin, A. Kirmani, V. K. Goyal, and J. H. Shapiro, “Photon-efficient computational 3-D and reflectivity imaging with single-photon detectors,” IEEE Trans. Comput. Imaging 1, 112–125 (2015).
[Crossref]

A. Kirmani, D. Venkatraman, D. Shin, A. Colaço, F. N. C. Wong, J. H. Shapiro, and V. K. Goyal, “First-photon imaging,” Science 343, 58–61 (2014).
[Crossref]

Shulkind, G.

C. Thrampoulidis, G. Shulkind, F. Xu, W. T. Freeman, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell “Exploiting occlusion in non-line-of-sight active imaging,” arXiv:1711.06297 (2017).

Silberberg, Y.

O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6, 549–553 (2012).
[Crossref]

Small, E.

O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6, 549–553 (2012).
[Crossref]

Steinvall, O.

Sume, A.

A. Sume, M. Gustafsson, M. Herberthson, A. Janis, S. Nilsson, J. Rahm, and A. Orbom, “Radar detection of moving targets behind corners,” IEEE Trans. Geosci. Remote Sens. 49, 2259–2267 (2011).
[Crossref]

Sun, B.

B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D computational imaging with single-pixel detectors,” Science 340, 844–847 (2013).
[Crossref] [PubMed]

Thrampoulidis, C.

C. Thrampoulidis, G. Shulkind, F. Xu, W. T. Freeman, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell “Exploiting occlusion in non-line-of-sight active imaging,” arXiv:1711.06297 (2017).

Tonolini, F.

G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, “Detection and tracking of moving objects hidden from view,” Nat. Photonics 10, 23–26 (2016).
[Crossref]

Torralba, A.

C. Thrampoulidis, G. Shulkind, F. Xu, W. T. Freeman, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell “Exploiting occlusion in non-line-of-sight active imaging,” arXiv:1711.06297 (2017).

A. Torralba and W. T. Freeman, “Accidental pinhole and pinspeck cameras: Revealing the scene outside the picture,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp.. 374–381.

Tosi, A.

Trueblood, T.

B. Chakraborty, Y. Li, J. J. Zhang, T. Trueblood, A. Papandreou-Suppappola, and D. Morrell, “Multipath exploitation with adaptive waveform design for tracking in urban terrain,” in Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (IEEE2010), pp. 3894–3897.

Veeraraghavan, A.

A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012).
[Crossref] [PubMed]

O. Gupta, T. Willwacher, A. Velten, A. Veeraraghavan, and R. Raskar, “Reconstruction of hidden 3D shapes using diffuse reflections,” Opt. Express 20, 19096–19108 (2012).
[Crossref] [PubMed]

Velten, A.

M. Buttafava, J. Zeman, A. Tosi, K. Eliceiri, and A. Velten, “Non-line-of-sight imaging using a time-gated single photon avalanche diode,” Opt. Express 23, 20997–21011 (2015).
[Crossref] [PubMed]

M. Laurenzis and A. Velten, “Nonline-of-sight laser gated viewing of scattered photons,” Opt. Eng. 53, 023102 (2014).
[Crossref]

O. Gupta, T. Willwacher, A. Velten, A. Veeraraghavan, and R. Raskar, “Reconstruction of hidden 3D shapes using diffuse reflections,” Opt. Express 20, 19096–19108 (2012).
[Crossref] [PubMed]

A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012).
[Crossref] [PubMed]

Venkatraman, D.

D. Shin, F. Xu, D. Venkatraman, R. Lussana, F. Villa, F. Zappa, V. K. Goyal, F. N. C. Wong, and J. H. Shapiro, “Photon-efficient imaging with a single-photon camera,” Nat. Commun. 7, 12046 (2016).
[Crossref] [PubMed]

A. Kirmani, D. Venkatraman, D. Shin, A. Colaço, F. N. C. Wong, J. H. Shapiro, and V. K. Goyal, “First-photon imaging,” Science 343, 58–61 (2014).
[Crossref]

Villa, F.

D. Shin, F. Xu, D. Venkatraman, R. Lussana, F. Villa, F. Zappa, V. K. Goyal, F. N. C. Wong, and J. H. Shapiro, “Photon-efficient imaging with a single-photon camera,” Nat. Commun. 7, 12046 (2016).
[Crossref] [PubMed]

Vittert, L. E.

B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D computational imaging with single-pixel detectors,” Science 340, 844–847 (2013).
[Crossref] [PubMed]

Wang, L. V.

L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature 516, 74–77 (2014).
[Crossref] [PubMed]

Welsh, S.

B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D computational imaging with single-pixel detectors,” Science 340, 844–847 (2013).
[Crossref] [PubMed]

Willett, R. M.

Z. T. Harmany, R. F. Marcia, and R. M. Willett, “This is SPIRAL-TAP: Sparse Poisson intensity reconstruction algorithms—theory and practice,” IEEE Trans. Image Process. 21, 1084–1096 (2012).
[Crossref]

Willwacher, T.

A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012).
[Crossref] [PubMed]

O. Gupta, T. Willwacher, A. Velten, A. Veeraraghavan, and R. Raskar, “Reconstruction of hidden 3D shapes using diffuse reflections,” Opt. Express 20, 19096–19108 (2012).
[Crossref] [PubMed]

Wong, F. N. C.

D. Shin, F. Xu, D. Venkatraman, R. Lussana, F. Villa, F. Zappa, V. K. Goyal, F. N. C. Wong, and J. H. Shapiro, “Photon-efficient imaging with a single-photon camera,” Nat. Commun. 7, 12046 (2016).
[Crossref] [PubMed]

A. Kirmani, D. Venkatraman, D. Shin, A. Colaço, F. N. C. Wong, J. H. Shapiro, and V. K. Goyal, “First-photon imaging,” Science 343, 58–61 (2014).
[Crossref]

C. Thrampoulidis, G. Shulkind, F. Xu, W. T. Freeman, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell “Exploiting occlusion in non-line-of-sight active imaging,” arXiv:1711.06297 (2017).

Wornell, G. W.

C. Thrampoulidis, G. Shulkind, F. Xu, W. T. Freeman, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell “Exploiting occlusion in non-line-of-sight active imaging,” arXiv:1711.06297 (2017).

Xiao, L.

F. Heide, L. Xiao, W. Heidrich, and M. B. Hullin, “Diffuse mirrors: 3D reconstruction from diffuse indirect illumination using inexpensive time-of-flight sensors,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2014), pp. 3222–3229.

Xu, F.

D. Shin, F. Xu, D. Venkatraman, R. Lussana, F. Villa, F. Zappa, V. K. Goyal, F. N. C. Wong, and J. H. Shapiro, “Photon-efficient imaging with a single-photon camera,” Nat. Commun. 7, 12046 (2016).
[Crossref] [PubMed]

C. Thrampoulidis, G. Shulkind, F. Xu, W. T. Freeman, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell “Exploiting occlusion in non-line-of-sight active imaging,” arXiv:1711.06297 (2017).

Zappa, F.

D. Shin, F. Xu, D. Venkatraman, R. Lussana, F. Villa, F. Zappa, V. K. Goyal, F. N. C. Wong, and J. H. Shapiro, “Photon-efficient imaging with a single-photon camera,” Nat. Commun. 7, 12046 (2016).
[Crossref] [PubMed]

Zeman, J.

Zhang, J. J.

B. Chakraborty, Y. Li, J. J. Zhang, T. Trueblood, A. Papandreou-Suppappola, and D. Morrell, “Multipath exploitation with adaptive waveform design for tracking in urban terrain,” in Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (IEEE2010), pp. 3894–3897.

Appl. Opt. (1)

IEEE Trans. Comput. Imaging (1)

D. Shin, A. Kirmani, V. K. Goyal, and J. H. Shapiro, “Photon-efficient computational 3-D and reflectivity imaging with single-photon detectors,” IEEE Trans. Comput. Imaging 1, 112–125 (2015).
[Crossref]

IEEE Trans. Geosci. Remote Sens. (1)

A. Sume, M. Gustafsson, M. Herberthson, A. Janis, S. Nilsson, J. Rahm, and A. Orbom, “Radar detection of moving targets behind corners,” IEEE Trans. Geosci. Remote Sens. 49, 2259–2267 (2011).
[Crossref]

IEEE Trans. Image Process. (1)

Z. T. Harmany, R. F. Marcia, and R. M. Willett, “This is SPIRAL-TAP: Sparse Poisson intensity reconstruction algorithms—theory and practice,” IEEE Trans. Image Process. 21, 1084–1096 (2012).
[Crossref]

J. Mod. Opt. (1)

A. L. Cohen, “Anti-pinhole imaging,” J. Mod. Opt. 29, 63–67 (1982).

Meas. Sci. Technol. (1)

G. Buller and R. J. Collins, “Single-photon generation and detection,” Meas. Sci. Technol. 21, 012002 (2010).
[Crossref]

Nat. Commun. (2)

D. Shin, F. Xu, D. Venkatraman, R. Lussana, F. Villa, F. Zappa, V. K. Goyal, F. N. C. Wong, and J. H. Shapiro, “Photon-efficient imaging with a single-photon camera,” Nat. Commun. 7, 12046 (2016).
[Crossref] [PubMed]

A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012).
[Crossref] [PubMed]

Nat. Photonics (4)

G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, “Detection and tracking of moving objects hidden from view,” Nat. Photonics 10, 23–26 (2016).
[Crossref]

A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, “Controlling waves in space and time for imaging and focusing in complex media,” Nat. Photonics 6, 283–292 (2012).
[Crossref]

O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6, 549–553 (2012).
[Crossref]

R. H. Hadfield, “Single-photon detectors for optical quantum information applications,” Nat. Photonics 3, 696–705 (2009).
[Crossref]

Nature (1)

L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature 516, 74–77 (2014).
[Crossref] [PubMed]

Opt. Eng. (1)

M. Laurenzis and A. Velten, “Nonline-of-sight laser gated viewing of scattered photons,” Opt. Eng. 53, 023102 (2014).
[Crossref]

Opt. Express (3)

Opt. Lett. (1)

Physica D (1)

L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D 60, 259–268 (1992).
[Crossref]

Proc. SPIE (1)

O. Steinvall, M. Elmqvist, and H. Larsson, “See around the corner using active imaging,” Proc. SPIE 8186, 818605 (2011).
[Crossref]

Sci. Rep. (1)

J. Klein, C. Peters, J. Martín, M. Laurenzis, and M. B. Hullin, “Tracking objects outside the line of sight using 2D intensity images,” Sci. Rep. 6, 32491 (2016).
[Crossref] [PubMed]

Science (2)

B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D computational imaging with single-pixel detectors,” Science 340, 844–847 (2013).
[Crossref] [PubMed]

A. Kirmani, D. Venkatraman, D. Shin, A. Colaço, F. N. C. Wong, J. H. Shapiro, and V. K. Goyal, “First-photon imaging,” Science 343, 58–61 (2014).
[Crossref]

Other (5)

A. Kirmani, T. Hutchison, J. Davis, and R. Raskar, “Looking around the corner using transient imaging,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2009), pp. 159–166.

B. Chakraborty, Y. Li, J. J. Zhang, T. Trueblood, A. Papandreou-Suppappola, and D. Morrell, “Multipath exploitation with adaptive waveform design for tracking in urban terrain,” in Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (IEEE2010), pp. 3894–3897.

C. Thrampoulidis, G. Shulkind, F. Xu, W. T. Freeman, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell “Exploiting occlusion in non-line-of-sight active imaging,” arXiv:1711.06297 (2017).

F. Heide, L. Xiao, W. Heidrich, and M. B. Hullin, “Diffuse mirrors: 3D reconstruction from diffuse indirect illumination using inexpensive time-of-flight sensors,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2014), pp. 3222–3229.

A. Torralba and W. T. Freeman, “Accidental pinhole and pinspeck cameras: Revealing the scene outside the picture,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp.. 374–381.

Supplementary Material (2)

Visualization 1: This video provides a visualization of the laser’s raster scanning of the visible wall, and the occluder’s resulting shadow on the hidden wall.
Visualization 2: This video provides a visualization of the man-shaped pattern and the movement of the occluder shadow on the hidden wall.



Figures (13)

Fig. 1 (a) Experimental configuration. The goal is to reconstruct the reflectivity pattern on the hidden wall. A repetitively-pulsed laser source raster scans a diffuse (nearly Lambertian) visible wall. Photons striking the visible wall reflect toward the hidden wall, reflect at the hidden wall back toward the visible wall, and finally reflect at the visible wall toward the single-photon avalanche diode (SPAD), whose optics are configured to detect backscattered photons from a large patch on the visible wall. The counts are recorded by a single-photon counting module and then processed on a computer. When present, an occluder (circular black patch) obstructs some light-propagation paths from the visible wall to the hidden wall (casting a subtle shadow), and from the hidden wall to the visible wall. (b) Raw photon counts in the absence of an occluder. (c) Raw photon counts in the presence of the occluder. (d) Reconstructed reflectivity from the counts in (b). (e) Reconstructed reflectivity from the counts in (c).
Fig. 2 Top view of experimental setup and a three-bounce light trajectory of the form Λ → ij → x → c → Ω. The laser (Λ) illuminates the visible wall (ij) and is diffusively reflected (first bounce) toward the hidden wall (x), where it reflects (second bounce) back toward the visible wall. The third-bounce reflection at the visible wall (c) returns light in the direction of the detector (Ω). A circular occluder is placed between the visible and hidden walls, and partially obstructs light propagating between the visible and hidden walls.
Fig. 3 Role of the occluder’s shadow in NLoS imaging. The red-dashed square in the ground-truth image indicates the hidden-wall area that is scanned by the occluder’s shadow as the laser raster scans the visible wall. The blue-dashed circle in the ground-truth image indicates the approximate occluder-shadow area for one ij (see Visualization 1). (a) The man-shaped pattern, placed in the upper-left quadrant of the hidden wall, is completely scanned by the occluder’s shadow pattern as the laser scans the visible wall; with the aid of the occluder, the hidden pattern is successfully reconstructed from the raw counts. (b) The T-shaped pattern, placed in the upper-right quadrant of the hidden wall that is outside of the shadow area, yields raw photon counts that fail to reconstruct the pattern owing to the occluder’s shadow not scanning that quadrant. (c) Both the man-shaped pattern and the T-shaped pattern are placed on the hidden wall, with only the man-shaped pattern being scanned by the occluder’s shadow, so the man-shaped pattern is reconstructed successfully while the T-shaped pattern is not.
Fig. 4 Reconstruction results with different values of the regularization parameter λ. We demonstrate reconstruction according to Eq. (4) with varying values for the regularization parameter λ, as indicated on the bottom of the figures. Higher λ values promote reconstructions with larger regions of near-uniform reflectivity values, whereas smaller λ values produce more detailed but noisier images. In our reconstructions, we chose a λ value that does not severely distort the image; here the preferred value is λ = 0.75.
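For context, Eq. (4) balances a data-fit term against λ·pen(F). The sketch below shows one common penalty choice, an anisotropic total-variation term in the spirit of Rudin, Osher, and Fatemi; the exact penalty the authors use may differ, and `tv_penalty` and `objective` are illustrative names, not their code:

```python
import numpy as np

def tv_penalty(F):
    # Anisotropic total-variation penalty: sum of absolute
    # finite differences along both image axes. Larger values
    # indicate a less piecewise-smooth reflectivity image.
    return np.abs(np.diff(F, axis=0)).sum() + np.abs(np.diff(F, axis=1)).sum()

def objective(neg_log_likelihood, F, lam):
    # Penalized objective in the form of Eq. (4): a data-fit
    # term plus lam (the regularization weight) times the penalty.
    return neg_log_likelihood + lam * tv_penalty(F)
```

Increasing `lam` makes smooth, near-uniform images cheaper relative to detailed ones, which is the trade-off the figure illustrates.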
Fig. 5 Time-resolved SPAD measurements showing the gated timing window used for post-selecting third-bounce photon detections while suppressing first-bounce photon detections. The gate-off period covers detection times of first-bounce photons and the ∼6 ns duration of the gate-on period is long enough to capture all third-bounce photon detections. In our experiments, only the number of detected photons in gate-on windows were recorded to form the raw-count images.
Fig. 6 Experimental results on the recovery of different hidden-wall reflectivity patterns, (a)–(d). First row: ground truth patterns on the hidden wall; second row: raw photon counts for 100 × 100 raster-scanned laser positions; third row: reconstructions in the presence of the occluder, based on solving Eq. (4), showing that detailed scene features are successfully recovered.
Fig. 7 Root-mean-square error (RMSE) and reconstruction results (insets) with different numbers of detected photons per pixel (PPP). The RMSE of our binomial-likelihood method remains below 0.05 with >69 detected PPP, whereas the Gaussian-likelihood method employed in [20] requires at least ∼1100 detected PPP to achieve similar performance.
Fig. 8 Reflectivity reconstructions with different numbers of detected photons per pixel (PPP). We compare the binomial-likelihood algorithm (Eq. (4)) and the Gaussian-likelihood algorithm [20] for different numbers of average detected PPP, ranging from 17 to 3438 as indicated at the bottom of each panel. The photon efficiency of the binomial-likelihood method is far superior to that of the Gaussian-likelihood method, with the latter requiring at least ∼1100 PPP to achieve reconstructions comparable to those of the former with ∼69 PPP. In the low-photon detection regime, PPP < 276, the Gaussian-likelihood method fails to reconstruct the details of the reflectivity image. Here the regularization parameter is fixed at 0.75, which causes the slight difference between the binomial-likelihood and Gaussian-likelihood reconstructions at high PPP values.
Fig. 9 (a)–(c), Reconstructions of the Fig. 6(a) reflectivity pattern obtained using circular occluders with diameters of 15.8 cm, 6.8 cm and 4.4 cm. A small (large) occluder sharpens (blurs) the image. (d)–(f), Reconstructions of two-bar reflectivity patterns with bar separations of 2 cm, 4 cm and 8 cm that were obtained using a 6.8-cm-diameter circular occluder. Our system achieves 4 cm spatial resolution.
Fig. 10 Raw detected-count measurements with a 15.8-cm-diameter occluder placed at different positions. The real location (X, Y, Z) (cm) of the occluder is indicated on the top of each figure. In (a)–(c), we fixed the position of the occluder on the Z axis and shifted it along the X and Y axes: the center of the rings reveals the (X, Y) position of the occluder. In (d)–(f), we fixed the position of the occluder on the X and Y axes and shifted it along the Z axis: the size of the rings reveals the Z-axis position of the occluder. These preliminary measurements suggest that occluder position may be localized from raw-count data.
Fig. 11 Comparing the informativeness of occluded and unoccluded measurements. We numerically simulated the setup of Fig. 2 and evaluated the informativeness of the measurements with and without an occluder from the A matrix’s singular values {σ}. In our simulations, the laser illuminates a 50 × 50 grid on the visible wall, and the hidden wall is discretized to a 50 × 50 grid. The singular values of the corresponding 2500 × 2500 A matrix were calculated for an occluded setup (blue dashed curve) and an unoccluded setup (red solid curve). The singular values of the occluded A matrix are substantially higher than those of the unoccluded matrix, suggesting that measurements in the occluded setup will be much more informative.
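The informativeness comparison described in Fig. 11 can be reproduced in outline with a standard SVD. The sketch below is illustrative rather than the authors' simulation code; it contrasts a well-conditioned matrix with a nearly rank-deficient one:

```python
import numpy as np

def singular_value_profile(A):
    # Singular values of a forward-model matrix A, returned in
    # descending order. A slowly decaying profile indicates
    # better-conditioned, hence more informative, measurements.
    return np.linalg.svd(A, compute_uv=False)
```

For example, a spatially selective (occlusion-like) identity matrix keeps all singular values at 1, while an all-ones (fully mixing, unoccluded-like) matrix has a single nonzero singular value, so most of the scene is unrecoverable from it.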
Fig. 12 Near-Lambertian reflectance behavior of white poster-board visible wall. The blue (red) data points correspond to measurements made with the setup in the blue (red) inset: a laser illuminated the visible wall at normal incidence (20° offset from normal incidence), and a detector recorded the power reflected at different viewing angles. The green line is the theoretical cosine curve for a perfect Lambertian surface. We find that the visible wall has ∼80% reflectivity and is nearly Lambertian except for a small specular component when the viewing angle is perpendicular to the surface. We also performed this characterization for the patterns on the hidden wall, and found that the Lambertian property of those patterns was similar to that of the visible wall.
Fig. 13 Results of the long-acquisition-time background-light measurement used to calibrate Bij. The reflectivity pattern on the hidden wall was replaced with a black surface. A total of 35.6 million laser pulses were transmitted at each laser point ij on the 100 × 100 illumination grid, and the third-bounce counts were recorded by the SPAD. The nonuniformity is mainly due to scattering from the raster-scan galvo mirrors and to SPAD afterpulsing arising from detections of first-bounce photons.

Equations (16)


$$ Y_{ij} = K_p \sum_{k,l} A^{(ij)}_{kl} F_{kl}, $$
$$ P_0^{(ij)}(F) = \exp\!\left[-\eta\,(Y_{ij} + B_{ij})\right], $$
$$ \Pr(R_{ij}; F) = \binom{N}{R_{ij}} \left[1 - P_0^{(ij)}(F)\right]^{R_{ij}} \left[P_0^{(ij)}(F)\right]^{N - R_{ij}}. $$
$$ \hat{F} = \arg\min_{F :\, F_{kl} \ge 0} \left\{ -\log[\mathcal{L}(R; F)] + \lambda\,\mathrm{pen}(F) \right\}, $$
$$ \mathrm{RMSE}(\hat{F}, F) = \sqrt{\frac{1}{n^2} \sum_{k=1}^{n} \sum_{l=1}^{n} \left(F_{kl} - \hat{F}_{kl}\right)^2}, $$
$$ K_p \int \frac{f(x)\, G_{\Lambda, ij, x, c, \Omega}}{\|ij - x\|^2\, \|x - c\|^2\, \|c - \Omega\|^2}\, dx\, dc\, d\Omega, $$
$$ G_{\Lambda, ij, x, c, \Omega} \equiv \cos(\Lambda - ij,\, n_{ij}) \cos(x - ij,\, n_{ij}) \times \cos(x - ij,\, n_x) \cos(x - c,\, n_x) \cos(x - c,\, n_c) \cos(c - \Omega,\, n_c), $$
$$ Y_{ij} = K_p \int_{\mathcal{S}(ij, c)} dx \int_{\mathcal{C}} dc \int_{\mathcal{D}} d\Omega\, \frac{f(x)\, G_{\Lambda, ij, x, c, \Omega}}{\|ij - x\|^2\, \|x - c\|^2\, \|c - \Omega\|^2}, $$
$$ Y_{ij} = K_p \int_{\mathcal{S}} dx\, f(x) \int_{\mathcal{C}} dc \int_{\mathcal{D}} d\Omega\, \frac{\mathbf{1}_{\mathcal{S}(ij, c)}(x)\, G_{\Lambda, ij, x, c, \Omega}}{\|ij - x\|^2\, \|x - c\|^2\, \|c - \Omega\|^2} = K_p \int_{\mathcal{S}} dx\, f(x)\, A^{(ij)}(x), $$
$$ A^{(ij)}(x) \equiv \int_{\mathcal{C}} dc \int_{\mathcal{D}} d\Omega\, \frac{\mathbf{1}_{\mathcal{S}(ij, c)}(x)\, G_{\Lambda, ij, x, c, \Omega}}{\|ij - x\|^2\, \|x - c\|^2\, \|c - \Omega\|^2}. $$
$$ Y_{ij} = K_p \sum_{k,l} A^{(ij)}_{kl} F_{kl}. $$
$$ \Theta(x, y) = \begin{cases} 1, & \text{unobstructed line of sight between } x \text{ and } y, \\ 0, & \text{obstructed line of sight between } x \text{ and } y. \end{cases} $$
$$ \mathbf{y} = \mathbf{A}\mathbf{f}. $$
$$ P_0^{(ij)}(F) = \exp\!\left[-\eta\,(Y_{ij} + B_{ij})\right] \approx 1 - \eta\,(Y_{ij} + B_{ij}). $$
$$ \Pr(R_{ij}; F) = \binom{N}{R_{ij}} \left[1 - P_0^{(ij)}(F)\right]^{R_{ij}} \left[P_0^{(ij)}(F)\right]^{N - R_{ij}}. $$
$$ -\log[\mathcal{L}(R; F)] = -\log\Big( \prod_{i,j} \Pr(R_{ij}; F) \Big) = -\sum_{ij} \log\left[\Pr(R_{ij}; F)\right] = \sum_{ij} \Big\{ (N - R_{ij}) \Big[ \eta K_p \sum_{k,l} A^{(ij)}_{kl} F_{kl} \Big] - R_{ij} \log\Big[ \eta K_p \sum_{k,l} A^{(ij)}_{kl} F_{kl} + \eta B_{ij} \Big] \Big\}, $$
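As a concrete rendering of the discretized forward model, the linearized binomial negative log-likelihood, and the RMSE metric above, the following sketch may be useful. It is illustrative rather than the authors' implementation, and the array shapes and function names are assumptions:

```python
import numpy as np

def forward_model(A, F, K_p):
    # Y_ij = K_p * sum_{k,l} A^(ij)_{kl} F_{kl}
    # A: (num_scan_positions, num_pixels); F: hidden-wall
    # reflectivity image, flattened internally.
    return K_p * (A @ np.asarray(F).ravel())

def neg_log_likelihood(R, A, F, K_p, B, eta, N):
    # R_ij detections out of N pulses, with per-pulse detection
    # probability 1 - P0 ~= eta * (Y_ij + B_ij) in the low-flux
    # (linearized) regime; this matches the final expression of
    # the equations above up to additive constants in F.
    rate = eta * (forward_model(A, F, K_p) + np.asarray(B))
    return np.sum((N - R) * rate - R * np.log(rate))

def rmse(F_hat, F):
    # Root-mean-square error between reconstruction and truth.
    return np.sqrt(np.mean((np.asarray(F) - np.asarray(F_hat)) ** 2))
```

Minimizing `neg_log_likelihood` plus a λ-weighted penalty over nonnegative `F` is the reconstruction step of Eq. (4).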
