## Abstract

The ability to see around corners, i.e., recover details of a hidden scene from its reflections in the surrounding environment, is of considerable interest in a wide range of applications. However, the diffuse nature of light reflected from typical surfaces leads to mixing of spatial information in the collected light, precluding useful scene reconstruction. Here, we employ a computational imaging technique that opportunistically exploits the presence of occluding objects, which obstruct probe-light propagation in the hidden scene, to undo the mixing and greatly improve scene recovery. Importantly, our technique obviates the need for the ultrafast time-of-flight measurements employed by most previous approaches to hidden-scene imaging. Moreover, it does so in a photon-efficient manner (i.e., it only requires a small number of photon detections) based on an accurate forward model and a computational algorithm that, together, respect the physics of three-bounce light propagation and single-photon detection. Using our methodology, we demonstrate reconstruction of hidden-surface reflectivity patterns in a meter-scale environment from non-time-resolved measurements. Ultimately, our technique represents an instance of a rich and promising new imaging modality with important potential implications for imaging science.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction

In recent years, remarkable advances have been achieved in computational imaging, image processing and computer vision [1–4]. Whereas conventional imaging involves direct line-of-sight transport from a light source to a scene, and from the scene back to a camera sensor, the problem of imaging scenes that are hidden from the camera’s direct line of sight, referred to as seeing around corners or non-line-of-sight (NLoS) imaging, has attracted growing interest. Indeed, the ability to reconstruct hidden scenes has the potential to be transformative in important and diverse applications, including, e.g., medicine, transportation, manufacturing, scientific imaging, public safety, and security.

Techniques for NLoS imaging that have been recently demonstrated include time-gated viewing from specular reflections [5–8], wavefront shaping [9,10], and transient imaging, in which time-of-flight (ToF) measurements are collected [11–18]. ToF active imaging using short-duration laser pulses (the most commonly used approach) provides only indirect access to scene information, through detection of photons that have been diffusely reflected by intervening surfaces, which mixes the spatial information they carry. Such systems have used picosecond-resolution ToF measurements—as obtained from a streak camera [12,13] or a single-photon avalanche diode (SPAD) detector [16–18]—to recover hidden scenes. However, collecting such measurements involves complicated and costly apparatus [18]. Klein *et al.* have reported tracking NLoS objects using intensity images [19]; however, their tracking problem is parametric in nature, allowing them to retrieve object translation and rotation only in the case of known objects. In contrast, our focus is on a non-parametric setting, with the goal of retrieving the unknown reflectivity pattern on a hidden surface.

Recently, in [20], we proposed a new NLoS imaging framework that opportunistically exploits the presence of opaque occluders in the light propagation path within the hidden space to distinguish light emanating from different parts of the hidden scene (see Visualization 1). This framework was shown to recover spatial information otherwise destroyed by diffuse reflection, *without* reliance on ultrafast ToF measurements. The approach is reminiscent of pin-speck (or anti-pinhole) imaging [21,22], in which an occluder in the scene serves as a de facto lens that facilitates imaging. The focus of [20] was a theoretical study of the framework. The model developed there assumes additive signal-independent Gaussian noise, hence the reconstruction algorithm and the preliminary experiment reported in [20] are tailored to a Gaussian-likelihood method. This Gaussian-noise assumption, however, does not adequately represent shot-noise-limited operation, which prevails in the low-photon-count regime.

In this paper, we extend the applicability of occlusion-based NLoS imaging to operation in that low-photon-count regime. We experimentally demonstrate an imaging system with substantially higher photon efficiency than that reported in [20], performance that is crucial for fast and low-power NLoS imaging. To do so, we develop an accurate forward model and a photon-efficient computational algorithm based on a binomial-likelihood method that, together, respect the physics of three-bounce light propagation and SPAD-based photodetection. As a result, we achieve a 16× speedup in the data acquisition process, because information from 16× fewer photon detections than employed in [20] suffices to produce images of equal quality. Moreover, unlike [20], we report full details of our experiments that, in addition to the photon-efficiency demonstration, include investigations of issues—such as the effects of occluder size and the algorithm’s regularization parameter on scene reconstruction—that were only studied theoretically in [20].

## 2. Imaging scenario

Our system configuration is illustrated in Fig. 1(a) and a top view of the experimental setup is illustrated in Fig. 2. The objective is to reconstruct the unknown reflectivity pattern on the hidden wall. The visible wall is illuminated by a repetitively-pulsed laser that raster scans an *m* × *m* grid. The photons detected from illumination of a particular scan point have undergone three bounces: first, reflection off the visible wall in the direction of the hidden wall; second, reflection off the hidden wall in the direction of the visible wall, where the reflection is multiplicatively scaled by the reflectivity pattern of the hidden wall we seek to recover; and third, reflection off the visible wall in the direction of a SPAD. As shown in Fig. 2(a), the SPAD’s field of view is configured for the left side of the visible wall, to avoid the direct first bounce and to detect as many third-bounce photons as possible. We use a *single-pixel* SPAD instead of a normal charge-coupled device (CCD) camera, because of its single-photon sensitivity. This is necessary because the returned pulse energy after the three bounces is heavily attenuated (∼130–140 dB in our room-scale experiment), severely limiting the number of detected photons. Thus, the SPAD enables efficient NLoS imaging. We remark that although a SPAD is capable of providing time-stamped measurements, we *discard* the SPAD’s time-resolved information by integrating detections over a time-gating window to collect just an *m* × *m* matrix of the raw photon counts obtained from illumination of each laser grid point. To further clarify, we emphasize that a SPAD is not strictly necessary for our imaging method, as we show next that reconstruction is possible when we throw away the detected third-bounce photons’ time signatures. 
Although not demonstrated here, alternative high-sensitivity sensors with no or poor timing resolution—such as an intensified CCD or electron-multiplying CCD—can also be used in our experiment. We will investigate such modifications in future work.

We performed this experiment twice, first with no obstruction between the visible and the hidden walls, and then with a black circular occluder inserted between those walls to block some of the light propagating from the visible wall toward the hidden wall, and some of the light propagating from the hidden wall back toward the visible wall, as illustrated in Fig. 1(a). The corresponding matrices of raw photon counts are shown in Figs. 1(b) and 1(c). We derive an accurate forward model and solve the resulting inverse problem using a photon-efficient reconstruction algorithm that is tailored to the low-photon-count regime associated with three-bounce propagation (see below). Figs. 1(d) and 1(e) show that reconstruction of the hidden-wall reflectivity pattern failed when photon counts were collected without an occluder being present, but succeeded when they were collected in the presence of the occluder. In Figs. 3(a)–3(c), we present experimental results in which different patterns are placed on the hidden wall. These results demonstrate that obstructions in the light propagation path enable imaging from non-time-resolved photon counts. Indeed, as will be explained, occluders do so by increasing the informativeness of measurements made in their presence. Note that we have assumed knowledge of the occluder’s location. This information is easily obtained if the occluder is visible from the detector’s vantage point. Moreover, we have initial indication that location information for the occluder can be gleaned from raw-count data (see below).

## 3. Forward model

In this section, we present a ray-optics light propagation model (see details in Appendix A) that relates the unknown reflectivity on a Lambertian hidden wall to the raw photon counts for specified experimental parameters. The model accounts for: (i) third-bounce reflections involving the hidden wall; (ii) occlusions in the scene; (iii) a low photon-count operating regime; and (iv) wide field-of-view detection. For the derivation we assume that: the geometries of the hidden wall and occluder are known; the occluders are opaque (nonreflecting and nontransmitting); the visible wall is Lambertian with known reflectivity; and the background illumination reaching the detector is known.

The *m* × *m* illumination grid on the visible wall is indexed with (*i*, *j*). The hidden wall is discretized to *n* × *n* pixels indexed with (*k*, *l*). Let **F** be the hidden wall’s *reflectivity matrix*, with entry 0 ≤ *F _{kl}* ≤ 1 being the reflectivity value of the (*k*, *l*)th hidden-wall pixel. We use *Y _{ij}* to denote the average number of photons arriving at the detector from single-pulse illumination at grid point (*i*, *j*), and **Y** to denote the *m* × *m* matrix whose *ij*th entry is *Y _{ij}*. In the absence of background light in three-bounce NLoS imaging, *Y _{ij}* is linearly related to **F** as follows (Appendix A):

$$Y_{ij}=K_p\sum_{k=1}^{n}\sum_{l=1}^{n}A_{kl}^{(ij)}F_{kl},\tag{1}$$

where *K _{p}* is the average number of photons per transmitted laser pulse, and ${A}_{kl}^{(ij)}$, for fixed *i*, *j*, is the *kl*th entry of an *n* × *n* matrix **A**^{(ij)} that is determined by the physics of light propagation and the geometry of the surfaces involved. For 1 ≤ *i*, *j* ≤ *m*, Eq. (1) defines a linear system of *m*^{2} equations in the *n*^{2} unknowns **F** that we wish to retrieve.

In practice, **Y** is not directly available. Even were it available, the robustness of estimating **F** from **Y** would depend on the matrices **A**^{(ij)}. Indeed, high-fidelity inversion of Eq. (1) with a finite-precision calculation requires that the **A**^{(ij)} vary substantially with (*i*, *j*), i.e., that each laser illumination point retrieves a new informative projection of the unknowns. When the space between the visible and hidden walls is free of obstructions, however, the **A**^{(ij)} matrices vary only slightly and smoothly across different grid points (*i*, *j*) [Fig. 1(b)]. Hence inverting Eq. (1) results in poor reconstruction of the unknown reflectivity because the inversion is ill conditioned [Fig. 1(d)].

In contrast, when an occluder is present in the space between the visible and hidden walls, the matrices **A**^{(ij)} in Eq. (1) become much more diverse [Fig. 1(c)], enabling much better imaging of the hidden wall [Fig. 1(e)]. Intuitively, the occluder partially obstructs light propagation in the hidden space, precluding **Y** contributions from some hidden-scene patches, thus making some **A**^{(ij)} entries vanish. Moreover, different laser positions (*i*, *j*) and (*i′*, *j′*) may be blocked from illuminating different portions of the hidden wall (see
Visualization 1). Consequently, some **A**^{(ij)} entries that are zeros correspond to **A**^{(i′j′)} entries that are nonzero, and vice-versa, yielding the measurement diversity needed for a much better conditioned inversion of Eq. (1).
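The conditioning argument above can be illustrated numerically. The sketch below is our own deliberately simplified flatland model, not the paper’s full three-bounce geometry: scan points and hidden pixels lie on two parallel lines, only the **ℓ**_{ij} → **x** leg and its shadow are kept, and the occluder is an interval; all coordinates are hypothetical. Comparing the singular-value decay of the resulting forward matrices shows how the occluder improves invertibility.

```python
import numpy as np

# Flatland sketch (toy geometry, hypothetical numbers): visible wall along x = 0,
# hidden wall along x = 1, an interval occluder halfway between. Only the
# laser-spot -> hidden-pixel leg and its shadow are modeled.
m, n = 30, 30
laser = np.stack([np.zeros(m), np.linspace(0.0, 1.0, m)], axis=1)   # scan points
hidden = np.stack([np.ones(n), np.linspace(0.0, 1.0, n)], axis=1)   # hidden pixels
OCC_X, OCC_LO, OCC_HI = 0.5, 0.45, 0.62                             # occluder extent

def shadow(p, q):
    """1 if the segment p -> q misses the occluder, else 0 (a 1-D shadow test)."""
    t = (OCC_X - p[0]) / (q[0] - p[0])      # where the segment crosses x = OCC_X
    y = p[1] + t * (q[1] - p[1])
    return 0.0 if OCC_LO <= y <= OCC_HI else 1.0

def build_A(with_occluder):
    A = np.zeros((m, n))
    for i in range(m):
        for k in range(n):
            r2 = np.sum((laser[i] - hidden[k]) ** 2)   # inverse-square-law factor
            block = shadow(laser[i], hidden[k]) if with_occluder else 1.0
            A[i, k] = block / r2
    return A

ratios = {}
for occluded in (False, True):
    s = np.linalg.svd(build_A(occluded), compute_uv=False)
    ratios[occluded] = s[0] / s[19]         # decay over the first 20 singular values
    print(f"occluder={occluded}: sigma_1/sigma_20 = {ratios[occluded]:.2e}")
```

The unoccluded matrix, built from a smooth inverse-square kernel, exhibits rapidly decaying singular values; inserting the occluder makes the rows vary abruptly and flattens the decay, mirroring the measurement-diversity argument above.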

A photon-number-resolving SPAD [23] will produce a Poisson-distributed number of photon counts in response to an illumination pulse [24]. Currently available SPADs, however, are not photon-number resolving: after detecting one photon they suffer a dead time [23, 24] whose duration is longer, in our experiment, than the duration of light returned in a single illumination period. Furthermore, after three bounces, the probability of detecting a photon from a single pulse is very low. So, in this low-flux regime, the probability that the SPAD does not detect a photon from single-pulse illumination of the *ij*th grid point is:

$$P_0^{(ij)}(\mathbf{F})=e^{-\eta\left(Y_{ij}+B_{ij}\right)},\tag{2}$$

where *η* is the SPAD’s quantum efficiency, and *B _{ij}* is the background contribution to the light illuminating the SPAD. Defining *R _{ij}* to be the number of photons detected from illuminating that grid point with a sequence of *N* laser pulses, it follows that *R _{ij}* has a binomial distribution with success probability $1-{P}_{0}^{(ij)}(\mathbf{F})$, i.e., [25]

$$\text{Pr}(R_{ij};\mathbf{F})=\binom{N}{R_{ij}}\left[1-P_0^{(ij)}(\mathbf{F})\right]^{R_{ij}}\left[P_0^{(ij)}(\mathbf{F})\right]^{N-R_{ij}}.\tag{3}$$

## 4. Reconstruction algorithm

To reconstruct the hidden wall’s reflectivity matrix **F** from the *m* × *m* matrix, **R**, of photon counts, we make use of the forward model from Eqs. (1)–(3). In particular, we seek a matrix **F̂** that maximizes the likelihood $\mathcal{L}(\mathbf{R};\mathbf{F})\equiv {\prod}_{i,j}\text{Pr}\left({R}_{ij};\mathbf{F}\right)$ of **F** being the true reflectivity matrix, given that **R** is the observed photon-count matrix. Significantly, the negative log-likelihood function can be shown to be convex in **F**, and is thus easy to minimize. The optimization program is still convex—and still easily solved—after we impose reflectivity’s nonnegativity constraint *F _{kl}* ≥ 0, and an additive penalty pen(**F**) chosen to ensure spatial correlation between the reflectivity values of neighboring pixels while allowing abrupt reflectivity changes at the boundaries between multipixel regions. In summary, we reconstruct the reflectivity matrix as the solution **F̂** to the convex optimization program

$$\hat{\mathbf{F}}=\underset{\mathbf{F}:\,F_{kl}\ge 0}{\text{arg min}}\left[-\log\mathcal{L}(\mathbf{R};\mathbf{F})+\lambda\,\text{pen}(\mathbf{F})\right],\tag{4}$$

where the penalty’s weight is set by the regularization parameter *λ*. We used the total-variation (TV) semi-norm penalty function [26] and a specialized solver [27] to obtain **F̂** from Eq. (4).
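The structure of Eq. (4) can be illustrated on a miniature 1-D analogue. The sketch below is our own stand-in, not the specialized solver of [27]: it uses projected gradient descent with backtracking and a smoothed (pseudo-Huber) approximation of the TV penalty; the forward matrix, pulse counts, and regularization weight are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Miniature 1-D analogue of Eq. (4); all sizes and constants are hypothetical.
m2, n2 = 40, 16            # number of measurements, number of hidden pixels
N = 5_000_000              # laser pulses per illumination point
eta = 0.35                 # SPAD quantum efficiency
b = 2e-5                   # known per-pulse background level

# Occluder-like forward matrix: positive entries with scattered shadow zeros.
A = rng.uniform(0.5, 1.0, (m2, n2)) * (rng.random((m2, n2)) > 0.3) * 2e-5

f_true = np.zeros(n2)
f_true[5:11] = 0.8         # piecewise-constant hidden reflectivity

# Simulate the binomial measurements of Eq. (3).
p_det = 1.0 - np.exp(-eta * (A @ f_true + b))
R = rng.binomial(N, p_det)

lam, eps = 1.0, 1e-3       # TV weight; pseudo-Huber smoothing parameter

def objective(f):
    s = eta * (A @ f + b)
    nll = (-R * np.log(-np.expm1(-s)) + (N - R) * s).sum()
    return nll + lam * np.sqrt(np.diff(f) ** 2 + eps ** 2).sum()

def gradient(f):
    s = eta * (A @ f + b)
    ds = (N - R) - R * np.exp(-s) / (-np.expm1(-s))
    g = eta * (A.T @ ds)
    w = np.diff(f) / np.sqrt(np.diff(f) ** 2 + eps ** 2)
    g[:-1] -= lam * w
    g[1:] += lam * w
    return g

# Projected gradient descent with backtracking; projection enforces F_kl >= 0.
f_hat, t = np.full(n2, 0.5), 1.0
for _ in range(400):
    g = gradient(f_hat)
    while True:
        f_new = np.clip(f_hat - t * g, 0.0, None)
        if objective(f_new) <= objective(f_hat) or t < 1e-15:
            break
        t *= 0.5
    f_hat, t = f_new, 2.0 * t

rmse = np.sqrt(np.mean((f_hat - f_true) ** 2))
print(f"reconstruction RMSE = {rmse:.3f}")
```

Because the negative log-likelihood is convex and the penalty is smoothed, plain descent with backtracking converges on this toy problem; the paper’s 100 × 100 scale motivates the specialized solver instead.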

The regularization parameter *λ* determines the balance between the two optimization targets: decreasing the negative log-likelihood and promoting locally-smooth scenes with sharp boundaries. In Fig. 4, we demonstrate the effect of varying *λ* on the reconstructed reflectivity. In practice, we choose the regularization parameter to obtain reasonably smooth images that do not seem overly regularized.

## 5. Experiment

Figure 2 depicts the ∼1-m scale imaging scene in our experiment. For illumination, we used a repetitively-pulsed 640-nm laser (PicoQuant LDH-640B), with sub-ns pulses, 40 MHz repetition rate, and an average power of ∼8 mW. A two-axis galvo (Thorlabs GVS012) was utilized to raster scan the laser’s output over a grid of points (first bounce in Fig. 2) on a nearly-Lambertian visible wall (white poster board, see its characterization in Appendix B). Light reflected from the visible wall propagates to the hidden wall, where some is reflected back (second bounce) to the visible wall. Finally, some of the second-bounce light that is reflected from the visible wall (third bounce) is collected by a SPAD detector (MPD-PDM with quantum efficiency ∼0.35 at 640 nm). We placed an interference filter (Andover) centered at 640 nm with a 2 nm bandwidth in front of the SPAD to suppress background light. The occluder is a nonreflecting black circular patch. In the experiment, the two side walls inside the room were covered with black curtains so that they too are nonreflecting. Note that our forward model can easily take the side walls into consideration were they reflecting. During measurements, we turned off all ambient room light to minimize the background level.

The focus of the experiment is to utilize the collected third-bounce light to reconstruct the hidden-wall’s reflectivity pattern without use of ToF information. Therefore it is important to avoid detecting the first-bounce light, which is much stronger than the third-bounce light. We took two initial steps to minimize first-bounce photon detections. First, as shown in Fig. 2, we oriented the SPAD such that its field of view did not overlap the part of the visible wall that was scanned by the laser. Second, we inserted an opaque screen (not shown in Fig. 2) to block the direct line of sight between the illuminated part of the visible wall and the SPAD. In testing, however, we found that there was still a substantial number of photon detections from residual first-bounce light, which we could determine from their time delays relative to the laser pulses’ emission times. These detections were mostly due to laser light scattered from the two galvo mirrors that illuminated part of the visible wall within the SPAD’s field of view. So, because the visible wall is in direct line of sight of the imaging equipment—and hence its location can be easily and accurately estimated—we further suppressed first-bounce photon detections by the following *post-processing* procedure. We used the time-resolved (TR) information that is automatically captured by the SPAD to set a gated timing window that excludes first-bounce detections but whose duration is long enough to encompass all possible third-bounce detections, as indicated in the measurements shown in Fig. 5. As a result, *no* TR information related to the third-bounce photons is used in our measurements and scene reconstructions. In the future, with better galvo mirrors and a single-photon-sensitive CCD detector, we should be able to perform occlusion-based NLoS imaging with neither the need for nor the possibility of ToF-enabled suppression of first-bounce photon detections.
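The gating step can be sketched as follows, with entirely hypothetical timing numbers: photon time stamps (modulo the 25 ns period of the 40 MHz laser) are filtered to a window that excludes the early first-bounce cluster, and only the surviving count per scan point is retained.

```python
import numpy as np

rng = np.random.default_rng(1)
REP_PERIOD_NS = 25.0                  # 40 MHz repetition rate
GATE_NS = (6.0, 20.0)                 # window excluding first-bounce arrivals (hypothetical)

# Simulated arrival times (ns, modulo the pulse period) for one scan point:
# an early first-bounce cluster plus later third-bounce detections.
first_bounce = rng.normal(3.0, 0.3, size=500)
third_bounce = rng.normal(12.0, 2.0, size=80)
stamps = np.concatenate([first_bounce, third_bounce]) % REP_PERIOD_NS

# Gate, then discard all remaining timing information: keep only the count.
gated = stamps[(stamps >= GATE_NS[0]) & (stamps <= GATE_NS[1])]
raw_count = gated.size
print("raw (non-time-resolved) count:", raw_count)
```

After this step the measurement reduces to a single integer per grid point, which is exactly the non-time-resolved raw-count matrix used for reconstruction.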

## 6. Experimental results

We report experimental results obtained from a meter-scale environment in which the distance between the detector and the visible wall is ∼1.5 m and a circular occluder of diameter ∼6.8 cm is positioned roughly midway between the visible and hidden walls, which are separated by ∼1 m. A ∼0.4 m × 0.4 m reflectivity pattern was mounted on the upper-left quadrant of the ∼1 m × 1 m hidden wall to ensure that the pattern is properly scanned by the occluder’s shadow as the laser raster scans the visible wall (see Fig. 3 and
Visualization 2). We performed an initial calibration of the background levels {*B _{ij}*} that was then used for all subsequent experiments. We note that the need for background calibration can be avoided with better experimental equipment (see Appendix B). The occluder’s shape and position are assumed to be known for the purpose of scene reconstruction. From the known geometry, the matrix **A**^{(ij)} can be determined. Finally, we chose *m* = 100 and *n* = 100 for our measurements.

First, we validate our occluder-assisted NLoS imaging method by reconstructing different reflectivity patterns on the hidden wall. These results are summarized in Figs. 6(a)–6(d). Four reflectivity patterns were placed on the hidden wall, as shown in the first row of Figs. 6(a)–6(d). The laser’s dwell time at each raster-scanned point was set so that *N* = 7.12 × 10^{5} pulses were sent, resulting in ∼276 detected photons per pixel (PPP) on average. For each reflectivity pattern, a matrix of 100 × 100 raw counts was collected, as given in the middle row of Figs. 6(a)–6(d). The reflectivity patterns on the hidden wall were then reconstructed using our algorithm for solving Eq. (4), successfully revealing their fine details, as seen in the bottom row of Figs. 6(a)–6(d).

To quantify the photon efficiency and fidelity of our method, we varied the dwell time per laser illumination point (which determines the overall acquisition time) and tracked reconstruction performance as a function of the empirical average PPP, as shown in Figs. 7 and 8. We measure the reconstruction fidelity by the root-mean-square error (RMSE) in the reconstructed reflectivity **F̂**,

$$\text{RMSE}=\sqrt{\frac{1}{n^{2}}\sum_{k=1}^{n}\sum_{l=1}^{n}\left(\hat{F}_{kl}-F_{kl}\right)^{2}},\tag{5}$$

where **F** is the true reflectivity pattern as determined from measurements in the high photon-count limit. It is evident from Fig. 7 that reconstruction fidelity for our binomial-distribution-based likelihood method does not degrade much (remains below 0.05) as the average PPP decreases from ∼1100 to ∼100. Figure 7 also shows RMSE for the Gaussian-distribution-based likelihood method employed in [20]. We see that the binomial-likelihood method’s photon efficiency is substantially better than that of the Gaussian-likelihood method: the latter requires at least ∼1100 detected PPP to achieve a fidelity similar to what the former realized with only ∼69 detected PPP. This behavior is mainly due to the mismatched noise model in the standard Gaussian-likelihood method, which presumes the noise to be additive, signal independent, and Gaussian distributed, whereas in the low photon-count regime without photon-number resolution it is really signal dependent and binomial distributed.
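Computed directly from its definition, the RMSE metric is a one-liner; the matrices below are placeholders.

```python
import numpy as np

def rmse(F_hat, F_true):
    """Root-mean-square error between reconstructed and reference reflectivity."""
    return np.sqrt(np.mean((F_hat - F_true) ** 2))

# Placeholder matrices: a uniform 0.5 pattern and a reconstruction off by 0.02.
F_true = np.full((4, 4), 0.5)
F_hat = F_true + 0.02
err = rmse(F_hat, F_true)
print(f"RMSE = {err:.3f}")
```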

Finally, we quantify the effect of occluder size [Figs. 9(a)–9(c)] and the limits of achievable spatial resolution [Figs. 9(d)–9(f)]. In Figs. 9(a)–(c), we used our system to image the reflectivity pattern of Fig. 6(a) using circular occluders, whose diameters ranged from 15.8 cm to 4.4 cm, while keeping other experimental parameters unchanged. The results show that a small (large) occluder sharpens (blurs) the image, similar to conventional pinhole imaging. In Figs. 9(d)–9(f), we fixed the diameter of the circular occluder at 6.8 cm and reconstructed a hidden-wall reflectivity pattern consisting of two bars with varying separation. With this occluder we see that our system provides ∼4 cm spatial resolution. Furthermore, the distance between the occluder and hidden wall will also affect the performance of the reconstruction. As shown in Figs. 3(a)–3(c), when the distance between the occluder and the hidden wall decreases (increases) its shadow’s size and field of view will decrease (increase), hence the resulting reconstruction will have better (worse) spatial resolution but smaller (larger) field of view. The distance between the visible and hidden walls also affects the reconstruction. Decreasing (increasing) the distance between the two walls improves (degrades) the conditioning of the forward model’s **A** matrix, which improves (degrades) scene reconstruction. See Sec. IV.B in ref. [20] for more about this point.

## 7. Discussion

We have assumed throughout that the occluder’s location was known and that it was nonreflecting. These assumptions may be relaxed. In particular, the location of a nonreflecting occluder may be obtained using a blind deconvolution method [20], in which the occluder and the scene hidden behind it are reconstructed jointly. The viability of this approach is suggested by the fact that the occluder can be localized from the raw counts, as shown in Fig. 10. Moreover, if the occluder has nonzero reflectivity, its contribution to the raw photon counts can be modeled using the principles employed in our forward model and incorporated into the blind deconvolution procedure.

Our work can be improved in the following directions. First, the raster-scanning system can be replaced by a non-scanning laser together with a SPAD camera [28] or an intensified CCD for measurement. Such a system should be capable of tracking the position of a moving target in the hidden space [16]. Second, the experiment can be operated at an appropriate wavelength outside of the visible range, such as 1550 nm, in order to perform NLoS imaging in the presence of ambient light. Finally, one can combine ToF measurements [18] with our approach to obviate the need for the prior information about the occluder, thus providing a full reconstruction of the hidden space.

In conclusion, we have demonstrated a framework for photon-efficient, occluder-facilitated NLoS imaging. Our results may ultimately lead to new imaging methodologies capable of opportunistically exploiting diverse features of the environment—including, but not limited to, simple occluders—and thus pave the way to NLoS imaging in a wide variety of applications.

## Appendices

## A. Theoretical details

## Light propagation model

Here we provide details for the forward model in Eq. (1). The ray-optics propagation model we use for third-bounce light is that from [20], which we present in detail for ease of reference. Unlike [20], which assumes additive, signal-independent, Gaussian noise, our forward model accurately captures the noise statistics for SPAD detection in the low-photon-count regime.

Light propagates from the laser, located at position **Λ**, until it reaches the detector, located at position **Ω**, while accounting for a three-bounce propagation path. Our goal is to reconstruct the reflectivity function *f*(**x**), for **x** ∈ *𝒮*, where *𝒮* is a two-dimensional parameterization of the hidden wall.

Figure 2 illustrates a three-bounce trajectory of the form **Λ** → **ℓ**_{ij} → **x** → **c** → **Ω**, where **ℓ**_{ij} is the *ij*th position in the laser’s illumination grid, **x** is a point on the hidden wall, and **c** is a point on the visible wall that is in the SPAD’s field of view. For single-pulse illumination of **ℓ**_{ij}, the average number of photons following this trajectory that arrive at the SPAD is

$$N_{ij}(\mathbf{x},\mathbf{c})=K_p\,f(\mathbf{x})\,\frac{G_{\mathbf{\Lambda},\boldsymbol{\ell}_{ij},\mathbf{x},\mathbf{c},\mathbf{\Omega}}}{\|\boldsymbol{\ell}_{ij}-\mathbf{x}\|^{2}\,\|\mathbf{x}-\mathbf{c}\|^{2}\,\|\mathbf{c}-\mathbf{\Omega}\|^{2}}\,\mathrm{d}\mathbf{x}\,\mathrm{d}\mathbf{c}\,\mathrm{d}\mathbf{\Omega},\tag{6}$$

where *K _{p}* is the average number of photons per pulse emitted by the laser, and d**x**, d**c**, d**Ω** are differential areas. This expression accounts for the inverse-square-law losses experienced in free-space light propagation from **ℓ**_{ij} to **x**, from **x** to **c**, and from **c** to **Ω**, as well as the linear scaling by *f*(**x**) that results from reflection at **x**. The geometric factor *G*_{Λ,ℓij,x,c,Ω} combines the Lambertian bidirectional reflectance distribution functions (BRDFs) associated with the diffuse reflections at the visible wall and the hidden wall, and is given by

$$G_{\mathbf{\Lambda},\boldsymbol{\ell}_{ij},\mathbf{x},\mathbf{c},\mathbf{\Omega}}=\frac{1}{\pi^{3}}\cos(\mathbf{n}_{\ell_{ij}},\mathbf{\Lambda}-\boldsymbol{\ell}_{ij})\cos(\mathbf{n}_{\ell_{ij}},\mathbf{x}-\boldsymbol{\ell}_{ij})\cos(\mathbf{n}_{\mathbf{x}},\boldsymbol{\ell}_{ij}-\mathbf{x})\cos(\mathbf{n}_{\mathbf{x}},\mathbf{c}-\mathbf{x})\cos(\mathbf{n}_{\mathbf{c}},\mathbf{x}-\mathbf{c})\cos(\mathbf{n}_{\mathbf{c}},\mathbf{\Omega}-\mathbf{c}),\tag{7}$$

where **n**_{ℓij}, **n**_{x}, **n**_{c} are the surface normals at **ℓ**_{ij}, **x**, **c**, respectively, and cos(**a**, **b**) is the cosine of the angle between the vectors **a** and **b**.

For single-pulse illumination of **ℓ**_{ij}, we use *Y _{ij}* to denote the average number of photons arriving at the detector from three-bounce trajectories. Deriving an expression for *Y _{ij}* entails summation over all such paths. In particular, this means summing over: (i) all **x** ∈ *𝒮*(**ℓ**_{ij}, **c**), where *𝒮*(**ℓ**_{ij}, **c**) is the section of the hidden wall *𝒮* that has an unoccluded line of sight to both **ℓ**_{ij} and **c**; (ii) all **c** ∈ *𝒞*, where *𝒞* is a parameterization of the section of the visible wall that is in the SPAD’s field of view; and (iii) all points in *𝒟*, the SPAD detector’s photosensitive region. With these definitions we then have:

$$Y_{ij}=\int_{\mathcal{S}}\int_{\mathcal{C}}\int_{\mathcal{D}}\mathbb{1}_{\mathcal{S}(\boldsymbol{\ell}_{ij},\mathbf{c})}(\mathbf{x})\,K_p\,f(\mathbf{x})\,\frac{G_{\mathbf{\Lambda},\boldsymbol{\ell}_{ij},\mathbf{x},\mathbf{c},\mathbf{\Omega}}}{\|\boldsymbol{\ell}_{ij}-\mathbf{x}\|^{2}\|\mathbf{x}-\mathbf{c}\|^{2}\|\mathbf{c}-\mathbf{\Omega}\|^{2}}\,\mathrm{d}\mathbf{\Omega}\,\mathrm{d}\mathbf{c}\,\mathrm{d}\mathbf{x}\tag{8}$$

$$Y_{ij}=K_p\int_{\mathcal{S}}A^{(ij)}(\mathbf{x})\,f(\mathbf{x})\,\mathrm{d}\mathbf{x},\tag{9}$$

where $\mathbb{1}_{\{\mathbf{x}'\}}(\mathbf{x})$ is the indicator function (i.e., it equals 1 if and only if **x** ∈ {**x′**} and is 0 otherwise), and for fixed *i*, *j* we have defined

$$A^{(ij)}(\mathbf{x})=\int_{\mathcal{C}}\int_{\mathcal{D}}\mathbb{1}_{\mathcal{S}(\boldsymbol{\ell}_{ij},\mathbf{c})}(\mathbf{x})\,\frac{G_{\mathbf{\Lambda},\boldsymbol{\ell}_{ij},\mathbf{x},\mathbf{c},\mathbf{\Omega}}}{\|\boldsymbol{\ell}_{ij}-\mathbf{x}\|^{2}\|\mathbf{x}-\mathbf{c}\|^{2}\|\mathbf{c}-\mathbf{\Omega}\|^{2}}\,\mathrm{d}\mathbf{\Omega}\,\mathrm{d}\mathbf{c}\tag{10}$$

for all **x**. In what follows, it will be convenient to discretize the coordinate system on the hidden wall *𝒮* by introducing an *n* × *n* grid indexed by (*k*, *l*). We then have that *A*^{(ij)}(**x**) becomes ${A}_{kl}^{(ij)}$ and *f*(**x**) becomes *F _{kl}*. Making these substitutions in (9) we obtain the discrete version of the forward model that appeared in Eq. (1):

$$Y_{ij}=K_p\sum_{k=1}^{n}\sum_{l=1}^{n}A_{kl}^{(ij)}F_{kl}.\tag{11}$$

## Shadow function

Equations (10) and (11) show that the presence of an occluder only affects *Y _{ij}* through its impact on *𝒮*(**ℓ**_{ij}, **c**), i.e., the patch on the hidden wall that has unobstructed lines of sight to both **ℓ**_{ij} and **c**. To better understand this connection between the occluder and *𝒮*(**ℓ**_{ij}, **c**), we introduce a binary *shadow function* Θ(**x**, **y**) that indicates whether point **x** on the hidden wall and point **y** on the visible wall are *visible* to each other:

$$\Theta(\mathbf{x},\mathbf{y})=\begin{cases}1,&\text{if the line segment connecting }\mathbf{x}\text{ and }\mathbf{y}\text{ does not intersect the occluder,}\\ 0,&\text{otherwise.}\end{cases}\tag{12}$$

In terms of the shadow function, *𝒮*(**ℓ**_{ij}, **c**) = {**x** ∈ *𝒮* : Θ(**x**, **ℓ**_{ij})Θ(**x**, **c**) = 1}, i.e., it is the subset of hidden-wall positions in *𝒮* that satisfy both Θ(**x**, **ℓ**_{ij}) = 1 and Θ(**x**, **c**) = 1. Note that *𝒮*(**ℓ**_{ij}, **c**) and *𝒮*(**ℓ**_{i′j′}, **c**) differ on hidden-wall patches for which the occluder blocks light from **ℓ**_{ij} but not from **ℓ**_{i′j′}, or vice versa.

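A minimal sketch of the shadow function for a circular occluder, assuming a disk specified by a center, normal, and radius (coordinates hypothetical, meter scale): Θ(**x**, **y**) tests whether the segment from **x** to **y** intersects the disk.

```python
import numpy as np

# Disk occluder (center, unit normal, radius); values hypothetical, ~meter scale.
C = np.array([0.0, 0.5, 0.5])
N_HAT = np.array([0.0, 1.0, 0.0])
RADIUS = 0.034   # ~6.8 cm diameter, as in the experiment

def theta(x, y):
    """Shadow function: 1 if the segment x -> y misses the disk occluder, else 0."""
    d = y - x
    denom = d @ N_HAT
    if abs(denom) < 1e-12:          # segment parallel to the disk's plane
        return 1
    t = ((C - x) @ N_HAT) / denom   # parameter of the plane crossing
    if not 0.0 <= t <= 1.0:
        return 1
    hit = x + t * d
    return 0 if np.linalg.norm(hit - C) <= RADIUS else 1

x_hidden = np.array([0.0, 1.0, 0.5])    # point on the hidden wall (y = 1 plane)
l_blocked = np.array([0.0, 0.0, 0.5])   # visible-wall point directly behind the disk
l_clear = np.array([0.6, 0.0, 0.5])     # visible-wall point off to the side
print(theta(x_hidden, l_blocked), theta(x_hidden, l_clear))  # prints: 0 1
```

Evaluating Θ for every (hidden-pixel, wall-point) pair is exactly what populates the indicator terms inside the **A**^{(ij)} entries.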
## Informative measurements

Our experiment raster scans the grid points {**ℓ**_{ij}} on the visible wall and detects third-bounce photons reflected from a large portion of that wall. The informativeness of these measurements stems from the diversity of the coefficients ${A}_{kl}^{(ij)}$. In the absence of an occluder, *𝒮*(**ℓ**_{ij}, **c**) = *𝒮* for all **ℓ**_{ij} and **c**. From (10) we then see that the dependence of ${A}_{kl}^{(ij)}$ on *i*, *j* originates from the product of two smoothly-varying functions—the inverse-square-law term ‖**ℓ**_{ij} − **x**_{kl}‖^{−2} and the geometric function *G*_{Λ,ℓij,xkl,c,Ω}—that yield smooth variations in ${A}_{kl}^{(ij)}$ as (*i*, *j*) changes. In the presence of an occluder, however, the impact of nontrivial shadow functions in determining *𝒮*(**ℓ**_{ij}, **c**) makes ${A}_{kl}^{(ij)}$ vary more abruptly with (*i*, *j*), greatly increasing the informativeness of the measurements.

To demonstrate this effect, we rearrange {*Y _{ij}*} as an *m*^{2}-dimensional column vector **y**, {*F _{kl}*} as an *n*^{2}-dimensional column vector **f**, and {**A**^{(ij)}} as an *m*^{2} × *n*^{2}-dimensional matrix **A**, such that Eq. (1) for 1 ≤ *i*, *j* ≤ *m* gets combined into **y** = *K _{p}***Af**. The informativeness of the measurements can then be assessed from the singular-value spectrum of **A**. Toward that end, Fig. 11 shows the singular values of **A** for two experimental setups. The first setup corresponds to an unoccluded scene, whereas the second setup corresponds to an occluded scene, in which a black circular patch has been inserted between the visible and hidden walls. It is evident from these singular values that the occluded measurements are substantially more informative, suggesting that the presence of the occluder will enable higher-fidelity reconstruction of the hidden-wall’s reflectivity pattern.

## Measurement statistics

The laser illuminates position $\boldsymbol{\ell}_{ij}$ with $N$ pulses before it addresses the next grid point on the visible wall. Each pulse that illuminates $\boldsymbol{\ell}_{ij}$ results in an average of $Y_{ij}$ third-bounce photons arriving at the detector's location $\mathbf{\Omega}$. In this *low-flux* regime, the number of detections registered by a photon-number-resolving detector from illumination of $\boldsymbol{\ell}_{ij}$ by a single pulse is Poisson distributed, with mean $\eta(Y_{ij}+B_{ij})$, where $\eta$ is the detector's quantum efficiency and $B_{ij}$ is the average number of background-light photons arriving during a single-pulse measurement interval (see details below). A SPAD detector, however, is not number resolving; it suffers a dead time after making a single detection that, for our experiment, precludes more than one detection in a single-pulse measurement interval. In this case, each optical pulse can yield either a 0 count or a 1 count, and these events occur with probabilities ${P}_{0}^{(ij)}(\mathbf{F})=\exp[-\eta(Y_{ij}+B_{ij})]$ and $1-{P}_{0}^{(ij)}(\mathbf{F})$, respectively. It is operation in the low-flux regime, $\eta Y_{ij}\ll 1$, together with the pre-detection optical filtering used to ensure that $\eta B_{ij}\ll 1$, that prevents SPAD counts from occurring in every single-pulse measurement interval. The $\mathbf{F}$ dependence of ${P}_{0}^{(ij)}(\mathbf{F})$ arises from the $Y_{ij}$ term, see Eq. (1).

The statistical independence of the photon counts from different laser pulses now makes $R_{ij}$, the total photon count from the $N$ pulses that illuminate $\boldsymbol{\ell}_{ij}$, a binomial random variable with success probability $1-{P}_{0}^{(ij)}(\mathbf{F})$, i.e., [25]

$$\mathrm{Pr}(R_{ij}=r)=\binom{N}{r}{\left[1-{P}_{0}^{(ij)}(\mathbf{F})\right]}^{r}{\left[{P}_{0}^{(ij)}(\mathbf{F})\right]}^{N-r},\quad r=0,1,\ldots,N.$$

Using the statistical independence of the counts $\{R_{ij}\}$ given $\mathbf{F}$, we get the following negative log-likelihood function for the raw count matrix $\mathbf{R}$ given the reflectivity matrix $\mathbf{F}$:

$$\mathcal{L}(\mathbf{F};\mathbf{R})=-\sum_{i,j}\left\{R_{ij}\ln\!\left[1-{P}_{0}^{(ij)}(\mathbf{F})\right]+(N-R_{ij})\ln{P}_{0}^{(ij)}(\mathbf{F})\right\}.$$
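As a sanity check on this measurement model, the per-pulse zero-count probability, the binomial count statistics, and the resulting negative log-likelihood can be simulated directly. The sketch below uses hypothetical values for $\eta$, $N$, $Y_{ij}$, and $B_{ij}$, not the calibrated experimental ones:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical operating parameters (illustrative, not the calibrated values).
eta, N = 0.3, 10_000                  # quantum efficiency, pulses per grid point
Y = rng.uniform(0.0, 1e-3, size=25)   # mean third-bounce photons per pulse
B = 1e-4                              # mean background photons per pulse

# Per-pulse zero-count probability for Poisson arrivals at a dead-time-limited SPAD.
P0 = np.exp(-eta * (Y + B))

# Total counts over N pulses are binomial with success probability 1 - P0.
R = rng.binomial(N, 1.0 - P0)

def neg_log_likelihood(Y_hat):
    """Binomial negative log-likelihood of counts R for a candidate signal Y_hat."""
    P0_hat = np.exp(-eta * (Y_hat + B))
    return -np.sum(R * np.log1p(-P0_hat) + (N - R) * np.log(P0_hat))

# The NLL should favor the true signal level over a grossly wrong one.
print(neg_log_likelihood(Y), neg_log_likelihood(10 * Y))
```

Minimizing this negative log-likelihood over the hidden-wall reflectivities, with a suitable regularizer, is what yields the reconstructions reported in the main text.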

## B. Experimental details

## Visible wall characterization

We used a white poster board as a near-Lambertian reflecting surface to serve as the visible wall in our NLoS imaging experiment of Fig. 2. We used a 635-nm laser to illuminate the white board at two different incident angles and measured its reflected power at various viewing angles. The results are displayed in Fig. 12, showing that the white poster board is indeed nearly Lambertian.
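For a Lambertian surface viewed by a fixed collection aperture that captures the whole illuminated spot, the collected power falls off as the cosine of the viewing angle, so the check in Fig. 12 amounts to a cosine fit. A minimal least-squares sketch of that test, using illustrative synthetic values rather than the measured data of Fig. 12:

```python
import numpy as np

# Illustrative goniometer-style readings (synthetic, not the Fig. 12 data):
# viewing angles and normalized collected powers with a small perturbation
# standing in for measurement error.
theta = np.radians([0, 15, 30, 45, 60, 75])
power = np.cos(theta) * (1 + 0.02 * np.sin(7 * theta))

# Least-squares fit of P = a * cos(theta); a near 1 with small residuals
# indicates near-Lambertian behavior.
a = np.sum(power * np.cos(theta)) / np.sum(np.cos(theta) ** 2)
residual = np.max(np.abs(power - a * np.cos(theta)))
print(a, residual)
```

Repeating this fit for each incident angle, as in Fig. 12, verifies that the reflected radiance is essentially independent of illumination direction as well.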

## Background light

In Fig. 13, we show results of background-light detection over a long data-acquisition time that we used to calibrate $B_{ij}$, the average number of background photons arriving at the detector in a single-pulse measurement interval. For this measurement, the reflectivity pattern on the hidden wall was replaced with a black surface, a total of $N = 3.56 \times 10^{7}$ laser pulses were transmitted at each laser point $\boldsymbol{\ell}_{ij}$, and the third-bounce photons were detected by the SPAD. Note that, once performed, this calibration applies to all subsequent measurements: in post-processing, we scale these background-noise counts according to the dwell time used. The nonuniformity of the background counts is mainly due to scattering from the raster-scan galvo mirrors and to SPAD afterpulsing originating from detections of first-bounce photons. Galvo-related background counts could be avoided with better scanning mirrors.
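The dwell-time rescaling used in post-processing is a simple proportional correction. A minimal sketch, with illustrative count values rather than the calibration data of Fig. 13:

```python
# Hypothetical rescaling of calibrated background counts to a new dwell time.
N_CAL = 3.56e7  # pulses per laser point during calibration (from the text)

def scale_background(counts, n_pulses_used, n_pulses_cal=N_CAL):
    """Scale per-point calibration counts to the number of pulses actually used."""
    return [c * n_pulses_used / n_pulses_cal for c in counts]

# Illustrative raw background counts per laser point from the calibration run,
# rescaled to a measurement that uses 1e6 pulses per point.
counts_cal = [142, 98, 301, 55]
print(scale_background(counts_cal, 1e6))
```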

## Funding

Defense Advanced Research Projects Agency (DARPA) REVEAL Program Contract No. HR0011-16-C-0030.

## Acknowledgments

The authors thank Changchen Chen for assistance with figure preparation, Connor Henley for the measurements reported in Fig. 11, and William T. Freeman and Vivek K. Goyal for helpful discussions.

## References and links

**1. **B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D computational imaging with single-pixel detectors,” Science **340**, 844–847 (2013). [CrossRef] [PubMed]

**2. **A. Kirmani, D. Venkatraman, D. Shin, A. Colaço, F. N. C. Wong, J. H. Shapiro, and V. K. Goyal, “First-photon imaging,” Science **343**, 58–61 (2014). [CrossRef]

**3. **L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature **516**, 74–77 (2014). [CrossRef] [PubMed]

**4. **A. M. Pawlikowska, A. Halimi, R. A. Lamb, and G. S. Buller, “Single-photon three-dimensional imaging at up to 10 kilometers range,” Opt. Express **25**, 11919–11931 (2017). [CrossRef] [PubMed]

**5. **E. Repasi, P. Lutzmann, O. Steinvall, M. Elmqvist, B. Göhler, and G. Anstett, “Advanced short-wavelength infrared range-gated imaging for ground applications in monostatic and bistatic configurations,” Appl. Opt. **48**, 5956–5969 (2009). [CrossRef] [PubMed]

**6. **A. Sume, M. Gustafsson, M. Herberthson, A. Janis, S. Nilsson, J. Rahm, and A. Orbom, “Radar detection of moving targets behind corners,” IEEE Trans. Geosci. Remote Sens. **49**, 2259–2267 (2011). [CrossRef]

**7. **B. Chakraborty, Y. Li, J. J. Zhang, T. Trueblood, A. Papandreou-Suppappola, and D. Morrell, “Multipath exploitation with adaptive waveform design for tracking in urban terrain,” in *Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing* (IEEE2010), pp. 3894–3897.

**8. **O. Steinvall, M. Elmqvist, and H. Larsson, “See around the corner using active imaging,” Proc. SPIE **8186**, 818605 (2011). [CrossRef]

**9. **A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, “Controlling waves in space and time for imaging and focusing in complex media,” Nat. Photonics **6**, 283–292 (2012). [CrossRef]

**10. **O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics **6**, 549–553 (2012). [CrossRef]

**11. **A. Kirmani, T. Hutchison, J. Davis, and R. Raskar, “Looking around the corner using transient imaging,” in *Proceedings of IEEE International Conference on Computer Vision* (IEEE, 2009), pp. 159–166.

**12. **A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. **3**, 745 (2012). [CrossRef] [PubMed]

**13. **O. Gupta, T. Willwacher, A. Velten, A. Veeraraghavan, and R. Raskar, “Reconstruction of hidden 3D shapes using diffuse reflections,” Opt. Express **20**, 19096–19108 (2012). [CrossRef] [PubMed]

**14. **F. Heide, L. Xiao, W. Heidrich, and M. B. Hullin, “Diffuse mirrors: 3D reconstruction from diffuse indirect illumination using inexpensive time-of-flight sensors,” in *Proceedings of IEEE Conference on Computer Vision and Pattern Recognition* (IEEE, 2014), pp. 3222–3229.

**15. **M. Laurenzis and A. Velten, “Nonline-of-sight laser gated viewing of scattered photons,” Opt. Eng. **53**, 023102 (2014). [CrossRef]

**16. **G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, “Detection and tracking of moving objects hidden from view,” Nat. Photonics **10**, 23–26 (2016). [CrossRef]

**17. **M. Laurenzis, J. Klein, E. Bacher, and N. Metzger, “Multiple-return single-photon counting of light in flight and sensing of non-line-of-sight objects at shortwave infrared wavelengths,” Opt. Lett. **40**, 4815–4818 (2015). [CrossRef] [PubMed]

**18. **M. Buttafava, J. Zeman, A. Tosi, K. Eliceiri, and A. Velten, “Non-line-of-sight imaging using a time-gated single photon avalanche diode,” Opt. Express **23**, 20997–21011 (2015). [CrossRef] [PubMed]

**19. **J. Klein, C. Peters, J. Martín, M. Laurenzis, and M. B. Hullin, “Tracking objects outside the line of sight using 2D intensity images,” Sci. Rep. **6**, 32491 (2016). [CrossRef] [PubMed]

**20. **C. Thrampoulidis, G. Shulkind, F. Xu, W. T. Freeman, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell, “Exploiting occlusion in non-line-of-sight active imaging,” arXiv:1711.06297 (2017).

**21. **A. L. Cohen, “Anti-pinhole imaging,” J. Mod. Opt. **29**, 63–67 (1982).

**22. **A. Torralba and W. T. Freeman, “Accidental pinhole and pinspeck cameras: Revealing the scene outside the picture,” in *Proceedings of IEEE Conference on Computer Vision and Pattern Recognition* (IEEE, 2012), pp. 374–381.

**23. **R. H. Hadfield, “Single-photon detectors for optical quantum information applications,” Nat. Photonics **3**, 696–705 (2009). [CrossRef]

**24. **G. Buller and R. J. Collins, “Single-photon generation and detection,” Meas. Sci. Technol. **21**, 012002 (2010). [CrossRef]

**25. **D. Shin, A. Kirmani, V. K. Goyal, and J. H. Shapiro, “Photon-efficient computational 3-D and reflectivity imaging with single-photon detectors,” IEEE Trans. Comput. Imaging **1**, 112–125 (2015). [CrossRef]

**26. **L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D **60**, 259–268 (1992). [CrossRef]

**27. **Z. T. Harmany, R. F. Marcia, and R. M. Willett, “This is SPIRAL-TAP: Sparse Poisson intensity reconstruction algorithms—theory and practice,” IEEE Trans. Image Process. **21**, 1084–1096 (2012). [CrossRef]

**28. **D. Shin, F. Xu, D. Venkatraman, R. Lussana, F. Villa, F. Zappa, V. K. Goyal, F. N. C. Wong, and J. H. Shapiro, “Photon-efficient imaging with a single-photon camera,” Nat. Commun. **7**, 12046 (2016). [CrossRef] [PubMed]