Abstract

The non-line-of-sight (NLOS) imaging problem has attracted significant interest in recent years. The objective is to produce images of objects that are hidden around a corner, using the information encoded in the time-of-flight (ToF) of photons that scatter multiple times after incidence at a given relay surface. Most current methods assume a Lambertian, flat, and static relay surface, with non-moving targets in the hidden scene. Here we show NLOS reconstructions for a relay surface that is non-planar and rapidly changing during data acquisition. Our NLOS imaging system exploits two different detectors to collect the ToF data: one pertaining to the relay surface and another capturing the ToF information of the hidden scene. The system is then able to determine where on the relay surface the multiply scattered photons originated. This step allows us to account for changing relay positions in the reconstruction algorithm. Results show that the reconstructions for a dynamic relay surface are similar to the ones obtained using a traditional non-dynamic relay surface.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Conventional imaging systems, such as the built-in cameras available in every smartphone, intrinsically assume that the objects of interest are located in the imager’s direct line-of-sight (LoS), as depicted in Fig. 1(a). Standard optical, microwave, and ultrasound imaging systems fall in this category. In a non-line-of-sight (NLOS) scenario, the goal is to recover the image of a target that is not directly visible from the imager, as shown in Fig. 1(b). Being able to create NLOS images has potential impact in applications ranging from law enforcement to medicine and space exploration.

Fig. 1. Shown in (a) is a typical LoS scene, where the imaging system is collecting data on the object under investigation. Conversely, shown in (b) is an example of an NLOS scenario, where an occluder wall blocks the imaging system’s field of view. The optical imaging system discussed in this paper is capable of collecting data that has been scattered from the relay surface and creating an image of the hidden object around the corner.


For a solid understanding of the theoretical background on light scattering and how photons can be used to create an image of a target hidden around a corner, we refer to [1–12] (and references therein). In this manuscript, we focus our attention on the experimental aspects of NLOS imaging.

Experimental efforts published in prior art can be classified into different categories depending on the equipment and reconstruction algorithms employed. For example, some imaging systems comprise inexpensive, off-the-shelf diode lasers and CMOS time-of-flight (ToF) cameras as light sources and detectors, respectively (e.g., [13–15]). In a second approach, a laser pointer and inexpensive 2D cameras are used to track occluded targets by simulating the light scattering in the scene and then fitting the experimental data to the simulation [16,17]. Other imaging systems comprise more expensive equipment, which typically includes an ultra-fast laser and ToF detectors that are able to sense even a single photon [18–22]. Backprojection-based algorithms are typically employed to reconstruct the image of the targets (cf. [20,23–26]), and fast implementations of such algorithms exist (e.g., [27]). Moreover, recent publications [26,28–31] have shown that the NLOS imaging problem can be treated as a diffractive wave propagation problem and therefore modeled as a LoS problem. Finally, other techniques (cf. [32]) exploit the presence of occluding objects in the light path.

For the remainder of the manuscript, we consider the arrangement in Fig. 1(b), where we collect data using an active light source that illuminates a relay surface at a point $x_{l, m}$. The photons scatter in all directions and a fraction of them arrive at the hidden object. The photons scatter from the object once again in all directions and some return to the relay wall. Assuming that the detector is focused on a point of the relay surface, $x_{c, n}$, we collect the ToF (or time-of-arrival, ToA) of these photons with respect to a common time origin. We repeat this data acquisition for a specific number of laser positions, $m=1, \dots , M$, and camera positions, $n=1, \dots , N$.
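For concreteness, the quantity each measurement encodes can be written with a simple geometric model (the symbols $x_g$ for the galvo position and $x_d$ for the detector position are notation introduced here only for illustration): a photon scattered by a hidden point $x$ arrives at approximately

$$t_{m,n}(x) \approx \frac{\lVert x_g - x_{l,m} \rVert + \lVert x_{l,m} - x \rVert + \lVert x - x_{c,n} \rVert + \lVert x_{c,n} - x_d \rVert}{c},$$

where $c$ is the speed of light. The first and last terms are LoS legs that can be calibrated out, so the remaining delay encodes the second- and third-bounce path lengths that carry the hidden-scene information.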

Previous publications (e.g., [23,26]) acquire data by uniformly scanning a pre-defined area on a flat relay wall. This grid is computed offline and stored in memory. Any change to the relay surface, or to the positioning of the imaging system with respect to the relay wall, requires adjusting the offline laser grid, which may be tedious, if not difficult. Other published works, such as [33,34], consider non-planar relay surfaces and moving targets, but still rely on a pre-computed grid, which requires a slow measurement. Moreover, the LoS geometry must also be calibrated. In this work we address the problem of non-planar, dynamic relay surfaces. To keep track of a moving relay surface, we equip the imaging system with a second detector that locates the laser position ($x_{l,m}$) on the relay surface at any given time, as depicted in Fig. 1(b).

2. System description

2.1 Hardware setup

A more detailed schematic of a NLOS imaging system is shown in Fig. 2. As a light source, we utilize a Onefive Katana 10 HP fiber laser that outputs $35 \pm 15~\mbox {ps}$ pulses at an operating wavelength of $512~\textrm {nm}$ and a $10~\textrm {MHz}$ repetition rate. To illuminate an area of the relay surface, we employ a Thorlabs GVS012 2D large-beam-diameter galvo system with silver-coated mirrors. As in the previous section, we define $x_{l,m}$ as the $m$-th laser position. To detect the returning photons, we employ two silicon single photon avalanche diodes (SPADs) [35–37]. To decrease the background noise, both of these SPADs are equipped with a bandpass filter with a full width at half maximum (FWHM) bandwidth of $10~\mbox {nm}$. When a photon arrives at the SPAD’s detector head (where the active chip is located), it triggers an internal electron avalanche. This has two effects on the SPAD: 1) it becomes ‘blind’ to subsequent photons for a specific amount of time, called the hold-off time, and 2) after an internal delay, an electrical signal is generated at the output. An eight-channel PicoQuant HydraHarp 400 time-correlated single photon counting (TCSPC) unit is used to record the photons’ ToA with picosecond resolution. The two SPADs in Fig. 2 are positioned in two different locations of the imaging system, as we exploit them to recover the information of the visible and hidden scene, as described below.

  • 1. Visible scene inference. We utilize an unfocused, free-running SPAD close to the galvo (as shown in Fig. 2) to collect the information related to the photons traveling from the galvo to the relay surface ($x_{l,m}$) and back. This information is typically referred to as the first bounce. The galvo and the SPAD form a bistatic lidar system. With the knowledge of their 3D positions in the imaging system, paired with the temporal information retrieved from the first bounce, it is possible to calculate the laser position $x_{l,m}$.
  • 2. Hidden scene inference. To collect the multi-bounce data in the scenario shown in Fig. 2, we utilize one gated SPAD (cf. [20]), which has been focused on $x_{c,n}$ ($n=1$) on the relay surface. The gate feature allows the detector to time-gate out photons, a particularly useful feature when strong light signals need to be ‘filtered’ out. When the SPAD is gated off it is completely insensitive to photons, thus the probability to detect a photon is essentially zero, regardless of the number of photons that actually hit the SPAD. The gating technique can be interpreted as a way to decrease the dark count rate (DCR) and afterpulsing probability, and consequently to increase the overall signal-to-noise ratio (SNR). In our scenario, the gate allows us to remove the first bounce from the time response collected by the gated SPAD for each laser position $x_{l,m}$ (a small post-processing analogue of this gate is sketched after this list).
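To make the effect of the gate concrete, the following is a minimal sketch of a software analogue of the gate (not the hardware gate itself; all names are hypothetical): it keeps only the photon events whose arrival times fall inside the gate window, which is how the strong first-bounce return is suppressed.

```python
import numpy as np

def apply_gate(toa_ps, gate_open_ps, gate_close_ps):
    """Keep only time-of-arrival events inside the gate window.

    Software analogue of the hardware gate: events outside
    [gate_open_ps, gate_close_ps] are discarded, suppressing the
    strong first-bounce return (and reducing DCR/afterpulsing counts).
    """
    toa_ps = np.asarray(toa_ps)
    mask = (toa_ps >= gate_open_ps) & (toa_ps <= gate_close_ps)
    return toa_ps[mask]
```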

Fig. 2. Capture system described in Sec. 2.


2.2 Data acquisition

The underlying assumption in traditional acquisition methods is that the relay surface does not move or change during the entire course of data acquisition. When this is true, the position $x_{l,m}$ does not change over time and the collected time response corresponds to the photons scattering from this considered point. When the relay surface moves, the collected time response corresponds to photons arriving from around $x_{l,m}$. If we were to use this laser position value in the reconstruction algorithm, it would substantially degrade the reconstructed image. The use of a second detector, as described above, together with the TCSPC time-tagged time-resolved (TTTR) mode can be used to address this issue. In TTTR mode, the TCSPC records only the photon time events and does not parse them into histograms.

To acquire TTTR data, the laser beam is first positioned at its starting position, for example the top-left corner shown in Fig. 3(a). The laser beam then moves in a raster scan (cf. green line), at a constant speed, $v$, until it reaches the ending position, the bottom-right corner in Fig. 3(a). As the laser beam moves along the surface, the TCSPC collects the ToA data from the free-running and gated SPADs on two different TCSPC inputs. The detected and stored events for each channel (SPAD) are shown in Fig. 3(a).

Fig. 3. Data acquisition method described in Sec. 2.2. The TCSPC TTTR mode registers only the time events. Afterwards, we need to discretize the scanned area into laser positions and associate the time events to laser positions.


The next step is to generate the histograms for both SPADs. We discretize the relay surface into $M$ points, $x_{l,m}$, $m=1, \dots , M$, which correspond to the laser positions on the relay surface. Note that, at this time, we do not know the 3D coordinates of $x_{l,m}$; they will be calculated using the information from the first bounce (free-running SPAD information). Considering the data coming from the free-running SPAD, the events associated with a generic position $x_{l,m}$ are the ones that satisfy the following condition

$$\textrm{events} \left\{ x_{l,m} \right\} \in \frac{x_{l,m}}{v} + \left[ \frac{-\Delta T}{2}, \frac{\Delta T}{2} \right],$$
where $v$ is the known speed at which the laser is being scanned and $\Delta T$ is a time frame that can be specified in post-processing. We call this operation ‘grouping’; it is represented in Fig. 3(b) by the differently colored boxes (red, purple and black). We now have the histograms collected from the free-running SPAD, namely $h_{f,1}(t), h_{f,2}(t), \dots , h_{f,M}(t)$. We repeat the same grouping operation for the ToA arriving from the gated SPAD, obtaining $h_{g,1}(t), h_{g,2}(t), \dots , h_{g,M}(t)$, and hence the multiply-scattered-photon (hidden scene) data necessary for the reconstruction algorithm.
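As an illustration, a minimal sketch of this grouping step is given below, under simplifying assumptions not stated in the text: the scan path is parameterized by arc length $s = v\,t$, the $M$ laser positions are equally spaced along it, and each event arrives as a (macro-time, micro-time) pair. All function and variable names are hypothetical.

```python
import numpy as np

def group_events(macro_time_s, micro_time_ps, v, path_length_m, M,
                 n_bins=4096, bin_ps=16.0, delta_T=None):
    """Assign TTTR events to laser positions and build per-position histograms.

    macro_time_s : event times since the scan started [s]
    micro_time_ps: event times relative to the laser sync pulse [ps]
    v            : scan speed along the path [m/s]
    """
    macro_time_s = np.asarray(macro_time_s)
    micro_time_ps = np.asarray(micro_time_ps)
    centers = (np.arange(M) + 0.5) * path_length_m / M  # arc length of each x_{l,m}
    if delta_T is None:
        delta_T = path_length_m / (M * v)               # default frame: one dwell time
    hists = np.zeros((M, n_bins), dtype=np.int64)
    for m, s_m in enumerate(centers):
        t_m = s_m / v                                   # time the beam crosses x_{l,m}
        # membership test from the grouping condition above
        in_frame = np.abs(macro_time_s - t_m) <= delta_T / 2
        bins = (micro_time_ps[in_frame] / bin_ps).astype(int)
        bins = bins[(bins >= 0) & (bins < n_bins)]
        np.add.at(hists[m], bins, 1)                    # accumulate h_m(t)
    return hists
```

Running the same routine on the two TCSPC channels yields $h_{f,m}(t)$ for the free-running SPAD and $h_{g,m}(t)$ for the gated one.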

The 3D coordinates of the laser positions, $x_{l,m}$, with respect to an origin in our imaging system, can be determined using the knowledge of the galvo mirror voltages (there is a direct relationship between the applied voltage and the mirror angle) and the location of the first-bounce peak in $h_{f,m}(t)$ (which translates into distance).
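A minimal sketch of this computation is given below, under a simplified single-pivot galvo model (a real 2D galvo has two separated mirrors, and the voltage-to-angle conversion is device specific); all names are hypothetical.

```python
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def laser_position(theta_x, theta_y, h_f, bin_ps, t0_ps=0.0,
                   galvo_origin=np.zeros(3)):
    """Estimate x_{l,m} from the galvo scan angles and the first-bounce histogram.

    theta_x, theta_y : mirror angles [rad], obtained from the drive voltages
    h_f              : first-bounce histogram h_{f,m}(t) (free-running SPAD)
    bin_ps           : histogram bin width [ps]; t0_ps is a system time offset
    """
    peak_ps = np.argmax(h_f) * bin_ps - t0_ps      # round-trip ToF of the peak
    rng = 0.5 * peak_ps * 1e-12 * C                # one-way range galvo -> wall [m]
    direction = np.array([np.tan(theta_x),         # outgoing ray, z = optical axis
                          np.tan(theta_y), 1.0])
    direction /= np.linalg.norm(direction)
    return galvo_origin + rng * direction          # 3D coordinates of x_{l,m}
```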

3. Results and discussion

In Fig. 4, we depict the relay surface employed for our experiments. It comprises a rectangular frame holding two white curtains, divided by a mount with a statue on top of it. The gated SPAD is focused on the statue, whereas the laser scans the area inside the rectangle shown in black. The imaging system is approximately $2.20~\textrm {m}$ away from the relay surface and the hidden objects are placed at approximately $1~\textrm {m}$ from the relay surface. Our post-processing algorithm discretizes the relay surface into approximately $47$ thousand laser positions and, for each, creates a histogram with an equivalent exposure time of $5~\textrm {ms}$. Afterwards, we apply the phasor field reconstruction algorithm, as described in [26,28–31,38]. For the purpose of this manuscript, we have selected two different hidden scenes that we want to reconstruct; these are shown in Fig. 5(a) and Fig. 6(a). The former is a simple, almost binary scene that contains well-separated objects (a ‘4’ and two poster tubes in a box). The latter scene is more complex and roughly resembles the one in [26].
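To illustrate how the recovered, possibly non-planar laser positions enter a reconstruction, the sketch below uses a plain ellipsoidal backprojection rather than the phasor-field algorithm of [26,28–31,38] that we actually employ; it is a conceptual stand-in only, and all names are hypothetical.

```python
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def backproject(hists, laser_pos, cam_pos, voxels, bin_ps):
    """Ellipsoidal backprojection over arbitrary (non-planar) laser positions.

    hists    : (M, T) gated histograms h_{g,m}(t), first bounce gated out and
               time re-referenced to the departure from x_{l,m}
    laser_pos: (M, 3) recovered laser positions x_{l,m} (need not be coplanar)
    cam_pos  : (3,)   focused camera point x_{c,n}
    voxels   : (V, 3) sample points of the hidden volume
    """
    M, T = hists.shape
    vol = np.zeros(len(voxels))
    for m in range(M):
        # path length of the 2nd + 3rd bounce: x_{l,m} -> voxel -> x_{c,n}
        d = (np.linalg.norm(voxels - laser_pos[m], axis=1)
             + np.linalg.norm(voxels - cam_pos, axis=1))
        bins = (d / C / (bin_ps * 1e-12)).astype(int)
        valid = bins < T
        vol[valid] += hists[m, bins[valid]]   # smear each histogram onto ellipsoids
    return vol
```

Because `laser_pos` is looked up per position, a relay surface that deforms during the scan is handled the same way as a flat one, provided each histogram is paired with the correct $x_{l,m}$.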

Fig. 4. The relay wall for this experiment comprises two curtains mounted on a rectangular frame. The black lines represent the laser scanning area. Moreover, the gated SPAD is focused on the statue between the two curtains. The red arrow indicates where the hidden scene is located w.r.t. the relay surface. To create motion, we use an air fan, which is located on the right (but not visible here). See Visualization 1.


Fig. 5. Experiment 1: (a) depicts the true hidden scene, whereas (b) shows the reconstruction with the fan off, and (c) with the fan set to ‘high’.


Fig. 6. Experiment 2: (a) depicts the true hidden scene, whereas (b) shows the reconstruction with the fan off, (c) with the fan set to ‘high’, and (d) when the fan is set to ‘high’, but we reconstruct using the laser positions retrieved from (b).


Figures 5(b) and 6(b) show the reconstruction results when the relay wall is static (motionless). The results are comparable to the ones obtained in [26], where the relay surface was Lambertian and flat. More specifically, Fig. 5(b) shows that the ‘4’ and the tubes are completely visible and distinguishable. The reconstruction of the more complex scene is shown in Fig. 6(b). Even in this case the reconstruction closely resembles the hidden scene: we have recovered the shelves, the books, the statue and the letter ‘T’, the poster tubes (right) and part of the wall behind the shelf (lower-mid). It should be noted that the top-left shelf is only partly reconstructed. Moreover, the bottom-right shelf (containing the mannequin) is missing from the reconstruction. However, this shelf is directly in front of the statue, where there is a gap between the two curtains. It is highly likely that the ToF information regarding this shelf has been lost because of this gap. Alternatively, the wall behind the shelf may simply provide a stronger return than the shelf itself.

In our second set of data acquisitions, the curtains have been set in motion by an air fan. A video showing the acquisition while the laser is scanning and the relay wall is moving is provided as supplementary material (Visualization 1; the video has been sped up by 10x). The reconstruction results for the scene in Fig. 5(a) are shown in Fig. 5(c). Once again, the number ‘4’ and the tubes have been reconstructed: they are clearly visible. Because of the motion, the poster tubes appear at a lower intensity in this image than in Fig. 5(b). This could be explained by the fact that, as the curtains were moving, the laser was impinging on areas that were not directly facing the targets. Figure 6(c) depicts the reconstruction for the complex scene shown in Fig. 6(a). Once again, it is clear that the bookshelf, the books, the statue and the letter ‘T’ have been successfully recovered. It is also interesting to note that, although the bottom-right shelf is still missing, we are still able to recover the wall behind the shelf. The tubes have a lower return, but are still visible.

It is important to note that the reconstructions shown in Fig. 5(c) and Fig. 6(c) are possible because we have used the free-running SPAD to locate the laser positions on the relay surface as it was moving. To show the importance of this information, we have run a reconstruction using the multi-bounce information gathered from the moving-curtains scene and the laser position information gathered from the static-curtains scene. The result is depicted in Fig. 6(d), where it is easy to see that the reconstruction does not provide any information about the hidden scene. In other words, this result shows that it is of paramount importance, especially in a moving relay surface scenario, to correctly associate first-bounce data with multi-bounce data, as we proposed here.

4. Conclusion

Considering an optical NLOS imaging scenario, we have shown how it is possible to acquire meaningful data when the relay surface is rugged, discontinuous, and in motion. This is achieved using a secondary detector (a SPAD, in our case) that collects the first-bounce ToF information, together with the TCSPC TTTR mode. As with only a few other methods in prior art, our acquisition time is on the order of minutes. Furthermore, we compare the reconstruction results with a dynamic relay surface to ones where the relay surface is static, using the standard phasor field reconstruction approach in both cases. The former results show minimal deterioration w.r.t. the latter ones. In future work, we foresee employing SPAD arrays and faster reconstruction algorithms, which would respectively reduce the acquisition time (at the expense of increased data processing) and speed up the rendering of the hidden scene.

Funding

Defense Advanced Research Projects Agency (HR0011-16-C-002); Office of Naval Research (ONR N00014-15-1-265); National Aeronautics and Space Administration (NNX15AQ29).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. I. Freund, “Looking through walls and around corners,” Phys. A 168(1), 49–65 (1990). [CrossRef]  

2. S. K. Nayar, G. Krishnan, M. D. Grossberg, and R. Raskar, “Fast separation of direct and global components of a scene using high frequency illumination,” ACM Trans. Graph. 25(3), 935–944 (2006). [CrossRef]  

3. S. M. Seitz, Y. Matsushita, and K. N. Kutulakos, “A theory of inverse light transport,” in Proceedings of the Tenth IEEE International Conference on Computer Vision - Volume 2, (IEEE Computer Society, Washington, DC, USA, 2005), ICCV ’05, pp. 1440–1447.

4. A. Kirmani, T. Hutchison, J. Davis, and R. Raskar, “Looking around the corner using transient imaging,” in 2009 IEEE 12th International Conference on Computer Vision, (2009), pp. 159–166.

5. A. Kirmani, T. Hutchison, J. Davis, and R. Raskar, “Looking around the corner using ultrafast transient imaging,” Int. J. Comput. Vis. 95(1), 13–28 (2011). [CrossRef]  

6. O. Steinvall, M. Elmqvist, and H. Larsson, “See around the corner using active imaging,” in Proc. SPIE, vol. 8186 (Electro-Optical Remote Sensing, Photonic Technologies, and Applications V) (2011).

7. A. Kirmani, H. Jeelani, V. Montazerhodjat, and V. K. Goyal, “Diffuse imaging: Creating optical images with unfocused time-resolved illumination and sensing,” IEEE Signal Process. Lett. 19(1), 31–34 (2012). [CrossRef]  

8. A. Jarabo, J. Marco, A. Munoz, R. Buisan, W. Jarosz, and D. Gutierrez, “A framework for transient rendering,” ACM Trans. Graph. 33(6), 177 (2014).

9. A. Jarabo, B. Masia, J. Marco, and D. Gutierrez, “Recent advances in transient imaging: A computer graphics and vision perspective,” Vis. Informatics 1(1), 65–79 (2017). [CrossRef]  

10. A. K. Pediredla, M. Buttafava, A. Tosi, O. Cossairt, and A. Veeraraghavan, “Reconstructing rooms using photon echoes: A plane based model and reconstruction algorithm for looking around the corner,” in 2017 IEEE International Conference on Computational Photography (ICCP), (2017), pp. 1–12.

11. C. Tsai, K. N. Kutulakos, S. G. Narasimhan, and A. C. Sankaranarayanan, “The geometry of first-returning photons for non-line-of-sight imaging,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2017), pp. 2336–2344.

12. M. Laurenzis, J. Klein, E. Bacher, and S. Schertzer, “Approaches to solve inverse problems for optical sensing around corners,” in Emerging Imaging and Sensing Technologies for Security and Defence IV, vol. 11163, G. S. Buller, R. C. Hollins, R. A. Lamb, and M. Laurenzis, eds. (SPIE, 2019), pp. 1–7.

13. F. Heide, M. B. Hullin, J. Gregson, and W. Heidrich, “Low-budget transient imaging using photonic mixer devices,” ACM Trans. Graph. 32(4), 1 (2013). [CrossRef]  

14. F. Heide, L. Xiao, W. Heidrich, and M. B. Hullin, “Diffuse mirrors: 3d reconstruction from diffuse indirect illumination using inexpensive time-of-flight sensors,” in 2014 IEEE Conference on Computer Vision and Pattern Recognition, (2014), pp. 3222–3229.

15. F. Heide, W. Heidrich, M. Hullin, and G. Wetzstein, “Doppler time-of-flight imaging,” ACM Trans. Graph. 34(4), 36:1–36:11 (2015). [CrossRef]

16. J. Klein, C. Peters, J. Martin, M. Laurenzis, and M. B. Hullin, “Tracking objects outside the line of sight using 2D intensity images,” Sci. Rep. 6(1), 32491 (2016). [CrossRef]  

17. J. Klein, C. Peters, M. Laurenzis, and M. Hullin, “Non-line-of-sight mocap,” in ACM SIGGRAPH 2017 Emerging Technologies, (ACM, New York, NY, USA, 2017), SIGGRAPH ’17, pp. 18:1–18:2.

18. G. S. Buller and R. J. Collins, “Single-photon generation and detection,” Meas. Sci. Technol. 21(1), 012002 (2010). [CrossRef]  

19. G. Gariepy, N. Krstajic, R. Henderson, C. Li, R. R. Thomson, G. S. Buller, B. Heshmat, R. Raskar, J. Leach, and D. Faccio, “Single-photon sensitive light-in-fight imaging,” Nat. Commun. 6(1), 6021 (2015). [CrossRef]  

20. M. Buttafava, J. Zeman, A. Tosi, K. Eliceiri, and A. Velten, “Non-line-of-sight imaging using a time-gated single photon avalanche diode,” Opt. Express 23(16), 20997–21011 (2015). [CrossRef]  

21. M. Laurenzis, “Computational sensing approaches for enhanced active imaging,” in Proc. SPIE, vol. 10796 (Electro-Optical Remote Sensing XII) (2018).

22. M. Laurenzis, M. La Manna, M. Buttafava, A. Tosi, J. Nam, M. Gupta, and A. Velten, “Advanced active imaging with single photon avalanche diodes,” in Proc. SPIE, vol. 10799 (Emerging Imaging and Sensing Technologies for Security and Defence III) (2018).

23. A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3(1), 745 (2012). [CrossRef]  

24. M. Laurenzis and A. Velten, “Nonline-of-sight laser gated viewing of scattered photons,” Opt. Eng. 53(2), 023102 (2014). [CrossRef]  

25. M. La Manna, F. Kine, E. Breitbach, J. Jackson, T. Sultan, and A. Velten, “Error backprojection algorithms for non-line-of-sight imaging,” IEEE Trans. Pattern Anal. Mach. Intell. 41(7), 1615–1626 (2019). [CrossRef]  

26. X. Liu, I. Guillén, M. La Manna, J. H. Nam, S. Azer Reza, T. H. Le, D. Gutierrez, A. Jarabo, and A. Velten, “Non-line-of-sight imaging using phasor-field virtual waveoptics,” Nature 572(7771), 620–623 (2019). [CrossRef]  

27. V. Arellano, D. Gutierrez, and A. Jarabo, “Fast back-projection for non-line of sight reconstruction,” in ACM SIGGRAPH 2017 Posters, (ACM, New York, NY, USA, 2017), SIGGRAPH ’17, pp. 79:1–79:2.

28. S. A. Reza, M. La Manna, S. Bauer, and A. Velten, “Phasor field waves: A Huygens-like light transport model for non-line-of-sight imaging applications,” Opt. Express 27(20), 29380–29400 (2019). [CrossRef]  

29. S. A. Reza, M. La Manna, S. Bauer, and A. Velten, “Phasor field waves: Experimental demonstrations of wave-like properties,” Opt. Express 27(22), 32587–32608 (2019). [CrossRef]  

30. J. A. Teichman, “Phasor field waves: A mathematical treatment,” Opt. Express 27(20), 27500–27506 (2019). [CrossRef]  

31. J. Dove and J. H. Shapiro, “Paraxial theory of phasor-field imaging,” Opt. Express 27(13), 18016–18037 (2019). [CrossRef]  

32. F. Xu, G. Shulkind, C. Thrampoulidis, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell, “Revealing hidden scenes by photon-efficient occlusion-based opportunistic active imaging,” Opt. Express 26(8), 9945–9962 (2018). [CrossRef]  

33. M. O’Toole, D. B. Lindell, and G. Wetzstein, “Confocal non-line-of-sight imaging,” in ACM SIGGRAPH 2018 Talks, (ACM, New York, NY, USA, 2018), SIGGRAPH ’18, pp. 1:1–1:2.

34. D. B. Lindell, G. Wetzstein, and M. O’Toole, “Wave-based non-line-of-sight imaging using fast f-k migration,” ACM Trans. Graph. 38(4), 1–13 (2019). [CrossRef]  

35. S. Cova, M. Ghioni, A. Lacaita, C. Samori, and F. Zappa, “Avalanche photodiodes and quenching circuits for single-photon detection,” Appl. Opt. 35(12), 1956–1976 (1996). [CrossRef]  

36. F. Zappa, S. Tisa, A. Tosi, and S. Cova, “Principles and features of single-photon avalanche diode arrays,” Sens. Actuators, A 140(1), 103–112 (2007). [CrossRef]  

37. M. Buttafava, G. Boso, A. Ruggeri, A. D. Mora, and A. Tosi, “Time-gated single-photon detection module with 110 ps transition time and up to 80 MHz repetition rate,” Rev. Sci. Instrum. 85(8), 083114 (2014). [CrossRef]  

38. S. A. Reza, M. La Manna, and A. Velten, “Imaging with Phasor Fields for Non-Line-of Sight Applications,” in Imaging and Applied Optics 2018 (3D, AO, AIO, COSI, DH, IS, LACSEA, LS&C, MATH, pcAOP), (Optical Society of America, 2018), p. CM2E.7.


Schertzer, S.

M. Laurenzis, J. Klein, E. Bacher, and S. Schertzer, “Approaches to solve inverse problems for optical sensing around corners,” in Emerging Imaging and Sensing Technologies for Security and Defence IV, vol. 11163G. S. Buller, R. C. Hollins, R. A. Lamb, and M. Laurenzis, eds., International Society for Optics and Photonics (SPIE, 2019), pp. 1–7.

Seitz, S. M.

S. M. Seitz, Y. Matsushita, and K. N. Kutulakos, “A theory of inverse light transport,” in Proceedings of the Tenth IEEE International Conference on Computer Vision - Volume 2, (IEEE Computer Society, Washington, DC, USA, 2005), ICCV ’05, pp. 1440–1447.

Shapiro, J. H.

Shulkind, G.

Steinvall, O.

O. Steinvall, M. Elmqvist, and H. Larsson, “See around the corner using active imaging,” in Proc. SPIE, vol. 8186 (Electro-Optical Remote Sensing, Photonic Technologies, and Applications V) (2011).

Sultan, T.

M. La Manna, F. Kine, E. Breitbach, J. Jackson, T. Sultan, and A. Velten, “Error backprojection algorithms for non-line-of-sight imaging,” IEEE Trans. Pattern Anal. Mach. Intell. 41(7), 1615–1626 (2019).
[Crossref]

Teichman, J. A.

Thomson, R. R.

G. Gariepy, N. Krstajic, R. Henderson, C. Li, R. R. Thomson, G. S. Buller, B. Heshmat, R. Raskar, J. Leach, and D. Faccio, “Single-photon sensitive light-in-fight imaging,” Nat. Commun. 6(1), 6021 (2015).
[Crossref]

Thrampoulidis, C.

Tisa, S.

F. Zappa, S. Tisa, A. Tosi, and S. Cova, “Principles and features of single-photon avalanche diode arrays,” Sens. Actuators, A 140(1), 103–112 (2007).
[Crossref]

Torralba, A.

Tosi, A.

M. Buttafava, J. Zeman, A. Tosi, K. Eliceiri, and A. Velten, “Non-line-of-sight imaging using a time-gated single photon avalanche diode,” Opt. Express 23(16), 20997–21011 (2015).
[Crossref]

M. Buttafava, G. Boso, A. Ruggeri, A. D. Mora, and A. Tosi, “Time-gated single-photon detection module with 110 ps transition time and up to 80 MHz repetition rate,” Rev. Sci. Instrum. 85(8), 083114 (2014).
[Crossref]

F. Zappa, S. Tisa, A. Tosi, and S. Cova, “Principles and features of single-photon avalanche diode arrays,” Sens. Actuators, A 140(1), 103–112 (2007).
[Crossref]

M. Laurenzis, M. La Manna, M. Buttafava, A. Tosi, J. Nam, M. Gupta, and A. Velten, “Advanced active imaging with single photon avalanche diodes,” in Proc. SPIE, vol. 10799 (Emerging Imaging and Sensing Technologies for Security and Defence III) (2018).

A. K. Pediredla, M. Buttafava, A. Tosi, O. Cossairt, and A. Veeraraghavan, “Reconstructing rooms using photon echoes: A plane based model and reconstruction algorithm for looking around the corner,” in 2017 IEEE International Conference on Computational Photography (ICCP), (2017), pp. 1–12.

Tsai, C.

C. Tsai, K. N. Kutulakos, S. G. Narasimhan, and A. C. Sankaranarayanan, “The geometry of first-returning photons for non-line-of-sight imaging,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2017), pp. 2336–2344.

Veeraraghavan, A.

A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3(1), 745 (2012).
[Crossref]

A. K. Pediredla, M. Buttafava, A. Tosi, O. Cossairt, and A. Veeraraghavan, “Reconstructing rooms using photon echoes: A plane based model and reconstruction algorithm for looking around the corner,” in 2017 IEEE International Conference on Computational Photography (ICCP), (2017), pp. 1–12.

Velten, A.

M. La Manna, F. Kine, E. Breitbach, J. Jackson, T. Sultan, and A. Velten, “Error backprojection algorithms for non-line-of-sight imaging,” IEEE Trans. Pattern Anal. Mach. Intell. 41(7), 1615–1626 (2019).
[Crossref]

S. A. Reza, M. La Manna, S. Bauer, and A. Velten, “Phasor field waves: Experimental demonstrations of wave-like properties,” Opt. Express 27(22), 32587–32608 (2019).
[Crossref]

X. Liu, I. Guillén, M. La Manna, J. H. Nam, S. Azer Reza, T. H. Le, D. Gutierrez, A. Jarabo, and A. Velten, “Non-line-of-sight imaging using phasor-field virtual waveoptics,” Nature 572(7771), 620–623 (2019).
[Crossref]

S. A. Reza, M. La Manna, S. Bauer, and A. Velten, “Phasor field waves: A Huygens-like light transport model for non-line-of-sight imaging applications,” Opt. Express 27(20), 29380–29400 (2019).
[Crossref]

M. Buttafava, J. Zeman, A. Tosi, K. Eliceiri, and A. Velten, “Non-line-of-sight imaging using a time-gated single photon avalanche diode,” Opt. Express 23(16), 20997–21011 (2015).
[Crossref]

M. Laurenzis and A. Velten, “Nonline-of-sight laser gated viewing of scattered photons,” Opt. Eng. 53(2), 023102 (2014).
[Crossref]

A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3(1), 745 (2012).
[Crossref]

M. Laurenzis, M. La Manna, M. Buttafava, A. Tosi, J. Nam, M. Gupta, and A. Velten, “Advanced active imaging with single photon avalanche diodes,” in Proc. SPIE, vol. 10799 (Emerging Imaging and Sensing Technologies for Security and Defence III) (2018).

S. A. Reza, M. La Manna, and A. Velten, “Imaging with Phasor Fields for Non-Line-of Sight Applications,” in Imaging and Applied Optics 2018 (3D, AO, AIO, COSI, DH, IS, LACSEA, LS&C, MATH, pcAOP), (Optical Society of America, 2018), p. CM2E.7.

Wetzstein, G.

D. B. Lindell, G. Wetzstein, and M. O’Toole, “Wave-based non-line-of-sight imaging using fast f-k migration,” ACM Trans. Graph. 38(4), 1–13 (2019).
[Crossref]

F. Heide, W. Heidrich, M. Hullin, and G. Wetzstein, “Doppler time-of-flight imaging,” ACM Trans. Graph. 34(4), 36:1–36:11 (2015).
[Crossref]

M. O’Toole, D. B. Lindell, and G. Wetzstein, “Confocal non-line-of-sight imaging,” in ACM SIGGRAPH 2018 Talks, (ACM, New York, NY, USA, 2018), SIGGRAPH ’18, pp. 1:1–1:2.

Willwacher, T.

A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3(1), 745 (2012).
[Crossref]

Wong, F. N. C.

Wornell, G. W.

Xiao, L.

F. Heide, L. Xiao, W. Heidrich, and M. B. Hullin, “Diffuse mirrors: 3d reconstruction from diffuse indirect illumination using inexpensive time-of-flight sensors,” in 2014 IEEE Conference on Computer Vision and Pattern Recognition, (2014), pp. 3222–3229.

Xu, F.

Zappa, F.

F. Zappa, S. Tisa, A. Tosi, and S. Cova, “Principles and features of single-photon avalanche diode arrays,” Sens. Actuators, A 140(1), 103–112 (2007).
[Crossref]

S. Cova, M. Ghioni, A. Lacaita, C. Samori, and F. Zappa, “Avalanche photodiodes and quenching circuits for single-photon detection,” Appl. Opt. 35(12), 1956–1976 (1996).
[Crossref]

Zeman, J.

ACM Trans. Graph. (4)

S. K. Nayar, G. Krishnan, M. D. Grossberg, and R. Raskar, “Fast separation of direct and global components of a scene using high frequency illumination,” ACM Trans. Graph. 25(3), 935–944 (2006).
[Crossref]

F. Heide, M. B. Hullin, J. Gregson, and W. Heidrich, “Low-budget transient imaging using photonic mixer devices,” ACM Trans. Graph. 32(4), 1 (2013).
[Crossref]

F. Heide, W. Heidrich, M. Hullin, and G. Wetzstein, “Doppler time-of-flight imaging,” ACM Trans. Graph. 34(4), 36:1–36:11 (2015).
[Crossref]

D. B. Lindell, G. Wetzstein, and M. O’Toole, “Wave-based non-line-of-sight imaging using fast f-k migration,” ACM Trans. Graph. 38(4), 1–13 (2019).
[Crossref]

Appl. Opt. (1)

IEEE Signal Process. Lett. (1)

A. Kirmani, H. Jeelani, V. Montazerhodjat, and V. K. Goyal, “Diffuse imaging: Creating optical images with unfocused time-resolved illumination and sensing,” IEEE Signal Process. Lett. 19(1), 31–34 (2012).
[Crossref]

IEEE Trans. Pattern Anal. Mach. Intell. (1)

M. La Manna, F. Kine, E. Breitbach, J. Jackson, T. Sultan, and A. Velten, “Error backprojection algorithms for non-line-of-sight imaging,” IEEE Trans. Pattern Anal. Mach. Intell. 41(7), 1615–1626 (2019).
[Crossref]

Int. J. Comput. Vis. (1)

A. Kirmani, T. Hutchison, J. Davis, and R. Raskar, “Looking around the corner using ultrafast transient imaging,” Int. J. Comput. Vis. 95(1), 13–28 (2011).
[Crossref]

Meas. Sci. Technol. (1)

G. S. Buller and R. J. Collins, “Single-photon generation and detection,” Meas. Sci. Technol. 21(1), 012002 (2010).
[Crossref]

Nat. Commun. (2)

G. Gariepy, N. Krstajic, R. Henderson, C. Li, R. R. Thomson, G. S. Buller, B. Heshmat, R. Raskar, J. Leach, and D. Faccio, “Single-photon sensitive light-in-fight imaging,” Nat. Commun. 6(1), 6021 (2015).
[Crossref]

A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3(1), 745 (2012).
[Crossref]

Nature (1)

X. Liu, I. Guillén, M. La Manna, J. H. Nam, S. Azer Reza, T. H. Le, D. Gutierrez, A. Jarabo, and A. Velten, “Non-line-of-sight imaging using phasor-field virtual waveoptics,” Nature 572(7771), 620–623 (2019).
[Crossref]

Opt. Eng. (1)

M. Laurenzis and A. Velten, “Nonline-of-sight laser gated viewing of scattered photons,” Opt. Eng. 53(2), 023102 (2014).
[Crossref]

Opt. Express (6)

Phys. A (1)

I. Freund, “Looking through walls and around corners,” Phys. A 168(1), 49–65 (1990).
[Crossref]

Rev. Sci. Instrum. (1)

M. Buttafava, G. Boso, A. Ruggeri, A. D. Mora, and A. Tosi, “Time-gated single-photon detection module with 110 ps transition time and up to 80 MHz repetition rate,” Rev. Sci. Instrum. 85(8), 083114 (2014).
[Crossref]

Sci. Rep. (1)

J. Klein, C. Peters, J. Martin, M. Laurenzis, and M. B. Hullin, “Tracking objects outside the line of sight using 2D intensity images,” Sci. Rep. 6(1), 32491 (2016).
[Crossref]

Sens. Actuators, A (1)

F. Zappa, S. Tisa, A. Tosi, and S. Cova, “Principles and features of single-photon avalanche diode arrays,” Sens. Actuators, A 140(1), 103–112 (2007).
[Crossref]

Vis. Informatics (1)

A. Jarabo, B. Masia, J. Marco, and D. Gutierrez, “Recent advances in transient imaging: A computer graphics and vision perspective,” Vis. Informatics 1(1), 65–79 (2017).
[Crossref]

Other (14)

A. K. Pediredla, M. Buttafava, A. Tosi, O. Cossairt, and A. Veeraraghavan, “Reconstructing rooms using photon echoes: A plane based model and reconstruction algorithm for looking around the corner,” in 2017 IEEE International Conference on Computational Photography (ICCP), (2017), pp. 1–12.

C. Tsai, K. N. Kutulakos, S. G. Narasimhan, and A. C. Sankaranarayanan, “The geometry of first-returning photons for non-line-of-sight imaging,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2017), pp. 2336–2344.

M. Laurenzis, J. Klein, E. Bacher, and S. Schertzer, “Approaches to solve inverse problems for optical sensing around corners,” in Emerging Imaging and Sensing Technologies for Security and Defence IV, vol. 11163G. S. Buller, R. C. Hollins, R. A. Lamb, and M. Laurenzis, eds., International Society for Optics and Photonics (SPIE, 2019), pp. 1–7.

O. Steinvall, M. Elmqvist, and H. Larsson, “See around the corner using active imaging,” in Proc. SPIE, vol. 8186 (Electro-Optical Remote Sensing, Photonic Technologies, and Applications V) (2011).

S. M. Seitz, Y. Matsushita, and K. N. Kutulakos, “A theory of inverse light transport,” in Proceedings of the Tenth IEEE International Conference on Computer Vision - Volume 2, (IEEE Computer Society, Washington, DC, USA, 2005), ICCV ’05, pp. 1440–1447.

A. Kirmani, T. Hutchison, J. Davis, and R. Raskar, “Looking around the corner using transient imaging,” in 2009 IEEE 12th International Conference on Computer Vision, (2009), pp. 159–166.

M. Laurenzis, “Computational sensing approaches for enhanced active imaging,” in Proc. SPIE, vol. 10796 (Electro-Optical Remote Sensing XII) (2018).

M. Laurenzis, M. La Manna, M. Buttafava, A. Tosi, J. Nam, M. Gupta, and A. Velten, “Advanced active imaging with single photon avalanche diodes,” in Proc. SPIE, vol. 10799 (Emerging Imaging and Sensing Technologies for Security and Defence III) (2018).

A. Jarabo, J. Marco, A. Munoz, R. Buisan, W. Jarosz, and D. Gutierrez, “A framework for transient rendering,” ACM Transactions on Graph. (Proceedings SIGGRAPH Asia) 33 (2014).

F. Heide, L. Xiao, W. Heidrich, and M. B. Hullin, “Diffuse mirrors: 3d reconstruction from diffuse indirect illumination using inexpensive time-of-flight sensors,” in 2014 IEEE Conference on Computer Vision and Pattern Recognition, (2014), pp. 3222–3229.

V. Arellano, D. Gutierrez, and A. Jarabo, “Fast back-projection for non-line of sight reconstruction,” in ACM SIGGRAPH 2017 Posters, (ACM, New York, NY, USA, 2017), SIGGRAPH ’17, pp. 79:1–79:2.

S. A. Reza, M. La Manna, and A. Velten, “Imaging with Phasor Fields for Non-Line-of Sight Applications,” in Imaging and Applied Optics 2018 (3D, AO, AIO, COSI, DH, IS, LACSEA, LS&C, MATH, pcAOP), (Optical Society of America, 2018), p. CM2E.7.

J. Klein, C. Peters, M. Laurenzis, and M. Hullin, “Non-line-of-sight mocap,” in ACM SIGGRAPH 2017 Emerging Technologies, (ACM, New York, NY, USA, 2017), SIGGRAPH ’17, pp. 18:1–18:2.

M. O’Toole, D. B. Lindell, and G. Wetzstein, “Confocal non-line-of-sight imaging,” in ACM SIGGRAPH 2018 Talks, (ACM, New York, NY, USA, 2018), SIGGRAPH ’18, pp. 1:1–1:2.

Supplementary Material (1)

Visualization 1: Video of the dynamic relay surface during capture.

Figures (6)

Fig. 2.
Fig. 2. Capture system described in Sec. 2.
Fig. 3.
Fig. 3. Data acquisition method described in Sec. 2.2. The TCSPC's TTTR mode registers only time events; afterwards, we discretize the scanned area into laser positions and associate each time event with a laser position (a minimal sketch of this association step follows Eq. (1) below).
Fig. 4.
Fig. 4. The relay wall for this experiment comprises two curtains mounted on a rectangular frame. The black lines represent the laser scanning area. Moreover, we assume that the gated SPAD is focused on the statue between the two curtains. The red arrow indicates where the hidden scene is located w.r.t. the relay surface. To create motion, we use an air fan, located on the right (not visible here); see Visualization 1.
Fig. 5.
Fig. 5. Experiment 1: (a) depicts the true hidden scene, whereas (b) shows the reconstruction with the fan off and (c) with the fan set to ‘high’.
Fig. 6.
Fig. 6. Experiment 2: (a) depicts the true hidden scene, whereas (b) shows the reconstruction with the fan off, (c) with the fan set to ‘high’, and (d) with the fan set to ‘high’ but reconstructed using the laser positions retrieved from (b).

Equations (1)

$$\text{events } \{x_{l,m}\} \;\;\text{s.t.}\;\; x_{l,m} \in v + \left[ -\frac{\Delta T}{2},\, \frac{\Delta T}{2} \right],$$
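To make the association step of Fig. 3 and Eq. (1) concrete, the following Python sketch bins raw TTTR time tags into laser positions. It is a minimal illustration under stated assumptions (uniform dwell times, and hypothetical names event_times, position_times, and delta_t), not the acquisition code used in our system.

import numpy as np

def associate_events_to_positions(event_times, position_times, delta_t):
    # Assign each TTTR time event x_{l,m} to the laser position whose
    # dwell-window center v satisfies x_{l,m} in v + [-delta_t/2, +delta_t/2],
    # mirroring Eq. (1). Events falling outside every window are discarded.
    events_per_position = []
    for v in position_times:
        in_window = np.abs(event_times - v) <= delta_t / 2.0
        events_per_position.append(event_times[in_window])
    return events_per_position

# Toy usage: three laser positions with a 10 ms dwell time each.
delta_t = 10e-3                                    # Delta T (s), assumed
position_times = np.array([0.005, 0.015, 0.025])   # window centers v (s)
event_times = np.sort(np.random.uniform(0.0, 0.03, size=100))
binned = associate_events_to_positions(event_times, position_times, delta_t)
print([len(b) for b in binned])

In practice the window centers v would be derived from the scanner's trigger signals rather than assumed on a uniform grid, but the windowing logic is the same.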
