Abstract

The ability to see around corners, i.e., recover details of a hidden scene from its reflections in the surrounding environment, is of considerable interest in a wide range of applications. However, the diffuse nature of light reflected from typical surfaces leads to mixing of spatial information in the collected light, precluding useful scene reconstruction. Here, we employ a computational imaging technique that opportunistically exploits the presence of occluding objects, which obstruct probe-light propagation in the hidden scene, to undo the mixing and greatly improve scene recovery. Importantly, our technique obviates the need for the ultrafast time-of-flight measurements employed by most previous approaches to hidden-scene imaging. Moreover, it does so in a photon-efficient manner (i.e., it only requires a small number of photon detections) based on an accurate forward model and a computational algorithm that, together, respect the physics of three-bounce light propagation and single-photon detection. Using our methodology, we demonstrate reconstruction of hidden-surface reflectivity patterns in a meter-scale environment from non-time-resolved measurements. Ultimately, our technique represents an instance of a rich and promising new imaging modality with important potential implications for imaging science.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement


References


  1. B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D computational imaging with single-pixel detectors,” Science 340, 844–847 (2013).
  2. A. Kirmani, D. Venkatraman, D. Shin, A. Colaço, F. N. C. Wong, J. H. Shapiro, and V. K. Goyal, “First-photon imaging,” Science 343, 58–61 (2014).
  3. L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature 516, 74–77 (2014).
  4. A. M. Pawlikowska, A. Halimi, R. A. Lamb, and G. S. Buller, “Single-photon three-dimensional imaging at up to 10 kilometers range,” Opt. Express 25, 11919–11931 (2017).
  5. E. Repasi, P. Lutzmann, O. Steinvall, M. Elmqvist, B. Göhler, and G. Anstett, “Advanced short-wavelength infrared range-gated imaging for ground applications in monostatic and bistatic configurations,” Appl. Opt. 48, 5956–5969 (2009).
  6. A. Sume, M. Gustafsson, M. Herberthson, A. Janis, S. Nilsson, J. Rahm, and A. Orbom, “Radar detection of moving targets behind corners,” IEEE Trans. Geosci. Remote Sens. 49, 2259–2267 (2011).
  7. B. Chakraborty, Y. Li, J. J. Zhang, T. Trueblood, A. Papandreou-Suppappola, and D. Morrell, “Multipath exploitation with adaptive waveform design for tracking in urban terrain,” in Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (IEEE, 2010), pp. 3894–3897.
  8. O. Steinvall, M. Elmqvist, and H. Larsson, “See around the corner using active imaging,” Proc. SPIE 8186, 818605 (2011).
  9. A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, “Controlling waves in space and time for imaging and focusing in complex media,” Nat. Photonics 6, 283–292 (2012).
  10. O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6, 549–553 (2012).
  11. A. Kirmani, T. Hutchison, J. Davis, and R. Raskar, “Looking around the corner using transient imaging,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2009), pp. 159–166.
  12. A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012).
  13. O. Gupta, T. Willwacher, A. Velten, A. Veeraraghavan, and R. Raskar, “Reconstruction of hidden 3D shapes using diffuse reflections,” Opt. Express 20, 19096–19108 (2012).
  14. F. Heide, L. Xiao, W. Heidrich, and M. B. Hullin, “Diffuse mirrors: 3D reconstruction from diffuse indirect illumination using inexpensive time-of-flight sensors,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2014), pp. 3222–3229.
  15. M. Laurenzis and A. Velten, “Nonline-of-sight laser gated viewing of scattered photons,” Opt. Eng. 53, 023102 (2014).
  16. G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, “Detection and tracking of moving objects hidden from view,” Nat. Photonics 10, 23–26 (2016).
  17. M. Laurenzis, J. Klein, E. Bacher, and N. Metzger, “Multiple-return single-photon counting of light in flight and sensing of non-line-of-sight objects at shortwave infrared wavelengths,” Opt. Lett. 40, 4815–4818 (2015).
  18. M. Buttafava, J. Zeman, A. Tosi, K. Eliceiri, and A. Velten, “Non-line-of-sight imaging using a time-gated single photon avalanche diode,” Opt. Express 23, 20997–21011 (2015).
  19. J. Klein, C. Peters, J. Martín, M. Laurenzis, and M. B. Hullin, “Tracking objects outside the line of sight using 2D intensity images,” Sci. Rep. 6, 32491 (2016).
  20. C. Thrampoulidis, G. Shulkind, F. Xu, W. T. Freeman, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell, “Exploiting occlusion in non-line-of-sight active imaging,” arXiv:1711.06297 (2017).
  21. A. L. Cohen, “Anti-pinhole imaging,” J. Mod. Opt. 29, 63–67 (1982).
  22. A. Torralba and W. T. Freeman, “Accidental pinhole and pinspeck cameras: Revealing the scene outside the picture,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 374–381.
  23. R. H. Hadfield, “Single-photon detectors for optical quantum information applications,” Nat. Photonics 3, 696–705 (2009).
  24. G. Buller and R. J. Collins, “Single-photon generation and detection,” Meas. Sci. Technol. 21, 012002 (2010).
  25. D. Shin, A. Kirmani, V. K. Goyal, and J. H. Shapiro, “Photon-efficient computational 3-D and reflectivity imaging with single-photon detectors,” IEEE Trans. Comput. Imaging 1, 112–125 (2015).
  26. L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D 60, 259–268 (1992).
  27. Z. T. Harmany, R. F. Marcia, and R. M. Willett, “This is SPIRAL-TAP: Sparse Poisson intensity reconstruction algorithms—theory and practice,” IEEE Trans. Image Process. 21, 1084–1096 (2012).
  28. D. Shin, F. Xu, D. Venkatraman, R. Lussana, F. Villa, F. Zappa, V. K. Goyal, F. N. C. Wong, and J. H. Shapiro, “Photon-efficient imaging with a single-photon camera,” Nat. Commun. 7, 12046 (2016).


Supplementary Material (2)

Visualization 1: This video provides a visualization of the laser’s raster scanning of the visible wall, and the occluder’s resulting shadow on the hidden wall.
Visualization 2: This video provides a visualization of the man-shaped pattern and the movement of the occluder shadow on the hidden wall.



Figures (13)

Fig. 1 (a) Experimental configuration. The goal is to reconstruct the reflectivity pattern on the hidden wall. A repetitively-pulsed laser source raster scans a diffuse (nearly Lambertian) visible wall. Photons striking the visible wall reflect toward the hidden wall, reflect at the hidden wall back toward the visible wall, and finally reflect at the visible wall toward the single-photon avalanche diode (SPAD), whose optics are configured to detect backscattered photons from a large patch on the visible wall. The counts are recorded by a single-photon counting module and then processed on a computer. When present, an occluder (circular black patch) obstructs some light-propagation paths from the visible wall to the hidden wall (casting a subtle shadow), and from the hidden wall to the visible wall. (b) Raw photon counts in the absence of an occluder. (c) Raw photon counts in the presence of the occluder. (d) Reconstructed reflectivity from the counts in (b). (e) Reconstructed reflectivity from the counts in (c).
Fig. 2 Top view of the experimental setup and a three-bounce light trajectory of the form Λ → ij → x → c → Ω. The laser (Λ) illuminates the visible wall (ij) and is diffusely reflected (first bounce) toward the hidden wall (x), where it reflects (second bounce) back toward the visible wall. The third-bounce reflection at the visible wall (c) returns light in the direction of the detector (Ω). A circular occluder is placed between the visible and hidden walls, and partially obstructs light propagating between them.
Fig. 3 Role of the occluder’s shadow in NLoS imaging. The red-dashed square in the ground-truth image indicates the hidden-wall area that is scanned by the occluder’s shadow as the laser raster scans the visible wall. The blue-dashed circle in the ground-truth image indicates the approximate occluder-shadow area for one ij (see Visualization 1). (a) The man-shaped pattern, placed in the upper-left quadrant of the hidden wall, is completely scanned by the occluder’s shadow pattern as the laser scans the visible wall; with the aid of the occluder, the hidden pattern is successfully reconstructed from the raw counts. (b) The T-shaped pattern, placed in the upper-right quadrant of the hidden wall that is outside of the shadow area, yields raw photon counts that fail to reconstruct the pattern owing to the occluder’s shadow not scanning that quadrant. (c) Both the man-shaped pattern and the T-shaped pattern are placed on the hidden wall, with only the man-shaped pattern being scanned by the occluder’s shadow, so the man-shaped pattern is reconstructed successfully while the T-shaped pattern is not.
Fig. 4 Reconstruction results with different values of the regularization parameter λ. We demonstrate reconstruction according to Eq. (4) with varying values for the regularization parameter λ, as indicated on the bottom of the figures. Higher λ values promote reconstructions with larger regions of near-uniform reflectivity values, whereas smaller λ values produce more detailed but noisier images. In our reconstructions, we chose a λ value that does not severely distort the image; here the preferred value is λ = 0.75.
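The reconstructions in Fig. 4 solve the penalized maximum-likelihood problem of Eq. (4), where λ weights a total-variation-style penalty (cf. [26,27]) against the data-fit term. Below is a minimal sketch of how such a penalty and objective can be assembled; the anisotropic TV form and all names are illustrative choices, not the authors' exact implementation.

```python
import numpy as np

def tv_penalty(F):
    """Anisotropic total-variation penalty pen(F): sum of absolute
    differences between vertically and horizontally adjacent pixels."""
    return np.abs(np.diff(F, axis=0)).sum() + np.abs(np.diff(F, axis=1)).sum()

def penalized_objective(F, neg_log_likelihood, lam):
    """Objective of Eq. (4): negative log-likelihood plus lam * pen(F).
    `neg_log_likelihood` is any callable returning -log L(R; F)."""
    return neg_log_likelihood(F) + lam * tv_penalty(F)
```

Sweeping lam over a small grid (e.g., 0.25, 0.75, 2.0) and inspecting the resulting minimizers reproduces the qualitative trade-off shown in Fig. 4: larger values flatten fine detail, smaller values retain detail at the cost of noise.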
Fig. 5 Time-resolved SPAD measurements showing the gated timing window used for post-selecting third-bounce photon detections while suppressing first-bounce photon detections. The gate-off period covers detection times of first-bounce photons and the ∼6 ns duration of the gate-on period is long enough to capture all third-bounce photon detections. In our experiments, only the number of detected photons in gate-on windows were recorded to form the raw-count images.
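Fig. 5 describes gating the SPAD so that only detections inside a ∼6 ns gate-on window, which excludes first-bounce returns, are counted. A minimal sketch of that post-selection on a list of detection timestamps; the gate start and width below are illustrative placeholders, not the experiment's calibrated values.

```python
import numpy as np

def gated_count(detection_times_ns, gate_start_ns=20.0, gate_width_ns=6.0):
    """Count detections whose arrival times (ns, relative to the laser pulse)
    fall inside the gate-on window; earlier first-bounce arrivals are rejected."""
    t = np.asarray(detection_times_ns, dtype=float)
    in_gate = (t >= gate_start_ns) & (t < gate_start_ns + gate_width_ns)
    return int(in_gate.sum())

# Example: two early (first-bounce) arrivals are rejected, three later ones kept.
print(gated_count([4.8, 5.1, 21.3, 23.9, 25.2]))  # -> 3
```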
Fig. 6 Experimental results on the recovery of different hidden-wall reflectivity patterns, (a)–(d). First row: ground truth patterns on the hidden wall; second row: raw photon counts for 100 × 100 raster-scanned laser positions; third row: reconstructions in the presence of the occluder, based on solving Eq. (4), showing that detailed scene features are successfully recovered.
Fig. 7 Root-mean-square error (RMSE) and reconstruction results (insets) with different numbers of detected photons per pixel (PPP). The RMSE of our binomial-likelihood method remains below 0.05 with >69 detected PPP, whereas the Gaussian-likelihood method employed in [20] requires at least ∼1100 detected PPP to achieve similar performance.
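The curves in Fig. 7 use the RMSE metric of Eq. (5). A minimal sketch, assuming the ground-truth and reconstructed reflectivity images are stored as equal-sized numpy arrays:

```python
import numpy as np

def rmse(F_hat, F_true):
    """Root-mean-square error between reconstructed and true reflectivity,
    averaged over all hidden-wall pixels (Eq. (5))."""
    F_hat = np.asarray(F_hat, dtype=float)
    F_true = np.asarray(F_true, dtype=float)
    return float(np.sqrt(np.mean((F_true - F_hat) ** 2)))
```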
Fig. 8 Reflectivity reconstructions with different numbers of detected photons per pixel (PPP). We compare the binomial-likelihood algorithm (Eq. (4)) and the Gaussian-likelihood algorithm [20] for different numbers of average detected PPP, ranging from 17 to 3438, as indicated at the bottom of each panel. The photon efficiency of the binomial-likelihood method is far superior to that of the Gaussian-likelihood method, with the latter requiring at least ∼1100 PPP to achieve reconstructions comparable to those of the former with ∼69 PPP. In the low-photon detection regime, PPP < 276, the Gaussian-likelihood method fails to reconstruct the details of the reflectivity image. Here the regularization parameter is fixed at 0.75, which causes the slight difference between the binomial-likelihood and Gaussian-likelihood reconstructions at high PPP values.
Fig. 9 (a)–(c), Reconstructions of the Fig. 6(a) reflectivity pattern obtained using circular occluders with diameters of 15.8 cm, 6.8 cm and 4.4 cm. A small (large) occluder sharpens (blurs) the image. (d)–(f), Reconstructions of two-bar reflectivity patterns with bar separations of 2 cm, 4 cm and 8 cm that were obtained using a 6.8-cm-diameter circular occluder. Our system achieves 4 cm spatial resolution.
Fig. 10 Raw detected-count measurements with a 15.8-cm-diameter occluder placed at different positions. The real location (X, Y, Z) (cm) of the occluder is indicated on the top of each figure. In (a)–(c), we fixed the position of the occluder on the Z axis and shifted it along the X and Y axes: the center of the rings reveals the (X, Y) position of the occluder. In (d)–(f), we fixed the position of the occluder on the X and Y axes and shifted it along the Z axis: the size of the rings reveals the Z-axis position of the occluder. These preliminary measurements suggest that occluder position may be localized from raw-count data.
Fig. 11 Comparing the informativeness of occluded and unoccluded measurements. We numerically simulated the setup of Fig. 2 and evaluated the informativeness of the measurements with and without an occluder from the A matrix’s singular values {σ}. In our simulations, the laser illuminates a 50 × 50 grid on the visible wall, and the hidden wall is discretized to a 50 × 50 grid. The singular values of the corresponding 2500 × 2500 A matrix were calculated for an occluded setup (blue dashed curve) and an unoccluded setup (red solid curve). The singular values of the occluded A matrix are substantially higher than those of the unoccluded matrix, suggesting that measurements in the occluded setup will be much more informative.
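The comparison in Fig. 11 rests on the singular-value spectra of the discretized forward matrices. A minimal sketch of that diagnostic, assuming the occluded and unoccluded A matrices have already been built (for instance with a forward-model routine like the one sketched after the equations below):

```python
import numpy as np

def compare_spectra(A_occluded, A_unoccluded):
    """Return the singular values of both forward matrices, largest first.
    A slower decay for the occluded matrix indicates better conditioning,
    i.e., more informative measurements."""
    s_occ = np.linalg.svd(A_occluded, compute_uv=False)
    s_unocc = np.linalg.svd(A_unoccluded, compute_uv=False)
    return s_occ, s_unocc
```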
Fig. 12 Near-Lambertian reflectance behavior of white poster-board visible wall. The blue (red) data points correspond to measurements made with the setup in the blue (red) inset: a laser illuminated the visible wall at normal incidence (20° offset from normal incidence), and a detector recorded the power reflected at different viewing angles. The green line is the theoretical cosine curve for a perfect Lambertian surface. We find that the visible wall has ∼80% reflectivity and is nearly Lambertian except for a small specular component when the viewing angle is perpendicular to the surface. We also performed this characterization for the patterns on the hidden wall, and found that the Lambertian property of those patterns was similar to that of the visible wall.
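The near-Lambertian characterization in Fig. 12 amounts to fitting the measured angular reflectance to a cosine law. A minimal sketch using scipy's curve_fit; the angle and power samples below are illustrative stand-ins, not the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def lambertian(theta_deg, rho):
    """Ideal Lambertian falloff: reflected power proportional to cos(theta),
    scaled by the surface reflectivity rho."""
    return rho * np.cos(np.deg2rad(theta_deg))

# Illustrative viewing angles (degrees) and normalized reflected powers.
angles = np.array([0.0, 15.0, 30.0, 45.0, 60.0, 75.0])
powers = np.array([0.82, 0.79, 0.71, 0.58, 0.41, 0.21])

(rho_fit,), _ = curve_fit(lambertian, angles, powers, p0=[0.8])
print(f"fitted reflectivity: {rho_fit:.2f}")  # close to the ~80% quoted in Fig. 12
```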
Fig. 13 Results of the long-acquisition-time background-light measurement used to calibrate B_ij. The reflectivity pattern on the hidden wall was replaced with a black surface. A total of 35.6 million laser pulses were transmitted at each laser point ij on the 100 × 100 illumination grid, and the third-bounce counts were recorded by the SPAD. The nonuniformity is mainly due to scattering from the raster-scan galvo mirrors and to SPAD afterpulsing arising from first-bounce photon detections.
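Under the measurement model in the equations below, the Fig. 13 calibration has a simple closed form: with the hidden pattern blacked out, Y_ij = 0 and the per-pulse no-detection probability is exp(−ηB_ij), so B_ij follows from the observed detection fraction. A minimal sketch, assuming (hypothetically) that `R_bg` holds the gated background counts per laser position and that N pulses were fired at each position:

```python
import numpy as np

def calibrate_background(R_bg, N, eta):
    """Estimate B_ij from background counts taken with the hidden pattern
    replaced by a black surface: Pr(detection per pulse) = 1 - exp(-eta * B_ij),
    hence B_ij = -ln(1 - R_ij / N) / eta."""
    frac = np.clip(np.asarray(R_bg, dtype=float) / N, 0.0, 1.0 - 1e-12)
    return -np.log1p(-frac) / eta
```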

Equations (16)


$$Y_{ij} = K_p \sum_{k,l} A_{kl}^{(ij)} F_{kl}, \tag{1}$$
$$P_0^{(ij)}(\mathbf{F}) = \exp\!\left[-\eta\left(Y_{ij} + B_{ij}\right)\right], \tag{2}$$
$$\Pr(R_{ij};\mathbf{F}) = \binom{N}{R_{ij}}\left[1 - P_0^{(ij)}(\mathbf{F})\right]^{R_{ij}}\left[P_0^{(ij)}(\mathbf{F})\right]^{N-R_{ij}}. \tag{3}$$
$$\hat{\mathbf{F}} = \operatorname*{arg\,min}_{\mathbf{F}:\,F_{kl}\ge 0}\left\{-\log\!\left[\mathcal{L}(\mathbf{R};\mathbf{F})\right] + \lambda\,\mathrm{pen}(\mathbf{F})\right\}, \tag{4}$$
$$\mathrm{RMSE}(\hat{\mathbf{F}},\mathbf{F}) = \sqrt{\frac{1}{n^2}\sum_{k=1}^{n}\sum_{l=1}^{m}\left(F_{kl}-\hat{F}_{kl}\right)^2}, \tag{5}$$
$$K_p \int \frac{f(\mathbf{x})\,G_{\Lambda,ij,\mathbf{x},\mathbf{c},\Omega}}{\|ij-\mathbf{x}\|^2\,\|\mathbf{x}-\mathbf{c}\|^2\,\|\mathbf{c}-\Omega\|^2}\,d\mathbf{x}\,d\mathbf{c}\,d\Omega, \tag{6}$$
$$G_{\Lambda,ij,\mathbf{x},\mathbf{c},\Omega} \equiv \cos(\overrightarrow{\Lambda\,ij},\mathbf{n}_{ij})\cos(\overrightarrow{\mathbf{x}\,ij},\mathbf{n}_{ij})\times\cos(\overrightarrow{\mathbf{x}\,ij},\mathbf{n}_{\mathbf{x}})\cos(\overrightarrow{\mathbf{x}\,\mathbf{c}},\mathbf{n}_{\mathbf{x}})\cos(\overrightarrow{\mathbf{x}\,\mathbf{c}},\mathbf{n}_{\mathbf{c}})\cos(\overrightarrow{\mathbf{c}\,\Omega},\mathbf{n}_{\mathbf{c}}), \tag{7}$$
$$Y_{ij} = K_p \int_{\mathcal{S}(ij,\mathbf{c})} d\mathbf{x} \int_{\mathcal{C}} d\mathbf{c} \int_{\mathcal{D}} d\Omega\,\frac{f(\mathbf{x})\,G_{\Lambda,ij,\mathbf{x},\mathbf{c},\Omega}}{\|ij-\mathbf{x}\|^2\,\|\mathbf{x}-\mathbf{c}\|^2\,\|\mathbf{c}-\Omega\|^2}, \tag{8}$$
$$Y_{ij} = K_p \int_{\mathcal{S}} d\mathbf{x}\,f(\mathbf{x}) \int_{\mathcal{C}} d\mathbf{c} \int_{\mathcal{D}} d\Omega\,\frac{\mathbf{1}_{\mathcal{S}(ij,\mathbf{c})}(\mathbf{x})\,G_{\Lambda,ij,\mathbf{x},\mathbf{c},\Omega}}{\|ij-\mathbf{x}\|^2\,\|\mathbf{x}-\mathbf{c}\|^2\,\|\mathbf{c}-\Omega\|^2} = K_p \int_{\mathcal{S}} d\mathbf{x}\,f(\mathbf{x})\,A^{(ij)}(\mathbf{x}), \tag{9}$$
$$A^{(ij)}(\mathbf{x}) \equiv \int_{\mathcal{C}} d\mathbf{c} \int_{\mathcal{D}} d\Omega\,\frac{\mathbf{1}_{\mathcal{S}(ij,\mathbf{c})}(\mathbf{x})\,G_{\Lambda,ij,\mathbf{x},\mathbf{c},\Omega}}{\|ij-\mathbf{x}\|^2\,\|\mathbf{x}-\mathbf{c}\|^2\,\|\mathbf{c}-\Omega\|^2}. \tag{10}$$
$$Y_{ij} = K_p \sum_{k,l} A_{kl}^{(ij)} F_{kl}. \tag{11}$$
$$\Theta(\mathbf{x},\mathbf{y}) = \begin{cases} 1, & \text{unobstructed line of sight between } \mathbf{x} \text{ and } \mathbf{y}, \\ 0, & \text{obstructed line of sight between } \mathbf{x} \text{ and } \mathbf{y}. \end{cases} \tag{12}$$
$$\mathbf{y} = \mathbf{A}\mathbf{f}. \tag{13}$$
$$P_0^{(ij)}(\mathbf{F}) = \exp\!\left[-\eta\left(Y_{ij}+B_{ij}\right)\right] \approx 1 - \eta\left(Y_{ij}+B_{ij}\right). \tag{14}$$
$$\Pr(R_{ij};\mathbf{F}) = \binom{N}{R_{ij}}\left[1 - P_0^{(ij)}(\mathbf{F})\right]^{R_{ij}}\left[P_0^{(ij)}(\mathbf{F})\right]^{N-R_{ij}}. \tag{15}$$
$$-\log\!\left[\mathcal{L}(\mathbf{R};\mathbf{F})\right] = -\log\!\left(\prod_{i,j}\Pr(R_{ij};\mathbf{F})\right) = -\sum_{ij}\log\!\left[\Pr(R_{ij};\mathbf{F})\right] = \sum_{ij}\left\{(N-R_{ij})\left[\eta K_p \sum_{k,l} A_{kl}^{(ij)} F_{kl}\right] - R_{ij}\log\!\left[\eta K_p \sum_{k,l} A_{kl}^{(ij)} F_{kl} + \eta B_{ij}\right]\right\}, \tag{16}$$
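To make the forward model and likelihood above concrete, here is a simplified, illustrative discretization: a single circular occluder between two parallel walls, the detector's field of view collapsed to one visible-wall point c and its aperture to one point, and the first-bounce cosine factors at the laser spot omitted. It is a sketch of the structure of Eqs. (10)–(16), not the authors' simulation code; every geometric parameter and name is a placeholder.

```python
import numpy as np

def theta(p, q, occ_center, occ_radius):
    """Occlusion indicator Theta(p, q): 1 if the segment p -> q clears the
    occluder (modeled as a sphere of radius occ_radius), else 0."""
    p, q, c0 = (np.asarray(v, dtype=float) for v in (p, q, occ_center))
    d = q - p
    t = np.clip(np.dot(c0 - p, d) / np.dot(d, d), 0.0, 1.0)
    return 1.0 if np.linalg.norm(p + t * d - c0) >= occ_radius else 0.0

def cos_angle(v, n):
    """Cosine of the angle between vector v and unit normal n."""
    return np.dot(v, n) / (np.linalg.norm(v) + 1e-12)

def build_A(laser_spots, hidden_pixels, c, det, n_vis, n_hid,
            occ_center, occ_radius):
    """Discretized forward matrix: entry (ij, kl) approximates A^(ij)(x_kl),
    keeping the occlusion indicator and the second- and third-bounce
    cosine factors of G."""
    A = np.zeros((len(laser_spots), len(hidden_pixels)))
    for a, ij in enumerate(laser_spots):
        for b, x in enumerate(hidden_pixels):
            vis = theta(ij, x, occ_center, occ_radius) * \
                  theta(x, c, occ_center, occ_radius)
            if vis == 0.0:
                continue
            g = (cos_angle(ij - x, n_hid) * cos_angle(c - x, n_hid) *
                 cos_angle(x - c, n_vis) * cos_angle(det - c, n_vis))
            r2 = (np.linalg.norm(ij - x) ** 2 *
                  np.linalg.norm(x - c) ** 2 *
                  np.linalg.norm(det - c) ** 2)
            A[a, b] = max(g, 0.0) / r2
    return A

def binomial_nll(F, A, R, N, eta, Kp, B):
    """Negative log-likelihood of the gated counts R (Eq. (16)), up to terms
    constant in F, using the low-flux approximation of Eq. (14). F is the
    hidden-wall reflectivity image, flattened in the same pixel order used
    to build A; R and B are flattened over laser positions."""
    Y = Kp * (A @ np.ravel(F))
    lam = eta * (Y + B)
    return float(np.sum((N - R) * lam - R * np.log(lam + 1e-30)))
```

Feeding two such matrices, built with and without the occlusion indicator, to the compare_spectra sketch given after Fig. 11 reproduces the qualitative gap in singular values shown there, and binomial_nll is the data-fit term that enters the penalized objective of Eq. (4).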
