Abstract

Recent works have demonstrated non-line-of-sight (NLOS) reconstruction using the time-resolved signal from multiply scattered light. These works combine ultrafast imaging systems with computation that back-projects the recorded space-time signal to build a probabilistic map of the hidden geometry. Unfortunately, this computation is slow, becoming a bottleneck as the imaging technology improves. In this work, we propose a new back-projection technique for NLOS reconstruction, which is up to a thousand times faster than previous work, with almost no quality loss. We build on the observation that the hidden geometry probability map can be constructed as the intersection of the three-bounce space-time manifolds defined by the light illuminating the hidden geometry and the visible point receiving the light scattered back from it. This allows us to pose the reconstruction of the hidden geometry as the voxelization of these space-time manifolds, which has lower theoretical complexity and is easily implemented on the GPU. We demonstrate the efficiency and quality of our technique compared against previous methods on both captured and synthetic data.
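As a back-of-the-envelope illustration of the three-bounce geometry referred to above (a minimal NumPy sketch, not the authors' implementation; the function names, units, and tolerance are our assumptions), a hidden point x illuminated from a wall point s and observed through a wall point p contributes to the time bin given by the path length |s − x| + |x − p|, so all candidate points for a fixed (s, p, t) lie on an ellipsoid with foci s and p:

```python
import numpy as np

C = 1.0  # speed of light in scene units (assumption: distances measured in light-travel time)

def three_bounce_time(s, x, p):
    """Time of flight of the s -> x -> p path for a hidden point x
    (the known laser -> s and p -> camera legs are omitted)."""
    return (np.linalg.norm(x - s) + np.linalg.norm(x - p)) / C

def on_ellipsoid(s, p, t, x, tol=1e-6):
    """True if x lies on the space-time ellipsoid E(s, p, t) with foci s and p."""
    return abs(three_bounce_time(s, x, p) - t) < tol

# Hypothetical example: a virtual source s and a visible point p on the relay wall,
# and a hidden point x; the time bin t it contributes to parameterizes its ellipsoid.
s = np.array([0.0, 0.0, 0.0])
p = np.array([0.5, 0.0, 0.0])
x = np.array([0.3, 0.0, 1.0])
t = three_bounce_time(s, x, p)
assert on_ellipsoid(s, p, t, x)
```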

© 2017 Optical Society of America

References

  1. A. Jarabo, B. Masia, J. Marco, and D. Gutierrez, “Recent advances in transient imaging: A computer graphics and vision perspective,” Vis. Inform. 1 (2017), https://arxiv.org/abs/1611.00939.
  2. A. Bhandari and R. Raskar, “Signal processing for time-of-flight imaging sensors,” IEEE Signal Process. Mag. 33(5), 45–58 (2016).
  3. A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012).
  4. P. Y. Han, G. C. Cho, and X.-C. Zhang, “Time-domain transillumination of biological tissues with terahertz pulses,” Opt. Lett. 25(4), 242–244 (2000).
  5. D. Raviv, C. Barsi, N. Naik, M. Feigin, and R. Raskar, “Pose estimation using time-resolved inversion of diffuse light,” Opt. Express 22(17), 20164–20176 (2014).
  6. F. Heide, L. Xiao, A. Kolb, M. B. Hullin, and W. Heidrich, “Imaging in scattering media using correlation image sensors and sparse convolutional coding,” Opt. Express 22(21), 26338–26350 (2014).
  7. A. Velten, D. Wu, A. Jarabo, B. Masia, C. Barsi, C. Joshi, E. Lawson, M. Bawendi, D. Gutierrez, and R. Raskar, “Femto-photography: Capturing and visualizing the propagation of light,” ACM Trans. Graph. 32(4), 44 (2013).
  8. F. Heide, L. Xiao, W. Heidrich, and M. B. Hullin, “Diffuse mirrors: 3D reconstruction from diffuse indirect illumination using inexpensive time-of-flight sensors,” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2014), pp. 3222–3229.
  9. M. Buttafava, J. Zeman, A. Tosi, K. Eliceiri, and A. Velten, “Non-line-of-sight imaging using a time-gated single photon avalanche diode,” Opt. Express 23(16), 20997–21011 (2015).
  10. M. Laurenzis and A. Velten, “Nonline-of-sight laser gated viewing of scattered photons,” Opt. Eng. 53(2), 023102 (2014).
  11. J. Klein, C. Peters, J. Martín, M. Laurenzis, and M. B. Hullin, “Tracking objects outside the line of sight using 2D intensity images,” Sci. Rep. 6, 32491 (2016).
  12. M. B. Hullin, “Computational imaging of light in flight,” in SPIE/COS Photonics Asia (2014).
  13. O. Gupta, T. Willwacher, A. Velten, A. Veeraraghavan, and R. Raskar, “Reconstruction of hidden 3D shapes using diffuse reflections,” Opt. Express 20(17), 19096–19108 (2012).
  14. M. Schwarz and H.-P. Seidel, “Fast parallel surface and solid voxelization on GPUs,” ACM Trans. Graph. 29(6), 179 (2010).
  15. E. Eisemann and X. Décoret, “Fast scene voxelization and applications,” in Proceedings of the 2006 Symposium on Interactive 3D Graphics and Games (ACM, 2006), pp. 71–78.
  16. J. Pantaleoni, “Voxelpipe: a programmable pipeline for 3D voxelization,” in Proceedings of the ACM SIGGRAPH Symposium on High Performance Graphics (ACM, 2011), pp. 99–106.
  17. A. Jarabo, J. Marco, A. Muñoz, R. Buisan, W. Jarosz, and D. Gutierrez, “A framework for transient rendering,” ACM Trans. Graph. 33(6), 177 (2014).
  18. L. Zhang, W. Chen, D. S. Ebert, and Q. Peng, “Conservative voxelization,” Vis. Comput. 23(9), 783–792 (2007).

Figures (7)

Fig. 1 Overview of our method. Left: Illustration of our reconstruction setup. A laser pulse is emitted towards a diffuse wall, creating a virtual point light s illuminating the occluded scene. The reflection of the occluded geometry travels back to the diffuse wall, which is imaged by the camera. The total propagation time from a hidden surface point x forms an ellipsoid with focal points at s and p. Right: The intersection of several of these ellipsoids defines a probability map for the occluded geometry, from which the reconstruction is performed with a speed-up factor of three orders of magnitude over previous approaches.
Fig. 2 Reconstruction of a mannequin (see bottom right) captured with a streak camera and a femtosecond laser [7], reconstructed using traditional back-projection (left, in blue) and our method (right, in orange), which is two orders of magnitude faster while yielding similar quality. The inset on the top-right shows the same reconstructed object under a different camera angle (inset from [3]).
Fig. 3 Reconstruction of the scene captured using a SPAD [9], with traditional back-projection (left) and our method (right). The rightmost images show the capture setup (insets from [9]). We use a different color to differentiate each object (blue: T; pink: big patch; green: small patch; yellow: noise from the camera). The quality of the reconstruction is almost identical. Note that the original dataset had a higher amount of camera noise, which we filtered before reconstruction (the insets show the reconstruction with each method for the original unfiltered data). Our method takes only 1.8 s to reconstruct the scene, compared to 14.8 s for traditional back-projection.
Fig. 4 Comparison of the reconstruction quality computed with our method (top) and traditional back-projection (bottom), for an increasing number of voxels $\bar{X}$. A comparison of the cost of both algorithms, as well as the progression of their numerical error, can be found in Fig. 5 (left).
Fig. 5 Cost (top) and error with respect to the ground truth (bottom) for our method (solid orange) and traditional back-projection (dashed blue), on the synthetic scene shown in Fig. 4. Each graph varies one reconstruction parameter while fixing the other two: the number of voxels $\bar{X}$ (left), the input spatial resolution $\bar{P}$ (middle), and the input temporal resolution $T$ (right). We fix the parameters to a reconstruction space of $\bar{X} = 256^3$ voxels, an input spatial resolution of $\bar{P} = 128$ pixels, and a temporal resolution of $T = 1024$ frames. For all reconstructions we use 128 measurements with different virtual light positions $s$. As shown in the bottom row, the error introduced by our algorithm with respect to traditional back-projection is negligible. Note that at low spatial ($\bar{P}$) and temporal ($T$) resolutions the error in both cases is large and scene dependent, which explains the counter-intuitive behavior at very low $\bar{P}$ (bottom center) and $T$ (bottom right).
Fig. 6 Reconstruction results of our method with increasing ellipsoid tessellation quality (168, 656, and 2592 triangles per ellipsoid, respectively). The rightmost image shows the reconstruction result using traditional back-projection. Our final reconstruction is comparable to traditional back-projection, computed with a speed-up of 1300×.
Fig. 7 Left: MSE of our method with respect to a ground-truth reconstruction for different voxel resolutions $\bar{X}$, and for an increasing number of triangles per ellipsoid. The minimum error is bounded by the reconstruction resolution, and increasing the number of triangles past a certain point does not improve the result. Right: Reconstruction time as a function of the number of triangles per ellipsoid (reconstruction resolution of $\bar{X} = 256^3$). In practice, we found that a near-optimal result can be obtained with a tessellation level at which triangles are roughly the size of a voxel.
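The observation in Fig. 7 (triangles roughly the size of a voxel suffice) can be turned into a simple rule for choosing the tessellation level. The sketch below is our own illustration only; the subdivision scheme, the edge-halving assumption, and the function name are not taken from the paper.

```python
def tessellation_level(semi_axes, voxel_size, base_edge=1.0, max_level=8):
    """Pick the smallest subdivision level whose longest triangle edge is at most
    one voxel, assuming each subdivision step halves the edge length of a unit
    sphere mesh that is then scaled anisotropically by the ellipsoid semi-axes."""
    edge = base_edge * max(semi_axes)  # upper bound on edge length after scaling
    level = 0
    while edge > voxel_size and level < max_level:
        edge *= 0.5
        level += 1
    return level

# Example: an ellipsoid with semi-axes (2.0, 1.5, 1.5) scene units, voxelized into
# a 256^3 grid spanning a 4-unit cube (voxel size 4 / 256).
print(tessellation_level((2.0, 1.5, 1.5), voxel_size=4.0 / 256))
```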

Equations (7)

$$I(\bar{p}, t) = \int_x L_o\!\left(x \rightarrow p,\, t - \tau(x \rightarrow p)\right) G(x \leftrightarrow p)\, V(x \leftrightarrow p)\, \mathrm{d}x,$$
$$I_s(\bar{p}, t) = \int_x L_o\!\left(s \rightarrow x,\, t_o\right) f(s \rightarrow x \rightarrow p)\, G(s \leftrightarrow x \leftrightarrow p)\, \mathrm{d}x,$$
$$p(x) = \int_S \int_P \int_t I_s(\bar{p}, t)\, \delta\!\left(t - \tau(s \rightarrow x \rightarrow p)\right) G(s \leftrightarrow x \leftrightarrow p)^{-1}\, \mathrm{d}t\, \mathrm{d}p\, \mathrm{d}s \approx \sum_{s \in S} \sum_{\bar{p} \in P} I_s\!\left(\bar{p}, \tau(s \rightarrow x \rightarrow p)\right) G(s \leftrightarrow x \leftrightarrow p)^{-1},$$
$$p(x) = \sum_{s \in S} \sum_{\bar{p} \in P} I_s(\bar{p}, t)\, \mathrm{isect}\!\left(x, E(s, p, t)\right),$$
$$\arg\min_{o \in O} \left( \alpha_o \max\!\left(\mathrm{Eig}(M_i)\right) < \epsilon \right),$$
$$O(E) = \begin{cases} O(T \times \bar{X}_\Delta) \leq O(\bar{X}^3) & \text{if } \bar{X}_\Delta \geq 1, \\ O(T) & \text{elsewhere.} \end{cases}$$
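For reference, the discrete back-projection in the equations above can be sketched directly in NumPy. This is a simplified illustration only: the function name, the quadratic distance falloff used for $G$, and the uniform time binning are our assumptions, and the paper's method replaces this per-voxel accumulation with GPU voxelization of the ellipsoids $E(s, p, t)$.

```python
import numpy as np

def traditional_backprojection(I, S, P, voxels, c=1.0, bin_width=1.0, eps=1e-8):
    """Slow reference back-projection: for every voxel x, accumulate the recorded
    intensity at the time bin matching tau(s -> x -> p), compensated by a
    simplified geometric term G = (|s - x| |x - p|)^2 (assumption).

    I[i, j, k]: time-resolved signal for virtual source S[i] and visible point P[j]
    at time bin k.  voxels: (N, 3) array of voxel centers."""
    prob = np.zeros(len(voxels))
    n_bins = I.shape[2]
    for i, s in enumerate(S):
        for j, p in enumerate(P):
            d1 = np.linalg.norm(voxels - s, axis=1)
            d2 = np.linalg.norm(voxels - p, axis=1)
            k = np.round((d1 + d2) / (c * bin_width)).astype(int)  # bin of tau(s->x->p)
            valid = (k >= 0) & (k < n_bins)
            G = (d1 * d2) ** 2 + eps
            prob[valid] += I[i, j, k[valid]] / G[valid]
    return prob
```

The cost of this loop is $O(|S|\,|P|\,\bar{X}^3)$, since every voxel is visited for every $(s, \bar{p})$ pair; the voxelization formulation above instead touches only the voxels intersected by each ellipsoid, which is where the reported speed-up comes from.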