Abstract

We demonstrate an imaging technique that identifies and classifies objects hidden behind scattering media and is invariant to changes in calibration parameters within a training range. Traditional techniques for imaging through scattering solve an inverse problem and are limited by the need to tune a forward model with multiple calibration parameters (such as camera field of view and illumination position). Instead of tuning a forward model and directly inverting the optical scattering, we take a data-driven approach and leverage convolutional neural networks (CNNs) to learn a model that is invariant to variations in calibration parameters within the training range, and nearly invariant beyond it. This enables robust imaging through scattering that is not sensitive to calibration. The CNN is trained on a large synthetic dataset generated with a Monte Carlo (MC) model that contains random realizations of the major calibration parameters. The method is evaluated with a time-resolved camera, and multiple experimental results are provided, including pose estimation of a mannequin hidden behind a paper sheet, with 23 correct classifications out of 30 tests across three poses (76.6% accuracy on real-world measurements). This approach paves the way toward real-time, practical non-line-of-sight (NLOS) imaging applications.

© 2017 Optical Society of America
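As a toy illustration of the training strategy described above (all names and parameter values here are hypothetical; the paper's actual MC renderer and CNN are far richer), the sketch below randomizes two "calibration" parameters — an illumination delay and a detector gain — when generating synthetic time-resolved signals, then builds a classifier from that calibration-randomized data:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 64  # time bins of the simulated time-resolved measurement

def simulate_measurement(pose, rng):
    """Toy stand-in for the Monte Carlo renderer: each hidden pose has a
    characteristic time profile, distorted by randomly drawn calibration
    parameters (illumination delay, detector gain) and photon noise."""
    centers = {0: 20, 1: 30, 2: 40}          # hypothetical pose signatures
    delay = rng.uniform(-3, 3)               # illumination-position jitter
    gain = rng.uniform(0.5, 2.0)             # detector-gain jitter
    t = np.arange(T)
    signal = gain * np.exp(-((t - centers[pose] - delay) ** 2) / 18.0)
    return signal + rng.normal(0, 0.02, T)

# Training set: each sample drawn with a fresh calibration realization.
y = np.tile([0, 1, 2], 200)
X = np.array([simulate_measurement(p, rng) for p in y])

def normalize(v):
    # Mean subtraction + unit norm removes the gain nuisance parameter.
    v = v - v.mean()
    return v / np.linalg.norm(v)

# Class templates learned from calibration-randomized data
# (a trivial stand-in for the CNN's learned invariant features).
templates = np.array([normalize(X[y == c].mean(0)) for c in range(3)])

def classify(m):
    return int(np.argmax(templates @ normalize(m)))

# Evaluate on fresh measurements with unseen calibration draws.
test = [(p, simulate_measurement(p, rng)) for p in [0, 1, 2] * 50]
acc = np.mean([classify(m) == p for p, m in test])
```

Because the calibration jitter is seen during training, the averaged templates stay discriminative on held-out calibration draws — the same intuition behind training the CNN on MC data containing random realizations of the calibration parameters.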


References


  1. D. Berman, T. Treibitz, and S. Avidan, “Non-local image dehazing,” in IEEE Conference on Computer Vision and Pattern Recognition (2016).
  2. V. Holodovsky, Y. Y. Schechner, A. Levin, A. Levis, and A. Aides, “In-situ multi-view multi-scattering stochastic tomography,” in IEEE International Conference on Computational Photography (2016).
  3. M. Sheinin and Y. Y. Schechner, “The next best underwater view,” in IEEE Conference on Computer Vision and Pattern Recognition (2016).
  4. I. Gkioulekas, S. Zhao, K. Bala, T. Zickler, and A. Levin, “Inverse volume rendering with material dictionaries,” ACM Trans. Graph. 32, 162 (2013).
  5. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254, 1178–1181 (1991).
  6. Z. Yaqoob, D. Psaltis, M. S. Feld, and C. Yang, “Optical phase conjugation for turbidity suppression in biological samples,” Nat. Photonics 2, 110–115 (2008).
  7. I. M. Vellekoop, A. Lagendijk, and A. P. Mosk, “Exploiting disorder for perfect focusing,” Nat. Photonics 4, 320–322 (2010).
  8. O. Katz, E. Small, Y. Guan, and Y. Silberberg, “Noninvasive nonlinear focusing and imaging through strongly scattering turbid layers,” Optica 1, 170–174 (2014).
  9. R. Horstmeyer, H. Ruan, and C. Yang, “Guidestar-assisted wavefront-shaping methods for focusing light into biological tissue,” Nat. Photonics 9, 563–571 (2015).
  10. J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012).
  11. O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8, 784–790 (2014).
  12. X. Xu, H. Liu, and L. V. Wang, “Time-reversed ultrasonically encoded optical focusing into scattering media,” Nat. Photonics 5, 154–157 (2011).
  13. X. Wang, Y. Pang, G. Ku, X. Xie, G. Stoica, and L. V. Wang, “Noninvasive laser-induced photoacoustic tomography for structural and functional in vivo imaging of the brain,” Nat. Biotechnol. 21, 803–806 (2003).
  14. L. V. Wang and S. Hu, “Photoacoustic tomography: in vivo imaging from organelles to organs,” Science 335, 1458–1462 (2012).
  15. G. Satat, B. Heshmat, N. Naik, A. Redo-Sanchez, and R. Raskar, “Advances in ultrafast optics and imaging applications,” Proc. SPIE 9835, 98350Q (2016).
  16. O. Gupta, T. Willwacher, A. Velten, A. Veeraraghavan, and R. Raskar, “Reconstruction of hidden 3D shapes using diffuse reflections,” Opt. Express 20, 19096–19108 (2012).
  17. A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012).
  18. G. Satat, B. Heshmat, C. Barsi, D. Raviv, O. Chen, M. G. Bawendi, and R. Raskar, “Locating and classifying fluorescent tags behind turbid layers using time-resolved inversion,” Nat. Commun. 6, 6796 (2015).
  19. F. Heide, L. Xiao, W. Heidrich, and M. B. Hullin, “Diffuse mirrors: 3D reconstruction from diffuse indirect illumination using inexpensive time-of-flight sensors,” in IEEE Conference on Computer Vision and Pattern Recognition (2014).
  20. F. Heide, L. Xiao, A. Kolb, M. B. Hullin, and W. Heidrich, “Imaging in scattering media using correlation image sensors and sparse convolutional coding,” Opt. Express 22, 26338–26350 (2014).
  21. A. Kadambi, H. Zhao, B. Shi, and R. Raskar, “Occluded imaging with time-of-flight sensors,” ACM Trans. Graph. 35, 15 (2016).
  22. A. Bhandari, C. Barsi, and R. Raskar, “Blind and reference-free fluorescence lifetime estimation via consumer time-of-flight sensors,” Optica 2, 965–973 (2015).
  23. C. Jin, Z. Song, S. Zhang, J. Zhai, and Y. Zhao, “Recovering three-dimensional shape through a small hole using three laser scatterings,” Opt. Lett. 40, 52–55 (2015).
  24. D. Raviv, C. Barsi, N. Naik, M. Feigin, and R. Raskar, “Pose estimation using time-resolved inversion of diffuse light,” Opt. Express 22, 20164–20176 (2014).
  25. G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, “Detection and tracking of moving objects hidden from view,” Nat. Photonics 10, 23–26 (2015).
  26. A. Redo-Sanchez, B. Heshmat, A. Aghasi, S. Naqvi, M. Zhang, J. Romberg, and R. Raskar, “Terahertz time-gated spectral imaging for content extraction through layered structures,” Nat. Commun. 7, 12665 (2016).
  27. G. Satat, B. Heshmat, D. Raviv, and R. Raskar, “All photons imaging through volumetric scattering,” Sci. Rep. 6, 33946 (2016).
  28. M. Buttafava, J. Zeman, A. Tosi, K. Eliceiri, and A. Velten, “Non-line-of-sight imaging using a time-gated single photon avalanche diode,” Opt. Express 23, 20997–21011 (2015).
  29. M. Laurenzis, F. Christnacher, J. Klein, M. B. Hullin, and A. Velten, “Study of single photon counting for non-line-of-sight vision,” Proc. SPIE 9492, 94920K (2015).
  30. J. Klein, C. Peters, J. Martín, M. Laurenzis, and M. B. Hullin, “Tracking objects outside the line of sight using 2D intensity images,” Sci. Rep. 6, 32491 (2016).
  31. Y. Bengio, A. Courville, and P. Vincent, “Representation learning: A review and new perspectives,” IEEE Trans. Pattern Anal. Mach. Intell. 35, 1798–1828 (2013).
  32. S. Mallat, “Understanding deep convolutional networks,” Phil. Trans. R. Soc. A 374, 203 (2016).
  33. P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P. A. Manzagol, “Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion,” J. Mach. Learn. Res. 11, 3371–3408 (2010).
  34. A. Sharif Razavian, H. Azizpour, J. Sullivan, and S. Carlsson, “CNN features off-the-shelf: An astounding baseline for recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (2014).
  35. L. Waller and L. Tian, “Computational imaging: Machine learning for 3D microscopy,” Nature 523, 416–417 (2015).
  36. K. Kulkarni, S. Lohit, P. Turaga, R. Kerviche, and A. Ashok, “ReconNet: Non-iterative reconstruction of images from compressively sensed measurements,” in IEEE Conference on Computer Vision and Pattern Recognition (2016).
  37. A. Profeta, A. Rodriguez, and H. S. Clouse, “Convolutional neural networks for synthetic aperture radar classification,” Proc. SPIE 9843, 98430M (2016).
  38. M. Chi, A. Plaza, J. A. Benediktsson, Z. Sun, J. Shen, and Y. Zhu, “Big data for remote sensing: Challenges and opportunities,” Proc. IEEE 104, 2207–2219 (2016).
  39. R. Horisaki, R. Takagi, and J. Tanida, “Learning-based imaging through scattering media,” Opt. Express 24, 13738–13743 (2016).
  40. B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, “DehazeNet: An end-to-end system for single image haze removal,” IEEE Trans. Image Process. 25, 5187–5198 (2016).
  41. A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” arXiv:1702.08516 (2017).
  42. A. Abdolmanafi, L. Duong, N. Dahdah, and F. Cheriet, “Deep feature learning for automatic tissue classification of coronary artery using optical coherence tomography,” Biomed. Opt. Express 8, 1203–1220 (2017).
  43. T. Ando, R. Horisaki, and J. Tanida, “Speckle-learning-based object recognition through scattering media,” Opt. Express 23, 33902–33910 (2015).
  44. A. Saade, F. Caltagirone, I. Carron, L. Daudet, A. Drémeau, S. Gigan, and F. Krzakala, “Random projections through multiple optical scattering: Approximating kernels at the speed of light,” in IEEE International Conference on Acoustics, Speech and Signal Processing (2016), pp. 6215–6219.
  45. J. Carlsson, P. Hellentin, L. Malmqvist, A. Persson, W. Persson, and C. G. Wahlström, “Time-resolved studies of light propagation in paper,” Appl. Opt. 34, 1528–1535 (1995).
  46. A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei, “Large-scale video classification with convolutional neural networks,” in IEEE Conference on Computer Vision and Pattern Recognition (2014).
  47. K. Simonyan and A. Zisserman, “Two-stream convolutional networks for action recognition in videos,” arXiv:1406.2199 (2014).
  48. K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv:1409.1556 (2014).
  49. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (2016).
  50. S. Kumar and A. Savakis, “Robust domain adaptation on the L1-Grassmannian manifold,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops (2016).
  51. L. van der Maaten and G. Hinton, “Visualizing data using t-SNE,” J. Mach. Learn. Res. 9, 2579–2605 (2008).
  52. T. D. Kulkarni, W. F. Whitney, P. Kohli, and J. Tenenbaum, “Deep convolutional inverse graphics network,” arXiv:1503.03167 (2015).
  53. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” arXiv:1406.2661 (2014).


Lagendijk, A.

J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012).
[Crossref] [PubMed]

I. M. Vellekoop, A. Lagendijk, and A. P. Mosk, “Exploiting disorder for perfect focusing,” Nat. Photonics 4, 320–322 (2010).

Lajoie, I.

P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P. A. Manzagol, “Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion,” J. Machine Learning Research 11, 3371–3408 (2010).

Larochelle, H.

P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P. A. Manzagol, “Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion,” J. Machine Learning Research 11, 3371–3408 (2010).

Laurenzis, M.

J. Klein, C. Peters, J. Martín, M. Laurenzis, and M. B. Hullin, “Tracking objects outside the line of sight using 2d intensity images,” Sci. Rep. 6, 32491 (2016).
[Crossref] [PubMed]

M. Laurenzis, F. Christnacher, J. Klein, M. B. Hullin, and A. Velten, “Study of single photon counting for non-line-of-sight vision,” Proc. SPIE 9492, 94920K (2015).
[Crossref]

Leach, J.

G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, “Detection and tracking of moving objects hidden from view,” Nat. Photonics 10, 23–26 (2015).
[Crossref]

Lee, J.

A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” arXiv preprint arXiv:1702.08516 (2017).

Leung, T.

A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei, “Large-scale video classification with convolutional neural networks,” in IEEE Conference on Computer Vision and Pattern Recognition (2014).

Levin, A.

I. Gkioulekas, S. Zhao, K. Bala, T. Zickler, and A. Levin, “Inverse volume rendering with material dictionaries,” ACM Transactions on Graphics 32, 162 (2013).
[Crossref]

V. Holodovsky, Y. Y. Schechner, A. Levin, A. Levis, and A. Aides, “In-situ multi-view multi-scattering stochastic tomography,” in IEEE International Conference on Computational Photography (2016).

Levis, A.

V. Holodovsky, Y. Y. Schechner, A. Levin, A. Levis, and A. Aides, “In-situ multi-view multi-scattering stochastic tomography,” in IEEE International Conference on Computational Photography (2016).

Li, S.

A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” arXiv preprint arXiv:1702.08516 (2017).

Lin, C. P.

D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254, 1178–1181 (1991).
[Crossref] [PubMed]

Liu, H.

X. Xu, H. Liu, and L. V. Wang, “Time-reversed ultrasonically encoded optical focusing into scattering media,” Nat. Photonics 5, 154–157 (2011).
[Crossref] [PubMed]

Lohit, S.

K. Kulkarni, S. Lohit, P. Turaga, R. Kerviche, and A. Ashok, “Reconnet: Non-iterative reconstruction of images from compressively sensed measurements,” in IEEE Conference on Computer Vision and Pattern Recognition (2016).

Maaten, L. v. d.

L. v. d. Maaten and G. Hinton, “Visualizing data using t-SNE,” J. Machine Learning Res. 9, 2579–2605 (2008).

Mallat, S.

S. Mallat, “Understanding deep convolutional networks,” Phil. Trans. Royal Soc. London A 374, 203 (2016).
[Crossref]

Malmqvist, L.

Manzagol, P. A.

P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P. A. Manzagol, “Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion,” J. Machine Learning Research 11, 3371–3408 (2010).

Martín, J.

J. Klein, C. Peters, J. Martín, M. Laurenzis, and M. B. Hullin, “Tracking objects outside the line of sight using 2d intensity images,” Sci. Rep. 6, 32491 (2016).
[Crossref] [PubMed]

Mirza, M.

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” arXiv:1406.2661 (2014).

Mosk, A. P.

J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012).
[Crossref] [PubMed]

I. M. Vellekoop, A. Lagendijk, and A. P. Mosk, “Exploiting disorder for perfect focusing,” Nat. Photonics 4, 320–322 (2010).

Naik, N.

G. Satat, B. Heshmat, N. Naik, A. Redo-Sanchez, and R. Raskar, “Advances in ultrafast optics and imaging applications,” SPIE Defense Security 9835, 98350Q (2016).

D. Raviv, C. Barsi, N. Naik, M. Feigin, and R. Raskar, “Pose estimation using time-resolved inversion of diffuse light,” Opt. Express 22, 20164–20176 (2014).
[Crossref] [PubMed]

Naqvi, S.

A. Redo-Sanchez, B. Heshmat, A. Aghasi, S. Naqvi, M. Zhang, J. Romberg, and R. Raskar, “Terahertz time-gated spectral imaging for content extraction through layered structures,” Nat. Commun. 7, 12665 (2016).
[Crossref] [PubMed]

Ozair, S.

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” arXiv:1406.2661 (2014).

Pang, Y.

X. Wang, Y. Pang, G. Ku, X. Xie, G. Stoica, and L. V. Wang, “Noninvasive laser-induced photoacoustic tomography for structural and functional in vivo imaging of the brain,” Nat. Biotechnology 21, 803–806 (2003).
[Crossref]

Persson, A.

Persson, W.

Peters, C.

J. Klein, C. Peters, J. Martín, M. Laurenzis, and M. B. Hullin, “Tracking objects outside the line of sight using 2d intensity images,” Sci. Rep. 6, 32491 (2016).
[Crossref] [PubMed]

Plaza, A.

M. Chi, A. Plaza, J. A. Benediktsson, Z. Sun, J. Shen, and Y. Zhu, “Big data for remote sensing: Challenges and opportunities,” Proc. IEEE 104, 2207–2219 (2016).
[Crossref]

Pouget-Abadie, J.

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” arXiv:1406.2661 (2014).

Profeta, A.

A. Profeta, A. Rodriguez, and H. S. Clouse, “Convolutional neural networks for synthetic aperture radar classification,” Proc. SPIE 9843, 98430M (2016).
[Crossref]

Psaltis, D.

Z. Yaqoob, D. Psaltis, M. S. Feld, and C. Yang, “Optical phase conjugation for turbidity suppression in biological samples,” Nat. Photonics 2, 110–115 (2008).
[Crossref] [PubMed]

Puliafito, C. A.

D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254, 1178–1181 (1991).
[Crossref] [PubMed]

Qing, C.

B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, “Dehazenet: An end-to-end system for single image haze removal,” IEEE Trans. Image Process. 25, 5187–5198 (2016).
[Crossref]

Raskar, R.

G. Satat, B. Heshmat, N. Naik, A. Redo-Sanchez, and R. Raskar, “Advances in ultrafast optics and imaging applications,” SPIE Defense Security 9835, 98350Q (2016).

G. Satat, B. Heshmat, D. Raviv, and R. Raskar, “All photons imaging through volumetric scattering,” Sci. Rep. 6, 33946 (2016).
[Crossref] [PubMed]

A. Redo-Sanchez, B. Heshmat, A. Aghasi, S. Naqvi, M. Zhang, J. Romberg, and R. Raskar, “Terahertz time-gated spectral imaging for content extraction through layered structures,” Nat. Commun. 7, 12665 (2016).
[Crossref] [PubMed]

A. Kadambi, H. Zhao, B. Shi, and R. Raskar, “Occluded imaging with time-of-flight sensors,” ACM Transactions on Graphics 35, 15 (2016).
[Crossref]

A. Bhandari, C. Barsi, and R. Raskar, “Blind and reference-free fluorescence lifetime estimation via consumer time-of-flight sensors,” Optica 2, 965–973 (2015).
[Crossref]

G. Satat, B. Heshmat, C. Barsi, D. Raviv, O. Chen, M. G. Bawendi, and R. Raskar, “Locating and classifying fluorescent tags behind turbid layers using time-resolved inversion,” Nat. Commun. 6, 6796 (2015).
[Crossref] [PubMed]

D. Raviv, C. Barsi, N. Naik, M. Feigin, and R. Raskar, “Pose estimation using time-resolved inversion of diffuse light,” Opt. Express 22, 20164–20176 (2014).
[Crossref] [PubMed]

A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012).
[Crossref] [PubMed]

O. Gupta, T. Willwacher, A. Velten, A. Veeraraghavan, and R. Raskar, “Reconstruction of hidden 3d shapes using diffuse reflections,” Opt. Express 20, 19096–19108 (2012).
[Crossref] [PubMed]

Raviv, D.

G. Satat, B. Heshmat, D. Raviv, and R. Raskar, “All photons imaging through volumetric scattering,” Sci. Rep. 6, 33946 (2016).
[Crossref] [PubMed]

G. Satat, B. Heshmat, C. Barsi, D. Raviv, O. Chen, M. G. Bawendi, and R. Raskar, “Locating and classifying fluorescent tags behind turbid layers using time-resolved inversion,” Nat. Commun. 6, 6796 (2015).
[Crossref] [PubMed]

D. Raviv, C. Barsi, N. Naik, M. Feigin, and R. Raskar, “Pose estimation using time-resolved inversion of diffuse light,” Opt. Express 22, 20164–20176 (2014).
[Crossref] [PubMed]

Redo-Sanchez, A.

A. Redo-Sanchez, B. Heshmat, A. Aghasi, S. Naqvi, M. Zhang, J. Romberg, and R. Raskar, “Terahertz time-gated spectral imaging for content extraction through layered structures,” Nat. Commun. 7, 12665 (2016).
[Crossref] [PubMed]

G. Satat, B. Heshmat, N. Naik, A. Redo-Sanchez, and R. Raskar, “Advances in ultrafast optics and imaging applications,” SPIE Defense Security 9835, 98350Q (2016).

Ren, S.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (2016).

Rodriguez, A.

A. Profeta, A. Rodriguez, and H. S. Clouse, “Convolutional neural networks for synthetic aperture radar classification,” Proc. SPIE 9843, 98430M (2016).
[Crossref]

Romberg, J.

A. Redo-Sanchez, B. Heshmat, A. Aghasi, S. Naqvi, M. Zhang, J. Romberg, and R. Raskar, “Terahertz time-gated spectral imaging for content extraction through layered structures,” Nat. Commun. 7, 12665 (2016).
[Crossref] [PubMed]

Ruan, H.

R. Horstmeyer, H. Ruan, and C. Yang, “Guidestar-assisted wavefront-shaping methods for focusing light into biological tissue,” Nat. Photonics 9, 563–571 (2015).
[Crossref] [PubMed]

Saade, A.

A. Saade, F. Caltagirone, I. Carron, L. Daudet, A. Drémeau, S. Gigan, and F. Krzakala, “Random projections through multiple optical scattering: Approximating kernels at the speed of light,” in IEEE International Conference on Acoustics, Speech and Signal Processing (2016), pp. 6215–6219.

Satat, G.

G. Satat, B. Heshmat, N. Naik, A. Redo-Sanchez, and R. Raskar, “Advances in ultrafast optics and imaging applications,” SPIE Defense Security 9835, 98350Q (2016).

G. Satat, B. Heshmat, D. Raviv, and R. Raskar, “All photons imaging through volumetric scattering,” Sci. Rep. 6, 33946 (2016).
[Crossref] [PubMed]

G. Satat, B. Heshmat, C. Barsi, D. Raviv, O. Chen, M. G. Bawendi, and R. Raskar, “Locating and classifying fluorescent tags behind turbid layers using time-resolved inversion,” Nat. Commun. 6, 6796 (2015).
[Crossref] [PubMed]

Savakis, A.

S. Kumar and A. Savakis, “Robust domain adaptation on the l1-grassmannian manifold,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops (2016).

Schechner, Y. Y.

V. Holodovsky, Y. Y. Schechner, A. Levin, A. Levis, and A. Aides, “In-situ multi-view multi-scattering stochastic tomography,” in IEEE International Conference on Computational Photography (2016).

M. Sheinin and Y. Y. Schechner, “The next best underwater view,” in IEEE Conference on Computer Vision and Pattern Recognition (2016).

Schuman, J. S.

D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254, 1178–1181 (1991).
[Crossref] [PubMed]

Sharif Razavian, A.

A. Sharif Razavian, H. Azizpour, J. Sullivan, and S. Carlsson, “CNN features off-the-shelf: An astounding baseline for recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (2014).

Sheinin, M.

M. Sheinin and Y. Y. Schechner, “The next best underwater view,” in IEEE Conference on Computer Vision and Pattern Recognition (2016).

Shen, J.

M. Chi, A. Plaza, J. A. Benediktsson, Z. Sun, J. Shen, and Y. Zhu, “Big data for remote sensing: Challenges and opportunities,” Proc. IEEE 104, 2207–2219 (2016).
[Crossref]

Shetty, S.

A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei, “Large-scale video classification with convolutional neural networks,” in IEEE Conference on Computer Vision and Pattern Recognition (2014).

Shi, B.

A. Kadambi, H. Zhao, B. Shi, and R. Raskar, “Occluded imaging with time-of-flight sensors,” ACM Transactions on Graphics 35, 15 (2016).
[Crossref]

Silberberg, Y.

Simonyan, K.

K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv:1409.1556 (2014).

K. Simonyan and A. Zisserman, “Two-stream convolutional networks for action recognition in videos,” arXiv:1406.2199 (2014).

Sinha, A.

A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” arXiv preprint arXiv:1702.08516 (2017).

Small, E.

Song, Z.

Stinson, W. G.

D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254, 1178–1181 (1991).
[Crossref] [PubMed]

Stoica, G.

X. Wang, Y. Pang, G. Ku, X. Xie, G. Stoica, and L. V. Wang, “Noninvasive laser-induced photoacoustic tomography for structural and functional in vivo imaging of the brain,” Nat. Biotechnology 21, 803–806 (2003).
[Crossref]

Sukthankar, R.

A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei, “Large-scale video classification with convolutional neural networks,” in IEEE Conference on Computer Vision and Pattern Recognition (2014).

Sullivan, J.

A. Sharif Razavian, H. Azizpour, J. Sullivan, and S. Carlsson, “CNN features off-the-shelf: An astounding baseline for recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (2014).

Sun, J.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (2016).

Sun, Z.

M. Chi, A. Plaza, J. A. Benediktsson, Z. Sun, J. Shen, and Y. Zhu, “Big data for remote sensing: Challenges and opportunities,” Proc. IEEE 104, 2207–2219 (2016).
[Crossref]

Swanson, E. A.

D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254, 1178–1181 (1991).
[Crossref] [PubMed]

Takagi, R.

Tanida, J.

Tao, D.

B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, “Dehazenet: An end-to-end system for single image haze removal,” IEEE Trans. Image Process. 25, 5187–5198 (2016).
[Crossref]

Tenenbaum, J.

T. D. Kulkarni, W. F. Whitney, P. Kohli, and J. Tenenbaum, “Deep convolutional inverse graphics network,” arXiv:1503.03167 (2015).

Tian, L.

L. Waller and L. Tian, “Computational imaging: Machine learning for 3D microscopy,” Nature 523, 416–417 (2015).
[Crossref] [PubMed]

Toderici, G.

A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei, “Large-scale video classification with convolutional neural networks,” in IEEE Conference on Computer Vision and Pattern Recognition (2014).

Tonolini, F.

G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, “Detection and tracking of moving objects hidden from view,” Nat. Photonics 10, 23–26 (2015).
[Crossref]

Tosi, A.

treibitz, T.

D. Berman, T. treibitz, and S. Avidan, “Non-local image dehazing,” in IEEE Conference on Computer Vision and Pattern Recognition (2016).

Turaga, P.

K. Kulkarni, S. Lohit, P. Turaga, R. Kerviche, and A. Ashok, “Reconnet: Non-iterative reconstruction of images from compressively sensed measurements,” in IEEE Conference on Computer Vision and Pattern Recognition (2016).

van Putten, E. G.

J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012).
[Crossref] [PubMed]

Veeraraghavan, A.

O. Gupta, T. Willwacher, A. Velten, A. Veeraraghavan, and R. Raskar, “Reconstruction of hidden 3d shapes using diffuse reflections,” Opt. Express 20, 19096–19108 (2012).
[Crossref] [PubMed]

A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012).
[Crossref] [PubMed]

Vellekoop, I. M.

I. M. Vellekoop, A. Lagendijk, and A. P. Mosk, “Exploiting disorder for perfect focusing,” Nat. Photonics 4, 320–322 (2010).

Velten, A.

M. Buttafava, J. Zeman, A. Tosi, K. Eliceiri, and A. Velten, “Non-line-of-sight imaging using a time-gated single photon avalanche diode,” Opt. Express 23, 20997–21011 (2015).
[Crossref] [PubMed]

M. Laurenzis, F. Christnacher, J. Klein, M. B. Hullin, and A. Velten, “Study of single photon counting for non-line-of-sight vision,” Proc. SPIE 9492, 94920K (2015).
[Crossref]

O. Gupta, T. Willwacher, A. Velten, A. Veeraraghavan, and R. Raskar, “Reconstruction of hidden 3d shapes using diffuse reflections,” Opt. Express 20, 19096–19108 (2012).
[Crossref] [PubMed]

A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012).
[Crossref] [PubMed]

Vincent, P.

Y. Bengio, A. Courville, and P. Vincent, “Representation learning: A review and new perspectives,” IEEE Trans. Pattern Analysis Machine Intelligence 35, 1798–1828 (2013).
[Crossref]

P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P. A. Manzagol, “Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion,” J. Machine Learning Research 11, 3371–3408 (2010).

Vos, W. L.

J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012).
[Crossref] [PubMed]

Wahlström, C. G.

Waller, L.

L. Waller and L. Tian, “Computational imaging: Machine learning for 3D microscopy,” Nature 523, 416–417 (2015).
[Crossref] [PubMed]

Wang, L. V.

L. V. Wang and S. Hu, “Photoacoustic tomography: in vivo imaging from organelles to organs,” Science 335, 1458–1462 (2012).
[Crossref] [PubMed]

X. Xu, H. Liu, and L. V. Wang, “Time-reversed ultrasonically encoded optical focusing into scattering media,” Nat. Photonics 5, 154–157 (2011).
[Crossref] [PubMed]

X. Wang, Y. Pang, G. Ku, X. Xie, G. Stoica, and L. V. Wang, “Noninvasive laser-induced photoacoustic tomography for structural and functional in vivo imaging of the brain,” Nat. Biotechnology 21, 803–806 (2003).
[Crossref]

Wang, X.

X. Wang, Y. Pang, G. Ku, X. Xie, G. Stoica, and L. V. Wang, “Noninvasive laser-induced photoacoustic tomography for structural and functional in vivo imaging of the brain,” Nat. Biotechnology 21, 803–806 (2003).
[Crossref]

Warde-Farley, D.

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” arXiv:1406.2661 (2014).

Whitney, W. F.

T. D. Kulkarni, W. F. Whitney, P. Kohli, and J. Tenenbaum, “Deep convolutional inverse graphics network,” arXiv:1503.03167 (2015).

Willwacher, T.

O. Gupta, T. Willwacher, A. Velten, A. Veeraraghavan, and R. Raskar, “Reconstruction of hidden 3d shapes using diffuse reflections,” Opt. Express 20, 19096–19108 (2012).
[Crossref] [PubMed]

A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012).
[Crossref] [PubMed]

Xiao, L.

F. Heide, L. Xiao, A. Kolb, M. B. Hullin, and W. Heidrich, “Imaging in scattering media using correlation image sensors and sparse convolutional coding,” Opt. Express 22, 26338–26350 (2014).
[Crossref] [PubMed]

F. Heide, L. Xiao, W. Heidrich, and M. B. Hullin, “Diffuse mirrors: 3d reconstruction from diffuse indirect illumination using inexpensive time-of-flight sensors,” in IEEE Conference on Computer Vision and Pattern Recognition (2014).

Xie, X.

X. Wang, Y. Pang, G. Ku, X. Xie, G. Stoica, and L. V. Wang, “Noninvasive laser-induced photoacoustic tomography for structural and functional in vivo imaging of the brain,” Nat. Biotechnology 21, 803–806 (2003).
[Crossref]

Xu, B.

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” arXiv:1406.2661 (2014).

Xu, X.

B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, “Dehazenet: An end-to-end system for single image haze removal,” IEEE Trans. Image Process. 25, 5187–5198 (2016).
[Crossref]

X. Xu, H. Liu, and L. V. Wang, “Time-reversed ultrasonically encoded optical focusing into scattering media,” Nat. Photonics 5, 154–157 (2011).
[Crossref] [PubMed]

Yang, C.

R. Horstmeyer, H. Ruan, and C. Yang, “Guidestar-assisted wavefront-shaping methods for focusing light into biological tissue,” Nat. Photonics 9, 563–571 (2015).
[Crossref] [PubMed]

Z. Yaqoob, D. Psaltis, M. S. Feld, and C. Yang, “Optical phase conjugation for turbidity suppression in biological samples,” Nat. Photonics 2, 110–115 (2008).
[Crossref] [PubMed]

Yaqoob, Z.

Z. Yaqoob, D. Psaltis, M. S. Feld, and C. Yang, “Optical phase conjugation for turbidity suppression in biological samples,” Nat. Photonics 2, 110–115 (2008).
[Crossref] [PubMed]

Zeman, J.

Zhai, J.

Zhang, M.

A. Redo-Sanchez, B. Heshmat, A. Aghasi, S. Naqvi, M. Zhang, J. Romberg, and R. Raskar, “Terahertz time-gated spectral imaging for content extraction through layered structures,” Nat. Commun. 7, 12665 (2016).
[Crossref] [PubMed]

Zhang, S.

Zhang, X.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (2016).

Zhao, H.

A. Kadambi, H. Zhao, B. Shi, and R. Raskar, “Occluded imaging with time-of-flight sensors,” ACM Transactions on Graphics 35, 15 (2016).
[Crossref]

Zhao, S.

I. Gkioulekas, S. Zhao, K. Bala, T. Zickler, and A. Levin, “Inverse volume rendering with material dictionaries,” ACM Transactions on Graphics 32, 162 (2013).
[Crossref]

Zhao, Y.

Zhu, Y.

M. Chi, A. Plaza, J. A. Benediktsson, Z. Sun, J. Shen, and Y. Zhu, “Big data for remote sensing: Challenges and opportunities,” Proc. IEEE 104, 2207–2219 (2016).
[Crossref]

Zickler, T.

I. Gkioulekas, S. Zhao, K. Bala, T. Zickler, and A. Levin, “Inverse volume rendering with material dictionaries,” ACM Transactions on Graphics 32, 162 (2013).
[Crossref]

Zisserman, A.

K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv:1409.1556 (2014).

K. Simonyan and A. Zisserman, “Two-stream convolutional networks for action recognition in videos,” arXiv:1406.2199 (2014).

ACM Transactions on Graphics (2)

I. Gkioulekas, S. Zhao, K. Bala, T. Zickler, and A. Levin, “Inverse volume rendering with material dictionaries,” ACM Transactions on Graphics 32, 162 (2013).
[Crossref]

A. Kadambi, H. Zhao, B. Shi, and R. Raskar, “Occluded imaging with time-of-flight sensors,” ACM Transactions on Graphics 35, 15 (2016).
[Crossref]

Appl. Opt. (1)

Biomed. Opt. Express (1)

IEEE Trans. Image Process. (1)

B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, “Dehazenet: An end-to-end system for single image haze removal,” IEEE Trans. Image Process. 25, 5187–5198 (2016).
[Crossref]

IEEE Trans. Pattern Analysis Machine Intelligence (1)

Y. Bengio, A. Courville, and P. Vincent, “Representation learning: A review and new perspectives,” IEEE Trans. Pattern Analysis Machine Intelligence 35, 1798–1828 (2013).
[Crossref]

J. Machine Learning Res. (1)

L. v. d. Maaten and G. Hinton, “Visualizing data using t-SNE,” J. Machine Learning Res. 9, 2579–2605 (2008).

J. Machine Learning Research (1)

P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P. A. Manzagol, “Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion,” J. Machine Learning Research 11, 3371–3408 (2010).

Nat. Biotechnology (1)

X. Wang, Y. Pang, G. Ku, X. Xie, G. Stoica, and L. V. Wang, “Noninvasive laser-induced photoacoustic tomography for structural and functional in vivo imaging of the brain,” Nat. Biotechnology 21, 803–806 (2003).
[Crossref]

Nat. Commun. (3)

A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012).
[Crossref] [PubMed]

G. Satat, B. Heshmat, C. Barsi, D. Raviv, O. Chen, M. G. Bawendi, and R. Raskar, “Locating and classifying fluorescent tags behind turbid layers using time-resolved inversion,” Nat. Commun. 6, 6796 (2015).
[Crossref] [PubMed]

A. Redo-Sanchez, B. Heshmat, A. Aghasi, S. Naqvi, M. Zhang, J. Romberg, and R. Raskar, “Terahertz time-gated spectral imaging for content extraction through layered structures,” Nat. Commun. 7, 12665 (2016).
[Crossref] [PubMed]

Nat. Photonics (6)

G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, “Detection and tracking of moving objects hidden from view,” Nat. Photonics 10, 23–26 (2015).
[Crossref]

O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8, 784–790 (2014).
[Crossref]

X. Xu, H. Liu, and L. V. Wang, “Time-reversed ultrasonically encoded optical focusing into scattering media,” Nat. Photonics 5, 154–157 (2011).
[Crossref] [PubMed]

Z. Yaqoob, D. Psaltis, M. S. Feld, and C. Yang, “Optical phase conjugation for turbidity suppression in biological samples,” Nat. Photonics 2, 110–115 (2008).
[Crossref] [PubMed]

I. M. Vellekoop, A. Lagendijk, and A. P. Mosk, “Exploiting disorder for perfect focusing,” Nat. Photonics 4, 320–322 (2010).

R. Horstmeyer, H. Ruan, and C. Yang, “Guidestar-assisted wavefront-shaping methods for focusing light into biological tissue,” Nat. Photonics 9, 563–571 (2015).
[Crossref] [PubMed]

Nature (2)

J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012).
[Crossref] [PubMed]

L. Waller and L. Tian, “Computational imaging: Machine learning for 3D microscopy,” Nature 523, 416–417 (2015).
[Crossref] [PubMed]

Opt. Express (6)

Opt. Lett. (1)

Optica (2)

Phil. Trans. Royal Soc. London A (1)

S. Mallat, “Understanding deep convolutional networks,” Phil. Trans. Royal Soc. London A 374, 203 (2016).
[Crossref]

Proc. IEEE (1)

M. Chi, A. Plaza, J. A. Benediktsson, Z. Sun, J. Shen, and Y. Zhu, “Big data for remote sensing: Challenges and opportunities,” Proc. IEEE 104, 2207–2219 (2016).
[Crossref]

Proc. SPIE (2)

A. Profeta, A. Rodriguez, and H. S. Clouse, “Convolutional neural networks for synthetic aperture radar classification,” Proc. SPIE 9843, 98430M (2016).
[Crossref]

M. Laurenzis, F. Christnacher, J. Klein, M. B. Hullin, and A. Velten, “Study of single photon counting for non-line-of-sight vision,” Proc. SPIE 9492, 94920K (2015).
[Crossref]

Sci. Rep. (2)

J. Klein, C. Peters, J. Martín, M. Laurenzis, and M. B. Hullin, “Tracking objects outside the line of sight using 2d intensity images,” Sci. Rep. 6, 32491 (2016).
[Crossref] [PubMed]

G. Satat, B. Heshmat, D. Raviv, and R. Raskar, “All photons imaging through volumetric scattering,” Sci. Rep. 6, 33946 (2016).
[Crossref] [PubMed]

Science (2)

D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254, 1178–1181 (1991).
[Crossref] [PubMed]

L. V. Wang and S. Hu, “Photoacoustic tomography: in vivo imaging from organelles to organs,” Science 335, 1458–1462 (2012).
[Crossref] [PubMed]

SPIE Defense Security (1)

G. Satat, B. Heshmat, N. Naik, A. Redo-Sanchez, and R. Raskar, “Advances in ultrafast optics and imaging applications,” SPIE Defense Security 9835, 98350Q (2016).

Other (15)

F. Heide, L. Xiao, W. Heidrich, and M. B. Hullin, “Diffuse mirrors: 3d reconstruction from diffuse indirect illumination using inexpensive time-of-flight sensors,” in IEEE Conference on Computer Vision and Pattern Recognition (2014).

D. Berman, T. treibitz, and S. Avidan, “Non-local image dehazing,” in IEEE Conference on Computer Vision and Pattern Recognition (2016).

V. Holodovsky, Y. Y. Schechner, A. Levin, A. Levis, and A. Aides, “In-situ multi-view multi-scattering stochastic tomography,” in IEEE International Conference on Computational Photography (2016).

M. Sheinin and Y. Y. Schechner, “The next best underwater view,” in IEEE Conference on Computer Vision and Pattern Recognition (2016).

K. Kulkarni, S. Lohit, P. Turaga, R. Kerviche, and A. Ashok, “Reconnet: Non-iterative reconstruction of images from compressively sensed measurements,” in IEEE Conference on Computer Vision and Pattern Recognition (2016).

A. Sharif Razavian, H. Azizpour, J. Sullivan, and S. Carlsson, “CNN features off-the-shelf: An astounding baseline for recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (2014).

A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” arXiv preprint arXiv:1702.08516 (2017).

A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei, “Large-scale video classification with convolutional neural networks,” in IEEE Conference on Computer Vision and Pattern Recognition (2014).

K. Simonyan and A. Zisserman, “Two-stream convolutional networks for action recognition in videos,” arXiv:1406.2199 (2014).

K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv:1409.1556 (2014).

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (2016).

S. Kumar and A. Savakis, “Robust domain adaptation on the l1-grassmannian manifold,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops (2016).

A. Saade, F. Caltagirone, I. Carron, L. Daudet, A. Drémeau, S. Gigan, and F. Krzakala, “Random projections through multiple optical scattering: Approximating kernels at the speed of light,” in IEEE International Conference on Acoustics, Speech and Signal Processing (2016), pp. 6215–6219.

T. D. Kulkarni, W. F. Whitney, P. Kohli, and J. Tenenbaum, “Deep convolutional inverse graphics network,” arXiv:1503.03167 (2015).

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” arXiv:1406.2661 (2014).


Figures (8)

Fig. 1

Calibration-invariant object classification through scattering. a) The training phase is an offline process in which the user defines random distributions of the physical model parameters (based on approximate measurements or prior knowledge). The distributions are used to generate synthetic measurements with an MC forward model, and the synthetic data is used to train a CNN for classification. b) Once the CNN is trained, the user can simply place the camera (demonstrated here with a time-sensitive SPAD array) and an illumination source in the scene, capture measurements (six examples of time-resolved frames are shown), and classify with the CNN without having to precisely calibrate the system.
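The sampling step in panel (a) amounts to drawing every calibration parameter from a user-defined distribution before each synthetic render. A minimal sketch of that idea follows; the parameter names, ranges, and the `forward_model` callable are illustrative placeholders, not the paper's exact values.

```python
import numpy as np

def sample_calibration(rng):
    """Draw one random realization of the calibration parameters.

    Ranges are illustrative placeholders, not the paper's exact values.
    """
    return {
        "camera_fov_rad": rng.uniform(0.10, 0.20),   # camera field of view
        "illum_pos_cm": rng.uniform(-2.0, 2.0),      # illumination position
        "scatter_sigma_rad": rng.uniform(0.5, 1.5),  # diffuser scattering width
    }

def build_training_set(n_samples, forward_model, targets, seed=0):
    """Render one synthetic measurement per random calibration draw.

    `forward_model(target, params)` stands in for the paper's MC renderer
    and is assumed to return one time-resolved measurement volume.
    """
    rng = np.random.default_rng(seed)
    data, labels = [], []
    for _ in range(n_samples):
        params = sample_calibration(rng)      # fresh calibration per sample
        label = int(rng.integers(len(targets)))
        data.append(forward_model(targets[label], params))
        labels.append(label)
    return np.stack(data), np.array(labels)
```

Because each training sample sees a different calibration draw, the CNN never gets to rely on any single calibration, which is what produces the invariance demonstrated in the paper.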

Fig. 2

Comparison of SPAD measurements and the forward model. The targets are two poses of a mannequin placed behind a paper sheet (diffuser). The data shows six frames (each 32 × 32 pixels) of raw SPAD measurements, two examples of synthetic results generated by the MC forward model with similar measurement quality, and a synthetic result with a high photon count and no additive noise. Note that the differences between synthetic examples 1 and 2 and the raw measurement arise because the forward model was never calibrated to this specific setup; the synthetic images are different instances chosen at random from the dataset. The high-photon-count synthetic example helps distinguish measurement (or simulated) noise from the actual signal, and makes the full signal wavefront visible.

Fig. 3

The CNN learns to be invariant to model parameters. The CNN is trained with the complete random training set (based on the MNIST dataset) and evaluated with test sets in which all model parameters are fixed except for one, which is randomly sampled from distributions of growing variance. Three parameters are shown (other parameters behave similarly): a) diffuser scattering profile variance D ∼ N(0, σ), σ ∼ U(1 − α, 1 + α) radians; b) camera field of view C_FV ∼ U(0.15 − α, 0.15 + α) radians; and c) illumination source position L_P ∼ U(−α, α) cm. The top plots show the classification accuracy as a function of the parameter distribution variance in the test set; red lines mark the ranges used for training. The 'X' marks indicate the locations sampled for the PCA projections in the bottom part of the figure, where each digit has a different color. Performance is maintained beyond the training range and degrades slowly farther from it, as can be observed in PCA projection III, where more mixing is apparent at a test range 2.5× larger than the training range.
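The test protocol behind this figure can be sketched as: hold all parameters at their nominal values, then for a single chosen parameter draw each test sample from an interval of half-width α, sweeping α past the training range. The sketch below assumes a hypothetical `evaluate` callable that renders a test set with the given fixed parameters and returns the trained CNN's accuracy; nominal values and names are placeholders.

```python
import numpy as np

def invariance_sweep(evaluate, param, alphas, nominal, n_test=100, seed=0):
    """Accuracy vs. test-set spread alpha for one swept parameter.

    `evaluate(params, rng)` is assumed to render one test sample with the
    given fixed parameters and return 1.0 for a correct classification,
    0.0 otherwise. All other parameters stay at their nominal values.
    """
    rng = np.random.default_rng(seed)
    accuracies = []
    for alpha in alphas:
        correct = 0.0
        for _ in range(n_test):
            params = dict(nominal)  # every parameter fixed at nominal...
            params[param] = rng.uniform(nominal[param] - alpha,
                                        nominal[param] + alpha)  # ...except one
            correct += evaluate(params, rng)
        accuracies.append(correct / n_test)
    return np.array(accuracies)
```

Plotting the returned accuracies against `alphas`, with a vertical line at the training half-width, reproduces the layout of the top plots in the figure.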

Fig. 4

Lab experiments show successful estimation of hidden human pose. a) Three examples (rows) demonstrate target pose, raw SPAD measurement (first six frames), and the successful classification. b) Confusion matrix for classification of raw test set (10 samples per pose).

Fig. 5

Classification among seven poses on the synthetic test dataset. a) A t-SNE visualization demonstrates the CNN's ability to discriminate among the seven poses. b) Confusion matrix for classification on the synthetic test dataset.

Fig. 6

Performance of the K-nearest neighbor approach on the clean dataset. Classification accuracy with varying dictionary size for a) a nearest-neighbor classifier and b) a K-nearest-neighbors classifier.
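The baseline evaluated in this figure can be sketched as a K-nearest-neighbor vote over flattened spatio-temporal measurements. The Euclidean distance on the flattened volume is an assumption made for illustration; the paper's exact metric may differ.

```python
import numpy as np

def knn_classify(dictionary, labels, query, k=1):
    """K-nearest-neighbor vote over flattened measurements.

    `dictionary`: (N, T, H, W) reference measurements, `labels`: (N,) class
    labels, `query`: one (T, H, W) measurement. Euclidean distance on the
    flattened spatio-temporal volume is an illustrative assumption.
    """
    flat_dict = dictionary.reshape(len(dictionary), -1)
    dists = np.linalg.norm(flat_dict - query.ravel(), axis=1)
    nearest = labels[np.argsort(dists)[:k]]          # k closest entries
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]                 # majority vote
```

Sweeping the dictionary size N, as in the figure, simply means truncating `dictionary` and `labels` before each call.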

Fig. 7

Time resolution is more important than pixel count for imaging through scattering. a) Classification accuracy vs. time resolution (for 32 × 32 pixels). b) Classification accuracy vs. number of pixels (for a non-time-resolved system).

Fig. 8

Examples of spatio-temporal filters learned by the CNN. The network generates both a) spatial and b) temporal filters for inference.

Tables (3)

Table 1 Distributions for the calibration and target parameters used in the mannequin dataset.

Algorithm 1 MC Forward Model
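Algorithm 1 specifies the paper's MC forward model in full. As a loose illustration of the core idea only (not the paper's algorithm), one pixel's time-resolved signal can be sketched by perturbing photon path lengths with Gaussian scattering jitter and histogramming arrival times into the camera's time bins; the constants and the jitter model here are assumptions.

```python
import numpy as np

C_CM_PER_NS = 30.0  # speed of light, ~30 cm/ns

def mc_time_histogram(path_lengths_cm, sigma_cm, n_bins=6, bin_ns=0.1, seed=0):
    """Toy time-of-arrival histogram for one camera pixel.

    Each photon travels a nominal path (illumination -> target -> diffuser
    -> pixel) perturbed by Gaussian scattering jitter; arrival times are
    binned into the camera's time frames. All constants are illustrative.
    """
    rng = np.random.default_rng(seed)
    jitter = rng.normal(0.0, sigma_cm, size=len(path_lengths_cm))
    t_ns = (np.asarray(path_lengths_cm) + jitter) / C_CM_PER_NS
    hist, _ = np.histogram(t_ns, bins=n_bins, range=(0.0, n_bins * bin_ns))
    return hist
```

Repeating this per pixel, with path lengths derived from the scene geometry and calibration draw, yields a synthetic measurement volume analogous to the six time-resolved frames shown in the figures.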

Table 2 Comparison of different approaches to classification of the clean and realistic datasets. The CNN outperforms all other methods on the clean dataset and is the only method that achieves better-than-chance accuracy on the realistic dataset.
