Abstract

Computational imaging through scattering media is generally accomplished by first characterizing the medium to obtain its forward operator and then imposing additional priors, in the form of regularizers on the reconstruction functional, to improve the conditioning of the originally ill-posed inverse problem. In the functional, the forward operator and regularizer must be entered explicitly or parametrically (e.g., as scattering matrices and dictionaries, respectively). However, determining these representations is often incomplete, error-prone, or infeasible. Recently, deep learning architectures have been proposed to instead learn both the forward operator and regularizer from examples. Here, we propose, for the first time to our knowledge, a convolutional neural network architecture called "IDiffNet" for the problem of imaging through diffuse media and demonstrate that IDiffNet has superior generalization capability through extensive tests with well-calibrated diffusers. We also introduce the negative Pearson correlation coefficient (NPCC) loss function for neural network training and show that the NPCC is more appropriate for spatially sparse objects and strong scattering conditions. Our results show that the convolutional architecture is robust to the choice of prior, as demonstrated by the use of multiple training and testing object databases, and capable of achieving higher space-bandwidth product reconstructions than previously reported.
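The NPCC loss mentioned above is simply the negative of the Pearson correlation between a reconstructed image and its ground truth, so a perfect reconstruction (up to affine intensity scaling) scores -1. A minimal NumPy sketch, assuming per-image computation; the exact batching and framework used for training in the paper may differ:

```python
import numpy as np

def npcc_loss(pred: np.ndarray, target: np.ndarray) -> float:
    """Negative Pearson correlation coefficient between two images.

    Returns -1 when pred is a positive affine rescaling of target
    (perfect reconstruction) and +1 for perfect anti-correlation.
    Being scale- and offset-invariant, it rewards structural
    agreement, which suits sparse objects and strong scattering.
    """
    p = pred - pred.mean()
    t = target - target.mean()
    denom = np.sqrt((p ** 2).sum()) * np.sqrt((t ** 2).sum())
    return float(-(p * t).sum() / denom)
```

Minimizing this quantity during training drives the network output toward maximal correlation with the ground-truth object, rather than pixel-wise intensity agreement as with a mean-squared-error loss.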

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement



[Crossref]

Samaria, F. S.

F. S. Samaria and A. C. Harter, “Parameterisation of a stochastic model for human face identification,” in Proceedings of the Second IEEE Workshop on Applications of Computer Vision (IEEE, 1994), pp. 138–142.

Sandre, O.

Satat, G.

Satheesh, S.

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. Berg, and L. Fei-Fei, “ImageNet large scale visual recognition challenge,” Int. J. Comput. Vis. 115, 211–252 (2015).
[Crossref]

Savarese, S.

B.-S. Kim, J. Y. Park, A. C. Gilbert, and S. Savarese, “Hierarchical classification of images by sparse approximation,” Image Vision Comput. 31, 982–991 (2013).
[Crossref]

Schülke, C.

Shen, C.

X. Mao, C. Shen, and Y.-B. Yang, “Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections,” in Advances in Neural Information Processing Systems (2016), pp. 2802–2810.

Sinha, A.

Situ, G.

M. Lyu, H. Wang, G. Li, and G. Situ, “Exploit imaging through opaque wall via deep learning,” arXiv:1708.07881 (2017).

Stasio, N.

Stone, A. D.

S. Feng, C. Kane, P. A. Lee, and A. D. Stone, “Correlations and fluctuations of coherent wave transmission through disordered media,” Phys. Rev. Lett. 61, 834–837 (1988).
[Crossref]

Strickler, J. H.

W. Denk, J. H. Strickler, and W. W. Webb, “Two-photon laser scanning fluorescence microscopy,” Science 248, 73–76 (1990).
[Crossref]

Su, H.

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. Berg, and L. Fei-Fei, “ImageNet large scale visual recognition challenge,” Int. J. Comput. Vis. 115, 211–252 (2015).
[Crossref]

Takagi, R.

Tancik, M.

Tanida, J.

Tao, T.

E. J. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory 52, 489–509 (2006).
[Crossref]

Tatarski, V. I.

V. I. Tatarski, Wave Propagation in a Turbulent Medium (Courier Dover, 2016).

Teng, D.

Y. Rivenson, Y. Zhang, H. Gunaydin, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” arXiv:1705.04286 (2017).

Tian, L.

Unser, M.

K. H. Jin, M. T. McCann, E. Froustey, and M. Unser, “Deep convolutional neural network for inverse problems in imaging,” IEEE Trans. Image Process. 26, 4509–4522 (2017).
[Crossref]

van der Maaten, L.

G. Huang, Z. Liu, K. Q. Weinberger, and L. van der Maaten, “Densely connected convolutional networks,” arXiv:1608.06993 (2016).

van Putten, E. G.

J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012).
[Crossref]

Vellekoop, I. M.

Victorino, A. C.

A. M. Neto, A. C. Victorino, I. Fantoni, D. E. Zampieri, J. V. Ferreira, and D. A. Lima, “Image processing using Pearson’s correlation coefficient: applications on autonomous robotics,” in 13th International Conference on Autonomous Robot Systems (Robotica) (IEEE, 2013), pp. 1–6.

Vos, W. L.

J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012).
[Crossref]

Waller, L.

Wang, H.

Y. Rivenson, Z. Gorocs, H. Gunaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” arXiv:1705.04709 (2017).

M. Lyu, H. Wang, G. Li, and G. Situ, “Exploit imaging through opaque wall via deep learning,” arXiv:1708.07881 (2017).

Webb, W. W.

W. Denk, J. H. Strickler, and W. W. Webb, “Two-photon laser scanning fluorescence microscopy,” Science 248, 73–76 (1990).
[Crossref]

Weinberger, K. Q.

G. Huang, Z. Liu, K. Q. Weinberger, and L. van der Maaten, “Densely connected convolutional networks,” arXiv:1608.06993 (2016).

Wichmann, J.

Wilson, T.

T. Wilson, “Optical sectioning in fluorescence microscopy,” J. Microsc. 242, 111–116 (2011).
[Crossref]

Yang, Y.-B.

X. Mao, C. Shen, and Y.-B. Yang, “Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections,” in Advances in Neural Information Processing Systems (2016), pp. 2802–2810.

Yaqoob, Z.

Zampieri, D. E.

A. M. Neto, A. C. Victorino, I. Fantoni, D. E. Zampieri, J. V. Ferreira, and D. A. Lima, “Image processing using Pearson’s correlation coefficient: applications on autonomous robotics,” in 13th International Conference on Autonomous Robot Systems (Robotica) (IEEE, 2013), pp. 1–6.

Zeiler, M. D.

M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” in European Conference on Computer Vision (Springer, 2014), pp. 818–833.

Zhang, Y.

Y. Rivenson, Y. Zhang, H. Gunaydin, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” arXiv:1705.04286 (2017).

Y. Rivenson, Z. Gorocs, H. Gunaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” arXiv:1705.04709 (2017).

Zhong, J.

IEEE Trans. Image Process. (1)

K. H. Jin, M. T. McCann, E. Froustey, and M. Unser, “Deep convolutional neural network for inverse problems in imaging,” IEEE Trans. Image Process. 26, 4509–4522 (2017).
[Crossref]

IEEE Trans. Inf. Theory (1)

E. J. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory 52, 489–509 (2006).
[Crossref]

IEEE Trans. Pattern Anal. Mach. Intell. (1)

V. Badrinarayanan, A. Kendall, and R. Cipolla, “SegNet: a deep convolutional encoder-decoder architecture for image segmentation,” IEEE Trans. Pattern Anal. Mach. Intell. 39, 2481–2495 (2017).
[Crossref]

Image Vision Comput. (1)

B.-S. Kim, J. Y. Park, A. C. Gilbert, and S. Savarese, “Hierarchical classification of images by sparse approximation,” Image Vision Comput. 31, 982–991 (2013).
[Crossref]

Int. J. Comput. Vis. (1)

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. Berg, and L. Fei-Fei, “ImageNet large scale visual recognition challenge,” Int. J. Comput. Vis. 115, 211–252 (2015).
[Crossref]

J. Microsc. (2)

M. G. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” J. Microsc. 198, 82–87 (2000).
[Crossref]

T. Wilson, “Optical sectioning in fluorescence microscopy,” J. Microsc. 242, 111–116 (2011).
[Crossref]

J. Opt. Soc. Am. B (1)

Nat. Commun. (1)

S. Popoff, G. Lerosey, M. Fink, A. Boccara, and S. Gigan, “Image transmission through an opaque material,” Nat. Commun. 1, 81 (2010).
[Crossref]

Nat. Photonics (1)

O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8, 784–790 (2014).
[Crossref]

Nature (1)

J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012).
[Crossref]

Opt. Express (7)

Opt. Lett. (4)

Optica (3)

Optik (1)

R. W. Gerchberg, “A practical algorithm for the determination of the phase from image and diffraction plane pictures,” Optik 35, 237–246 (1972).

Phys. Rev. Lett. (3)

S. Feng, C. Kane, P. A. Lee, and A. D. Stone, “Correlations and fluctuations of coherent wave transmission through disordered media,” Phys. Rev. Lett. 61, 834–837 (1988).
[Crossref]

I. Freund, M. Rosenbluh, and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett. 61, 2328–2331 (1988).
[Crossref]

S. Popoff, G. Lerosey, R. Carminati, M. Fink, A. Boccara, and S. Gigan, “Measuring the transmission matrix in optics: an approach to the study and control of light propagation in disordered media,” Phys. Rev. Lett. 104, 100601 (2010).
[Crossref]

Science (1)

W. Denk, J. H. Strickler, and W. W. Webb, “Two-photon laser scanning fluorescence microscopy,” Science 248, 73–76 (1990).
[Crossref]

sibi (1)

C. Gehring and S. Lemay, “Sparse coding,” sibi 1, 1 (2012).

Other (22)

T. Remez, O. Litany, R. Giryes, and A. M. Bronstein, “Deep convolutional denoising of low-light images,” arXiv:1701.01687 (2017).

O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (Springer, 2015), pp. 234–241.

G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller, “Labeled faces in the wild: a database for studying face recognition in unconstrained environments,” (University of Massachusetts, 2007).

Y. Rivenson, Y. Zhang, H. Gunaydin, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” arXiv:1705.04286 (2017).

Y. Rivenson, Z. Gorocs, H. Gunaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” arXiv:1705.04709 (2017).

U. Grenander, General Pattern Theory—A Mathematical Study of Regular Structures (Clarendon, 1993).

Y. LeCun, C. Cortes, and C. J. Burges, “MNIST handwritten digit database,” AT&T Labs, 2010, http://yann.lecun.com/exdb/mnist .

A. Krizhevsky and G. Hinton, “Learning multiple layers of features from tiny images,” (University of Toronto, 2009).

F. S. Samaria and A. C. Harter, “Parameterisation of a stochastic model for human face identification,” in Proceedings of the Second IEEE Workshop on Applications of Computer Vision (IEEE, 1994), pp. 138–142.

A. M. Neto, A. C. Victorino, I. Fantoni, D. E. Zampieri, J. V. Ferreira, and D. A. Lima, “Image processing using Pearson’s correlation coefficient: applications on autonomous robotics,” in 13th International Conference on Autonomous Robot Systems (Robotica) (IEEE, 2013), pp. 1–6.

E. Akkermans and G. Montambaux, Mesoscopic Physics of Electrons and Photons (Cambridge University, 2007).

V. I. Tatarski, Wave Propagation in a Turbulent Medium (Courier Dover, 2016).

A. Ishimaru, Wave Propagation and Scattering in Random Media (Academic, 1978), Vol. 2.

M. Lyu, H. Wang, G. Li, and G. Situ, “Exploit imaging through opaque wall via deep learning,” arXiv:1708.07881 (2017).

R. Horstmeyer, R. Y. Chen, B. Kappes, and B. Judkewitz, “Convolutional neural networks that teach microscopes how to image,” arXiv:1709.07223 (2017).

Y. LeCun and Y. Bengio, “Convolutional networks for images, speech, and time series,” in The Handbook of Brain Theory and Neural Networks (1995), Vol. 3361.

M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” in European Conference on Computer Vision (Springer, 2014), pp. 818–833.

J. W. Goodman, Introduction to Fourier Optics (Roberts and Company, 2005).

N. Antipa, S. Necula, R. Ng, and L. Waller, “Single-shot diffuser-encoded light field imaging,” in IEEE International Conference on Computational Photography (ICCP) (IEEE, 2016), pp. 1–11.

https://www.unc.edu/~rowlett/units/scales/grit.html .

G. Huang, Z. Liu, K. Q. Weinberger, and L. van der Maaten, “Densely connected convolutional networks,” arXiv:1608.06993 (2016).

X. Mao, C. Shen, and Y.-B. Yang, “Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections,” in Advances in Neural Information Processing Systems (2016), pp. 2802–2810.

Supplementary Material (1)

Supplement 1: Supplemental material



Figures (14)

Fig. 1. Optical configuration. (a) Experimental arrangement. SF, spatial filter; CL, collimating lens; M, mirror; POL, linear polarizer; BS, beam splitter; SLM, spatial light modulator. (b) Detail of the telescopic imaging system.
Fig. 2. Point spread functions (PSFs) and degree of shift variance of the imaging system. (a) PSF for the 600-grit diffuser: μ = 16 μm, σ₀ = 5 μm, σ = 4 μm. (b) PSF for the 220-grit diffuser: μ = 63 μm, σ₀ = 14 μm, σ = 15.75 μm. (c) Comparison of the profiles of the two PSFs along the lines indicated by the red arrows in (a) and (b). (d) Degree of shift variance along the x direction (Δy = 0). (e) Degree of shift variance along the y direction (Δx = 0). The other simulation parameters match the actual experiment: z_d = 15 mm, R = 12.7 mm, and λ = 632.8 nm. All PSF plots are on a logarithmic scale.
Fig. 3. IDiffNet, our densely connected neural network that images through diffuse media.
Fig. 4. Qualitative analysis of IDiffNet trained using MAE as the loss function. (i) Ground truth pixel value inputs to the SLM. (ii) Corresponding intensity images calibrated by the SLM response curve. (iii) Raw intensity images captured by the CMOS detector for the 600-grit glass diffuser. (iv) IDiffNet reconstruction from raw images when trained using the Faces-LFW dataset [45]. (v) IDiffNet reconstruction when trained using the ImageNet dataset [46]. (vi) IDiffNet reconstruction when trained using the MNIST dataset [47]. Columns (vii)–(x) follow the same sequence as (iii)–(vi), but in these sets the diffuser used is 220-grit. Rows (a)–(f) correspond to the dataset from which the test image is drawn: (a) Faces-LFW, (b) ImageNet, (c) Characters, (d) MNIST, (e) Faces-ATT [49], (f) CIFAR [48], respectively.
Fig. 5. Quantitative analysis of IDiffNet trained using MAE as the loss function. Test errors for IDiffNet trained on Faces-LFW (blue), ImageNet (red), and MNIST (green), evaluated on six datasets when the diffuser used is (a) 600-grit and (b) 220-grit. Training and testing error curves when the diffuser used is (c) 600-grit and (d) 220-grit.
Fig. 6. Qualitative analysis of IDiffNets trained using NPCC as the loss function. (i) Ground truth pixel value inputs to the SLM. (ii) Corresponding intensity images calibrated by the SLM response curve. (iii) Raw intensity images captured by the CMOS detector for the 600-grit glass diffuser. (iv) IDiffNet reconstruction from raw images when trained using the Faces-LFW dataset [45]. (v) IDiffNet reconstruction when trained using the ImageNet dataset [46]. (vi) IDiffNet reconstruction when trained using the MNIST dataset [47]. Columns (vii)–(x) follow the same sequence as (iii)–(vi), but in these sets the diffuser used is 220-grit. Rows (a)–(f) correspond to the dataset from which the test image is drawn: (a) Faces-LFW, (b) ImageNet, (c) Characters, (d) MNIST, (e) Faces-ATT [49], (f) CIFAR [48], respectively.
Fig. 7. Quantitative analysis of IDiffNets trained using NPCC as the loss function. Test errors for the IDiffNets trained on Faces-LFW (blue), ImageNet (red), and MNIST (green), evaluated on six datasets when the diffuser used is (a) 600-grit and (b) 220-grit. Training and testing error curves when the diffuser used is (c) 600-grit and (d) 220-grit.
Fig. 8. Resolution test patterns. Left: dot pattern. Right: fringe pattern.
Fig. 9. Experimental resolution test result for IDiffNet trained on MNIST using MAE as the loss function. The diffuser used is 600-grit. (a) Reconstructed dot pattern when D = 3 superpixels. (b) 1D cross-section plot along the line indicated by red arrows in (a). (c) Reconstructed fringe pattern when D = 3 superpixels. (d) Reconstructed dot pattern when D = 4 superpixels. (e) 1D cross-section plot along the line indicated by red arrows in (d). (f) Reconstructed fringe pattern when D = 4 superpixels.
Fig. 10. Experimental resolution test result for IDiffNet trained on ImageNet using MAE as the loss function. The diffuser used is 600-grit. (a) Reconstructed dot pattern when D = 3 superpixels. (b) 1D cross-section plot along the line indicated by red arrows in (a). (c) Reconstructed fringe pattern when D = 3 superpixels. (d) Reconstructed dot pattern when D = 4 superpixels. (e) 1D cross-section plot along the line indicated by red arrows in (d). (f) Reconstructed fringe pattern when D = 4 superpixels.
Fig. 11. Experimental resolution test result for IDiffNet trained on MNIST using NPCC as the loss function. The diffuser used is 220-grit. (a) Resolution test pattern when D = 16 superpixels. (b) Reconstructed test pattern when D = 16 superpixels. (c) 1D cross-section plot along the line indicated by red arrows in (b). (d) Resolution test pattern when D = 17 superpixels. (e) Reconstructed test pattern when D = 17 superpixels. (f) 1D cross-section plot along the line indicated by red arrows in (e).
Fig. 12. Simulated shift invariance test. (a) Correlations in the speckle patterns, C_s, calculated on the MNIST database. (b) Correlations in the reconstructions, C_r, calculated on the MNIST database. In the 600-grit case, the IDiffNet is trained on ImageNet using the MAE loss function; in the 220-grit case, the IDiffNet is trained on MNIST using the NPCC loss function.
Fig. 13. Comparison between IDiffNets and a denoising neural network. (i) Ground truth intensity images calibrated by the SLM response curve. (ii) Speckle images captured using the 600-grit diffuser (after subtracting the reference pattern). (iii) Noisy images generated by adding Poisson noise to the ground truth. (iv) Reconstructions by the denoising neural network given the noisy images in (iii). (v) Reconstructions by the denoising neural network given the speckle images in (ii). (vi) IDiffNet reconstructions given the speckle images in (ii). (The images shown in column (vi) are the same as those in column (v) of Fig. 4, duplicated here for the readers’ convenience.) Rows (a)–(c) correspond to the dataset from which the test images are drawn: (a) Characters, (b) CIFAR [48], (c) Faces-LFW [45], respectively.
Fig. 14. Maximally activated patterns (MAPs) for different DNNs. (a) 128 × 128 inputs that maximally activate the filters in the convolutional layer at depth 5. (b) 128 × 128 inputs that maximally activate the filters in the convolutional layer at depth 13. (There are more than 16 filters at each convolutional layer; only the 16 with the highest activations are shown here.)

Tables (1)

Table 1. Summary of Reconstruction Results in Different Cases

Equations (9)

$$\hat{f} = \underset{f}{\operatorname{argmin}}\,\left\{ \left\| g - Hf \right\|^2 + \alpha\,\phi(f) \right\}, \tag{1}$$

$$g_{\mathrm{out}}(x,y) = \left\{ e^{\frac{i\pi f_1^2}{\lambda (f_1 - z_d) f_2^2}\left(x^2 + y^2\right)} \iint dx'\,dy' \left[ g(x',y')\, e^{\frac{i\pi}{\lambda (f_1 - z_d)}\left(x'^2 + y'^2\right)}\, T\!\left( \frac{x' + f_1 x / f_2}{\lambda (f_1 - z_d)},\, \frac{y' + f_1 y / f_2}{\lambda (f_1 - z_d)} \right) \right] \right\} * \left[ \frac{J_1\!\left( \frac{2\pi R}{\lambda f_2}\sqrt{x^2 + y^2} \right)}{\sqrt{x^2 + y^2}} \right], \tag{2}$$

$$D(x,y) = W(x,y) * K(\sigma). \tag{3}$$

$$C(\Delta x, \Delta y) = \iiiint h(x,y;x',y')\, h(x,y;x'+\Delta x, y'+\Delta y)\, dx\,dy\,dx'\,dy'. \tag{4}$$

$$\hat{I}(x,y) = H_{\mathrm{inv}}\!\left\{ I_{\mathrm{out}}(x,y) \right\}, \tag{5}$$

$$\mathrm{MAE} = \frac{1}{wh} \sum_{i=1}^{w} \sum_{j=1}^{h} \left| Y(i,j) - G(i,j) \right|. \tag{6}$$

$$\mathrm{NPCC} = -1 \times \frac{\displaystyle\sum_{i=1}^{w} \sum_{j=1}^{h} \left( Y(i,j) - \tilde{Y} \right)\left( G(i,j) - \tilde{G} \right)}{\sqrt{\displaystyle\sum_{i=1}^{w} \sum_{j=1}^{h} \left( Y(i,j) - \tilde{Y} \right)^2}\, \sqrt{\displaystyle\sum_{i=1}^{w} \sum_{j=1}^{h} \left( G(i,j) - \tilde{G} \right)^2}}. \tag{7}$$

$$C_s(\Delta x) = \mathrm{PCC}\!\left[ I_{\mathrm{out}}(x, y; x', y'),\; I_{\mathrm{out}}(x - \Delta x, y; x' + \Delta x, y') \right]. \tag{8}$$

$$C_r(\Delta x) = \mathrm{PCC}\!\left[ \hat{I}(x, y; x', y'),\; \hat{I}(x - \Delta x, y; x' + \Delta x, y') \right]. \tag{9}$$
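For concreteness, the two training loss functions, MAE (Eq. 6) and NPCC (Eq. 7), can be sketched in NumPy as follows. This is a minimal illustration, not the paper's training code; the function names `mae` and `npcc` are ours, with `y` the network output and `g` the ground truth image.

```python
import numpy as np

def mae(y, g):
    """Mean absolute error between output y and ground truth g (Eq. 6)."""
    return np.mean(np.abs(y - g))

def npcc(y, g):
    """Negative Pearson correlation coefficient (Eq. 7).

    Equals -1 for a perfect reconstruction and +1 for a perfectly
    anti-correlated one, so minimizing it drives y toward g.
    """
    yc = y - y.mean()  # zero-mean copies
    gc = g - g.mean()
    return -np.sum(yc * gc) / np.sqrt(np.sum(yc ** 2) * np.sum(gc ** 2))
```

Note that the NPCC is invariant to positive affine rescaling of the output intensity (npcc(a*y + b, g) = npcc(y, g) for a > 0), so it rewards recovering the spatial structure of the object even when absolute intensities are off; this is consistent with the paper's observation that the NPCC is better suited to spatially sparse objects and strong scattering.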
