Abstract

Imaging through scattering media is an important yet challenging problem. Tremendous progress has been made by exploiting the deterministic input–output “transmission matrix” of a fixed medium. However, this “one-to-one” mapping is highly susceptible to speckle decorrelation: small perturbations of the scattering medium introduce model errors and severely degrade imaging performance. Our goal here is to develop a new framework that scales to both medium perturbations and measurement requirements. To do so, we propose a statistical “one-to-all” deep learning (DL) technique that encapsulates a wide range of statistical variations, making the model resilient to speckle decorrelation. Specifically, we develop a convolutional neural network (CNN) that learns the statistical information contained in speckle intensity patterns captured through a set of diffusers sharing the same macroscopic parameter. We then show, for the first time to the best of our knowledge, that the trained CNN generalizes and makes high-quality object predictions through an entirely different set of diffusers of the same class. Our work paves the way to a highly scalable DL approach for imaging through scattering media.
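The “one-to-all” training protocol described above can be sketched in a few lines. The sketch below is a minimal illustration, not the paper’s implementation: each diffuser is modeled as a random complex transmission matrix, a speckle frame is the intensity of the transmitted field, and all array sizes, helper names, and the number of diffusers are illustrative assumptions. The key point shown is the data split — speckle frames from several diffusers of the same class are pooled into one training set, while evaluation uses only diffusers never seen in training. The U-Net-style CNN itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
N_PIX, N_OBJ = 256, 64  # toy speckle and object sizes (assumed)

def make_diffuser():
    # one diffuser, modeled as a random complex transmission matrix
    re, im = rng.normal(size=(2, N_PIX, N_OBJ))
    return (re + 1j * im) / np.sqrt(2 * N_OBJ)

def speckle_intensity(T, obj):
    # the camera records intensity only, not the complex field
    return np.abs(T @ obj) ** 2

# "One-to-all" protocol: pool speckle frames from SEVERAL diffusers of the
# same macroscopic class into a single training set...
train_diffusers = [make_diffuser() for _ in range(4)]
test_diffusers = [make_diffuser() for _ in range(2)]  # never seen in training

objects = rng.random((100, N_OBJ))
X_train = np.stack([speckle_intensity(T, x) for T in train_diffusers for x in objects])
y_train = np.tile(objects, (len(train_diffusers), 1))

# ...and evaluate the trained network only on held-out diffusers.
X_test = np.stack([speckle_intensity(T, x) for T in test_diffusers for x in objects])
y_test = np.tile(objects, (len(test_diffusers), 1))

print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
```

Because the training set mixes speckle realizations from multiple diffusers, the network is pushed to learn statistics shared across the diffuser class rather than the deterministic mapping of any single medium.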

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement



[Crossref]

Sarazin, M.

Scarcelli, G.

Schott, S.

Schülke, C.

Seelig, J. D.

A. Turpin, I. Vishniakou, and J. D. Seelig, “Light scattering control with neural networks in transmission and reflection,” arXiv: 1805.05602 (2018).

Shen, Y.

Shi, J.

Shoreh, M. H.

Sinha, A.

Situ, G.

M. Lyu, H. Wang, G. Li, and G. Situ, “Exploit imaging through opaque wall via deep learning,” arXiv: 1708.07881 (2017).

Soubies, E.

Su, L.

P. Fan, T. Zhao, and L. Su, “Deep learning the high variability and randomness inside multimode fibres,” arXiv: 1807.09351 (2018).

Sun, Y.

Sundararajan, N.

S. Suresh, N. Sundararajan, and P. Saratchandran, “Risk-sensitive loss functions for sparse multi-category classification problems,” Inf. Sci. 178, 2621–2638 (2008).
[Crossref]

Suresh, S.

S. Suresh, N. Sundararajan, and P. Saratchandran, “Risk-sensitive loss functions for sparse multi-category classification problems,” Inf. Sci. 178, 2621–2638 (2008).
[Crossref]

Sutskevar, I.

A. Krizhevsky, I. Sutskevar, and G. Hinton, “Imagenet classification with deep convolutional neural networks,” in Proceedings of the 25th International Conference on Advances in Neural Information Processing Systems, Lake Tahoe, Nevada, 2012, pp. 1097–1105.

Takagi, R.

Tanida, J.

Tempany, C. M.

K. H. Zou, S. K. Warfield, A. Bharatha, C. M. Tempany, M. R. Kaus, S. J. Haker, W. M. Wells, F. A. Jolesz, and R. Kikinis, “Statistical validation of image segmentation quality based on a spatial overlap index1: scientific reports,” Acad. Radiol. 11, 178–189 (2004).
[Crossref]

Teng, D.

Y. Rivenson, Y. Zhang, H. Günaydn, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light Sci. Appl. 7, 17141 (2018).
[Crossref]

Tian, L.

L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2, 104–111 (2015).
[Crossref]

L. Waller and L. Tian, “Machine learning for 3D microscopy,” Nature 523, 416–417 (2015).
[Crossref]

T. Nguyen, Y. Xue, Y. Li, L. Tian, and G. Nehmetallah, “Convolutional neural network for Fourier ptychography video reconstruction: learning temporal dynamics from spatial ensembles,” arXiv: 1805.00334 (2018).

Tokovinin, A.

Turpin, A.

A. Turpin, I. Vishniakou, and J. D. Seelig, “Light scattering control with neural networks in transmission and reflection,” arXiv: 1805.05602 (2018).

Unser, M.

van der Maaten, L.

G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 2261–2269.

van Putten, E. G.

J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012).
[Crossref]

Vellekoop, I.

Vellekoop, I. M.

Vishniakou, I.

A. Turpin, I. Vishniakou, and J. D. Seelig, “Light scattering control with neural networks in transmission and reflection,” arXiv: 1805.05602 (2018).

Vonesch, C.

Vos, W. L.

J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012).
[Crossref]

Waller, L.

H.-Y. Liu, D. Liu, H. Mansour, P. T. Boufounos, L. Waller, and U. S. Kamilov, “SEAGLE: sparsity-driven image reconstruction under multiple scattering,” IEEE Trans. Comput. Imaging 4, 73–86 (2018).
[Crossref]

L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2, 104–111 (2015).
[Crossref]

L. Waller and L. Tian, “Machine learning for 3D microscopy,” Nature 523, 416–417 (2015).
[Crossref]

Wang, D.

Wang, H.

Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4, 1437–1443 (2017).
[Crossref]

H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Gunaydin, L. Bentolila, and A. Ozcan, “Deep learning achieves super-resolution in fluorescence microscopy,” bioRxiv (2018), p. 309641.

M. Lyu, H. Wang, G. Li, and G. Situ, “Exploit imaging through opaque wall via deep learning,” arXiv: 1708.07881 (2017).

Wang, L. V.

Y. Liu, C. Ma, Y. Shen, J. Shi, and L. V. Wang, “Focusing light inside dynamic scattering media with millisecond digital optical phase conjugation,” Optica 4, 280–288 (2017).
[Crossref]

Y. Liu, P. Lai, C. Ma, X. Xu, A. A. Grabar, and L. V. Wang, “Optical focusing deep inside dynamic scattering media with near-infrared time-reversed ultrasonically encoded (TRUE) light,” Nat. Commun. 6, 5904 (2015).
[Crossref]

L. V. Wang and H.-I. Wu, Biomedical Optics: Principles and Imaging (Wiley, 2012).

Wang, X.

T. Xiao, T. Xia, Y. Yang, C. Huang, and X. Wang, “Learning from massive noisy labeled data for image classification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 2691–2699.

Warfield, S. K.

K. H. Zou, S. K. Warfield, A. Bharatha, C. M. Tempany, M. R. Kaus, S. J. Haker, W. M. Wells, F. A. Jolesz, and R. Kikinis, “Statistical validation of image segmentation quality based on a spatial overlap index1: scientific reports,” Acad. Radiol. 11, 178–189 (2004).
[Crossref]

Wei, Z.

H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Gunaydin, L. Bentolila, and A. Ozcan, “Deep learning achieves super-resolution in fluorescence microscopy,” bioRxiv (2018), p. 309641.

Weinberger, K. Q.

G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 2261–2269.

Wells, W. M.

K. H. Zou, S. K. Warfield, A. Bharatha, C. M. Tempany, M. R. Kaus, S. J. Haker, W. M. Wells, F. A. Jolesz, and R. Kikinis, “Statistical validation of image segmentation quality based on a spatial overlap index1: scientific reports,” Acad. Radiol. 11, 178–189 (2004).
[Crossref]

Welsh, B. M.

M. C. Roggemann, B. M. Welsh, and B. R. Hunt, Imaging Through Turbulence (CRC Press, 2018).

Wu, H.-I.

L. V. Wang and H.-I. Wu, Biomedical Optics: Principles and Imaging (Wiley, 2012).

Xia, T.

T. Xiao, T. Xia, Y. Yang, C. Huang, and X. Wang, “Learning from massive noisy labeled data for image classification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 2691–2699.

Xia, Z.

Xiao, T.

T. Xiao, T. Xia, Y. Yang, C. Huang, and X. Wang, “Learning from massive noisy labeled data for image classification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 2691–2699.

Xu, X.

Y. Liu, P. Lai, C. Ma, X. Xu, A. A. Grabar, and L. V. Wang, “Optical focusing deep inside dynamic scattering media with near-infrared time-reversed ultrasonically encoded (TRUE) light,” Nat. Commun. 6, 5904 (2015).
[Crossref]

Xu, Z.

Xue, Y.

T. Nguyen, Y. Xue, Y. Li, L. Tian, and G. Nehmetallah, “Convolutional neural network for Fourier ptychography video reconstruction: learning temporal dynamics from spatial ensembles,” arXiv: 1805.00334 (2018).

Yamauchi, T.

T. R. Hillman, T. Yamauchi, W. Choi, R. R. Dasari, M. S. Feld, Y. Park, and Z. Yaqoob, “Digital optical phase conjugation for delivering two-dimensional images through turbid media,” Sci. Rep. 3, 1909 (2013).
[Crossref]

Yang, C.

Yang, Y.

T. Xiao, T. Xia, Y. Yang, C. Huang, and X. Wang, “Learning from massive noisy labeled data for image classification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 2691–2699.

Yaqoob, Z.

T. R. Hillman, T. Yamauchi, W. Choi, R. R. Dasari, M. S. Feld, Y. Park, and Z. Yaqoob, “Digital optical phase conjugation for delivering two-dimensional images through turbid media,” Sci. Rep. 3, 1909 (2013).
[Crossref]

Yoon, C.

Zeiler, M. D.

M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” in European Conference on Computer Vision (Springer, 2014), pp. 818–833.

Zhang, Y.

Y. Rivenson, Y. Zhang, H. Günaydn, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light Sci. Appl. 7, 17141 (2018).
[Crossref]

Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4, 1437–1443 (2017).
[Crossref]

Zhao, T.

P. Fan, T. Zhao, and L. Su, “Deep learning the high variability and randomness inside multimode fibres,” arXiv: 1807.09351 (2018).

Zhou, E. H.

Zou, K. H.

K. H. Zou, S. K. Warfield, A. Bharatha, C. M. Tempany, M. R. Kaus, S. J. Haker, W. M. Wells, F. A. Jolesz, and R. Kikinis, “Statistical validation of image segmentation quality based on a spatial overlap index1: scientific reports,” Acad. Radiol. 11, 178–189 (2004).
[Crossref]

Acad. Radiol. (1)

K. H. Zou, S. K. Warfield, A. Bharatha, C. M. Tempany, M. R. Kaus, S. J. Haker, W. M. Wells, F. A. Jolesz, and R. Kikinis, “Statistical validation of image segmentation quality based on a spatial overlap index,” Acad. Radiol. 11, 178–189 (2004).

Appl. Opt. (2)

Astron. Astrophys. (1)

A. Labeyrie, “Attainment of diffraction limited resolution in large telescopes by Fourier analysing speckle patterns in star images,” Astron. Astrophys. 6, 85–87 (1970).

Biomed. Opt. Express (2)

IEEE Trans. Comput. Imaging (1)

H.-Y. Liu, D. Liu, H. Mansour, P. T. Boufounos, L. Waller, and U. S. Kamilov, “SEAGLE: sparsity-driven image reconstruction under multiple scattering,” IEEE Trans. Comput. Imaging 4, 73–86 (2018).

Inf. Sci. (1)

S. Suresh, N. Sundararajan, and P. Saratchandran, “Risk-sensitive loss functions for sparse multi-category classification problems,” Inf. Sci. 178, 2621–2638 (2008).

J. Opt. Soc. Am. A (1)

Light Sci. Appl. (1)

Y. Rivenson, Y. Zhang, H. Günaydin, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light Sci. Appl. 7, 17141 (2018).

Nat. Commun. (1)

Y. Liu, P. Lai, C. Ma, X. Xu, A. A. Grabar, and L. V. Wang, “Optical focusing deep inside dynamic scattering media with near-infrared time-reversed ultrasonically encoded (TRUE) light,” Nat. Commun. 6, 5904 (2015).

Nat. Methods (2)

V. Ntziachristos, “Going deeper than microscopy: the optical imaging frontier in biology,” Nat. Methods 7, 603–614 (2010).

N. Ji, D. Milkie, and E. Betzig, “Adaptive optics via pupil segmentation for high-resolution imaging in biological tissues,” Nat. Methods 7, 141–147 (2010).

Nat. Photonics (2)

A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, “Controlling waves in space and time for imaging and focusing in complex media,” Nat. Photonics 6, 283–292 (2012).

O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8, 784–790 (2014).

Nature (3)

J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012).

L. Waller and L. Tian, “Machine learning for 3D microscopy,” Nature 523, 416–417 (2015).

Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521, 436–444 (2015).

Opt. Express (7)

Opt. Lett. (2)

Optica (11)

J. Li, D. R. Beaulieu, H. Paudel, R. Barankov, T. G. Bifano, and J. Mertz, “Conjugate adaptive optics in widefield microscopy with an extended-source wavefront sensor,” Optica 2, 682–688 (2015).

E. Edrei and G. Scarcelli, “Optical imaging through dynamic turbid media using the Fourier-domain shower-curtain effect,” Optica 3, 71–74 (2016).

D. Wang, E. H. Zhou, J. Brake, H. Ruan, M. Jang, and C. Yang, “Focusing through dynamic tissue with millisecond digital optical phase conjugation,” Optica 2, 728–735 (2015).

Y. Liu, C. Ma, Y. Shen, J. Shi, and L. V. Wang, “Focusing light inside dynamic scattering media with millisecond digital optical phase conjugation,” Optica 4, 280–288 (2017).

Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4, 1437–1443 (2017).

Z. Ren, Z. Xu, and E. Y. Lam, “Learning-based nonparametric autofocusing for digital holography,” Optica 5, 337–344 (2018).

A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” Optica 4, 1117–1125 (2017).

U. S. Kamilov, I. N. Papadopoulos, M. H. Shoreh, A. Goy, C. Vonesch, M. Unser, and D. Psaltis, “Learning approach to optical tomography,” Optica 2, 517–522 (2015).

L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2, 104–111 (2015).

S. Li, M. Deng, J. Lee, A. Sinha, and G. Barbastathis, “Imaging through glass diffusers using densely connected convolutional networks,” Optica 5, 803–813 (2018).

N. Borhani, E. Kakkava, C. Moser, and D. Psaltis, “Learning to see through multimode fibers,” Optica 5, 960–966 (2018).

Phys. Rev. Lett. (1)

S. Popoff, G. Lerosey, R. Carminati, M. Fink, A. Boccara, and S. Gigan, “Measuring the transmission matrix in optics: an approach to the study and control of light propagation in disordered media,” Phys. Rev. Lett. 104, 100601 (2010).

Physica A (1)

I. Freund, “Looking through walls and around corners,” Physica A 168, 49–65 (1990).

Rev. Mod. Phys. (1)

S. Rotter and S. Gigan, “Light fields in complex media: mesoscopic scattering meets wave control,” Rev. Mod. Phys. 89, 015005 (2017).

Sci. Rep. (1)

T. R. Hillman, T. Yamauchi, W. Choi, R. R. Dasari, M. S. Feld, Y. Park, and Z. Yaqoob, “Digital optical phase conjugation for delivering two-dimensional images through turbid media,” Sci. Rep. 3, 1909 (2013).

Other (19)

J. W. Goodman, Speckle Phenomena in Optics: Theory and Applications (Roberts & Company, 2007).

M. C. Roggemann, B. M. Welsh, and B. R. Hunt, Imaging Through Turbulence (CRC Press, 2018).

T. Nguyen, Y. Xue, Y. Li, L. Tian, and G. Nehmetallah, “Convolutional neural network for Fourier ptychography video reconstruction: learning temporal dynamics from spatial ensembles,” arXiv:1805.00334 (2018).

H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Gunaydin, L. Bentolila, and A. Ozcan, “Deep learning achieves super-resolution in fluorescence microscopy,” bioRxiv 309641 (2018).

M. Lyu, H. Wang, G. Li, and G. Situ, “Exploit imaging through opaque wall via deep learning,” arXiv:1708.07881 (2017).

https://www.unc.edu/rowlett/units/scales/grit.html

P. Fan, T. Zhao, and L. Su, “Deep learning the high variability and randomness inside multimode fibres,” arXiv:1807.09351 (2018).

A. Turpin, I. Vishniakou, and J. D. Seelig, “Light scattering control with neural networks in transmission and reflection,” arXiv:1805.05602 (2018).

https://github.com/bu-cisl/deep-speckle-correlation

A. Krizhevsky, I. Sutskever, and G. Hinton, “ImageNet classification with deep convolutional neural networks,” in Proceedings of the 25th International Conference on Advances in Neural Information Processing Systems, Lake Tahoe, Nevada, 2012, pp. 1097–1105.

http://yann.lecun.com/exdb/mnist/

https://www.nist.gov/srd/nist-special-database-19

https://quickdraw.withgoogle.com/data

T. Xiao, T. Xia, Y. Yang, C. Huang, and X. Wang, “Learning from massive noisy labeled data for image classification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 2691–2699.

O. Ronneberger, P. Fischer, and T. Brox, “U-net: convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (Springer, 2015), pp. 234–241.

G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 2261–2269.

A. Kendall and Y. Gal, “What uncertainties do we need in Bayesian deep learning for computer vision?” in Advances in Neural Information Processing Systems (2017), pp. 5580–5590.

M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” in European Conference on Computer Vision (Springer, 2014), pp. 818–833.

L. V. Wang and H.-I. Wu, Biomedical Optics: Principles and Imaging (Wiley, 2012).

Supplementary Material (1)

Supplement 1: Supplemental document



Figures (11)

Fig. 1.
Fig. 1. Overview of our DL-based imaging through scattering technique. (a) Speckle measurements are repeated on multiple diffusers. (b) During the training stage, only speckle patterns collected through the training diffusers D_1^train, D_2^train, …, D_N^train are used. (c) During the testing stage, objects are predicted from speckle patterns collected through previously unseen testing diffusers D_1^test, D_2^test, …, D_N^test, demonstrating the superior scalability of our DL approach.
Fig. 2.
Fig. 2. (a) The experimental setup uses an SLM, illuminated by a laser, as the object. A diffuser is placed at a defocused plane to create shift-variant scattering. (b) The speckle size is 16 μm, characterized by the speckle’s intensity autocorrelation. (c) The isoplanatic range is 1 speckle size, characterized by the cross-correlation coefficients between speckle patterns of shifted point objects.
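The speckle-size characterization in Fig. 2(b) can be reproduced numerically. Below is a minimal NumPy sketch (not the authors' code) that computes the normalized intensity autocorrelation of a speckle pattern via the Wiener–Khinchin theorem; the speckle size is then read off as the width of the central peak.

```python
import numpy as np

def intensity_autocorrelation(speckle):
    """Normalized autocorrelation of a speckle intensity pattern.

    Uses the Wiener-Khinchin theorem: the autocorrelation is the
    inverse Fourier transform of the power spectrum.
    """
    i = speckle - speckle.mean()                 # remove the DC background
    spec = np.abs(np.fft.fft2(i)) ** 2           # power spectrum
    ac = np.fft.ifft2(spec).real                 # autocorrelation (real-valued)
    ac = np.fft.fftshift(ac)                     # move zero lag to the center
    return ac / ac.max()                         # normalize the central peak to 1
```

The full width at half maximum of the central peak of the returned map, multiplied by the camera pixel pitch, gives an estimate of the speckle size.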
Fig. 3.
Fig. 3. Proposed CNN architecture for learning the statistical relationship between speckle patterns and unscattered objects. It follows the general encoder–decoder U-Net structure (layer indices are marked in blue). Starting from a high-resolution input speckle pattern, the encoder gradually condenses the lateral spatial information (sizes marked in black) into high-level feature maps of growing depth (sizes marked in purple); the decoder reverses the process by recombining the information into feature maps with gradually increasing lateral detail; the output is a two-channel (object, background) pixel-wise prediction.
Fig. 4.
Fig. 4. Testing results for “seen objects through unseen diffusers.” The single CNN trained on four “training diffusers” is used to predict objects through the previously unseen “testing diffusers” D_1^test, D_2^test, D_3^test, D_4^test, D_5^test. The same set of objects was used during training through the training diffusers. Despite the apparent differences across the speckle patterns, our CNN makes consistently reliable predictions.
Fig. 5.
Fig. 5. Testing results of “unseen objects of the same type through unseen diffusers.” The CNN trained with four “training diffusers” is used to make predictions using speckles from previously unused objects (during training) through unseen testing diffusers. The testing objects belong to the same class (handwritten digits and letters) as the training sets.
Fig. 6.
Fig. 6. Testing results of “unseen objects of new types through unseen diffusers.” The CNN trained with four “training diffusers” is used to make predictions using speckles from new types of objects through unseen testing diffusers. The testing objects are taken from a new class (Quickdraw) that have never been used during training.
Fig. 7.
Fig. 7. Testing results of the CNN trained on a single diffuser. When tested on speckles from objects unseen during training through the same diffuser, the CNN makes high-quality predictions. However, it fails on speckles from a different, unseen diffuser, demonstrating the importance of the proposed DL strategy involving multiple diffusers.
Fig. 8.
Fig. 8. We compare the performance of multiple CNNs trained on one, two, and four diffusers with different dataset sizes (800 in blue, 1600 in orange, 2400 in green) using the Jaccard index (JI). Each CNN is tested under the same conditions, using the same 1000 speckle patterns from seen objects through five unseen diffusers. Each circle represents the average JI over all objects through one testing diffuser. The mean JI of each CNN is marked by a black horizontal bar. The bottom panel shows representative example predictions from the CNNs trained on one, two, and four diffusers, respectively. To visualize the results, the first row shows the CNN predictions overlaid with the true positives (white), false positives (green), and false negatives (purple).
Fig. 9.
Fig. 9. Quantitative evaluation of the CNN performance on “unseen objects of new types through unseen diffusers.” Each bar represents the mean JI over different objects belonging to the same type, imaged through five different testing diffusers. Each error bar represents the standard deviation of the JI for that object type.
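The Jaccard index reported in Figs. 8 and 9 is the intersection-over-union of the binarized prediction and the ground-truth mask. A minimal NumPy sketch (the function name and the empty-mask convention are our own choices):

```python
import numpy as np

def jaccard_index(pred, truth):
    """Jaccard index (intersection over union) of two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(pred, truth).sum() / union)

# Toy example: two 4x4 masks overlapping in one column
a = np.array([[1, 1, 0, 0]] * 4)
b = np.array([[1, 0, 0, 0]] * 4)
print(jaccard_index(a, b))  # intersection 4, union 8 -> 0.5
```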
Fig. 10.
Fig. 10. (a) Visualization of the intermediate activation maps (layer indices defined in Fig. 3) of our trained CNN, obtained by inputting two speckle patterns from the same object through two different testing diffusers. The number of channels in each layer is given by the depth of that layer in Fig. 3. The corresponding activation maps from the two speckle patterns become increasingly similar as the data flow through deeper layers, demonstrating the CNN’s ability to extract statistically invariant information from visually distinct speckle patterns. (b) The similarity of the activation maps is quantified by the PCC, averaged over all possible pairs of data from the five testing diffusers for the same object. The PCC generally grows with the layer index. Results from four different objects are shown, all of which follow a similar trend.
Fig. 11.
Fig. 11. (a) To quantitatively analyze the robustness of our CNN to speckle decorrelations, PCCs are calculated on 400 randomly selected speckle patterns for three different cases: A_D1 * B_D1, different objects, the same diffuser; A_D1 * A_D2, the same object, different diffusers; A_D1 * B_D2, different objects, different diffusers. The results show progressively more difficult tasks tested in our experiments. Training and testing on the same diffuser (Fig. 7) needs to overcome an average 0.307 decorrelation; training on one diffuser and testing on another with the same object (Fig. 4) needs to account for an average 0.221 decorrelation; training and testing on different objects and diffusers (Figs. 5–7) needs to further model an average 0.207 decorrelation. (b) Correlating speckle patterns from the same objects but through different diffusers shows invariant patterns, providing a possible source of the weak correlation information exploited by the CNN.
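The PCC used in Figs. 10 and 11 is the standard Pearson correlation coefficient computed over flattened image pairs. A minimal NumPy sketch (illustrative, not the authors' implementation):

```python
import numpy as np

def pcc(x, y):
    """Pearson correlation coefficient between two images of equal shape."""
    x = x.ravel().astype(float)
    y = y.ravel().astype(float)
    x -= x.mean()                                # remove means so the dot
    y -= y.mean()                                # product measures covariance
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))
```

A PCC of 1 indicates identical patterns up to an affine intensity change; values near 0 indicate decorrelated speckles, as between different diffusers.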

Tables (1)


Table 1. PCC of CNN Predictions for the “Seen Objects Through Unseen Diffusers” Task

Equations (1)


$$\mathcal{L} = -\frac{1}{2N_c}\sum_{x}\left[\,g\log(p) + (1-g)\log(1-p)\,\right],$$
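This pixel-wise binary cross-entropy loss (g: one-hot ground truth; p: predicted probability; N_c: number of pixels per channel, with the sum over x running across both output channels) can be sketched in NumPy as follows; the clipping constant eps is our own numerical safeguard, not part of the paper:

```python
import numpy as np

def speckle_bce_loss(p, g, eps=1e-12):
    """L = -(1/(2*Nc)) * sum_x [ g*log(p) + (1-g)*log(1-p) ].

    p : predicted probabilities in (0, 1), shape (H, W, 2) for the
        two-channel (object, background) output
    g : one-hot ground-truth labels of the same shape
    """
    nc = p.shape[0] * p.shape[1]          # number of pixels per channel
    p = np.clip(p, eps, 1.0 - eps)        # avoid log(0)
    ce = g * np.log(p) + (1 - g) * np.log(1 - p)
    return float(-ce.sum() / (2 * nc))
```

For a uniformly uncertain prediction (p = 0.5 everywhere) the loss evaluates to log 2 ≈ 0.693, and it approaches 0 as the prediction matches the one-hot ground truth.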