Abstract

Deep learning (DL) has been applied extensively to many computational imaging problems, often achieving performance superior to traditional iterative approaches. However, two important questions remain largely unanswered. First, how well can a trained neural network generalize to objects very different from those seen in training? This question matters in practice, because large-scale annotated examples similar to the objects of interest are often unavailable during training. Second, has the trained network learned the underlying (inverse) physics model, or has it merely done something trivial, such as memorizing the training examples or performing point-wise pattern matching? This question pertains to the interpretability of machine-learning-based algorithms. In this work, we use the Phase Extraction Neural Network (PhENN) [Optica 4, 1117–1125 (2017)], a deep neural network (DNN) for quantitative phase retrieval in a lensless phase imaging system, as the standard platform and show that the two questions are related and share a common crux: the choice of training examples. Moreover, we connect the strength of the regularization that a training set imposes on the training process with the Shannon entropy of the images in that set: the higher the entropy of the training images, the weaker the regularization that can be imposed. We also find that weaker regularization leads to better learning of the underlying propagation model, i.e. the weak object transfer function, which applies to weakly scattering objects under the weak object approximation. Finally, simulations and experiments show that better cross-domain generalization is achieved when the DNN is trained on a higher-entropy database, e.g. ImageNet, than when the same DNN is trained on a lower-entropy database, e.g. MNIST, because the former allows the underlying physics model to be learned more faithfully.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement


References

1. C. Dong, C. Loy, K. He, and X. Tang, "Learning a deep convolutional neural network for image super-resolution," in European Conference on Computer Vision (ECCV) / Lecture Notes in Computer Science, Part IV, vol. 8692 (2014), pp. 184–199.
2. J. Johnson, A. Alahi, and L. Fei-Fei, "Perceptual losses for real-time style transfer and super-resolution," in European Conference on Computer Vision (ECCV) / Lecture Notes in Computer Science, vol. 9906, B. Leibe, J. Matas, N. Sebe, and M. Welling, eds. (2016), pp. 694–711.
3. C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi, "Photo-realistic single image super-resolution using a generative adversarial network," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 4681–4690.
4. Y. Rivenson, Z. Göröcs, H. Günaydın, Y. Zhang, H. Wang, and A. Ozcan, "Deep learning microscopy," Optica 4(11), 1437–1443 (2017).
5. H. Wang, Y. Rivenson, Z. Wei, H. Günaydın, L. Bentolila, and A. Ozcan, "Deep learning achieves super-resolution in fluorescence microscopy," bioRxiv, https://doi.org/10.1101/309641 (2018).
6. E. Nehme, L. E. Weiss, T. Michaeli, and Y. Shechtman, "Deep-STORM: super-resolution single-molecule microscopy by deep learning," Optica 5(4), 458–464 (2018).
7. M. Deng, S. Li, and G. Barbastathis, "Learning to synthesize: splitting and recombining low and high spatial frequencies for image recovery," arXiv:1811.07945 (2018).
8. A. Sinha, J. Lee, S. Li, and G. Barbastathis, "Lensless computational imaging through deep learning," Optica 4(9), 1117–1125 (2017).
9. Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, "Phase recovery and holographic image reconstruction using deep learning in neural networks," Light: Sci. Appl. 7(2), 17141 (2018).
10. A. Goy, K. Arthur, S. Li, and G. Barbastathis, "Low photon count phase retrieval using deep learning," Phys. Rev. Lett. 121(24), 243902 (2018).
11. T. Nguyen, Y. Xue, Y. Li, L. Tian, and G. Nehmetallah, "Deep learning approach for Fourier ptychography microscopy," Opt. Express 26(20), 26470–26484 (2018).
12. Ç. Işıl, F. S. Oktem, and A. Koç, "Deep iterative reconstruction for phase retrieval," Appl. Opt. 58(20), 5422–5431 (2019).
13. H. Wang, M. Lyu, and G. Situ, "eHoloNet: a learning-based point-to-point approach for in-line digital holographic reconstruction," Opt. Express 26(18), 22603–22614 (2018).
14. T. Pitkäaho, A. Manninen, and T. J. Naughton, "Performance of autofocus capability of deep convolutional neural networks in digital holographic microscopy," in Digital Holography and Three-Dimensional Imaging (OSA, 2017), paper W2A.5.
15. Y. Wu, Y. Rivenson, Y. Zhang, Z. Wei, H. Günaydın, X. Lin, and A. Ozcan, "Extended depth-of-field in holographic image reconstruction using deep-learning-based autofocusing and phase recovery," Optica 5(6), 704–710 (2018).
16. M. R. Kellman, E. Bostan, N. A. Repina, and L. Waller, "Physics-based learned design: optimized coded-illumination for quantitative phase imaging," IEEE Trans. Comput. Imaging 5(3), 344–353 (2019).
17. Z. Ren, Z. Xu, and E. Y. Lam, "Learning-based nonparametric autofocusing for digital holography," Optica 5(4), 337–344 (2018).
18. M. Deng, S. Li, A. Goy, I. Kang, and G. Barbastathis, "Learning to synthesize: robust phase retrieval at low photon counts," Light: Sci. Appl. 9(1), 36 (2020).
19. M. Deng, A. Goy, S. Li, K. Arthur, and G. Barbastathis, "Probing shallower: perceptual loss trained phase extraction neural network (PLT-PhENN) for artifact-free reconstruction at low photon budget," Opt. Express 28(2), 2511–2535 (2020).
20. R. Horisaki, R. Takagi, and J. Tanida, "Learning-based imaging through scattering media," Opt. Express 24(13), 13738–13743 (2016).
21. S. Li, M. Deng, J. Lee, A. Sinha, and G. Barbastathis, "Imaging through glass diffusers using densely connected convolutional networks," Optica 5(7), 803–813 (2018).
22. Y. Li, Y. Xue, and L. Tian, "Deep speckle correlation: a deep learning approach toward scalable imaging through scattering media," Optica 5(10), 1181–1190 (2018).
23. U. S. Kamilov, I. N. Papadopoulos, M. H. Shoreh, A. Goy, C. Vonesch, M. Unser, and D. Psaltis, "Learning approach to optical tomography," Optica 2(6), 517–522 (2015).
24. U. S. Kamilov, I. N. Papadopoulos, M. H. Shoreh, A. Goy, C. Vonesch, M. Unser, and D. Psaltis, "Optical tomographic image reconstruction based on beam propagation and sparse regularization," IEEE Trans. Comput. Imaging 2(1), 59–70 (2016).
25. A. Goy, G. Rughoobur, S. Li, K. Arthur, A. Akinwande, and G. Barbastathis, "High-resolution limited-angle phase tomography of dense layered objects using deep neural networks," Proc. Natl. Acad. Sci. (accepted, 2019).
26. G. Barbastathis, A. Ozcan, and G. Situ, "On the use of deep learning for computational imaging," Optica 6(8), 921–943 (2019).
27. M. T. McCann, K. H. Jin, and M. Unser, "Convolutional neural networks for inverse problems in imaging: a review," IEEE Signal Process. Mag. 34(6), 85–95 (2017).
28. J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, "ImageNet: a large-scale hierarchical image database," in 2009 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2009), pp. 248–255.
29. Y. LeCun, C. Cortes, and C. J. Burges, "MNIST handwritten digit database," AT&T Labs [Online]. Available: http://yann.lecun.com/exdb/mnist (2010).
30. L. Tian and L. Waller, "Quantitative differential phase contrast imaging in an LED array microscope," Opt. Express 23(9), 11394–11403 (2015).
31. S. Li, G. Barbastathis, and A. Goy, "Analysis of phase-extraction neural network (PhENN) performance for lensless quantitative phase imaging," in Quantitative Phase Imaging V, vol. 10887 (International Society for Optics and Photonics, 2019), p. 108870T.
32. S. Li and G. Barbastathis, "Spectral pre-modulation of training examples enhances the spatial resolution of the phase extraction neural network (PhENN)," Opt. Express 26(22), 29340–29352 (2018).
33. B. Neyshabur, S. Bhojanapalli, D. McAllester, and N. Srebro, "Exploring generalization in deep learning," in Advances in Neural Information Processing Systems (2017), pp. 5947–5956.
34. B. Neyshabur, Z. Li, S. Bhojanapalli, Y. LeCun, and N. Srebro, "Towards understanding the role of over-parametrization in generalization of neural networks," arXiv:1805.12076 (2018).
35. B. Neyshabur, S. Bhojanapalli, and N. Srebro, "A PAC-Bayesian approach to spectrally-normalized margin bounds for neural networks," arXiv:1707.09564 (2017).
36. C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals, "Understanding deep learning requires rethinking generalization," arXiv:1611.03530 (2016).
37. M. S. Advani and A. M. Saxe, "High-dimensional dynamics of generalization error in neural networks," arXiv:1710.03667 (2017).
38. H. Xu and S. Mannor, "Robustness and generalization," Mach. Learn. 86(3), 391–423 (2012).
39. D. Jakubovitz, R. Giryes, and M. R. Rodrigues, "Generalization error in deep learning," in Compressed Sensing and Its Applications (Springer, 2019), pp. 153–193.
40. C. E. Shannon, "A mathematical theory of communication," Bell Syst. Tech. J. 27(3), 379–423 (1948).
41. T. M. Cover and J. A. Thomas, Elements of Information Theory (John Wiley & Sons, 2012).
42. V. K. Ingle and J. G. Proakis, Digital Signal Processing Using MATLAB: A Problem Solving Companion (Cengage Learning, 2016).
43. G. B. Huang, M. Mattar, T. Berg, and E. Learned-Miller, "Labeled faces in the wild: a database for studying face recognition in unconstrained environments," Technical Report, University of Massachusetts (2007).
44. G. K. Matsopoulos, N. A. Mouravliansky, K. K. Delibasis, and K. S. Nikita, "Automatic retinal image registration scheme using global optimization techniques," IEEE Trans. Inform. Technol. Biomed. 3(1), 47–60 (1999).
45. J. A. Nelder and R. Mead, "A simplex method for function minimization," The Computer Journal 7(4), 308–313 (1965).
46. S. Li, "Computational imaging through deep learning," Ph.D. thesis, MIT (2019).
47. D. P. Kingma and J. Lei Ba, "Adam: a method for stochastic optimization," in International Conference on Learning Representations (ICLR) (2015).



Figures (12)

Fig. 1. Schematic plot of the lensless phase imaging system.
Fig. 2. The general architecture of PhENN.
Fig. 3. Entropy histograms of ImageNet and MNIST. Each histogram is computed from 10000 images using 1000 bins.
Fig. 4. Cross-domain generalization performance on synthetic data, when PhENN is trained on MNIST, IC, Face-LFW, and ImageNet, respectively. The defocus distance is $z = 150~\textrm{mm}$.
Fig. 5. Comparison of LWOTF-ImageNet, LWOTF-MNIST, WOTF-computed, and WOTF-theory. Values are clipped to the range $[-3, 3]$ and outliers from LWOTF-MNIST have been cropped out.
Fig. 6. Optical apparatus. HWP: half-wave plate, OBJ: objective lens, SF: spatial filter, CL: collimating lens, POL: linear polarizer, SLM: spatial light modulator, NPBS: non-polarizing beamsplitter, L: lens.
Fig. 7. Phase modulation vs. 8-bit grayscale value for the experiments.
Fig. 8. Cross-domain generalization performance of ImageNet-PhENN and MNIST-PhENN on experimental data. The defocus distance is $z = 150~\textrm{mm}$.
Fig. 9. Reconstruction of the star pattern. (a) Intensity measurement at $z = 150~\textrm{mm}$. (b) Star-pattern (weak) object. (c) Reconstruction by ImageNet-trained PhENN. (d) Reconstruction by MNIST-trained PhENN.
Fig. 10. More detailed architecture of PhENN. Superscripts a–d denote different kernel sizes and strides, listed as follows: a) kernel size (3, 3), strides (2, 2); b) kernel size (3, 3), strides (1, 1); c) kernel size (2, 2), strides (2, 2); d) kernel size (1, 1), strides (1, 1). (An illustrative toy wiring of these four convolution types is sketched after this figure list.)
Fig. 11. Cross-domain generalization performance of ImageNet-PhENN and MNIST-PhENN on synthetic data. The defocus distance is $z = 100~\textrm{mm}$.
Fig. 12. More examples of cross-domain generalization performance on experimental data. The defocus distance is $z = 150~\textrm{mm}$.
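For readers who want to experiment, the toy Keras sketch below wires together the four convolution types a)–d) named in the Fig. 10 caption. Only those kernel sizes and strides come from the figure; the depth, channel widths, skip connections, and layer counts here are our own illustrative assumptions and do not reproduce the actual PhENN.

```python
# Illustrative sketch only: a toy encoder-decoder built from the four
# convolution types a)-d) of Fig. 10. It is NOT the authors' PhENN.
import tensorflow as tf
from tensorflow.keras import layers

def conv_a(c):   # a) kernel (3, 3), strides (2, 2): downsampling conv
    return layers.Conv2D(c, 3, strides=2, padding="same", activation="relu")

def conv_b(c):   # b) kernel (3, 3), strides (1, 1): feature conv
    return layers.Conv2D(c, 3, strides=1, padding="same", activation="relu")

def deconv_c(c): # c) kernel (2, 2), strides (2, 2): upsampling transposed conv
    return layers.Conv2DTranspose(c, 2, strides=2, padding="same", activation="relu")

def conv_d(c):   # d) kernel (1, 1), strides (1, 1): channel projection
    return layers.Conv2D(c, 1, strides=1, padding="same")

inp = layers.Input((256, 256, 1))   # intensity measurement g
x = conv_a(32)(inp)                 # 256x256 -> 128x128
x = conv_b(32)(x)
x = deconv_c(32)(x)                 # 128x128 -> 256x256
out = conv_d(1)(x)                  # phase estimate f_hat
model = tf.keras.Model(inp, out)
```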

Tables (5)

Table 1. Cross-domain generalization performance of PhENN trained on various datasets. The quantitative metric is the Pearson correlation coefficient (PCC). The defocus distance is $z = 150~\textrm{mm}$.
Table 2. Cross-domain generalization performance of PhENN trained on various datasets. The quantitative metric is the mean absolute error (MAE). The defocus distance is $z = 150~\textrm{mm}$.
Table 3. Cross-domain generalization performance of PhENN trained on ImageNet, SImageNet-1.5, and SImageNet-2.0, respectively, for $z = 150~\textrm{mm}$ (synthetic data).
Table 4. Cross-domain generalization performance of PhENN trained on ImageNet and MNIST, respectively, for $z = 150~\textrm{mm}$ (experimental data).
Table 5. Cross-domain generalization performance of PhENN trained on ImageNet and MNIST, respectively, for $z = 100~\textrm{mm}$ (synthetic data).

Equations (7)


$$g(x,y) = \left| \exp\left\{ i\,f(x,y) \right\} * \exp\left\{ \frac{i\pi}{\lambda z}\left( x^{2} + y^{2} \right) \right\} \right|^{2}. \tag{1}$$

$$G(u,v) \approx \delta(u,v) + 2\sin\!\left( \pi\lambda z \left( u^{2} + v^{2} \right) \right) F(u,v). \tag{2}$$
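As a concrete illustration of the forward model in Eq. (1), the NumPy sketch below propagates a pure-phase object over a defocus distance $z$ with a Fresnel transfer function and records the resulting intensity. The function name, grid size, and parameter values are our illustrative assumptions, and sign conventions for the Fresnel kernel vary between texts.

```python
# Minimal sketch of Eq. (1): a pure-phase object convolved with a Fresnel
# kernel (implemented via the transfer function), detected as intensity.
import numpy as np

def fresnel_intensity(phase, wavelength, z, pixel_size):
    """Simulate g = |exp(i f) * Fresnel kernel|^2 at defocus distance z."""
    n = phase.shape[0]
    field = np.exp(1j * phase)               # pure-phase object exp{i f(x,y)}
    fu = np.fft.fftfreq(n, d=pixel_size)     # spatial frequencies (cycles/m)
    u, v = np.meshgrid(fu, fu, indexing="ij")
    # Fresnel transfer function exp(-i pi lambda z (u^2 + v^2))
    H = np.exp(-1j * np.pi * wavelength * z * (u**2 + v**2))
    g = np.fft.ifft2(np.fft.fft2(field) * H)
    return np.abs(g)**2                      # intensity on the camera

rng = np.random.default_rng(0)
f = 0.2 * rng.standard_normal((256, 256))    # weak phase object (illustrative)
g = fresnel_intensity(f, wavelength=633e-9, z=0.15, pixel_size=8e-6)
```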
$$H_p(X) = -\sum_{k=1}^{K} p(x_k)\,\log_2 p(x_k), \tag{3}$$

$$\hat{p}(x_k) = \frac{\sum_{i,j} \mathbf{1}\left\{ f_{i,j} = x_k \right\}}{M \times N} = \frac{\text{number of pixels of value } x_k}{M \times N}, \tag{4}$$
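The per-image entropy of Eqs. (3)–(4) can be computed directly from the empirical 8-bit pixel histogram, as used to compare the training databases in Fig. 3. A minimal sketch (the function name is ours):

```python
# Minimal sketch of Eqs. (3)-(4): Shannon entropy of an 8-bit grayscale
# image from its empirical pixel-value histogram.
import numpy as np

def image_entropy(img):
    """H = -sum_k p(x_k) log2 p(x_k), with p estimated by pixel counts."""
    img = np.asarray(img, dtype=np.uint8)
    counts = np.bincount(img.ravel(), minlength=256)
    p = counts / counts.sum()     # empirical p_hat(x_k), Eq. (4)
    p = p[p > 0]                  # convention: 0 * log2(0) = 0
    return float(-np.sum(p * np.log2(p)))
```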
$$\mathrm{NPCC}(\hat{f}, f) \equiv -\,\frac{\sum_{x,y}\left( \hat{f}(x,y) - \langle \hat{f} \rangle \right)\left( f(x,y) - \langle f \rangle \right)}{\sqrt{\sum_{x,y}\left( \hat{f}(x,y) - \langle \hat{f} \rangle \right)^{2}}\,\sqrt{\sum_{x,y}\left( f(x,y) - \langle f \rangle \right)^{2}}}, \tag{5}$$
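A minimal NumPy sketch of the negative Pearson correlation coefficient in Eq. (5); the small epsilon guarding against a zero denominator is our addition, and the same expression ports directly to an autodiff framework when used as a training loss.

```python
# Minimal sketch of Eq. (5): negative Pearson correlation coefficient.
import numpy as np

def npcc(f_hat, f, eps=1e-12):
    a = f_hat - f_hat.mean()   # zero-mean reconstruction
    b = f - f.mean()           # zero-mean ground truth
    return -np.sum(a * b) / (np.sqrt(np.sum(a**2)) * np.sqrt(np.sum(b**2)) + eps)
```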
$$\mathrm{LWOTF}(u,v) = \frac{1}{K} \sum_{k=1}^{K} \frac{G_k(u,v) - \delta(u,v)}{\hat{F}_k(u,v)}, \tag{6}$$

$$\lambda z \left( u_k^{2} + v_k^{2} \right) = \lambda z \left( \frac{P}{2\pi r_k} \right)^{2} = k. \tag{7}$$
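A minimal sketch of the averaging in Eq. (6), assuming the measurements are normalized to unit background so that the $\delta(u,v)$ term corresponds to the DC bin of the DFT; the regularized division by $\hat{F}_k + \epsilon$ is our implementation choice, not taken from the paper.

```python
# Minimal sketch of Eq. (6): estimate the learned weak-object transfer
# function (LWOTF) by averaging (G_k - delta) / F_hat_k over K weak objects.
import numpy as np

def lwotf(measurements, reconstructions, eps=1e-6):
    """measurements: K intensity images g_k (normalized to unit background);
    reconstructions: K phase estimates f_hat_k from the trained network."""
    acc = 0.0
    K = len(measurements)
    for g, f_hat in zip(measurements, reconstructions):
        G = np.fft.fft2(g)
        F_hat = np.fft.fft2(f_hat)
        delta = np.zeros_like(G)
        delta[0, 0] = g.size          # DC bin standing in for delta(u,v)
        acc = acc + (G - delta) / (F_hat + eps)
    return acc / K
```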