Abstract

Emerging deep-learning (DL)-based techniques have significant potential to revolutionize biomedical imaging. However, one outstanding challenge is the lack of reliability assessment of DL predictions, whose errors are commonly revealed only in hindsight. Here, we propose a new Bayesian convolutional neural network (BNN)-based framework that overcomes this issue by quantifying the uncertainty of DL predictions. Foremost, we show that BNN-predicted uncertainty maps provide surrogate estimates of the true error from the network model and measurement itself. The uncertainty maps characterize imperfections often unknown in real-world applications, such as noise, model error, incomplete training data, and out-of-distribution testing data. Quantifying this uncertainty provides a per-pixel estimate of the confidence level of the DL prediction as well as the quality of the model and data set. We demonstrate this framework in the application of large space–bandwidth product phase imaging using a physics-guided coded illumination scheme. From only five multiplexed illumination measurements, our BNN predicts gigapixel phase images in both static and dynamic biological samples with quantitative credibility assessment. Furthermore, we show that low-certainty regions can identify spatially and temporally rare biological phenomena. We believe our uncertainty learning framework is widely applicable to many DL-based biomedical imaging techniques for assessing the reliability of DL predictions.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

References

1. A. W. Lohmann, R. G. Dorsch, D. Mendlovic, Z. Zalevsky, and C. Ferreira, “Space–bandwidth product of optical signals and systems,” J. Opt. Soc. Am. A 13, 470–473 (1996).
2. A. W. Lohmann, “Scaling laws for lens systems,” Appl. Opt. 28, 4996–4998 (1989).
3. W. Lukosz, “Optical systems with resolving powers exceeding the classical limit,” J. Opt. Soc. Am. 56, 1463–1471 (1966).
4. W. Lukosz, “Optical systems with resolving powers exceeding the classical limit II,” J. Opt. Soc. Am. 57, 932–941 (1967).
5. K. Wicker and R. Heintzmann, “Resolving a misconception about structured illumination,” Nat. Photonics 8, 342–344 (2014).
6. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7, 739–745 (2013).
7. L. Tian, Z. Liu, L.-H. Yeh, M. Chen, J. Zhong, and L. Waller, “Computational illumination for high-speed in vitro Fourier ptychographic microscopy,” Optica 2, 904–911 (2015).
8. A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” Optica 4, 1117–1125 (2017).
9. Y. Rivenson, Y. Zhang, H. Günaydin, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light Sci. Appl. 7, 17141–17149 (2018).
10. T. Nguyen, Y. Xue, Y. Li, L. Tian, and G. Nehmetallah, “Deep learning approach for Fourier ptychography microscopy,” Opt. Express 26, 26470–26484 (2018).
11. S. Li, M. Deng, J. Lee, A. Sinha, and G. Barbastathis, “Imaging through glass diffusers using densely connected convolutional networks,” Optica 5, 803–813 (2018).
12. Y. Li, Y. Xue, and L. Tian, “Deep speckle correlation: a deep learning approach toward scalable imaging through scattering media,” Optica 5, 1181–1190 (2018).
13. Y. Wu, Y. Rivenson, Y. Zhang, Z. Wei, H. Günaydin, X. Lin, and A. Ozcan, “Extended depth-of-field in holographic imaging using deep-learning-based autofocusing and phase recovery,” Optica 5, 704–710 (2018).
14. R. Horstmeyer, R. Y. Chen, B. Kappes, and B. Judkewitz, “Convolutional neural networks that teach microscopes how to image,” arXiv:1709.07223 (2017).
15. B. Diederich, R. Wartmann, H. Schadwinkel, and R. Heintzmann, “Using machine-learning to optimize phase contrast in a low-cost cellphone microscope,” PLoS One 13, e0192937 (2018).
16. A. Robey and V. Ganapati, “Optimal physical preprocessing for example-based super-resolution,” Opt. Express 26, 31333–31350 (2018).
17. M. Kellman, E. Bostan, N. Repina, and L. Waller, “Physics-based learned design: optimized coded-illumination for quantitative phase imaging,” IEEE Trans. Comput. Imaging (Early Access) (2019), https://doi.org/10.1109/TCI.2019.2905434.
18. L. Tian and L. Waller, “Quantitative differential phase contrast imaging in an LED array microscope,” Opt. Express 23, 11394–11403 (2015).
19. T. R. Hillman, T. Gutzler, S. A. Alexandrov, and D. D. Sampson, “High-resolution, wide-field object reconstruction with synthetic aperture Fourier holographic optical microscopy,” Opt. Express 17, 7873–7892 (2009).
20. L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier ptychography with an LED array microscope,” Biomed. Opt. Express 5, 2376–2389 (2014).
21. E. Bostan, M. Soltanolkotabi, D. Ren, and L. Waller, “Accelerated Wirtinger flow for multiplexed Fourier ptychographic microscopy,” arXiv:1803.03714 (2018).
22. P. Chen and A. Fannjiang, “Coded aperture ptychography: uniqueness and reconstruction,” Inverse Probl. 34, 025003 (2018).
23. A. Kendall and Y. Gal, “What uncertainties do we need in Bayesian deep learning for computer vision?” in Advances in Neural Information Processing Systems (2017), pp. 5580–5590.
24. A. D. Kiureghian and O. Ditlevsen, “Aleatory or epistemic? Does it matter?” Struct. Saf. 31, 105–112 (2009).
25. R. Ling, W. Tahir, H.-Y. Lin, H. Lee, and L. Tian, “High-throughput intensity diffraction tomography with a computational microscope,” Biomed. Opt. Express 9, 2130–2141 (2018).
26. S. Mehta and C. Sheppard, “Quantitative phase-gradient imaging at high resolution with asymmetric illumination-based differential phase contrast,” Opt. Lett. 34, 1924–1926 (2009).
27. Y. Fan, J. Sun, Q. Chen, X. Pan, L. Tian, and C. Zuo, “Optimal illumination scheme for isotropic quantitative differential phase contrast microscopy,” arXiv:1903.10718 (2019).
28. Z. Ghahramani, “Probabilistic machine learning and artificial intelligence,” Nature 521, 452–459 (2015).
29. X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks,” in International Conference on Artificial Intelligence and Statistics (2010), Vol. 9, pp. 249–256.
30. N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: a simple way to prevent neural networks from overfitting,” J. Mach. Learn. Res. 15, 1929–1958 (2014).
31. L. Bottou, “Large-scale machine learning with stochastic gradient descent,” in Proceedings of COMPSTAT’10 (2010), pp. 177–186.
32. B. Lakshminarayanan, A. Pritzel, and C. Blundell, “Simple and scalable predictive uncertainty estimation using deep ensembles,” in Advances in Neural Information Processing Systems (2017), pp. 6402–6413.
33. Y. Gal and Z. Ghahramani, “Dropout as a Bayesian approximation: representing model uncertainty in deep learning,” in International Conference on Machine Learning (2016), pp. 1050–1059.
34. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (Springer, 2015), pp. 234–241.
35. P. Isola, J. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” arXiv:1611.07004 (2016).
36. Y. Xue, S. Cheng, Y. Li, and L. Tian, https://github.com/bu-cisl/illumination-coding-meets-uncertainty-learning.
37. X. Ou, G. Zheng, and C. Yang, “Embedded pupil function recovery for Fourier ptychographic microscopy,” Opt. Express 22, 4960–4972 (2014).
38. R. Eckert, Z. F. Phillips, and L. Waller, “Efficient illumination angle self-calibration in Fourier ptychography,” Appl. Opt. 57, 5434–5442 (2018).
39. D. C. Ghiglia and L. A. Romero, “Robust two-dimensional weighted and unweighted phase unwrapping that uses fast transforms and iterative methods,” J. Opt. Soc. Am. A 11, 107–117 (1994).
40. L.-H. Yeh, J. Dong, J. Zhong, L. Tian, M. Chen, G. Tang, M. Soltanolkotabi, and L. Waller, “Experimental robustness of Fourier ptychography phase retrieval algorithms,” Opt. Express 23, 33214–33240 (2015).
41. V. Kuleshov, N. Fenner, and S. Ermon, “Accurate uncertainties for deep learning using calibrated regression,” arXiv:1807.00263 (2018).
42. A. Niculescu-Mizil and R. Caruana, “Predicting good probabilities with supervised learning,” in Proceedings of the 22nd International Conference on Machine Learning (2005).
43. M. Weigert, U. Schmidt, T. Boothe, A. Müller, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, S. Culley, and M. Rocha-Martins, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15, 1090–1097 (2018).
44. J. Sun, Q. Chen, Y. Zhang, and C. Zuo, “Efficient positional misalignment correction method for Fourier ptychographic microscopy,” Biomed. Opt. Express 7, 1336–1350 (2016).

Supplementary Material (2)

» Supplement 1: Supplementary material
» Visualization 1: Time-series phase and credibility prediction based on the uncertainty learning framework. The full-FOV phase prediction achieves 0.8 NA across a 4× FOV. The credibility map is computed with a credible interval bound ϵ = 0.047 rad.

Figures (9)

Fig. 1. Overview of our reliable DL-based phase imaging technique. (a) Our technique opens up an expanded imaging attribute space, bypassing the conventional trade-off between FOV, resolution, and acquisition speed. (b) It uses five asymmetric illumination-coded intensities to encode large-SBP phase information. (c) A BNN is developed to make phase predictions and quantify the uncertainties of the model.
Fig. 2. Graphical model of our UL framework, which considers randomness in both the network weights w and the predicted output y.
Fig. 3. Overview of our UL framework. (a) The data uncertainty quantifies the effect of incomplete training data and is estimated via an uncertainty-regularized loss function. The model uncertainty evaluates the stochasticity of neural-network training and is estimated by network ensembles. (b) During testing, the direct output of the BNN consists of an ensemble of mean and standard deviation maps. Through statistical modeling, we obtain the final estimated phase, data uncertainty, and model uncertainty maps.
Fig. 4. BNN structure used to perform UL. The main network takes the U-Net structure. The input is five low-resolution multiplexed intensity images; the output predicts two-channel high-resolution phase and uncertainty maps (a minimal sketch of this output stage follows the figure list).
Fig. 5. High-resolution phase estimation from DL-augmented coded measurements. (a) The ground-truth phase obtained from the sFPM. (b) The input to the neural network consists of five low-resolution intensity images, including two brightfield and three darkfield images. Our BNN prediction includes (c) phase, (d) data uncertainty, and (e) model uncertainty. (f) The absolute error is calculated between the predicted and ground-truth phases. The uncertainty maps are highly correlated with the error maps, demonstrating the predictive power of our UL framework. Unlike existing FPM techniques, our method requires the same number of measurements even as the final resolution increases. HeLa cells fixed in (i) ethanol and (ii) formalin are imaged with a 4× 0.1 NA objective and reconstructed at a resolution of 0.5 NA. (iii) Live HeLa and (iv) fixed MCF10A cells are imaged with a 4× 0.2 NA objective and reconstructed at a resolution of 0.8 NA. (v) Fixed U2OS cells are imaged with a 4× 0.2 NA objective and reconstructed at a resolution of 0.7 NA.
Fig. 6. BNN predictions under different training and testing data configurations. The network robustly performs phase retrieval against variations in the sample type. The uncertainty map reliably detects potential errors in the phase predictions and is consistent with the true error.
Fig. 7. Large-SBP phase prediction and uncertainty quantification. (a) Full-FOV phase prediction achieving 0.51 NA resolution across a 4× FOV. (b) The data uncertainty map reliably identifies the out-of-distribution data corresponding to the peripheral FOV regions. (c) The model uncertainty is consistently low across the FOV except around the boundary, validating the robustness of our model. (d) The training data are taken only from the central 0.4 mm × 0.4 mm region. Panels (e) and (f) show zoom-ins of the predicted phase, data uncertainty, and model uncertainty for regions from the central FOV and the outer FOV, respectively.
Fig. 8. Reliability analysis of our predictions. (a) The predicted phase. (b) The credibility map calculated under the credible interval bound set by the intrinsic noise in the sFPM. The less credible regions match the out-of-distribution data containing phase-clipping and phase-wrapping artifacts. (c) The reliability diagrams, computed by comparing the averaged credibility with the empirical accuracy, show that (i)–(ii) are slightly overconfident whereas (iii)–(v) are well calibrated. (d) The predicted credible interval bound under 95% credibility correlates well with the corresponding true absolute error in (e).
Fig. 9. Time-series phase and credibility prediction. A representative frame from (a) the full-FOV phase prediction achieving 0.8 NA across a 4× FOV and (b) a credibility map with a credible interval bound of ϵ = 0.047 rad. (c) The training data are taken from the upper 3/4 of the FOV at the 26 min frame. The averaged credibility is calculated over time on the whole FOV (red), the cell region (green), and the background (blue), quantifying the “temporal decorrelation” induced by the temporal dynamics. (d, e) Spatially and/or temporally rare events, including cell mitosis and apoptosis, which result in out-of-distribution data during prediction, are automatically discovered by our BNN. The full time-series prediction is provided in the movie in Visualization 1.
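
As a companion to the Fig. 4 description, the following is a minimal PyTorch-style sketch of the two-channel output stage, assuming a U-Net backbone (omitted here) that produces a feature map, and assuming the uncertainty channel is emitted as log(σ) for numerical stability. The class name, feature width, and 1×1 convolution are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class PhaseUncertaintyHead(nn.Module):
    """Two-channel output described in Fig. 4: one channel for the
    high-resolution phase mean and one for its per-pixel uncertainty
    (emitted as log(sigma) so the Laplace scale stays positive)."""
    def __init__(self, in_features: int = 64):
        super().__init__()
        self.conv = nn.Conv2d(in_features, 2, kernel_size=1)

    def forward(self, feats: torch.Tensor):
        out = self.conv(feats)               # (B, 2, H, W)
        mu, log_sigma = out.chunk(2, dim=1)  # split into the two channels
        return mu, log_sigma
```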

Equations (12)

$$p(y \mid x^*, \mathcal{X}, \mathcal{Y}) = \int p(y \mid x^*, w)\, p(w \mid \mathcal{X}, \mathcal{Y})\, dw, \tag{1}$$

$$p(y^k \mid x^k, w) = \prod_{i=1}^{N} p(y_i^k \mid x^k, w), \tag{2}$$

$$p(y_i^k \mid x^k, w) = \frac{1}{2\sigma_i^k} \exp\!\left( -\frac{|y_i^k - \mu_i^k|}{\sigma_i^k} \right), \tag{3}$$

$$\mathcal{L}(w \mid x^t, y^t) = \frac{1}{N} \sum_{i=1}^{N} \left[ \frac{|y_i^t - \mu_i^t|}{\sigma_i^t} + \log\!\big( 2\sigma_i^t \big) \right]. \tag{4}$$
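
Equation (4) is the negative log-likelihood of the Laplace model in Eq. (3), averaged over the N pixels: dividing the residual by the predicted scale discounts errors where the network admits uncertainty, while the log(2σ) term penalizes indiscriminately large uncertainty. Below is a minimal sketch in Python/PyTorch, assuming the network's two output channels are the per-pixel mean and log-scale as in the head above; the function name is illustrative, not from the authors' released code.

```python
import torch

def uncertainty_regularized_loss(mu, log_sigma, target):
    """Eq. (4): mean per-pixel Laplace negative log-likelihood.
    mu, log_sigma, target share the same shape, e.g., (B, 1, H, W)."""
    sigma = torch.exp(log_sigma)
    return torch.mean(torch.abs(target - mu) / sigma + torch.log(2.0 * sigma))
```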
$$p(y \mid x^*, \mathcal{X}, \mathcal{Y}) \approx \int p(y \mid x^*, w)\, q(w)\, dw \approx \frac{1}{P} \sum_{p=1}^{P} p\big(y \mid x^*, w^{(p)}\big). \tag{5}$$

$$\hat{\mu}_i \equiv \mathbb{E}[y_i \mid x^*, \mathcal{X}, \mathcal{Y}] \approx \frac{1}{P} \sum_{p=1}^{P} \mathbb{E}\big[y_i \mid x^*, w^{(p)}\big] \approx \frac{1}{P} \sum_{p=1}^{P} \mu_i^{(p)}, \tag{6}$$

$$\hat{\sigma}_i^2 \equiv \mathrm{Var}(y_i \mid x^*, \mathcal{X}, \mathcal{Y}) = \mathbb{E}\big[\mathrm{Var}(y_i \mid x^*, w)\big] + \mathrm{Var}\big(\mathbb{E}[y_i \mid x^*, w]\big) \approx \frac{1}{P} \sum_{p=1}^{P} 2\big(\sigma_i^{(p)}\big)^2 + \frac{1}{P} \sum_{p=1}^{P} \big(\mu_i^{(p)} - \hat{\mu}_i\big)^2 = \big(\sigma_i^{(D)}\big)^2 + \big(\sigma_i^{(M)}\big)^2. \tag{7}$$
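
Equations (6) and (7) follow from the law of total variance applied to the ensemble of P independently trained networks: the first term (data uncertainty) averages each member's predicted Laplace variance, 2σ², and the second term (model uncertainty) is the spread of the member means. A minimal NumPy sketch, assuming the P mean and scale maps have been stacked into arrays:

```python
import numpy as np

def aggregate_ensemble(mus, sigmas):
    """Eqs. (6)-(7): predictive mean plus data/model uncertainty maps.
    mus, sigmas: arrays of shape (P, H, W) from P trained networks."""
    mu_hat = mus.mean(axis=0)                         # Eq. (6)
    var_data = np.mean(2.0 * sigmas**2, axis=0)       # E[Var]; Laplace variance is 2*sigma^2
    var_model = np.mean((mus - mu_hat)**2, axis=0)    # Var[E] across the ensemble
    return mu_hat, np.sqrt(var_data), np.sqrt(var_model)  # mu_hat, sigma_D, sigma_M
```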
$$f_i(y) \equiv p(y_i = y \mid x^*, \mathcal{X}, \mathcal{Y}) \approx \frac{1}{P} \sum_{p=1}^{P} \mathcal{L}\big(y;\, \mu_i^{(p)}, \sigma_i^{(p)}\big), \tag{8}$$

$$p_i^\epsilon \equiv g_i(\epsilon) = \int_{\hat{\mu}_i - \epsilon}^{\hat{\mu}_i + \epsilon} f_i(y)\, dy = \frac{1}{P} \sum_{p=1}^{P} \big[ F_p(\hat{\mu}_i + \epsilon) - F_p(\hat{\mu}_i - \epsilon) \big], \quad \text{for } y_i \in A_i^\epsilon, \tag{9}$$

$$\epsilon_i^p = g_i^{-1}(p), \tag{10}$$

where $\mathcal{L}(y; \mu, \sigma)$ denotes the Laplace density of Eq. (3), $F_p$ the CDF of the $p$-th Laplace component, and $A_i^\epsilon = [\hat{\mu}_i - \epsilon, \hat{\mu}_i + \epsilon]$ the credible interval.
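
Because the predictive density f_i in Eq. (8) is an equal-weight mixture of Laplace components, the credibility integral in Eq. (9) reduces to a difference of closed-form Laplace CDFs (and Eq. (10) can be evaluated by a scalar root search). A sketch of Eq. (9), with array shapes assumed as in the ensemble example above:

```python
import numpy as np

def laplace_cdf(y, mu, sigma):
    """Closed-form CDF of Laplace(mu, sigma); plays the role of F_p in Eq. (9)."""
    z = (y - mu) / sigma
    return np.where(z < 0, 0.5 * np.exp(z), 1.0 - 0.5 * np.exp(-z))

def credibility(mu_hat, mus, sigmas, eps):
    """Eq. (9): per-pixel probability mass of the predictive mixture
    inside the interval A_i^eps = [mu_hat - eps, mu_hat + eps].
    mu_hat: (H, W); mus, sigmas: (P, H, W)."""
    upper = laplace_cdf(mu_hat[None, ...] + eps, mus, sigmas)
    lower = laplace_cdf(mu_hat[None, ...] - eps, mus, sigmas)
    return np.mean(upper - lower, axis=0)
```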
$$\mathrm{Cred}(P_m, \epsilon) = \frac{1}{|S_m^\epsilon|} \sum_{i \in S_m^\epsilon} p_i, \qquad S_m^\epsilon = \{\, i : p_i \in (p_{m-1}, p_m] \,\}, \tag{11}$$

$$\mathrm{Acc}(P_m, \epsilon) = \frac{1}{|S_m^\epsilon|} \sum_{i \in S_m^\epsilon} \mathbb{I}\{\mu_i^* \in A_i^\epsilon\} = \frac{1}{|S_m^\epsilon|} \sum_{i \in S_m^\epsilon} \mathbb{I}\{\hat{\mu}_i - \epsilon \le \mu_i^* \le \hat{\mu}_i + \epsilon\}, \tag{12}$$

where $\mu_i^*$ denotes the ground-truth value at pixel $i$.
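
Equations (11) and (12) define the reliability diagram of Fig. 8(c): pixels are binned by their predicted credibility p_i, and each bin's average credibility is compared with the empirical fraction of pixels whose ground truth falls inside the credible interval; a well-calibrated model traces the diagonal. A NumPy sketch, assuming ground truth is available for the test region (the bin count is an illustrative choice):

```python
import numpy as np

def reliability_diagram(cred, mu_hat, phase_true, eps, n_bins=10):
    """Eqs. (11)-(12): average credibility vs. empirical accuracy per bin."""
    hit = np.abs(phase_true - mu_hat) <= eps         # indicator of Eq. (12)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    avg_cred, avg_acc = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (cred > lo) & (cred <= hi)            # the bin S_m^eps
        if mask.any():
            avg_cred.append(cred[mask].mean())
            avg_acc.append(hit[mask].mean())
    return np.array(avg_cred), np.array(avg_acc)
```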