Abstract

Intensity shot noise in digital holograms degrades the quality of the phase images obtained after phase retrieval, limiting the usefulness of quantitative phase microscopy (QPM) systems for long-term live-cell imaging. In this paper, we devise a hologram-to-hologram neural network, Holo-UNet, that restores high-quality digital holograms under high shot-noise conditions (sub-mW/cm2 intensities) at high acquisition rates (sub-millisecond exposures). In contrast to current phase recovery methods, Holo-UNet denoises the recorded hologram itself, preventing shot noise from propagating through the phase retrieval step and degrading the resulting phase and intensity images. Holo-UNet was tested on two independent QPM systems without any adjustment to the hardware settings. In both cases, Holo-UNet outperformed existing phase recovery and block-matching techniques by ∼1.8-fold in phase fidelity as measured by SSIM. Holo-UNet is immediately applicable to a wide range of other high-speed interferometric phase imaging techniques, and paves the way towards high-speed, low-light QPM biological imaging with minimal dependence on hardware constraints.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement


References

  1. V. Micó, J. Zheng, J. Garcia, Z. Zalevsky, and P. Gao, “Resolution enhancement in quantitative phase microscopy,” Adv. Opt. Photonics 11(1), 135–214 (2019).
  2. N. M. Editorial, “Phototoxicity revisited,” Nat. Methods 15(10), 751 (2018).
  3. F. Charrière, B. Rappaz, J. Kühn, T. Colomb, P. Marquet, and C. Depeursinge, “Influence of shot noise on phase measurement accuracy in digital holographic microscopy,” Opt. Express 15(14), 8818–8831 (2007).
  4. G. Choi, D. Ryu, Y. Jo, Y. S. Kim, W. Park, H.-S. Min, and Y. Park, “Cycle-consistent deep learning approach to coherent noise reduction in optical diffraction tomography,” Opt. Express 27(4), 4927–4943 (2019).
  5. C. A. Metzler, F. Heide, P. Rangarajan, M. M. Balaji, A. Viswanath, A. Veeraraghavan, and R. G. Baraniuk, “Deep-inverse correlography: towards real-time high-resolution non-line-of-sight imaging,” Optica 7(1), 63–71 (2020).
  6. Z. Ren, Z. Xu, and E. Y. Lam, “End-to-end deep learning framework for digital holographic reconstruction,” Adv. Photonics 1(1), 016004 (2019).
  7. Y. Rivenson, Y. Wu, and A. Ozcan, “Deep learning in holography and coherent imaging,” Light: Sci. Appl. 8(1), 85 (2019).
  8. G. Zhang, T. Guan, Z. Shen, X. Wang, T. Hu, D. Wang, Y. He, and N. Xie, “Fast phase retrieval in off-axis digital holographic microscopy through deep learning,” Opt. Express 26(15), 19388–19405 (2018).
  9. H. Wang, M. Lyu, and G. Situ, “eHoloNet: a learning-based end-to-end approach for in-line digital holographic reconstruction,” Opt. Express 26(18), 22603–22614 (2018).
  10. A. Goy, K. Arthur, S. Li, and G. Barbastathis, “Low photon count phase retrieval using deep learning,” Phys. Rev. Lett. 121(24), 243902 (2018).
  11. Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light: Sci. Appl. 7(2), 17141 (2018).
  12. S. Montrésor, M. Tahon, A. Laurent, and P. Picart, “Computational de-noising based on deep learning for phase data in digital holographic interferometry,” APL Photonics 5(3), 030802 (2020).
  13. X. He, C. V. Nguyen, M. Pratap, Y. Zheng, Y. Wang, D. R. Nisbet, R. J. Williams, M. Rug, A. G. Maier, and W. M. Lee, “Automated Fourier space region-recognition filtering for off-axis digital holographic microscopy,” Biomed. Opt. Express 7(8), 3111–3123 (2016).
  14. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (Springer, 2015), pp. 234–241.
  15. J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 3431–3440.
  16. G. Barbastathis, A. Ozcan, and G. Situ, “On the use of deep learning for computational imaging,” Optica 6(8), 921–943 (2019).
  17. M. Weigert, U. Schmidt, T. Boothe, A. Müller, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, S. Culley, M. Rocha-Martins, F. Segovia-Miranda, C. Norden, R. Henriques, M. Zerial, M. Solimena, J. Rink, P. Tomancak, L. Royer, F. Jug, and E. W. Myers, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018).
  18. A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems (2012), pp. 1097–1105.
  19. M. Weigert, U. Schmidt, T. Boothe, A. Müller, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, and S. Culley, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018).
  20. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Trans. on Image Process. 16(8), 2080–2095 (2007).
  21. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004).
  22. M. D. Zeiler, “Adadelta: an adaptive learning rate method,” arXiv preprint arXiv:1212.5701 (2012).
  23. M. Hÿtch, F. Houdellier, F. Hüe, and E. Snoeck, “Nanoscale holographic interferometry for strain measurements in electronic devices,” Nature 453(7198), 1086–1089 (2008).


Figures (6)

Fig. 1.
Fig. 1. Off-axis QPM and low-fidelity phase retrieval under low-light/high shot-noise conditions. (a) Schematic of the off-axis QPM setup used in the study. (b) (i) and (ii) Views of 6 µm microspheres under linear intensity fringes captured at 140 mW/cm2 and 0.3 mW/cm2, respectively. (iii) Cross-sectional comparison of the fringe visibility. (iv) and (v) Phase retrieved at the two power levels (P0 and P1). Red and blue boxes indicate the corresponding regions of the holograms in (b)(i) and (ii). Scale bars: 30 µm and 1 µm.
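For context, the sketch below (Python/NumPy) illustrates the Fourier-sideband filtering at the heart of off-axis phase retrieval, i.e. the kind of step that produces the phase maps in (b)(iv) and (v). It is a simplified stand-in for the automated region-recognition filtering of [13]; the sideband location and filter radius are assumed to be known for the setup.

```python
import numpy as np

def retrieve_phase_offaxis(hologram, sideband_center, radius):
    """Isolate one off-axis sideband in Fourier space, re-center it, and take
    the inverse transform; the angle is the (wrapped) phase and the magnitude
    the amplitude. `sideband_center` (row, col) and `radius` are assumed to be
    known for the given setup; real pipelines such as [13] locate them
    automatically."""
    rows, cols = hologram.shape
    spectrum = np.fft.fftshift(np.fft.fft2(hologram))
    yy, xx = np.ogrid[:rows, :cols]
    mask = (yy - sideband_center[0]) ** 2 + (xx - sideband_center[1]) ** 2 <= radius ** 2
    sideband = np.where(mask, spectrum, 0)
    # Shift the selected sideband to the spectrum center to remove the carrier fringes.
    sideband = np.roll(sideband,
                       shift=(rows // 2 - sideband_center[0],
                              cols // 2 - sideband_center[1]),
                       axis=(0, 1))
    field = np.fft.ifft2(np.fft.ifftshift(sideband))
    return np.angle(field), np.abs(field)
```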
Fig. 2.
Fig. 2. Holo-UNet, based on a modified U-Net, and its learning process. (a) Illustration of the hologram-to-phase process: a hologram is passed through Holo-UNet, yielding a ‘clean’ hologram, and a common phase retrieval process [13] is then used to retrieve the phase and intensity (scale bar 6 µm). (b) Modified U-Net architecture diagram. Data flows from left to right; numbers on the left vertical axis denote the kernel input size, and each blue box shows the resulting feature map, with the kernel size above the box. (c) Training process: low-power holograms (acquired through an ND filter) are passed through the model, the output is compared with the ground truth (hologram without any ND filter) using the NPCC loss, and this comparison is used to update the weights of the network in (b). Scale bar 15 µm.
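As a rough illustration of the hologram-to-hologram network sketched in (b), a compact U-Net-style model can be written in PyTorch as below. The depth, channel counts and up-sampling choices here are illustrative assumptions and do not reproduce the published Holo-UNet configuration.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, as in a standard U-Net stage.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class HoloUNetSketch(nn.Module):
    """Illustrative hologram-to-hologram U-Net; depth and channel counts are
    assumptions, not the published Holo-UNet configuration."""
    def __init__(self, base=32):
        super().__init__()
        self.enc1 = conv_block(1, base)
        self.enc2 = conv_block(base, base * 2)
        self.enc3 = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.out = nn.Conv2d(base, 1, 1)  # single-channel restored hologram

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.out(d1)
```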
Fig. 3.
Fig. 3. Restoration of holograms using Holo-UNet and quantitative comparison of phase and amplitude. (a)(i) A hologram imaged at 0.6 mW/µm2 with a 10 ms exposure, the hologram restored by Holo-UNet, and the corresponding ground truth. (a)(ii) Mean NPCC loss after each epoch of training in the microsphere experiment. (b) Hologram of 6 µm diameter microspheres imaged at 0.6 mW/µm2 with a 633 nm laser source, and performance of standard block-matching denoising (BM3D) and Holo-UNet against the ground truth for the (i) hologram, (ii) retrieved phase, and (iii) amplitude. (c) and (d) Quantitative comparison of the SSIM and PCC of the retrieved phase and intensity, respectively, averaged over the test dataset (60 images). (a) Scale bar 2 µm; (b) scale bar 10 µm; (c) error bars are standard deviations from the mean.
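A minimal sketch of how the SSIM and PCC figures of merit in (c) and (d) could be computed with scikit-image and NumPy is shown below; the SSIM window and data-range settings are assumptions rather than the authors' exact evaluation parameters.

```python
import numpy as np
from skimage.metrics import structural_similarity

def phase_fidelity(restored_phase, ground_truth_phase):
    """SSIM and Pearson correlation coefficient (PCC) between a retrieved
    phase map and the ground-truth phase. The data_range is assumed to be
    the dynamic range of the ground truth."""
    ssim = structural_similarity(
        ground_truth_phase, restored_phase,
        data_range=ground_truth_phase.max() - ground_truth_phase.min())
    pcc = np.corrcoef(ground_truth_phase.ravel(), restored_phase.ravel())[0, 1]
    return ssim, pcc
```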
Fig. 4.
Fig. 4. Restoration of holograms of fibroblast cells using Holo-UNet. (a)(i) Input hologram of a fibroblast sample imaged at 5 mW/µm2 with a 200 µs exposure, and the hologram after restoration by Holo-UNet. (a)(ii) Retrieved phase and intensity of the first ROI using BM3D, PRNN [11] and Holo-UNet, compared to the ground truth. (b)(i)-(iii) Quantitative comparison of the retrieved phase as measured by SSIM, MSE and PCC, respectively, averaged over the test dataset (500 images). (b)(iv) Mean NPCC loss after each epoch of training in the fibroblast experiment. (a) Scale bars 50 µm and 30 µm, inset scale bar 5 µm; (b) error bars are standard deviations from the mean.
Fig. 5.
Fig. 5. Comparison between training with the NPCC loss including the FFT term and the NPCC loss without the FFT term on the fibroblast cell dataset.
Fig. 6.
Fig. 6. Comparison of the phase retrieved from ground-truth and Holo-UNet-restored holograms on the fibroblast cell dataset. (a)-(c) An example fibroblast phase image, with the dashed line denoting the position of the cross-section. (d) Phase values along the dashed lines in (a)-(c).
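The cross-section comparison in (d) amounts to sampling each phase map along the same line; a minimal matplotlib sketch, with an assumed row index standing in for the dashed line, is shown below.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_cross_sections(phase_maps, labels, row):
    """Plot phase values along a horizontal cross-section (fixed row index)
    of each phase map, as in Fig. 6(d). `row` is an assumed line position."""
    for phase, label in zip(phase_maps, labels):
        plt.plot(np.asarray(phase)[row, :], label=label)
    plt.xlabel("Position along cross-section (pixels)")
    plt.ylabel("Phase (rad)")
    plt.legend()
    plt.show()
```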

Tables (1)

Table 1. Comparison of methods with respect to the ground truth for both the microsphere and fibroblast cell datasets.

Equations (2)


$$\mathrm{NPCC}(X,Y) = -\frac{\mathrm{cov}(X,Y)}{\sigma_X\,\sigma_Y}$$
$$\mathrm{Loss}(X,Y) = \frac{1}{2}\,\mathrm{NPCC}(X,Y) + \frac{1}{2}\,\mathrm{NPCC}\left[\,|\mathrm{FFT}(X)|,\ |\mathrm{FFT}(Y)|\,\right]$$
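The loss above can be prototyped in a few lines of PyTorch. This is a minimal sketch rather than the authors' training code; the small `eps` stabilizer and the per-image normalization are added assumptions.

```python
import torch

def npcc(x, y, eps=1e-8):
    # Negative Pearson correlation coefficient, computed per image over the
    # spatial dimensions (Eq. 1). eps avoids division by zero (assumption).
    x = x - x.mean(dim=(-2, -1), keepdim=True)
    y = y - y.mean(dim=(-2, -1), keepdim=True)
    num = (x * y).sum(dim=(-2, -1))
    den = torch.sqrt((x * x).sum(dim=(-2, -1)) * (y * y).sum(dim=(-2, -1))) + eps
    return -num / den

def holo_loss(pred, target):
    # Eq. (2): equal weighting of the spatial NPCC and the NPCC between the
    # FFT magnitudes of the restored and ground-truth holograms.
    f_pred = torch.abs(torch.fft.fft2(pred))
    f_target = torch.abs(torch.fft.fft2(target))
    return (0.5 * npcc(pred, target) + 0.5 * npcc(f_pred, f_target)).mean()
```

During training, an optimizer such as Adadelta [22] would then minimize this loss over batches of paired low-light and full-power holograms.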