Abstract

Holography encodes the three-dimensional (3D) information of a sample in the form of an intensity-only recording. However, to decode the original sample image from its hologram(s), autofocusing and phase recovery are needed, which are in general cumbersome and time-consuming to perform digitally. Here we demonstrate a convolutional neural network (CNN)-based approach that simultaneously performs autofocusing and phase recovery, significantly extending the depth of field (DOF) and increasing the reconstruction speed in holographic imaging. For this, a CNN is trained by using pairs of randomly defocused back-propagated holograms and their corresponding in-focus phase-recovered images. After this training phase, the CNN takes a single back-propagated hologram of a 3D sample as input to rapidly achieve phase recovery and reconstruct an in-focus image of the sample over a significantly extended DOF. This deep-learning-based DOF extension method is non-iterative and reduces the time complexity of holographic image reconstruction from O(nm) to O(1), where n refers to the number of individual object points or particles within the sample volume, and m represents the focusing search space within which each object point or particle needs to be individually focused. These results highlight some of the unique opportunities created by data-enabled statistical image reconstruction methods powered by machine learning, and we believe that the presented approach can be broadly applied to computationally extend the DOF of other imaging modalities.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
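
As described in the abstract, the network is trained on pairs of randomly defocused back-propagated holograms and their in-focus, phase-recovered targets. The following is a minimal sketch of how such input/target pairs could be generated with free-space angular spectrum propagation (ASP, as referenced in Fig. 1); the function names, the two-channel (real/imaginary) input format, and the sample-to-sensor distance parameter z2 are illustrative assumptions rather than the authors' exact pipeline. The ±0.1 mm defocus range follows the training range quoted in Fig. 2.

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, pixel_size):
    """Free-space angular spectrum propagation (ASP) of a complex field by distance dz."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = (2.0 * np.pi / wavelength) * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * dz) * (arg > 0)          # evanescent components suppressed
    return np.fft.ifft2(np.fft.fft2(field) * H)

def make_training_pair(hologram, target, z2, wavelength, pixel_size, defocus_range=100e-6):
    """Network input: the hologram back-propagated to a randomly defocused plane,
    split into real/imaginary channels. Target: the in-focus, phase-recovered image
    (obtained separately, e.g., by multi-height phase recovery)."""
    dz = np.random.uniform(-defocus_range, defocus_range)
    field = angular_spectrum_propagate(np.sqrt(hologram).astype(np.complex128),
                                       -(z2 + dz), wavelength, pixel_size)
    network_input = np.stack([field.real, field.imag], axis=-1)
    return network_input, target
```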



Supplementary Material (5)

» Supplement 1: Supplemental document
» Visualization 1: Comparison of particle images as a function of the axial defocus distance
» Visualization 2: Comparison of particle images as a function of the axial defocus distance, also containing the range that the HIDEF network was not trained for
» Visualization 3: Comparison of breast tissue images as a function of the axial defocus distance
» Visualization 4: Comparison of breast tissue images as a function of the axial defocus distance, also containing the range that the HIDEF network was not trained for



Figures (4)

Fig. 1. HIDEF CNN, after its training, simultaneously achieves phase recovery and autofocusing, significantly extending the DOF of holographic image reconstruction. The network has a down-sampling decomposition path (green arrows) and a symmetric up-sampling expansion path (red arrows). The blue arrows mark the paths that skip through the convolutional layers (defining the residual connections). The numbers in italics represent the number of the input and output channels in these blocks at different levels. The orange arrows represent the connections between the down-sampling and up-sampling paths, where the channels of the output from the down-sampling block are concatenated with the output from the corresponding up-sampling block, doubling the channel numbers (see Supplement 1 for further details). ASP, angular spectrum propagation.
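
Purely for illustration, below is a minimal Keras/TensorFlow sketch of a U-Net-style network with residual convolutional blocks, a down-sampling path, a symmetric up-sampling path, and channel concatenation between matching levels, mirroring the structure described in the caption above. The layer counts, channel numbers, pooling choices, and names are assumptions for the sketch, not the published HIDEF configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, channels):
    """Two 3x3 convolutions with a skip connection (blue arrows in Fig. 1)."""
    shortcut = layers.Conv2D(channels, 1, padding="same")(x)
    y = layers.Conv2D(channels, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(channels, 3, padding="same")(y)
    return layers.Activation("relu")(layers.Add()([shortcut, y]))

def build_unet(input_shape=(512, 512, 2), base_channels=32, depth=4):
    """U-Net-style layout: down-sampling path, symmetric up-sampling path,
    and concatenation between matching levels (orange arrows in Fig. 1)."""
    inputs = layers.Input(shape=input_shape)
    x, skips = inputs, []
    for level in range(depth):                        # down-sampling path (green arrows)
        x = residual_block(x, base_channels * 2 ** level)
        skips.append(x)
        x = layers.AveragePooling2D(2)(x)
    x = residual_block(x, base_channels * 2 ** depth)
    for level in reversed(range(depth)):              # up-sampling path (red arrows)
        x = layers.Conv2DTranspose(base_channels * 2 ** level, 2,
                                   strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skips[level]])   # doubles the channel count
        x = residual_block(x, base_channels * 2 ** level)
    outputs = layers.Conv2D(2, 1, padding="same")(x)  # real/imaginary parts of the in-focus field
    return tf.keras.Model(inputs, outputs)
```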
Fig. 2. Extended-DOF reconstruction of aerosols at different depths using HIDEF. (a) After its training, the HIDEF CNN brings all the particles within the FOV into focus while also performing phase recovery. Each particle’s depth is color-coded with respect to the back-propagation distance (1 mm), as shown with the color bar on the right. (b) As a comparison, MH-PR images of the same FOV show that some of the particles come into focus at different depths and become invisible or distorted at other depths. For each particle’s arrow, the same color-coding is used as in (a). (c) The enhanced DOF of HIDEF is illustrated by tracking a particle’s amplitude full width at half-maximum (FWHM) as a function of the axial defocus distance (see Supplement 1 for details). HIDEF preserves the particle’s FWHM diameter and its correct image across a large DOF of >0.2 mm, which is expected since it was trained for this range of defocus (±0.1 mm). On the other hand, MH-PR results show a much more limited DOF, as also confirmed with the same particle’s amplitude images at different defocus distances, reported at the bottom. Also see Visualization 1 and Visualization 2 for a detailed comparison.
Fig. 3. Comparison of HIDEF results against free-space back-propagation (CNN Input) and MH-PR (MH Phase Recovered) results, as a function of axial defocus distance (dz). The test sample is a thin section of a human breast tissue sample. The first two columns use a single intensity hologram, whereas the third column (MH-PR) uses eight in-line holograms of the same sample, acquired at different heights. These results clearly demonstrate that the HIDEF network simultaneously performs phase recovery and autofocusing over the axial defocus range that it was trained for (i.e., |dz| ≤ 100 μm in this case). Outside this training range (marked with red dz values), the network output is not reliable. See Visualization 3 and Visualization 4 for a detailed comparison. Scale bar: 20 µm.
Fig. 4. SSIM values as a function of the axial defocus distance. Each one of these SSIM curves is averaged over 180 test FOVs (512 × 512 pixels) corresponding to thin sections of a human breast tissue sample. The results confirm the extended DOF of the HIDEF network output images, up to the axial defocus range that it was trained for.

Equations (1)


$$\mathrm{SSIM}(U_1,U_2)=\frac{(2\mu_1\mu_2+C_1)(2\sigma_{1,2}+C_2)}{(\mu_1^2+\mu_2^2+C_1)(\sigma_1^2+\sigma_2^2+C_2)},$$
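
For reference, a direct NumPy implementation of the SSIM expression above (Wang et al., 2004) is sketched below, evaluated over a single global window with the commonly used constants C1 = (0.01·L)² and C2 = (0.03·L)², where L is the data range. Practical SSIM evaluations (presumably including the curves of Fig. 4) apply the formula in local sliding windows and average the resulting map; this global version is only a sketch.

```python
import numpy as np

def ssim_global(u1, u2, data_range=1.0, k1=0.01, k2=0.03):
    """Single-window SSIM following the equation above."""
    u1, u2 = np.asarray(u1, float), np.asarray(u2, float)
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mu1, mu2 = u1.mean(), u2.mean()
    var1, var2 = u1.var(), u2.var()
    cov12 = ((u1 - mu1) * (u2 - mu2)).mean()          # cross-covariance sigma_{1,2}
    return ((2 * mu1 * mu2 + c1) * (2 * cov12 + c2)) / \
           ((mu1 ** 2 + mu2 ** 2 + c1) * (var1 + var2 + c2))
```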
