Abstract

Maintaining an in-focus image over long time scales is an essential and nontrivial task for a variety of microscopy applications. Here, we describe a fast, robust autofocusing method compatible with a wide range of existing microscopes. It requires only the addition of one or a few off-axis illumination sources (e.g., LEDs), and can predict the focus correction from a single image with this illumination. We designed a neural network architecture, the fully connected Fourier neural network (FCFNN), that exploits an understanding of the physics of the illumination to make accurate predictions with 2–3 orders of magnitude fewer learned parameters and less memory usage than existing state-of-the-art architectures, allowing it to be trained without any specialized hardware. We provide an open-source implementation of our method, to enable fast, inexpensive autofocus compatible with a variety of microscopes.
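For concreteness, the following is a minimal sketch of an FCFNN-style regressor in PyTorch. The input-feature count and layer widths here are illustrative assumptions, not the values used in the paper; the input is a vector of central Fourier-magnitude pixels (see Fig. 1) and the output is a single defocus estimate.

    import torch
    import torch.nn as nn

    # Minimal FCFNN-style sketch (hypothetical sizes; the paper states only
    # that the network is fully connected and far smaller than standard CNNs).
    # Input: flattened central Fourier-magnitude pixels of one coherent image.
    # Output: a scalar defocus estimate (e.g., in micrometers).
    class FCFNN(nn.Module):
        def __init__(self, n_features=1024, hidden=100):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_features, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, x):
            return self.net(x).squeeze(-1)  # shape: (batch,)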

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement


Supplementary Material (2)

  • Code 1: Code
  • Supplement 1: Supplemental document



Figures (4)

Fig. 1. Training and defocus prediction. (a) Training data consist of two focal stacks for each part of the sample: one with incoherent (phase-contrast) illumination and one with off-axis coherent illumination. Left: the high-spatial-frequency part of each image's power spectrum in the incoherent stack is used to compute a ground-truth focal position. Right: for each coherent image in the stack, the central pixels of the magnitude of its Fourier transform are used as input to a neural network trained to predict defocus. The full set of training examples is generated by repeating this process for each coherent image in the stack. (b) After training, only a single coherent image needs to be collected per experiment; it is fed through the same pipeline to predict defocus, and the microscope's focus is then adjusted to correct it.
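To make the pipeline above concrete, here is a NumPy sketch of the two computations the caption describes; the crop size and frequency cutoff are illustrative assumptions rather than the paper's exact preprocessing.

    import numpy as np

    def fourier_features(img, crop=32):
        # Network input (Fig. 1a, right): central pixels of the magnitude of
        # the coherent image's Fourier transform. Crop size is assumed.
        mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
        cy, cx = mag.shape[0] // 2, mag.shape[1] // 2
        h = crop // 2
        return mag[cy - h:cy + h, cx - h:cx + h].ravel().astype(np.float32)

    def sharpness(img, cutoff_frac=0.25):
        # Ground-truth signal (Fig. 1a, left): power in the high-spatial-
        # frequency part of an incoherent image's spectrum. Cutoff is assumed.
        F = np.fft.fftshift(np.fft.fft2(img))
        ny, nx = img.shape
        fy, fx = np.meshgrid(np.arange(ny) - ny // 2,
                             np.arange(nx) - nx // 2, indexing="ij")
        high = np.hypot(fy, fx) > cutoff_frac * min(ny, nx) / 2
        return float(np.sum(np.abs(F[high]) ** 2))

The ground-truth focal position for a stack is then the plane that maximizes the sharpness score, e.g. max(range(len(stack)), key=lambda i: sharpness(stack[i])).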
Fig. 2. Performance versus amount of training data. Defocus prediction performance (measured by validation RMSE) improves with the number of focal stacks used for training.
Fig. 3. Generalization to new sample types. (a) Representative images of cell and tissue-section samples. (b) A network trained on focal stacks of cells predicts defocus well in other cell samples. (c) The same network, however, fails on tissue sections. (d) After limited additional training data from tissue-section samples are added, the network learns to predict defocus well in both sample types. A sketch of this retraining step follows the caption.
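The retraining in panel (d) amounts to continuing optimization on a dataset augmented with a small number of tissue-section stacks. A hedged sketch, assuming PyTorch datasets cell_ds and tissue_ds of (features, defocus) pairs and the hypothetical FCFNN above:

    import torch
    from torch.utils.data import ConcatDataset, DataLoader

    # Continue training on cell data plus a small amount of tissue data
    # (illustrative hyperparameters; not the paper's training recipe).
    def continue_training(model, cell_ds, tissue_ds, epochs=10, lr=1e-4):
        loader = DataLoader(ConcatDataset([cell_ds, tissue_ds]),
                            batch_size=32, shuffle=True)
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = torch.nn.MSELoss()
        for _ in range(epochs):
            for x, z in loader:  # x: Fourier features, z: true defocus
                opt.zero_grad()
                loss_fn(model(x), z).backward()
                opt.step()
        return model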
Fig. 4. Understanding how the network predicts defocus. (a) A network trained on the magnitude of the Fourier transform of the input image performs better than one trained on its phase (argument). (b) Left: a saliency map (the magnitude of the defocus prediction's gradient with respect to the Fourier-transform magnitude) shows that the edges of the object spectrum have the strongest influence on defocus predictions. Right: these edges correspond to light scattered at high angles, which may not be captured when the sample is out of focus, and which therefore produces significant changes in the input image with defocus.
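The saliency map in panel (b) is simply the gradient of the scalar prediction with respect to the network's Fourier-magnitude input; a minimal sketch, again assuming the hypothetical FCFNN above:

    import torch

    def saliency(model, features):
        # |d(defocus prediction) / d(Fourier-magnitude input)|; large values
        # mark the spatial frequencies that most influence the prediction.
        x = torch.as_tensor(features, dtype=torch.float32).unsqueeze(0)
        x.requires_grad_(True)
        model(x).sum().backward()
        return x.grad.abs().squeeze(0)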