Abstract

A whole slide imaging (WSI) system has recently been approved for primary diagnostic use in the US. The image quality and throughput of a WSI system are largely determined by its autofocusing process. Traditional approaches acquire multiple images along the optical axis and maximize a figure of merit for autofocusing. Here we explore the use of deep convolutional neural networks (CNNs) to predict the focal position of the acquired image without axial scanning. We investigate the autofocusing performance under three illumination settings: incoherent Köhler illumination, partially coherent illumination with two plane waves, and one-plane-wave illumination. We acquire ~130,000 images with different defocus distances as the training data set. Different defocus distances lead to different spatial features in the captured images. However, relying solely on this spatial information yields relatively poor autofocusing performance; it is better to extract defocus features from transform domains of the acquired image. For incoherent illumination, the Fourier cutoff frequency is directly related to the defocus distance. Similarly, for two-plane-wave illumination, the positions of the autocorrelation peaks are directly related to the defocus distance. In our implementation, we use the spatial image, the Fourier spectrum, the autocorrelation of the spatial image, and combinations thereof as inputs for the CNNs. We show that information from the transform domains improves the performance and robustness of the autofocusing process. The resulting focusing error is ~0.5 µm, which is within the 0.8-µm depth-of-field range. The reported approach requires little hardware modification for conventional WSI systems, and images can be captured on the fly without focus map surveying. It may find applications in WSI and time-lapse microscopy. The transform- and multi-domain approaches may also provide new insights for developing microscopy-related deep-learning networks. We have made our training and testing data set (~12 GB) open-source for the broad research community.
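As a rough illustration of the transform-domain inputs described above, the following sketch (assuming NumPy; the function and variable names are ours for illustration, not taken from the paper's code) computes a Fourier magnitude channel, a Fourier angle channel, and an autocorrelation channel from a single defocused grayscale image. The log scaling of the Fourier magnitude is an assumption for numerical stability, not a detail stated in the abstract.

```python
import numpy as np

def transform_domain_channels(img):
    """Compute Fourier magnitude, Fourier angle, and autocorrelation
    channels from a single defocused grayscale image (2D float array)."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    fourier_mag = np.log1p(np.abs(spectrum))   # cutoff frequency encodes defocus
    fourier_angle = np.angle(spectrum)
    # Autocorrelation via the Wiener-Khinchin theorem: the inverse FFT of
    # the power spectrum. Under dual-LED (two-plane-wave) illumination,
    # its side-peak positions shift with the defocus distance.
    power = np.abs(np.fft.fft2(img)) ** 2
    autocorr = np.fft.fftshift(np.real(np.fft.ifft2(power)))
    return fourier_mag, fourier_angle, autocorr
```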

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement


References


  1. S. Al-Janabi, A. Huisman, and P. J. Van Diest, "Digital pathology: current status and future perspectives," Histopathology 61(1), 1–9 (2012).
  2. E. Abels and L. Pantanowitz, "Current state of the regulatory trajectory for whole slide imaging devices in the USA," J. Pathol. Inform. 8(1), 23 (2017).
  3. J. Liao, Y. Jiang, Z. Bian, B. Mahrou, A. Nambiar, A. W. Magsam, K. Guo, S. Wang, Y. K. Cho, and G. Zheng, "Rapid focus map surveying for whole slide imaging with continuous sample motion," Opt. Lett. 42(17), 3379–3382 (2017).
  4. M. C. Montalto, R. R. McKay, and R. J. Filkins, "Autofocus methods of whole slide imaging systems and the introduction of a second-generation independent dual sensor scanning method," J. Pathol. Inform. 2(1), 44 (2011).
  5. J. Liao, L. Bian, Z. Bian, Z. Zhang, C. Patel, K. Hoshino, Y. C. Eldar, and G. Zheng, "Single-frame rapid autofocusing for brightfield and fluorescence whole slide imaging," Biomed. Opt. Express 7(11), 4763–4768 (2016).
  6. K. Guo, J. Liao, Z. Bian, X. Heng, and G. Zheng, "InstantScope: a low-cost whole slide imaging system with instant focal plane detection," Biomed. Opt. Express 6(9), 3210–3216 (2015).
  7. L. Yu, H. Chen, Q. Dou, J. Qin, and P.-A. Heng, "Automated melanoma recognition in dermoscopy images via very deep residual networks," IEEE Trans. Med. Imaging 36(4), 994–1004 (2017).
  8. K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 770–778.
  9. J. Kim, J. Kwon Lee, and K. Mu Lee, "Accurate image super-resolution using very deep convolutional networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 1646–1654.
  10. C. Feichtenhofer, A. Pinz, and R. Wildes, "Spatiotemporal residual networks for video action recognition," in Advances in Neural Information Processing Systems (2016), pp. 3468–3476.
  11. P. Langehanenberg, G. von Bally, and B. Kemper, "Autofocusing in digital holographic microscopy," 3D Res. 2(1), 4 (2011).
  12. P. Gao, B. Yao, J. Min, R. Guo, B. Ma, J. Zheng, M. Lei, S. Yan, D. Dan, and T. Ye, "Autofocusing of digital holographic microscopy based on off-axis illuminations," Opt. Lett. 37(17), 3630–3632 (2012).
  13. Y. Sun, S. Duthaler, and B. J. Nelson, "Autofocusing in computer microscopy: selecting the optimal focus algorithm," Microsc. Res. Tech. 65(3), 139–149 (2004).
  14. S. Yazdanfar, K. B. Kenny, K. Tasimi, A. D. Corwin, E. L. Dixon, and R. J. Filkins, "Simple and robust image-based autofocusing for digital microscopy," Opt. Express 16(12), 8670–8677 (2008).
  15. B. Zoph and Q. V. Le, "Neural architecture search with reinforcement learning," arXiv:1611.01578 (2016).
  16. J. M. Castillo-Secilla, M. Saval-Calvo, L. Medina-Valdès, S. Cuenca-Asensi, A. Martínez-Álvarez, C. Sánchez, and G. Cristóbal, "Autofocus method for automated microscopy using embedded GPUs," Biomed. Opt. Express 8(3), 1731–1740 (2017).
  17. Domain Data Part 1 & 2, and Channel Data for "Multi-domain deep learning for single-frame rapid autofocusing in whole slide imaging," figshare (2018) [retrieved 8 March 2018], https://figshare.com/s/6650f84490a512404177.


Supplementary Material (1)

Dataset 1: Training and testing data set (~12 GB) for the reported autofocusing networks.



Figures (9)

Fig. 1 The architecture of the deep residual network employed in this work. The input to the network is the captured image with an unknown defocus distance. The output of the network is the predicted defocus distance.
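A minimal sketch of such a residual regression network is shown below (assuming PyTorch; the layer counts, widths, and training details here are illustrative assumptions, not the exact architecture described in the full article):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with an identity skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.conv2(self.relu(self.conv1(x)))
        return self.relu(out + x)

class DefocusNet(nn.Module):
    """Maps a (multi-channel) input image to a scalar defocus distance."""
    def __init__(self, in_channels=3, width=64, n_blocks=4):
        super().__init__()
        self.stem = nn.Conv2d(in_channels, width, 3, padding=1)
        self.blocks = nn.Sequential(*[ResidualBlock(width) for _ in range(n_blocks)])
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(width, 1)
        )

    def forward(self, x):
        # Output: predicted defocus distance (e.g., in micrometers)
        return self.head(self.blocks(self.stem(x)))
```

For example, `DefocusNet()(torch.randn(8, 3, 224, 224))` returns a tensor of shape (8, 1), one predicted defocus distance per image in the batch.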
Fig. 2 The three z-stacks for the three illumination conditions.
Fig. 3 Comparison between spatial-domain-only input ((a)-(c)), transform-domain-only input ((d)-(e)), and multi-domain input ((f)-(g)) for the networks. (a) The red, green, and blue spatial inputs for the incoherent illumination condition. (b) The single green-channel input for the dual-LED illumination condition. (c) The single green-channel input for the single-LED illumination condition. (d) The Fourier-domain-only input for the incoherent illumination condition, with a Fourier magnitude channel (d1) and a Fourier angle channel (d2). (e) The autocorrelation-only input for the dual-LED illumination condition. (f) The two-domain input for the incoherent illumination condition, with a spatial intensity channel (f1), a Fourier magnitude channel (f2), and a Fourier angle channel (f3). (g) The three-domain input for the dual-LED illumination condition, with a spatial intensity channel (g1), a Fourier magnitude channel (g2), and an autocorrelation channel (g3). All data can be downloaded from Dataset 1 [17].
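Reusing the hypothetical transform_domain_channels helper sketched after the abstract, the three-domain input (g1)-(g3) for the dual-LED network might be assembled as follows (a sketch under our own normalization assumptions, not the paper's code):

```python
import numpy as np

def three_domain_input(img):
    """Stack spatial intensity, Fourier magnitude, and autocorrelation
    channels, each normalized to [0, 1], along the channel axis."""
    fourier_mag, _, autocorr = transform_domain_channels(img)
    channels = [img, fourier_mag, autocorr]
    norm = [(c - c.min()) / (c.ptp() + 1e-12) for c in channels]
    return np.stack(norm, axis=0)  # shape: (3, H, W)
```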
Fig. 4 The autofocusing performance for three networks with spatial-domain-only inputs. (a) Test on different slides from the same set of samples (these slides were not used in the training process). (b) Test on slides prepared by a different clinical lab.
Fig. 5 The autofocusing performance for two networks with transform-domain-only inputs. (a) Test on different slides from the same set of samples (these slides were not used in the training process). (b) Test on slides prepared by a different clinical lab.
Fig. 6 The autofocusing performance for two networks with multi-domain inputs. (a) Test on different slides from the same set of samples (these slides were not used in the training process). (b) Test on slides prepared by a different clinical lab.
Fig. 7 Comparison between the spatial-domain-only incoherent network and the two-domain incoherent network. (a) Spatial features at different defocus distances. (b) Fourier-spectrum features at different defocus distances. (c) The predictions of the two networks.
Fig. 8 Comparison between the spatial-domain-only dual-LED network and the three-domain dual-LED network. Spatial, Fourier, and autocorrelation features at (a) z = 6.6 µm and (b) z = 9.6 µm. (c) The predictions of the two networks.
Fig. 9 Test of the two-domain incoherent network for whole slide imaging. The captured whole-slide images of a type 1 sample (a) and a type 2 sample (b). (c1) The focus error map for (a). (c2) The focus error map for (b).
