Abstract

We present a system for the automatic determination of the intradermal volume of hydrogels based on optical coherence tomography (OCT) and deep learning. Volumetric image data were acquired with a custom-built OCT prototype that employs an akinetic swept laser at ~1310 nm with a bandwidth of 87 nm, providing an axial resolution of ~6.5 μm in tissue. Three-dimensional data sets covering a 10×10 mm skin patch, comprising the intradermal filler and the surrounding tissue, were acquired. A convolutional neural network with a U-Net-like architecture was trained on slices from 100 OCT volume data sets in which the dermal filler volume had been manually annotated. Using six-fold cross-validation, a mean accuracy of 0.9938 and a Jaccard similarity coefficient of 0.879 were achieved.
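The two reported metrics, and the conversion from a segmentation mask to a physical volume, can be illustrated with a minimal sketch. This is not the authors' code; the function names, the toy masks, and the voxel size are assumptions introduced purely for illustration.

```python
import numpy as np

def jaccard(pred: np.ndarray, truth: np.ndarray) -> float:
    """Jaccard similarity (intersection over union) of two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(pred, truth).sum() / union)

def accuracy(pred: np.ndarray, truth: np.ndarray) -> float:
    """Fraction of voxels classified correctly (filler vs. background)."""
    return float(np.mean(pred.astype(bool) == truth.astype(bool)))

def mask_volume(mask: np.ndarray, voxel_volume_mm3: float) -> float:
    """Segmented volume: number of foreground voxels times voxel size."""
    return float(mask.astype(bool).sum() * voxel_volume_mm3)

# Toy 2x2x2 example (hypothetical values, not data from the study)
truth = np.array([[[1, 1], [0, 0]], [[1, 0], [0, 0]]])
pred  = np.array([[[1, 1], [1, 0]], [[0, 0], [0, 0]]])

print(jaccard(pred, truth))       # intersection 2 / union 4 = 0.5
print(accuracy(pred, truth))      # 6 of 8 voxels agree = 0.75
print(mask_volume(truth, 0.001))  # 3 voxels x 0.001 mm^3 = 0.003 mm^3
```

In the study's setting, the mask would be the network's per-voxel filler prediction over the 10×10 mm volume, and the voxel size would follow from the lateral sampling and the ~6.5 μm axial resolution of the OCT system.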

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement



[Crossref]

Monfort, M.

M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao, and K. Zieba, “End to End Learning for Self-Driving Cars,” arXiv:1604.07316 [cs] (2016).

Muller, U.

M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao, and K. Zieba, “End to End Learning for Self-Driving Cars,” arXiv:1604.07316 [cs] (2016).

Nguyen, P.

G. Hinton, L. Deng, D. Yu, G. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, B. Kingsbury, and T. Sainath, “Deep Neural Networks for Acoustic Modeling in Speech Recognition,” IEEE Signal Process. Mag. 29, 82–97 (2012).
[Crossref]

Pacheco, K. D.

P. M. Burlina, N. Joshi, M. Pekala, K. D. Pacheco, D. E. Freund, and N. M. Bressler, “Automated Grading of Age-Related Macular Degeneration From Color Fundus Images Using Deep Convolutional Neural Networks,” JAMA Ophthalmol. 135, 1170 (2017).
[Crossref] [PubMed]

Pekala, M.

P. M. Burlina, N. Joshi, M. Pekala, K. D. Pacheco, D. E. Freund, and N. M. Bressler, “Automated Grading of Age-Related Macular Degeneration From Color Fundus Images Using Deep Convolutional Neural Networks,” JAMA Ophthalmol. 135, 1170 (2017).
[Crossref] [PubMed]

Peng, L.

R. Poplin, A. V. Varadarajan, K. Blumer, Y. Liu, M. V. McConnell, G. S. Corrado, L. Peng, and D. R. Webster, “Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning,” Nat. Biomed. Eng. 2, 158–164 (2018).
[Crossref]

Philip, A.-M.

T. Schlegl, S. M. Waldstein, H. Bogunovic, F. Endstraßer, A. Sadeghipour, A.-M. Philip, D. Podkowinski, B. S. Gerendas, G. Langs, and U. Schmidt-Erfurth, “Fully Automated Detection and Quantification of Macular Fluid in OCT Using Deep Learning,” Ophthalmology 125, 549–558 (2018).
[Crossref]

Podkowinski, D.

T. Schlegl, S. M. Waldstein, H. Bogunovic, F. Endstraßer, A. Sadeghipour, A.-M. Philip, D. Podkowinski, B. S. Gerendas, G. Langs, and U. Schmidt-Erfurth, “Fully Automated Detection and Quantification of Macular Fluid in OCT Using Deep Learning,” Ophthalmology 125, 549–558 (2018).
[Crossref]

Poplin, R.

R. Poplin, A. V. Varadarajan, K. Blumer, Y. Liu, M. V. McConnell, G. S. Corrado, L. Peng, and D. R. Webster, “Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning,” Nat. Biomed. Eng. 2, 158–164 (2018).
[Crossref]

Popov, S. V.

A. Aneesh, B. Považay, B. Hofer, S. V. Popov, C. Glittenberg, S. Binder, and W. Drexler, “Multispectral in vivo three-dimensional optical coherence tomography of human skin,” J. Biomed. Opt. 15, 026025 (2010).
[Crossref]

Považay, B.

A. Aneesh, B. Považay, B. Hofer, S. V. Popov, C. Glittenberg, S. Binder, and W. Drexler, “Multispectral in vivo three-dimensional optical coherence tomography of human skin,” J. Biomed. Opt. 15, 026025 (2010).
[Crossref]

Psarakis, E.

G. Evangelidis and E. Psarakis, “Parametric Image Alignment Using Enhanced Correlation Coefficient Maximization,” IEEE Transactions on Pattern Analysis Mach. Intell. 30, 1858–1865 (2008).
[Crossref]

Puliafito, C. A.

E. A. Swanson, J. A. Izatt, M. R. Hee, D. Huang, C. P. Lin, J. S. Schuman, C. A. Puliafito, and J. G. Fujimoto, “In vivo retinal imaging by optical coherence tomography,” Opt. Lett. 18, 1864–1866 (1993).
[Crossref] [PubMed]

D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and A. Et, “Optical coherence tomography,” Science 254, 1178–1181 (1991).
[Crossref] [PubMed]

Ré, C.

K.-H. Yu, C. Zhang, G. J. Berry, R. B. Altman, C. Ré, D. L. Rubin, and M. Snyder, “Predicting non-small cell lung cancer prognosis by fully automated microscopic pathology image features,” Nat. Commun. 7, 12474 (2016).
[Crossref] [PubMed]

Ren, S.

K. He, X. Zhang, S. Ren, and J. Sun, “Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification,” arXiv:1502.01852 [cs] (2015).

Rokem, A.

Ronneberger, O.

O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, N. Navab, J. Hornegger, W. M. Wells, and A. F. Frangi, eds. (Springer International Publishing, 2015), pp. 234–241.
[Crossref]

Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, “3D U-Net: Learning dense volumetric segmentation from sparse annotation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, (Springer, 2016), pp. 424–432.

Rosenblatt, F.

F. Rosenblatt, “The perceptron, a perceiving and recognizing automaton,” Tech. Report 85-460-1, Cornell Aeronautical Laboratory, Inc. (1957).

Rubin, D. L.

K.-H. Yu, C. Zhang, G. J. Berry, R. B. Altman, C. Ré, D. L. Rubin, and M. Snyder, “Predicting non-small cell lung cancer prognosis by fully automated microscopic pathology image features,” Nat. Commun. 7, 12474 (2016).
[Crossref] [PubMed]

Sadeghipour, A.

T. Schlegl, S. M. Waldstein, H. Bogunovic, F. Endstraßer, A. Sadeghipour, A.-M. Philip, D. Podkowinski, B. S. Gerendas, G. Langs, and U. Schmidt-Erfurth, “Fully Automated Detection and Quantification of Macular Fluid in OCT Using Deep Learning,” Ophthalmology 125, 549–558 (2018).
[Crossref]

Sainath, T.

G. Hinton, L. Deng, D. Yu, G. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, B. Kingsbury, and T. Sainath, “Deep Neural Networks for Acoustic Modeling in Speech Recognition,” IEEE Signal Process. Mag. 29, 82–97 (2012).
[Crossref]

Sattmann, H.

Schlegl, T.

T. Schlegl, S. M. Waldstein, H. Bogunovic, F. Endstraßer, A. Sadeghipour, A.-M. Philip, D. Podkowinski, B. S. Gerendas, G. Langs, and U. Schmidt-Erfurth, “Fully Automated Detection and Quantification of Macular Fluid in OCT Using Deep Learning,” Ophthalmology 125, 549–558 (2018).
[Crossref]

Schmidt-Erfurth, U.

T. Schlegl, S. M. Waldstein, H. Bogunovic, F. Endstraßer, A. Sadeghipour, A.-M. Philip, D. Podkowinski, B. S. Gerendas, G. Langs, and U. Schmidt-Erfurth, “Fully Automated Detection and Quantification of Macular Fluid in OCT Using Deep Learning,” Ophthalmology 125, 549–558 (2018).
[Crossref]

Schrittwieser, J.

D. Silver, T. Hubert, J. Schrittwieser, I. Antonoglou, M. Lai, A. Guez, M. Lanctot, L. Sifre, D. Kumaran, T. Graepel, T. Lillicrap, K. Simonyan, and D. Hassabis, “Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm,” arXiv:1712.01815 [cs] (2017).

Schulman, J. M.

J. L. Herrmann, R. K. Hoffmann, C. E. Ward, J. M. Schulman, and R. C. Grekin, “Biochemistry, Physiology, and Tissue Interactions of Contemporary Biodegradable Injectable Dermal Fillers:,” Dermatol. Surg. p. 1 (2018).

Schuman, J. S.

E. A. Swanson, J. A. Izatt, M. R. Hee, D. Huang, C. P. Lin, J. S. Schuman, C. A. Puliafito, and J. G. Fujimoto, “In vivo retinal imaging by optical coherence tomography,” Opt. Lett. 18, 1864–1866 (1993).
[Crossref] [PubMed]

D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and A. Et, “Optical coherence tomography,” Science 254, 1178–1181 (1991).
[Crossref] [PubMed]

Senior, A.

G. Hinton, L. Deng, D. Yu, G. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, B. Kingsbury, and T. Sainath, “Deep Neural Networks for Acoustic Modeling in Speech Recognition,” IEEE Signal Process. Mag. 29, 82–97 (2012).
[Crossref]

Shah, A.

Shillingford, B.

Y. M. Assael, B. Shillingford, S. Whiteson, and N. de Freitas, “LipNet: End-to-End Sentence-level Lipreading,” arXiv:1611.01599 [cs] (2016).

Sifre, L.

D. Silver, T. Hubert, J. Schrittwieser, I. Antonoglou, M. Lai, A. Guez, M. Lanctot, L. Sifre, D. Kumaran, T. Graepel, T. Lillicrap, K. Simonyan, and D. Hassabis, “Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm,” arXiv:1712.01815 [cs] (2017).

Silver, D.

D. Silver, T. Hubert, J. Schrittwieser, I. Antonoglou, M. Lai, A. Guez, M. Lanctot, L. Sifre, D. Kumaran, T. Graepel, T. Lillicrap, K. Simonyan, and D. Hassabis, “Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm,” arXiv:1712.01815 [cs] (2017).

Simonyan, K.

D. Silver, T. Hubert, J. Schrittwieser, I. Antonoglou, M. Lai, A. Guez, M. Lanctot, L. Sifre, D. Kumaran, T. Graepel, T. Lillicrap, K. Simonyan, and D. Hassabis, “Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm,” arXiv:1712.01815 [cs] (2017).

Snyder, M.

K.-H. Yu, C. Zhang, G. J. Berry, R. B. Altman, C. Ré, D. L. Rubin, and M. Snyder, “Predicting non-small cell lung cancer prognosis by fully automated microscopic pathology image features,” Nat. Commun. 7, 12474 (2016).
[Crossref] [PubMed]

Stinson, W. G.

D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and A. Et, “Optical coherence tomography,” Science 254, 1178–1181 (1991).
[Crossref] [PubMed]

Stocker, C.

M. D. Bloice, C. Stocker, and A. Holzinger, “Augmentor: An Image Augmentation Library for Machine Learning,” arXiv:1708.04680 [cs, stat] (2017).

Su, L.

Sun, J.

K. He, X. Zhang, S. Ren, and J. Sun, “Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification,” arXiv:1502.01852 [cs] (2015).

Sutskever, I.

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Proc. Advances in Neural Information Processing Systems, vol.  25 (2012), pp. 1090–1098.

Swanson, E. A.

E. A. Swanson, J. A. Izatt, M. R. Hee, D. Huang, C. P. Lin, J. S. Schuman, C. A. Puliafito, and J. G. Fujimoto, “In vivo retinal imaging by optical coherence tomography,” Opt. Lett. 18, 1864–1866 (1993).
[Crossref] [PubMed]

D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and A. Et, “Optical coherence tomography,” Science 254, 1178–1181 (1991).
[Crossref] [PubMed]

Szegedy, C.

S. Ioffe and C. Szegedy, “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift,” arXiv:1502.03167 [cs] (2015).

Tan, L. K.

Y. L. Yong, L. K. Tan, R. A. McLaughlin, K. H. Chee, and Y. M. Liew, “Linear-regression convolutional neural network for fully automated coronary lumen segmentation in intravascular optical coherence tomography,” J. Biomed. Opt. 22, 126005 (2017).
[Crossref]

Tyring, A. J.

Vanhoucke, V.

G. Hinton, L. Deng, D. Yu, G. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, B. Kingsbury, and T. Sainath, “Deep Neural Networks for Acoustic Modeling in Speech Recognition,” IEEE Signal Process. Mag. 29, 82–97 (2012).
[Crossref]

Varadarajan, A. V.

R. Poplin, A. V. Varadarajan, K. Blumer, Y. Liu, M. V. McConnell, G. S. Corrado, L. Peng, and D. R. Webster, “Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning,” Nat. Biomed. Eng. 2, 158–164 (2018).
[Crossref]

Waldstein, S. M.

T. Schlegl, S. M. Waldstein, H. Bogunovic, F. Endstraßer, A. Sadeghipour, A.-M. Philip, D. Podkowinski, B. S. Gerendas, G. Langs, and U. Schmidt-Erfurth, “Fully Automated Detection and Quantification of Macular Fluid in OCT Using Deep Learning,” Ophthalmology 125, 549–558 (2018).
[Crossref]

Wang, C.

Wang, X.

Ward, C. E.

J. L. Herrmann, R. K. Hoffmann, C. E. Ward, J. M. Schulman, and R. C. Grekin, “Biochemistry, Physiology, and Tissue Interactions of Contemporary Biodegradable Injectable Dermal Fillers:,” Dermatol. Surg. p. 1 (2018).

Webster, D. R.

R. Poplin, A. V. Varadarajan, K. Blumer, Y. Liu, M. V. McConnell, G. S. Corrado, L. Peng, and D. R. Webster, “Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning,” Nat. Biomed. Eng. 2, 158–164 (2018).
[Crossref]

Whiteson, S.

Y. M. Assael, B. Shillingford, S. Whiteson, and N. de Freitas, “LipNet: End-to-End Sentence-level Lipreading,” arXiv:1611.01599 [cs] (2016).

Wu, X.

Wu, Y.

Xu, X.

Xu, Y.

Yan, K.

Yong, Y. L.

Y. L. Yong, L. K. Tan, R. A. McLaughlin, K. H. Chee, and Y. M. Liew, “Linear-regression convolutional neural network for fully automated coronary lumen segmentation in intravascular optical coherence tomography,” J. Biomed. Opt. 22, 126005 (2017).
[Crossref]

Yu, D.

G. Hinton, L. Deng, D. Yu, G. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, B. Kingsbury, and T. Sainath, “Deep Neural Networks for Acoustic Modeling in Speech Recognition,” IEEE Signal Process. Mag. 29, 82–97 (2012).
[Crossref]

Yu, K.-H.

K.-H. Yu, C. Zhang, G. J. Berry, R. B. Altman, C. Ré, D. L. Rubin, and M. Snyder, “Predicting non-small cell lung cancer prognosis by fully automated microscopic pathology image features,” Nat. Commun. 7, 12474 (2016).
[Crossref] [PubMed]

Yu, S.

Zhang, C.

K.-H. Yu, C. Zhang, G. J. Berry, R. B. Altman, C. Ré, D. L. Rubin, and M. Snyder, “Predicting non-small cell lung cancer prognosis by fully automated microscopic pathology image features,” Nat. Commun. 7, 12474 (2016).
[Crossref] [PubMed]

Zhang, J.

M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao, and K. Zieba, “End to End Learning for Self-Driving Cars,” arXiv:1604.07316 [cs] (2016).

Zhang, X.

M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao, and K. Zieba, “End to End Learning for Self-Driving Cars,” arXiv:1604.07316 [cs] (2016).

K. He, X. Zhang, S. Ren, and J. Sun, “Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification,” arXiv:1502.01852 [cs] (2015).

Zhao, J.

M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao, and K. Zieba, “End to End Learning for Self-Driving Cars,” arXiv:1604.07316 [cs] (2016).

Zhou, L.

Zieba, K.

M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao, and K. Zieba, “End to End Learning for Self-Driving Cars,” arXiv:1604.07316 [cs] (2016).

Am. J. Ophthalmol. (1)

A. F. Fercher, C. K. Hitzenberger, W. Drexler, G. Kamp, and H. Sattmann, “In Vivo Optical Coherence Tomography,” Am. J. Ophthalmol. 116, 113–114 (1993).
[Crossref] [PubMed]

arXiv:1502.01852 [cs] (1)

K. He, X. Zhang, S. Ren, and J. Sun, “Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification,” arXiv:1502.01852 [cs] (2015).

arXiv:1502.03167 [cs] (1)

S. Ioffe and C. Szegedy, “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift,” arXiv:1502.03167 [cs] (2015).

arXiv:1604.07316 [cs] (1)

M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao, and K. Zieba, “End to End Learning for Self-Driving Cars,” arXiv:1604.07316 [cs] (2016).

arXiv:1611.01599 [cs] (1)

Y. M. Assael, B. Shillingford, S. Whiteson, and N. de Freitas, “LipNet: End-to-End Sentence-level Lipreading,” arXiv:1611.01599 [cs] (2016).

arXiv:1708.04680 [cs, stat] (1)

M. D. Bloice, C. Stocker, and A. Holzinger, “Augmentor: An Image Augmentation Library for Machine Learning,” arXiv:1708.04680 [cs, stat] (2017).

arXiv:1712.01815 [cs] (1)

D. Silver, T. Hubert, J. Schrittwieser, I. Antonoglou, M. Lai, A. Guez, M. Lanctot, L. Sifre, D. Kumaran, T. Graepel, T. Lillicrap, K. Simonyan, and D. Hassabis, “Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm,” arXiv:1712.01815 [cs] (2017).

Biomed. Opt. Express (5)

Dermatol. Surg. (1)

J. L. Herrmann, R. K. Hoffmann, C. E. Ward, J. M. Schulman, and R. C. Grekin, “Biochemistry, Physiology, and Tissue Interactions of Contemporary Biodegradable Injectable Dermal Fillers:,” Dermatol. Surg. p. 1 (2018).

Dr. Dobb’s J. Softw. Tools (1)

G. Bradski, “The OpenCV Library,” Dr. Dobb’s J. Softw. Tools (2000).

IEEE Signal Process. Mag. (1)

G. Hinton, L. Deng, D. Yu, G. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, B. Kingsbury, and T. Sainath, “Deep Neural Networks for Acoustic Modeling in Speech Recognition,” IEEE Signal Process. Mag. 29, 82–97 (2012).
[Crossref]

IEEE Transactions on Pattern Analysis Mach. Intell. (1)

G. Evangelidis and E. Psarakis, “Parametric Image Alignment Using Enhanced Correlation Coefficient Maximization,” IEEE Transactions on Pattern Analysis Mach. Intell. 30, 1858–1865 (2008).
[Crossref]

J. Biomed. Opt. (2)

A. Aneesh, B. Považay, B. Hofer, S. V. Popov, C. Glittenberg, S. Binder, and W. Drexler, “Multispectral in vivo three-dimensional optical coherence tomography of human skin,” J. Biomed. Opt. 15, 026025 (2010).
[Crossref]

Y. L. Yong, L. K. Tan, R. A. McLaughlin, K. H. Chee, and Y. M. Liew, “Linear-regression convolutional neural network for fully automated coronary lumen segmentation in intravascular optical coherence tomography,” J. Biomed. Opt. 22, 126005 (2017).
[Crossref]

JAMA Ophthalmol. (1)

P. M. Burlina, N. Joshi, M. Pekala, K. D. Pacheco, D. E. Freund, and N. M. Bressler, “Automated Grading of Age-Related Macular Degeneration From Color Fundus Images Using Deep Convolutional Neural Networks,” JAMA Ophthalmol. 135, 1170 (2017).
[Crossref] [PubMed]

Nat. Biomed. Eng. (1)

R. Poplin, A. V. Varadarajan, K. Blumer, Y. Liu, M. V. McConnell, G. S. Corrado, L. Peng, and D. R. Webster, “Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning,” Nat. Biomed. Eng. 2, 158–164 (2018).
[Crossref]

Nat. Commun. (1)

K.-H. Yu, C. Zhang, G. J. Berry, R. B. Altman, C. Ré, D. L. Rubin, and M. Snyder, “Predicting non-small cell lung cancer prognosis by fully automated microscopic pathology image features,” Nat. Commun. 7, 12474 (2016).
[Crossref] [PubMed]

Ophthalmology (1)

T. Schlegl, S. M. Waldstein, H. Bogunovic, F. Endstraßer, A. Sadeghipour, A.-M. Philip, D. Podkowinski, B. S. Gerendas, G. Langs, and U. Schmidt-Erfurth, “Fully Automated Detection and Quantification of Macular Fluid in OCT Using Deep Learning,” Ophthalmology 125, 549–558 (2018).
[Crossref]

Opt. Lett. (1)

Phys. Medicine Biol. (1)

S. L. Jacques, “Optical properties of biological tissues: A review,” Phys. Medicine Biol. 58, R37–R61 (2013).
[Crossref]

Proc. Advances in Neural Information Processing Systems (1)

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Proc. Advances in Neural Information Processing Systems, vol.  25 (2012), pp. 1090–1098.

Science (1)

D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and A. Et, “Optical coherence tomography,” Science 254, 1178–1181 (1991).
[Crossref] [PubMed]

Other (8)

“Safety of laser products - Part 1: Equipment classification and requirements (IEC 60825–1:2014),” Standard, International Electrotechnical Commission (2014).

O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, N. Navab, J. Hornegger, W. M. Wells, and A. F. Frangi, eds. (Springer International Publishing, 2015), pp. 234–241.
[Crossref]

W. Drexler and J. G. Fujimoto, eds., Optical Coherence Tomography: Technology and Applications (SpringerInternational Publishing, 2015), 2nd ed.
[Crossref]

F. Rosenblatt, “The perceptron, a perceiving and recognizing automaton,” Tech. Report 85-460-1, Cornell Aeronautical Laboratory, Inc. (1957).

M. Thieme and T. Reina, “Biomedical Image Segmentation with U-Net,” https://ai.intel.com/biomedical-image-segmentation-u-net/ (2018).

F. Chollet and et al., “Keras,” https://keras.io (2015).

M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems,” https://tensorflow.org (2015).

Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, “3D U-Net: Learning dense volumetric segmentation from sparse annotation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, (Springer, 2016), pp. 424–432.


Figures (7)

Fig. 1
Fig. 1 Schematic representation of the optical coherence tomography system.
Fig. 2
Fig. 2 U-net-like architecture of the neural network. The contracting path forms the left side of the “U” and the expanding path the right side. The horizontal arrows are the skip connections. The numbers h × w × f denote the shape of the layer, with h the height and w the width of the image and f the number of feature channels.
Fig. 3
Fig. 3 (a) Example of strong motion artifacts. Image shows a cut through the volume in the slow scanning direction of the OCT system. (b) Same image after motion correction.
Fig. 4
Fig. 4 Examples of image augmentation. Manual filler annotation in green. (a) Original image before augmentation. (b)–(d) Three augmented versions of the same image, generated with a mixture of random deformation, translation, and occasional mirroring.
Fig. 5
Fig. 5 Typical cross-sectional images with annotated filler. (a1)–(e1) OCT image. (a2)–(e2) Manual annotation. (a3)–(e3) Automatic segmentation by the neural network without any post-processing.
Fig. 6
Fig. 6 Examples of insufficient automatic segmentation. (a1)–(c1) OCT image. (a2)–(c2) Manual annotation. (a3)–(c3) Automatic segmentation by the neural network without any post-processing. Images (a3) and (b3) give examples where the neural network did not correctly assess the posterior border of the filler area. Image (c3) correctly delineates the filler area, but also contains false positives.
Fig. 7
Fig. 7 Worst-case examples of incorrect annotation of small filler areas. Images with a small filler area mostly lie near the edge of the three-dimensional filler volume, where the manual segmentation, which is not pixel-perfect, makes its most frequent errors. Images with a filler area below a chosen threshold were therefore excluded from training and 2D validation and included only in the final 3D validation set. (a1)–(b1) OCT image. (a2)–(b2) Manual annotation. (a3)–(b3) Automatic segmentation by the neural network without any post-processing.
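The motion correction illustrated in Fig. 3 aligns adjacent cross-sections along the slow scanning direction. As a much-simplified sketch (not the paper's actual method — the reference list points to enhanced-correlation-coefficient alignment, whereas this stand-in uses plain FFT phase correlation, assumes rigid integer shifts, and all function names are hypothetical):

```python
import numpy as np

def estimate_shift(ref, mov):
    """Estimate the integer (row, col) translation that, applied to
    `mov` via np.roll, best aligns it with `ref` (phase correlation)."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks in the upper half-range to negative shifts.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

def correct_motion(volume):
    """Shift each B-scan so that it aligns with its corrected predecessor."""
    out = [volume[0]]
    for bscan in volume[1:]:
        dy, dx = estimate_shift(out[-1], bscan)
        out.append(np.roll(bscan, shift=(dy, dx), axis=(0, 1)))
    return np.stack(out)
```

In practice a sub-pixel, intensity-based method such as ECC maximization is more robust to the speckle and tilt present in OCT data; the sketch above only conveys the frame-to-frame alignment idea.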

Tables (1)

Table 1 Six-fold cross-validated results. Columns: 2D, slices with fillers of a minimum size, as used for training; 3D (one direction), full volumes with the network applied in one direction only; 3D (two directions), full volumes with the network applied in two directions independently and the predicted probabilities averaged in order to reduce false positives; Human, inter-operator variability between two human annotators. Numbers in brackets are the standard deviations calculated over 100 volumes (neural networks) or 10 volumes (human), respectively. TP, number of true positives; TN, number of true negatives; FP, number of false positives; FN, number of false negatives. Key results discussed in the text are highlighted in bold.
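The 3D (two directions) evaluation described in the table caption — slicing the volume along two orthogonal directions, predicting each slice independently, and averaging the resulting probability maps — can be sketched as follows. Here `predict_slice` is a hypothetical stand-in for the trained 2D u-net, and the axis conventions are assumptions:

```python
import numpy as np

def predict_two_directions(volume, predict_slice):
    """Average per-voxel filler probabilities obtained by slicing the
    OCT volume along two orthogonal B-scan directions."""
    # Direction 1: slices perpendicular to axis 0.
    p1 = np.stack([predict_slice(volume[i])
                   for i in range(volume.shape[0])], axis=0)
    # Direction 2: slices perpendicular to axis 2.
    p2 = np.stack([predict_slice(volume[:, :, k])
                   for k in range(volume.shape[2])], axis=2)
    return (p1 + p2) / 2.0  # averaged probability map, same shape as volume
```

The intradermal filler volume would then follow from thresholding the averaged map (e.g. at 0.5) and multiplying the voxel count by the physical voxel size.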

Equations (3)

Equations on this page are rendered with MathJax.

$$J(A, B) = \frac{|A \cap B|}{|A \cup B|} = \frac{|A \cap B|}{|A| + |B| - |A \cap B|}$$
$$d_J(A, B) = 1 - J(A, B) = 1 - \frac{|A \cap B|}{|A| + |B| - |A \cap B|}$$
$$L = 1 - \frac{\sum_x l(x)\, p(x)}{\sum_x l(x) + \sum_x p(x) - \sum_x l(x)\, p(x) + \epsilon}$$
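The three equations above give, in order, the Jaccard similarity coefficient over sets A and B, the corresponding Jaccard distance, and its differentiable soft relaxation over a binary label map l(x) and predicted probabilities p(x), stabilized by a small ε. A direct NumPy transcription (a sketch; the concrete value of eps is an assumption):

```python
import numpy as np

def jaccard(a, b):
    """Jaccard similarity J(A, B) = |A ∩ B| / |A ∪ B| for boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

def soft_jaccard_loss(label, prob, eps=1e-7):
    """Soft Jaccard loss L = 1 - Σ l·p / (Σ l + Σ p - Σ l·p + ε),
    defined for l(x) ∈ {0, 1} and p(x) ∈ [0, 1]."""
    inter = (label * prob).sum()
    return 1.0 - inter / (label.sum() + prob.sum() - inter + eps)
```

When the predicted probabilities are hard 0/1 values, the soft loss reduces (up to ε) to the Jaccard distance d_J of the second equation.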
