Abstract

Optical coherence tomography angiography (OCTA) is a promising imaging modality for microvasculature studies. Meanwhile, deep learning has advanced rapidly in image-to-image translation tasks, and several studies have applied deep learning models to OCTA reconstruction with preliminary results. However, current work is mostly limited to a few specific deep neural networks. In this paper, we conducted a comparative study of OCTA reconstruction using deep learning models. Four representative network architectures, including single-path models, U-shaped models, generative adversarial network (GAN)-based models, and multi-path models, were evaluated on a dataset of OCTA images acquired from rat brains. Three potential solutions were also investigated to assess the feasibility of improving performance. The results showed that U-shaped models and multi-path models are two suitable architectures for OCTA reconstruction. Furthermore, merging phase information appears to be a promising direction for further research.
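The defining trait of the U-shaped architectures found suitable here is an encoder-decoder with skip connections that merge full-resolution features back into the upsampling path. Purely as an illustrative sketch (not the paper's implementation; the pooling, upsampling, and toy 8×8 patch are stand-ins), the data flow can be shown in plain Python:

```python
def down(x):
    # 2x2 average pooling: halves resolution (one encoder step)
    return [[(x[2*i][2*j] + x[2*i][2*j+1] + x[2*i+1][2*j] + x[2*i+1][2*j+1]) / 4
             for j in range(len(x[0]) // 2)]
            for i in range(len(x) // 2)]

def up(x):
    # nearest-neighbour upsampling: doubles resolution (one decoder step)
    out = []
    for row in x:
        wide = [v for v in row for _ in (0, 1)]
        out += [wide, list(wide)]
    return out

# toy 8x8 patch standing in for an OCT intensity image
img = [[float(i * 8 + j) for j in range(8)] for i in range(8)]

enc = down(img)    # encoder path: 8x8 -> 4x4
dec = up(enc)      # decoder path: 4x4 -> 8x8
# the U-shaped skip connection: concatenate the full-resolution
# encoder input with the decoder output as two channels
skip = [img, dec]
```

In a real network each step would be a learned convolution block rather than fixed pooling/upsampling, but the shape bookkeeping (downsample, upsample, channel-wise merge) is the same.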

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement



2020 (1)

2019 (7)

A. Ben-Cohen, E. Klang, S. P. Raskin, S. Soffer, S. Ben-Haim, E. Konen, M. M. Amitai, and H. Greenspan, “Cross-modality synthesis from CT to PET using FCN and GAN networks for improved automated lesion detection,” Eng. Appl. Artif. Intell. 78, 186–194 (2019).
[Crossref]

Z. K. Yu, Q. Xiang, J. H. Meng, C. X. Kou, Q. S. Ren, and Y. Y. Lu, “Retinal image synthesis from multiple-landmarks input with generative adversarial networks,” Biomed. Eng. Online 18(1), 62 (2019).
[Crossref]

Y. Y. Lu, M. Kowarschik, X. L. Huang, Y. Xia, J. H. Choi, S. Q. Chen, S. Y. Hu, Q. S. Ren, R. Fahrig, J. Hornegger, and A. Maier, “A learning-based material decomposition pipeline for multi-energy x-ray imaging,” Med. Phys. 46(2), 689–703 (2019).
[Crossref]

S. K. Devalla, G. Subramanian, T. H. Pham, X. Wang, S. Perera, T. A. Tun, T. Aung, L. Schmetterer, A. H. Thiéry, and M. J. Girard, “A deep learning approach to denoise optical coherence tomography images of the optic nerve head,” Sci. Rep. 9(1), 14454–13 (2019).
[Crossref]

A. Maier, C. Syben, T. Lasser, and C. Riess, “A gentle introduction to deep learning in medical image processing,” Z. Med. Phys. 29(2), 86–101 (2019).
[Crossref]

C. S. Lee, A. J. Tyring, Y. Wu, S. Xiao, A. S. Rokem, N. P. DeRuyter, Q. Q. Zhang, A. Tufail, R. K. Wang, and A. Y. Lee, “Generating retinal flow maps from structural optical coherence tomography with artificial intelligence,” Sci. Rep. 9(1), 5694 (2019).
[Crossref]

X. Liu, Z. Y. Huang, Z. Z. Wang, C. Y. Wen, Z. Jiang, Z. K. Yu, J. F. Liu, G. J. Liu, X. L. Huang, A. Maier, Q. S. Ren, and Y. Y. Lu, “A deep learning based pipeline for optical coherence tomography angiography,” J. Biophotonics 12(10), 10 (2019).
[Crossref]

2018 (2)

Z. Jiang, Z. K. Yu, S. X. Feng, Z. Y. Huang, Y. H. Peng, J. X. Guo, Q. S. Ren, and Y. Y. Lu, “A super-resolution method-based pipeline for fundus fluorescein angiography imaging,” Biomed. Eng. Online 17(1), 125 (2018).
[Crossref]

X. Yi and P. Babyn, “Sharpness-Aware Low-Dose CT Denoising Using Conditional Generative Adversarial Network,” J. Digit. Imaging 31(5), 655–669 (2018).
[Crossref]

2017 (2)

V. Badrinarayanan, A. Kendall, and R. Cipolla, “SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation,” IEEE Trans. Pattern Anal. Mach. Intell. 39(12), 2481–2495 (2017).
[Crossref]

K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising,” IEEE Trans. on Image Process. 26(7), 3142–3155 (2017).
[Crossref]

2016 (2)

G. Liu, Y. Jia, A. D. Pechauer, R. Chandwani, and D. Huang, “Split-spectrum phase-gradient optical coherence tomography angiography,” Biomed. Opt. Express 7(8), 2943–2954 (2016).
[Crossref]

C. Dong, C. C. Loy, K. M. He, and X. O. Tang, “Image Super-Resolution Using Deep Convolutional Networks,” IEEE Trans. Pattern Anal. Mach. Intell. 38(2), 295–307 (2016).
[Crossref]

2013 (1)

E. C. Sattler, R. Kästle, and J. Welzel, “Optical coherence tomography in dermatology,” J. Biomed. Opt. 18(6), 061224 (2013).
[Crossref]

2012 (2)

2011 (3)

2010 (3)

2009 (2)

J. Fingler, R. J. Zawadzki, J. S. Werner, D. Schwartz, and S. E. Fraser, “Volumetric microvascular imaging of human retina using optical coherence tomography with a novel motion contrast technique,” Opt. Express 17(24), 22190–22200 (2009).
[Crossref]

H. G. Bezerra, M. A. Costa, G. Guagliumi, A. M. Rollins, and D. I. Simon, “Intracoronary optical coherence tomography: a comprehensive review: clinical and research applications,” JACC: Cardiovasc. Interv. 2, 1035–1046 (2009).
[Crossref]

2004 (1)

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004).
[Crossref]

1999 (1)

1991 (1)

D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, and C. A. Puliafito, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991).
[Crossref]

Alex, A.

Amitai, M. M.

A. Ben-Cohen, E. Klang, S. P. Raskin, S. Soffer, S. Ben-Haim, E. Konen, M. M. Amitai, and H. Greenspan, “Cross-modality synthesis from CT to PET using FCN and GAN networks for improved automated lesion detection,” Eng. Appl. Artif. Intell. 78, 186–194 (2019).
[Crossref]

An, L.

Aung, T.

S. K. Devalla, G. Subramanian, T. H. Pham, X. Wang, S. Perera, T. A. Tun, T. Aung, L. Schmetterer, A. H. Thiéry, and M. J. Girard, “A deep learning approach to denoise optical coherence tomography images of the optic nerve head,” Sci. Rep. 9(1), 14454–13 (2019).
[Crossref]

Ba, J.

D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980 (2014).

Babyn, P.

X. Yi and P. Babyn, “Sharpness-Aware Low-Dose CT Denoising Using Conditional Generative Adversarial Network,” J. Digit. Imaging 31(5), 655–669 (2018).
[Crossref]

Badrinarayanan, V.

V. Badrinarayanan, A. Kendall, and R. Cipolla, “SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation,” IEEE Trans. Pattern Anal. Mach. Intell. 39(12), 2481–2495 (2017).
[Crossref]

Barry, S.

Ben-Cohen, A.

A. Ben-Cohen, E. Klang, S. P. Raskin, S. Soffer, S. Ben-Haim, E. Konen, M. M. Amitai, and H. Greenspan, “Cross-modality synthesis from CT to PET using FCN and GAN networks for improved automated lesion detection,” Eng. Appl. Artif. Intell. 78, 186–194 (2019).
[Crossref]

Bengio, Y.

I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative Adversarial Nets,” in Advances in Neural Information Processing Systems, Vol. 27Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, eds. (Neural Information Processing Systems, 2014).

Ben-Haim, S.

A. Ben-Cohen, E. Klang, S. P. Raskin, S. Soffer, S. Ben-Haim, E. Konen, M. M. Amitai, and H. Greenspan, “Cross-modality synthesis from CT to PET using FCN and GAN networks for improved automated lesion detection,” Eng. Appl. Artif. Intell. 78, 186–194 (2019).
[Crossref]

Bezerra, H. G.

H. G. Bezerra, M. A. Costa, G. Guagliumi, A. M. Rollins, and D. I. Simon, “Intracoronary optical coherence tomography: a comprehensive review: clinical and research applications,” JACC: Cardiovasc. Interv. 2, 1035–1046 (2009).
[Crossref]

Blatter, C.

Boas, D. A.

Boppart, S.

Bovik, A. C.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004).
[Crossref]

Brownlee, M.

H.-P. Hammes, Y. Feng, F. Pfister, and M. Brownlee, “Diabetic retinopathy: targeting vasoregression,” Diabetes 60(1), 9–16 (2011).
[Crossref]

Brox, T.

O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in IEEE Conference on Computer Vision and Pattern Recognition (Springer, 2015), pp. 234–241.

Cable, A. E.

Chandwani, R.

Chang, W.

D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, and C. A. Puliafito, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991).
[Crossref]

Chen, S. Q.

Y. Y. Lu, M. Kowarschik, X. L. Huang, Y. Xia, J. H. Choi, S. Q. Chen, S. Y. Hu, Q. S. Ren, R. Fahrig, J. Hornegger, and A. Maier, “A learning-based material decomposition pipeline for multi-energy x-ray imaging,” Med. Phys. 46(2), 689–703 (2019).
[Crossref]

Chen, Y.

K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising,” IEEE Trans. on Image Process. 26(7), 3142–3155 (2017).
[Crossref]

Chen, Z. P.

L. F. Yu and Z. P. Chen, “Doppler variance imaging for three-dimensional retina and choroid angiography,” J. Biomed. Opt. 15(1), 016029 (2010).
[Crossref]

Chintala, S.

A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” arXiv preprint arXiv:1511.06434 (2015).

Choi, J. H.

Y. Y. Lu, M. Kowarschik, X. L. Huang, Y. Xia, J. H. Choi, S. Q. Chen, S. Y. Hu, Q. S. Ren, R. Fahrig, J. Hornegger, and A. Maier, “A learning-based material decomposition pipeline for multi-energy x-ray imaging,” Med. Phys. 46(2), 689–703 (2019).
[Crossref]

Christlein, V.

A. Maier, S. Steidl, V. Christlein, and J. Hornegger, Medical Imaging Systems: An Introductory Guide, vol. 11111 (Springer, 2018).

Cipolla, R.

V. Badrinarayanan, A. Kendall, and R. Cipolla, “SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation,” IEEE Trans. Pattern Anal. Mach. Intell. 39(12), 2481–2495 (2017).
[Crossref]

Costa, M. A.

H. G. Bezerra, M. A. Costa, G. Guagliumi, A. M. Rollins, and D. I. Simon, “Intracoronary optical coherence tomography: a comprehensive review: clinical and research applications,” JACC: Cardiovasc. Interv. 2, 1035–1046 (2009).
[Crossref]

Courville, A.

I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative Adversarial Nets,” in Advances in Neural Information Processing Systems, Vol. 27Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, eds. (Neural Information Processing Systems, 2014).

DeRuyter, N. P.

C. S. Lee, A. J. Tyring, Y. Wu, S. Xiao, A. S. Rokem, N. P. DeRuyter, Q. Q. Zhang, A. Tufail, R. K. Wang, and A. Y. Lee, “Generating retinal flow maps from structural optical coherence tomography with artificial intelligence,” Sci. Rep. 9(1), 5694 (2019).
[Crossref]

Devalla, S. K.

S. K. Devalla, G. Subramanian, T. H. Pham, X. Wang, S. Perera, T. A. Tun, T. Aung, L. Schmetterer, A. H. Thiéry, and M. J. Girard, “A deep learning approach to denoise optical coherence tomography images of the optic nerve head,” Sci. Rep. 9(1), 14454–13 (2019).
[Crossref]

Dong, C.

C. Dong, C. C. Loy, K. M. He, and X. O. Tang, “Image Super-Resolution Using Deep Convolutional Networks,” IEEE Trans. Pattern Anal. Mach. Intell. 38(2), 295–307 (2016).
[Crossref]

Drexler, W.

Efros, A. A.

J. Y. Zhu, T. Park, P. Isola, and A. A. EfrosIEEE, “Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks,” in 2017 IEEE International Conference on Computer Vision, (IEEE, 2017), pp. 2242–2251.

P. Isola, J. Y. Zhu, T. H. Zhou, and A. A. EfrosIEEE, “Image-to-Image Translation with Conditional Adversarial Networks,” in 30th IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2017), pp. 5967–5976.

Enfield, J.

Fahrig, R.

Y. Y. Lu, M. Kowarschik, X. L. Huang, Y. Xia, J. H. Choi, S. Q. Chen, S. Y. Hu, Q. S. Ren, R. Fahrig, J. Hornegger, and A. Maier, “A learning-based material decomposition pipeline for multi-energy x-ray imaging,” Med. Phys. 46(2), 689–703 (2019).
[Crossref]

Feng, S. X.

Z. Jiang, Z. K. Yu, S. X. Feng, Z. Y. Huang, Y. H. Peng, J. X. Guo, Q. S. Ren, and Y. Y. Lu, “A super-resolution method-based pipeline for fundus fluorescein angiography imaging,” Biomed. Eng. Online 17(1), 125 (2018).
[Crossref]

Feng, Y.

H.-P. Hammes, Y. Feng, F. Pfister, and M. Brownlee, “Diabetic retinopathy: targeting vasoregression,” Diabetes 60(1), 9–16 (2011).
[Crossref]

Fingler, J.

Fischer, P.

O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in IEEE Conference on Computer Vision and Pattern Recognition (Springer, 2015), pp. 234–241.

Flotte, T.

D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, and C. A. Puliafito, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991).
[Crossref]

Fraser, S. E.

Fu, Y.

Y. L. Zhang, Y. P. Tian, Y. Kong, B. N. Zhong, and Y. FuIEEE, “Residual Dense Network for Image Super-Resolution,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (IEEE, 2018), pp. 2472–2481.

Fujimoto, J.

Fujimoto, J. G.

Girard, M. J.

S. K. Devalla, G. Subramanian, T. H. Pham, X. Wang, S. Perera, T. A. Tun, T. Aung, L. Schmetterer, A. H. Thiéry, and M. J. Girard, “A deep learning approach to denoise optical coherence tomography images of the optic nerve head,” Sci. Rep. 9(1), 14454–13 (2019).
[Crossref]

Goodfellow, I. J.

I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative Adversarial Nets,” in Advances in Neural Information Processing Systems, Vol. 27Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, eds. (Neural Information Processing Systems, 2014).

Grajciar, B.

Greenspan, H.

A. Ben-Cohen, E. Klang, S. P. Raskin, S. Soffer, S. Ben-Haim, E. Konen, M. M. Amitai, and H. Greenspan, “Cross-modality synthesis from CT to PET using FCN and GAN networks for improved automated lesion detection,” Eng. Appl. Artif. Intell. 78, 186–194 (2019).
[Crossref]

Gregory, K.

D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, and C. A. Puliafito, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991).
[Crossref]

Guagliumi, G.

H. G. Bezerra, M. A. Costa, G. Guagliumi, A. M. Rollins, and D. I. Simon, “Intracoronary optical coherence tomography: a comprehensive review: clinical and research applications,” JACC: Cardiovasc. Interv. 2, 1035–1046 (2009).
[Crossref]

Guo, J. X.

Z. Jiang, Z. K. Yu, S. X. Feng, Z. Y. Huang, Y. H. Peng, J. X. Guo, Q. S. Ren, and Y. Y. Lu, “A super-resolution method-based pipeline for fundus fluorescein angiography imaging,” Biomed. Eng. Online 17(1), 125 (2018).
[Crossref]

Hammes, H.-P.

H.-P. Hammes, Y. Feng, F. Pfister, and M. Brownlee, “Diabetic retinopathy: targeting vasoregression,” Diabetes 60(1), 9–16 (2011).
[Crossref]

Haris, M.

M. Haris, G. Shakhnarovich, and N. UkitaIEEE, “Deep Back-Projection Networks For Super-Resolution,” in 2018 IEEE/Cvf Conference on Computer Vision and Pattern Recognition (IEEE, 2018), pp. 1664–1673.

He, K.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Nevada, USA, 2016), pp. 770–778.

He, K. M.

C. Dong, C. C. Loy, K. M. He, and X. O. Tang, “Image Super-Resolution Using Deep Convolutional Networks,” IEEE Trans. Pattern Anal. Mach. Intell. 38(2), 295–307 (2016).
[Crossref]

K. M. He, X. Y. Zhang, S. Q. Ren, and J. SunIEEE, “Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification,” in 2015 IEEE International Conference on Computer Vision (IEEE, 2015), pp. 1026–1034.

Hee, M. R.

D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, and C. A. Puliafito, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991).
[Crossref]

Hinton, G. E.

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in Neural Information Processing SystemsNevada, USA, 2012), pp. 1097–1105.

Hornegger, J.

Y. Y. Lu, M. Kowarschik, X. L. Huang, Y. Xia, J. H. Choi, S. Q. Chen, S. Y. Hu, Q. S. Ren, R. Fahrig, J. Hornegger, and A. Maier, “A learning-based material decomposition pipeline for multi-energy x-ray imaging,” Med. Phys. 46(2), 689–703 (2019).
[Crossref]

Y. L. Jia, O. Tan, J. Tokayer, B. Potsaid, Y. M. Wang, J. J. Liu, M. F. Kraus, H. Subhash, J. G. Fujimoto, J. Hornegger, and D. Huang, “Split-spectrum amplitude-decorrelation angiography with optical coherence tomography,” Opt. Express 20(4), 4710–4725 (2012).
[Crossref]

A. Maier, S. Steidl, V. Christlein, and J. Hornegger, Medical Imaging Systems: An Introductory Guide, vol. 11111 (Springer, 2018).

Hu, S. Y.

Y. Y. Lu, M. Kowarschik, X. L. Huang, Y. Xia, J. H. Choi, S. Q. Chen, S. Y. Hu, Q. S. Ren, R. Fahrig, J. Hornegger, and A. Maier, “A learning-based material decomposition pipeline for multi-energy x-ray imaging,” Med. Phys. 46(2), 689–703 (2019).
[Crossref]

Huang, D.

Huang, G.

G. Huang, Z. Liu, L. van der Maaten, and K. Q. WeinbergerIeee, “Densely Connected Convolutional Networks,” in 30th IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2017), pp. 2261–2269.

Huang, X. L.

Y. Y. Lu, M. Kowarschik, X. L. Huang, Y. Xia, J. H. Choi, S. Q. Chen, S. Y. Hu, Q. S. Ren, R. Fahrig, J. Hornegger, and A. Maier, “A learning-based material decomposition pipeline for multi-energy x-ray imaging,” Med. Phys. 46(2), 689–703 (2019).
[Crossref]

X. Liu, Z. Y. Huang, Z. Z. Wang, C. Y. Wen, Z. Jiang, Z. K. Yu, J. F. Liu, G. J. Liu, X. L. Huang, A. Maier, Q. S. Ren, and Y. Y. Lu, “A deep learning based pipeline for optical coherence tomography angiography,” J. Biophotonics 12(10), 10 (2019).
[Crossref]

Huang, Z.

Huang, Z. Y.

X. Liu, Z. Y. Huang, Z. Z. Wang, C. Y. Wen, Z. Jiang, Z. K. Yu, J. F. Liu, G. J. Liu, X. L. Huang, A. Maier, Q. S. Ren, and Y. Y. Lu, “A deep learning based pipeline for optical coherence tomography angiography,” J. Biophotonics 12(10), 10 (2019).
[Crossref]

Z. Jiang, Z. K. Yu, S. X. Feng, Z. Y. Huang, Y. H. Peng, J. X. Guo, Q. S. Ren, and Y. Y. Lu, “A super-resolution method-based pipeline for fundus fluorescein angiography imaging,” Biomed. Eng. Online 17(1), 125 (2018).
[Crossref]

Huber, R.

Ioffe, S.

S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” arXiv preprint arXiv:1502.03167 (2015).

Ippen, E.

Isola, P.

P. Isola, J. Y. Zhu, T. H. Zhou, and A. A. EfrosIEEE, “Image-to-Image Translation with Conditional Adversarial Networks,” in 30th IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2017), pp. 5967–5976.

J. Y. Zhu, T. Park, P. Isola, and A. A. EfrosIEEE, “Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks,” in 2017 IEEE International Conference on Computer Vision, (IEEE, 2017), pp. 2242–2251.

Jia, Y.

Jia, Y. L.

Jiang, J. Y.

Jiang, Z.

X. Liu, Z. Y. Huang, Z. Z. Wang, C. Y. Wen, Z. Jiang, Z. K. Yu, J. F. Liu, G. J. Liu, X. L. Huang, A. Maier, Q. S. Ren, and Y. Y. Lu, “A deep learning based pipeline for optical coherence tomography angiography,” J. Biophotonics 12(10), 10 (2019).
[Crossref]

Z. Jiang, Z. K. Yu, S. X. Feng, Z. Y. Huang, Y. H. Peng, J. X. Guo, Q. S. Ren, and Y. Y. Lu, “A super-resolution method-based pipeline for fundus fluorescein angiography imaging,” Biomed. Eng. Online 17(1), 125 (2018).
[Crossref]

Jonathan, E.

Kärtner, F.

Kästle, R.

E. C. Sattler, R. Kästle, and J. Welzel, “Optical coherence tomography in dermatology,” J. Biomed. Opt. 18(6), 061224 (2013).
[Crossref]

Kendall, A.

V. Badrinarayanan, A. Kendall, and R. Cipolla, “SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation,” IEEE Trans. Pattern Anal. Mach. Intell. 39(12), 2481–2495 (2017).
[Crossref]

Kim, D. Y.

Kim, J.

J. Kim, J. K. Lee, and K. M. LeeIeee, “Accurate Image Super-Resolution Using Very Deep Convolutional Networks,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2016), pp. 1646–1654.

Kingma, D. P.

D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980 (2014).

Klang, E.

A. Ben-Cohen, E. Klang, S. P. Raskin, S. Soffer, S. Ben-Haim, E. Konen, M. M. Amitai, and H. Greenspan, “Cross-modality synthesis from CT to PET using FCN and GAN networks for improved automated lesion detection,” Eng. Appl. Artif. Intell. 78, 186–194 (2019).
[Crossref]

Konen, E.

A. Ben-Cohen, E. Klang, S. P. Raskin, S. Soffer, S. Ben-Haim, E. Konen, M. M. Amitai, and H. Greenspan, “Cross-modality synthesis from CT to PET using FCN and GAN networks for improved automated lesion detection,” Eng. Appl. Artif. Intell. 78, 186–194 (2019).
[Crossref]

Kong, Y.

Y. L. Zhang, Y. P. Tian, Y. Kong, B. N. Zhong, and Y. Fu, “Residual Dense Network for Image Super-Resolution,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (IEEE, 2018), pp. 2472–2481.

Kou, C. X.

Z. K. Yu, Q. Xiang, J. H. Meng, C. X. Kou, Q. S. Ren, and Y. Y. Lu, “Retinal image synthesis from multiple-landmarks input with generative adversarial networks,” Biomed. Eng. Online 18(1), 62 (2019).
[Crossref]

Kowarschik, M.

Y. Y. Lu, M. Kowarschik, X. L. Huang, Y. Xia, J. H. Choi, S. Q. Chen, S. Y. Hu, Q. S. Ren, R. Fahrig, J. Hornegger, and A. Maier, “A learning-based material decomposition pipeline for multi-energy x-ray imaging,” Med. Phys. 46(2), 689–703 (2019).
[Crossref]

Kraus, M. F.

Krizhevsky, A.

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems (Nevada, USA, 2012), pp. 1097–1105.

Lasser, T.

A. Maier, C. Syben, T. Lasser, and C. Riess, “A gentle introduction to deep learning in medical image processing,” Z. Med. Phys. 29(2), 86–101 (2019).
[Crossref]

Leahy, M.

Lee, A. Y.

C. S. Lee, A. J. Tyring, Y. Wu, S. Xiao, A. S. Rokem, N. P. DeRuyter, Q. Q. Zhang, A. Tufail, R. K. Wang, and A. Y. Lee, “Generating retinal flow maps from structural optical coherence tomography with artificial intelligence,” Sci. Rep. 9(1), 5694 (2019).
[Crossref]

Lee, C. S.

C. S. Lee, A. J. Tyring, Y. Wu, S. Xiao, A. S. Rokem, N. P. DeRuyter, Q. Q. Zhang, A. Tufail, R. K. Wang, and A. Y. Lee, “Generating retinal flow maps from structural optical coherence tomography with artificial intelligence,” Sci. Rep. 9(1), 5694 (2019).
[Crossref]

Lee, J. K.

J. Kim, J. K. Lee, and K. M. Lee, “Accurate Image Super-Resolution Using Very Deep Convolutional Networks,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2016), pp. 1646–1654.

Lee, K. M.

J. Kim, J. K. Lee, and K. M. Lee, “Accurate Image Super-Resolution Using Very Deep Convolutional Networks,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2016), pp. 1646–1654.

Leitgeb, R. A.

Li, X.

Lin, C. P.

D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, and C. A. Puliafito, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991).
[Crossref]

Liu, G.

Liu, G. J.

X. Liu, Z. Y. Huang, Z. Z. Wang, C. Y. Wen, Z. Jiang, Z. K. Yu, J. F. Liu, G. J. Liu, X. L. Huang, A. Maier, Q. S. Ren, and Y. Y. Lu, “A deep learning based pipeline for optical coherence tomography angiography,” J. Biophotonics 12(10), 10 (2019).
[Crossref]

Liu, J. F.

X. Liu, Z. Y. Huang, Z. Z. Wang, C. Y. Wen, Z. Jiang, Z. K. Yu, J. F. Liu, G. J. Liu, X. L. Huang, A. Maier, Q. S. Ren, and Y. Y. Lu, “A deep learning based pipeline for optical coherence tomography angiography,” J. Biophotonics 12(10), 10 (2019).
[Crossref]

Liu, J. J.

Liu, X.

B. Qiu, Z. Huang, X. Liu, X. Meng, Y. You, G. Liu, K. Yang, A. Maier, Q. Ren, and Y. Lu, “Noise reduction in optical coherence tomography images using a deep neural network with perceptually-sensitive loss function,” Biomed. Opt. Express 11(2), 817–830 (2020).
[Crossref]

X. Liu, Z. Y. Huang, Z. Z. Wang, C. Y. Wen, Z. Jiang, Z. K. Yu, J. F. Liu, G. J. Liu, X. L. Huang, A. Maier, Q. S. Ren, and Y. Y. Lu, “A deep learning based pipeline for optical coherence tomography angiography,” J. Biophotonics 12(10), 10 (2019).
[Crossref]

Liu, X. M.

Y. Tai, J. Yang, X. M. Liu, and C. Y. Xu, “MemNet: A Persistent Memory Network for Image Restoration,” in 2017 IEEE International Conference on Computer Vision (IEEE, 2017), pp. 4549–4557.

Liu, Z.

G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger, “Densely Connected Convolutional Networks,” in 30th IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2017), pp. 2261–2269.

Lowe, D. G.

D. G. Lowe, “Object recognition from local scale-invariant features,” in Proc. Seventh IEEE Int. Conf. on Comput. Vis., vol. 2 (1999), pp. 1150–1157.

Loy, C. C.

C. Dong, C. C. Loy, K. M. He, and X. O. Tang, “Image Super-Resolution Using Deep Convolutional Networks,” IEEE Trans. Pattern Anal. Mach. Intell. 38(2), 295–307 (2016).
[Crossref]

Lu, Y.

Lu, Y. Y.

Z. K. Yu, Q. Xiang, J. H. Meng, C. X. Kou, Q. S. Ren, and Y. Y. Lu, “Retinal image synthesis from multiple-landmarks input with generative adversarial networks,” Biomed. Eng. Online 18(1), 62 (2019).
[Crossref]

Y. Y. Lu, M. Kowarschik, X. L. Huang, Y. Xia, J. H. Choi, S. Q. Chen, S. Y. Hu, Q. S. Ren, R. Fahrig, J. Hornegger, and A. Maier, “A learning-based material decomposition pipeline for multi-energy x-ray imaging,” Med. Phys. 46(2), 689–703 (2019).
[Crossref]

X. Liu, Z. Y. Huang, Z. Z. Wang, C. Y. Wen, Z. Jiang, Z. K. Yu, J. F. Liu, G. J. Liu, X. L. Huang, A. Maier, Q. S. Ren, and Y. Y. Lu, “A deep learning based pipeline for optical coherence tomography angiography,” J. Biophotonics 12(10), 10 (2019).
[Crossref]

Z. Jiang, Z. K. Yu, S. X. Feng, Z. Y. Huang, Y. H. Peng, J. X. Guo, Q. S. Ren, and Y. Y. Lu, “A super-resolution method-based pipeline for fundus fluorescein angiography imaging,” Biomed. Eng. Online 17(1), 125 (2018).
[Crossref]

Maier, A.

B. Qiu, Z. Huang, X. Liu, X. Meng, Y. You, G. Liu, K. Yang, A. Maier, Q. Ren, and Y. Lu, “Noise reduction in optical coherence tomography images using a deep neural network with perceptually-sensitive loss function,” Biomed. Opt. Express 11(2), 817–830 (2020).
[Crossref]

Y. Y. Lu, M. Kowarschik, X. L. Huang, Y. Xia, J. H. Choi, S. Q. Chen, S. Y. Hu, Q. S. Ren, R. Fahrig, J. Hornegger, and A. Maier, “A learning-based material decomposition pipeline for multi-energy x-ray imaging,” Med. Phys. 46(2), 689–703 (2019).
[Crossref]

X. Liu, Z. Y. Huang, Z. Z. Wang, C. Y. Wen, Z. Jiang, Z. K. Yu, J. F. Liu, G. J. Liu, X. L. Huang, A. Maier, Q. S. Ren, and Y. Y. Lu, “A deep learning based pipeline for optical coherence tomography angiography,” J. Biophotonics 12(10), 10 (2019).
[Crossref]

A. Maier, C. Syben, T. Lasser, and C. Riess, “A gentle introduction to deep learning in medical image processing,” Z. Med. Phys. 29(2), 86–101 (2019).
[Crossref]

A. Maier, S. Steidl, V. Christlein, and J. Hornegger, Medical Imaging Systems: An Introductory Guide, vol. 11111 (Springer, 2018).

Mao, X. J.

X. J. Mao, C. H. Shen, and Y. B. Yang, “Image Restoration Using Very Deep Convolutional Encoder-Decoder Networks with Symmetric Skip Connections,” in Advances in Neural Information Processing Systems, vol. 29, D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, eds. (Neural Information Processing Systems, 2016).

Meng, D.

K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising,” IEEE Trans. on Image Process. 26(7), 3142–3155 (2017).
[Crossref]

Meng, J. H.

Z. K. Yu, Q. Xiang, J. H. Meng, C. X. Kou, Q. S. Ren, and Y. Y. Lu, “Retinal image synthesis from multiple-landmarks input with generative adversarial networks,” Biomed. Eng. Online 18(1), 62 (2019).
[Crossref]

Meng, X.

Metz, L.

A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” arXiv preprint arXiv:1511.06434 (2015).

Mirza, M.

I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative Adversarial Nets,” in Advances in Neural Information Processing Systems, Vol. 27, Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, eds. (Neural Information Processing Systems, 2014).

Morgner, U.

Ozair, S.

I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative Adversarial Nets,” in Advances in Neural Information Processing Systems, Vol. 27, Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, eds. (Neural Information Processing Systems, 2014).

Park, T.

J. Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks,” in 2017 IEEE International Conference on Computer Vision (IEEE, 2017), pp. 2242–2251.

Pechauer, A. D.

Peng, Y. H.

Z. Jiang, Z. K. Yu, S. X. Feng, Z. Y. Huang, Y. H. Peng, J. X. Guo, Q. S. Ren, and Y. Y. Lu, “A super-resolution method-based pipeline for fundus fluorescein angiography imaging,” Biomed. Eng. Online 17(1), 125 (2018).
[Crossref]

Perera, S.

S. K. Devalla, G. Subramanian, T. H. Pham, X. Wang, S. Perera, T. A. Tun, T. Aung, L. Schmetterer, A. H. Thiéry, and M. J. Girard, “A deep learning approach to denoise optical coherence tomography images of the optic nerve head,” Sci. Rep. 9(1), 14454 (2019).
[Crossref]

Pfister, F.

H.-P. Hammes, Y. Feng, F. Pfister, and M. Brownlee, “Diabetic retinopathy: targeting vasoregression,” Diabetes 60(1), 9–16 (2011).
[Crossref]

Pham, T. H.

S. K. Devalla, G. Subramanian, T. H. Pham, X. Wang, S. Perera, T. A. Tun, T. Aung, L. Schmetterer, A. H. Thiéry, and M. J. Girard, “A deep learning approach to denoise optical coherence tomography images of the optic nerve head,” Sci. Rep. 9(1), 14454 (2019).
[Crossref]

Pitris, C.

Potsaid, B.

Pouget-Abadie, J.

I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative Adversarial Nets,” in Advances in Neural Information Processing Systems, Vol. 27, Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, eds. (Neural Information Processing Systems, 2014).

Puliafito, C. A.

D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, and C. A. Puliafito, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991).
[Crossref]

Qin, J.

Qiu, B.

Radford, A.

A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” arXiv preprint arXiv:1511.06434 (2015).

Radhakrishnan, H.

Raskin, S. P.

A. Ben-Cohen, E. Klang, S. P. Raskin, S. Soffer, S. Ben-Haim, E. Konen, M. M. Amitai, and H. Greenspan, “Cross-modality synthesis from CT to PET using FCN and GAN networks for improved automated lesion detection,” Eng. Appl. Artif. Intell. 78, 186–194 (2019).
[Crossref]

Ren, Q.

Ren, Q. S.

Z. K. Yu, Q. Xiang, J. H. Meng, C. X. Kou, Q. S. Ren, and Y. Y. Lu, “Retinal image synthesis from multiple-landmarks input with generative adversarial networks,” Biomed. Eng. Online 18(1), 62 (2019).
[Crossref]

Y. Y. Lu, M. Kowarschik, X. L. Huang, Y. Xia, J. H. Choi, S. Q. Chen, S. Y. Hu, Q. S. Ren, R. Fahrig, J. Hornegger, and A. Maier, “A learning-based material decomposition pipeline for multi-energy x-ray imaging,” Med. Phys. 46(2), 689–703 (2019).
[Crossref]

X. Liu, Z. Y. Huang, Z. Z. Wang, C. Y. Wen, Z. Jiang, Z. K. Yu, J. F. Liu, G. J. Liu, X. L. Huang, A. Maier, Q. S. Ren, and Y. Y. Lu, “A deep learning based pipeline for optical coherence tomography angiography,” J. Biophotonics 12(10), 10 (2019).
[Crossref]

Z. Jiang, Z. K. Yu, S. X. Feng, Z. Y. Huang, Y. H. Peng, J. X. Guo, Q. S. Ren, and Y. Y. Lu, “A super-resolution method-based pipeline for fundus fluorescein angiography imaging,” Biomed. Eng. Online 17(1), 125 (2018).
[Crossref]

Ren, S.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Nevada, USA, 2016), pp. 770–778.

Ren, S. Q.

K. M. He, X. Y. Zhang, S. Q. Ren, and J. Sun, “Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification,” in 2015 IEEE International Conference on Computer Vision (IEEE, 2015), pp. 1026–1034.

Riess, C.

A. Maier, C. Syben, T. Lasser, and C. Riess, “A gentle introduction to deep learning in medical image processing,” Z. Med. Phys. 29(2), 86–101 (2019).
[Crossref]

Rokem, A. S.

C. S. Lee, A. J. Tyring, Y. Wu, S. Xiao, A. S. Rokem, N. P. DeRuyter, Q. Q. Zhang, A. Tufail, R. K. Wang, and A. Y. Lee, “Generating retinal flow maps from structural optical coherence tomography with artificial intelligence,” Sci. Rep. 9(1), 5694 (2019).
[Crossref]

Rollins, A. M.

H. G. Bezerra, M. A. Costa, G. Guagliumi, A. M. Rollins, and D. I. Simon, “Intracoronary optical coherence tomography: a comprehensive review: clinical and research applications,” JACC: Cardiovasc. Interv. 2, 1035–1046 (2009).
[Crossref]

Ronneberger, O.

O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention (Springer, 2015), pp. 234–241.

Sattler, E. C.

E. C. Sattler, R. Kästle, and J. Welzel, “Optical coherence tomography in dermatology,” J. Biomed. Opt. 18(6), 061224 (2013).
[Crossref]

Schmetterer, L.

S. K. Devalla, G. Subramanian, T. H. Pham, X. Wang, S. Perera, T. A. Tun, T. Aung, L. Schmetterer, A. H. Thiéry, and M. J. Girard, “A deep learning approach to denoise optical coherence tomography images of the optic nerve head,” Sci. Rep. 9(1), 14454 (2019).
[Crossref]

Schuman, J. S.

D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, and C. A. Puliafito, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991).
[Crossref]

Schwartz, D.

Schwartz, D. M.

Shakhnarovich, G.

M. Haris, G. Shakhnarovich, and N. Ukita, “Deep Back-Projection Networks For Super-Resolution,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (IEEE, 2018), pp. 1664–1673.

Sheikh, H. R.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004).
[Crossref]

Shen, C. H.

X. J. Mao, C. H. Shen, and Y. B. Yang, “Image Restoration Using Very Deep Convolutional Encoder-Decoder Networks with Symmetric Skip Connections,” in Advances in Neural Information Processing Systems, vol. 29, D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, eds. (Neural Information Processing Systems, 2016).

Simon, D. I.

H. G. Bezerra, M. A. Costa, G. Guagliumi, A. M. Rollins, and D. I. Simon, “Intracoronary optical coherence tomography: a comprehensive review: clinical and research applications,” JACC: Cardiovasc. Interv. 2, 1035–1046 (2009).
[Crossref]

Simoncelli, E. P.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004).
[Crossref]

Soffer, S.

A. Ben-Cohen, E. Klang, S. P. Raskin, S. Soffer, S. Ben-Haim, E. Konen, M. M. Amitai, and H. Greenspan, “Cross-modality synthesis from CT to PET using FCN and GAN networks for improved automated lesion detection,” Eng. Appl. Artif. Intell. 78, 186–194 (2019).
[Crossref]

Srinivasan, V. J.

Steidl, S.

A. Maier, S. Steidl, V. Christlein, and J. Hornegger, Medical Imaging Systems: An Introductory Guide, vol. 11111 (Springer, 2018).

Stinson, W. G.

D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, and C. A. Puliafito, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991).
[Crossref]

Subhash, H.

Subramanian, G.

S. K. Devalla, G. Subramanian, T. H. Pham, X. Wang, S. Perera, T. A. Tun, T. Aung, L. Schmetterer, A. H. Thiéry, and M. J. Girard, “A deep learning approach to denoise optical coherence tomography images of the optic nerve head,” Sci. Rep. 9(1), 14454 (2019).
[Crossref]

Sun, J.

K. M. He, X. Y. Zhang, S. Q. Ren, and J. Sun, “Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification,” in 2015 IEEE International Conference on Computer Vision (IEEE, 2015), pp. 1026–1034.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Nevada, USA, 2016), pp. 770–778.

Sutskever, I.

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems (Nevada, USA, 2012), pp. 1097–1105.

Swanson, E. A.

D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, and C. A. Puliafito, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991).
[Crossref]

Syben, C.

A. Maier, C. Syben, T. Lasser, and C. Riess, “A gentle introduction to deep learning in medical image processing,” Z. Med. Phys. 29(2), 86–101 (2019).
[Crossref]

Szegedy, C.

S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” arXiv preprint arXiv:1502.03167 (2015).

Tai, Y.

Y. Tai, J. Yang, X. M. Liu, and C. Y. Xu, “MemNet: A Persistent Memory Network for Image Restoration,” in 2017 IEEE International Conference on Computer Vision (IEEE, 2017), pp. 4549–4557.

Tan, O.

Tang, X. O.

C. Dong, C. C. Loy, K. M. He, and X. O. Tang, “Image Super-Resolution Using Deep Convolutional Networks,” IEEE Trans. Pattern Anal. Mach. Intell. 38(2), 295–307 (2016).
[Crossref]

Thiéry, A. H.

S. K. Devalla, G. Subramanian, T. H. Pham, X. Wang, S. Perera, T. A. Tun, T. Aung, L. Schmetterer, A. H. Thiéry, and M. J. Girard, “A deep learning approach to denoise optical coherence tomography images of the optic nerve head,” Sci. Rep. 9(1), 14454 (2019).
[Crossref]

Tian, Y. P.

Y. L. Zhang, Y. P. Tian, Y. Kong, B. N. Zhong, and Y. Fu, “Residual Dense Network for Image Super-Resolution,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (IEEE, 2018), pp. 2472–2481.

Tokayer, J.

Tufail, A.

C. S. Lee, A. J. Tyring, Y. Wu, S. Xiao, A. S. Rokem, N. P. DeRuyter, Q. Q. Zhang, A. Tufail, R. K. Wang, and A. Y. Lee, “Generating retinal flow maps from structural optical coherence tomography with artificial intelligence,” Sci. Rep. 9(1), 5694 (2019).
[Crossref]

Tun, T. A.

S. K. Devalla, G. Subramanian, T. H. Pham, X. Wang, S. Perera, T. A. Tun, T. Aung, L. Schmetterer, A. H. Thiéry, and M. J. Girard, “A deep learning approach to denoise optical coherence tomography images of the optic nerve head,” Sci. Rep. 9(1), 14454 (2019).
[Crossref]

Tyring, A. J.

C. S. Lee, A. J. Tyring, Y. Wu, S. Xiao, A. S. Rokem, N. P. DeRuyter, Q. Q. Zhang, A. Tufail, R. K. Wang, and A. Y. Lee, “Generating retinal flow maps from structural optical coherence tomography with artificial intelligence,” Sci. Rep. 9(1), 5694 (2019).
[Crossref]

Ukita, N.

M. Haris, G. Shakhnarovich, and N. Ukita, “Deep Back-Projection Networks For Super-Resolution,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (IEEE, 2018), pp. 1664–1673.

van der Maaten, L.

G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger, “Densely Connected Convolutional Networks,” in 30th IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2017), pp. 2261–2269.

Wang, R. K.

C. S. Lee, A. J. Tyring, Y. Wu, S. Xiao, A. S. Rokem, N. P. DeRuyter, Q. Q. Zhang, A. Tufail, R. K. Wang, and A. Y. Lee, “Generating retinal flow maps from structural optical coherence tomography with artificial intelligence,” Sci. Rep. 9(1), 5694 (2019).
[Crossref]

L. An, J. Qin, and R. K. Wang, “Ultrahigh sensitive optical microangiography for in vivo imaging of microcirculations within human skin tissue beds,” Opt. Express 18(8), 8220–8228 (2010).
[Crossref]

Wang, X.

S. K. Devalla, G. Subramanian, T. H. Pham, X. Wang, S. Perera, T. A. Tun, T. Aung, L. Schmetterer, A. H. Thiéry, and M. J. Girard, “A deep learning approach to denoise optical coherence tomography images of the optic nerve head,” Sci. Rep. 9(1), 14454 (2019).
[Crossref]

Wang, Y. M.

Wang, Z.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004).
[Crossref]

Wang, Z. Z.

X. Liu, Z. Y. Huang, Z. Z. Wang, C. Y. Wen, Z. Jiang, Z. K. Yu, J. F. Liu, G. J. Liu, X. L. Huang, A. Maier, Q. S. Ren, and Y. Y. Lu, “A deep learning based pipeline for optical coherence tomography angiography,” J. Biophotonics 12(10), 10 (2019).
[Crossref]

Warde-Farley, D.

I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative Adversarial Nets,” in Advances in Neural Information Processing Systems, Vol. 27, Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, eds. (Neural Information Processing Systems, 2014).

Weinberger, K. Q.

G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger, “Densely Connected Convolutional Networks,” in 30th IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2017), pp. 2261–2269.

Weingast, J.

Welzel, J.

E. C. Sattler, R. Kästle, and J. Welzel, “Optical coherence tomography in dermatology,” J. Biomed. Opt. 18(6), 061224 (2013).
[Crossref]

Wen, C. Y.

X. Liu, Z. Y. Huang, Z. Z. Wang, C. Y. Wen, Z. Jiang, Z. K. Yu, J. F. Liu, G. J. Liu, X. L. Huang, A. Maier, Q. S. Ren, and Y. Y. Lu, “A deep learning based pipeline for optical coherence tomography angiography,” J. Biophotonics 12(10), 10 (2019).
[Crossref]

Werner, J. S.

Wieser, W.

Wu, W. C.

Wu, Y.

C. S. Lee, A. J. Tyring, Y. Wu, S. Xiao, A. S. Rokem, N. P. DeRuyter, Q. Q. Zhang, A. Tufail, R. K. Wang, and A. Y. Lee, “Generating retinal flow maps from structural optical coherence tomography with artificial intelligence,” Sci. Rep. 9(1), 5694 (2019).
[Crossref]

Xia, Y.

Y. Y. Lu, M. Kowarschik, X. L. Huang, Y. Xia, J. H. Choi, S. Q. Chen, S. Y. Hu, Q. S. Ren, R. Fahrig, J. Hornegger, and A. Maier, “A learning-based material decomposition pipeline for multi-energy x-ray imaging,” Med. Phys. 46(2), 689–703 (2019).
[Crossref]

Xiang, Q.

Z. K. Yu, Q. Xiang, J. H. Meng, C. X. Kou, Q. S. Ren, and Y. Y. Lu, “Retinal image synthesis from multiple-landmarks input with generative adversarial networks,” Biomed. Eng. Online 18(1), 62 (2019).
[Crossref]

Xiao, S.

C. S. Lee, A. J. Tyring, Y. Wu, S. Xiao, A. S. Rokem, N. P. DeRuyter, Q. Q. Zhang, A. Tufail, R. K. Wang, and A. Y. Lee, “Generating retinal flow maps from structural optical coherence tomography with artificial intelligence,” Sci. Rep. 9(1), 5694 (2019).
[Crossref]

Xu, B.

I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative Adversarial Nets,” in Advances in Neural Information Processing Systems, Vol. 27, Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, eds. (Neural Information Processing Systems, 2014).

Xu, C. Y.

Y. Tai, J. Yang, X. M. Liu, and C. Y. Xu, “MemNet: A Persistent Memory Network for Image Restoration,” in 2017 IEEE International Conference on Computer Vision (IEEE, 2017), pp. 4549–4557.

Yang, J.

Y. Tai, J. Yang, X. M. Liu, and C. Y. Xu, “MemNet: A Persistent Memory Network for Image Restoration,” in 2017 IEEE International Conference on Computer Vision (IEEE, 2017), pp. 4549–4557.

Yang, K.

Yang, Y. B.

X. J. Mao, C. H. Shen, and Y. B. Yang, “Image Restoration Using Very Deep Convolutional Encoder-Decoder Networks with Symmetric Skip Connections,” in Advances in Neural Information Processing Systems, vol. 29, D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, eds. (Neural Information Processing Systems, 2016).

Yaseen, M. A.

Yi, X.

X. Yi and P. Babyn, “Sharpness-Aware Low-Dose CT Denoising Using Conditional Generative Adversarial Network,” J. Digit. Imaging 31(5), 655–669 (2018).
[Crossref]

You, Y.

Yu, L. F.

L. F. Yu and Z. P. Chen, “Doppler variance imaging for three-dimensional retina and choroid angiography,” J. Biomed. Opt. 15(1), 016029 (2010).
[Crossref]

Yu, Z. K.

X. Liu, Z. Y. Huang, Z. Z. Wang, C. Y. Wen, Z. Jiang, Z. K. Yu, J. F. Liu, G. J. Liu, X. L. Huang, A. Maier, Q. S. Ren, and Y. Y. Lu, “A deep learning based pipeline for optical coherence tomography angiography,” J. Biophotonics 12(10), 10 (2019).
[Crossref]

Z. K. Yu, Q. Xiang, J. H. Meng, C. X. Kou, Q. S. Ren, and Y. Y. Lu, “Retinal image synthesis from multiple-landmarks input with generative adversarial networks,” Biomed. Eng. Online 18(1), 62 (2019).
[Crossref]

Z. Jiang, Z. K. Yu, S. X. Feng, Z. Y. Huang, Y. H. Peng, J. X. Guo, Q. S. Ren, and Y. Y. Lu, “A super-resolution method-based pipeline for fundus fluorescein angiography imaging,” Biomed. Eng. Online 17(1), 125 (2018).
[Crossref]

Zawadzki, R. J.

Zhang, K.

K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising,” IEEE Trans. on Image Process. 26(7), 3142–3155 (2017).
[Crossref]

Zhang, L.

K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising,” IEEE Trans. on Image Process. 26(7), 3142–3155 (2017).
[Crossref]

Zhang, Q. Q.

C. S. Lee, A. J. Tyring, Y. Wu, S. Xiao, A. S. Rokem, N. P. DeRuyter, Q. Q. Zhang, A. Tufail, R. K. Wang, and A. Y. Lee, “Generating retinal flow maps from structural optical coherence tomography with artificial intelligence,” Sci. Rep. 9(1), 5694 (2019).
[Crossref]

Zhang, X.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Nevada, USA, 2016), pp. 770–778.

Zhang, X. Y.

K. M. He, X. Y. Zhang, S. Q. Ren, and J. Sun, “Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification,” in 2015 IEEE International Conference on Computer Vision (IEEE, 2015), pp. 1026–1034.

Zhang, Y. L.

Y. L. Zhang, Y. P. Tian, Y. Kong, B. N. Zhong, and Y. Fu, “Residual Dense Network for Image Super-Resolution,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (IEEE, 2018), pp. 2472–2481.

Zhong, B. N.

Y. L. Zhang, Y. P. Tian, Y. Kong, B. N. Zhong, and Y. Fu, “Residual Dense Network for Image Super-Resolution,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (IEEE, 2018), pp. 2472–2481.

Zhou, T. H.

P. Isola, J. Y. Zhu, T. H. Zhou, and A. A. Efros, “Image-to-Image Translation with Conditional Adversarial Networks,” in 30th IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2017), pp. 5967–5976.

Zhu, J. Y.

J. Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks,” in 2017 IEEE International Conference on Computer Vision (IEEE, 2017), pp. 2242–2251.

P. Isola, J. Y. Zhu, T. H. Zhou, and A. A. Efros, “Image-to-Image Translation with Conditional Adversarial Networks,” in 30th IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2017), pp. 5967–5976.

Zuo, W.

K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising,” IEEE Trans. on Image Process. 26(7), 3142–3155 (2017).
[Crossref]

Biomed. Eng. Online (2)

Z. Jiang, Z. K. Yu, S. X. Feng, Z. Y. Huang, Y. H. Peng, J. X. Guo, Q. S. Ren, and Y. Y. Lu, “A super-resolution method-based pipeline for fundus fluorescein angiography imaging,” Biomed. Eng. Online 17(1), 125 (2018).
[Crossref]

Z. K. Yu, Q. Xiang, J. H. Meng, C. X. Kou, Q. S. Ren, and Y. Y. Lu, “Retinal image synthesis from multiple-landmarks input with generative adversarial networks,” Biomed. Eng. Online 18(1), 62 (2019).
[Crossref]

Biomed. Opt. Express (5)

Diabetes (1)

H.-P. Hammes, Y. Feng, F. Pfister, and M. Brownlee, “Diabetic retinopathy: targeting vasoregression,” Diabetes 60(1), 9–16 (2011).
[Crossref]

Eng. Appl. Artif. Intell. (1)

A. Ben-Cohen, E. Klang, S. P. Raskin, S. Soffer, S. Ben-Haim, E. Konen, M. M. Amitai, and H. Greenspan, “Cross-modality synthesis from CT to PET using FCN and GAN networks for improved automated lesion detection,” Eng. Appl. Artif. Intell. 78, 186–194 (2019).
[Crossref]

IEEE Trans. on Image Process. (2)

K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising,” IEEE Trans. on Image Process. 26(7), 3142–3155 (2017).
[Crossref]

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004).
[Crossref]

IEEE Trans. Pattern Anal. Mach. Intell. (2)

V. Badrinarayanan, A. Kendall, and R. Cipolla, “SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation,” IEEE Trans. Pattern Anal. Mach. Intell. 39(12), 2481–2495 (2017).
[Crossref]

C. Dong, C. C. Loy, K. M. He, and X. O. Tang, “Image Super-Resolution Using Deep Convolutional Networks,” IEEE Trans. Pattern Anal. Mach. Intell. 38(2), 295–307 (2016).
[Crossref]

J. Biomed. Opt. (2)

E. C. Sattler, R. Kästle, and J. Welzel, “Optical coherence tomography in dermatology,” J. Biomed. Opt. 18(6), 061224 (2013).
[Crossref]

L. F. Yu and Z. P. Chen, “Doppler variance imaging for three-dimensional retina and choroid angiography,” J. Biomed. Opt. 15(1), 016029 (2010).
[Crossref]

J. Biophotonics (1)

X. Liu, Z. Y. Huang, Z. Z. Wang, C. Y. Wen, Z. Jiang, Z. K. Yu, J. F. Liu, G. J. Liu, X. L. Huang, A. Maier, Q. S. Ren, and Y. Y. Lu, “A deep learning based pipeline for optical coherence tomography angiography,” J. Biophotonics 12(10), 10 (2019).
[Crossref]

J. Digit. Imaging (1)

X. Yi and P. Babyn, “Sharpness-Aware Low-Dose CT Denoising Using Conditional Generative Adversarial Network,” J. Digit. Imaging 31(5), 655–669 (2018).
[Crossref]

JACC: Cardiovasc. Interv. (1)

H. G. Bezerra, M. A. Costa, G. Guagliumi, A. M. Rollins, and D. I. Simon, “Intracoronary optical coherence tomography: a comprehensive review: clinical and research applications,” JACC: Cardiovasc. Interv. 2, 1035–1046 (2009).
[Crossref]

Med. Phys. (1)

Y. Y. Lu, M. Kowarschik, X. L. Huang, Y. Xia, J. H. Choi, S. Q. Chen, S. Y. Hu, Q. S. Ren, R. Fahrig, J. Hornegger, and A. Maier, “A learning-based material decomposition pipeline for multi-energy x-ray imaging,” Med. Phys. 46(2), 689–703 (2019).
[Crossref]

Opt. Express (3)

Opt. Lett. (2)

Sci. Rep. (2)

S. K. Devalla, G. Subramanian, T. H. Pham, X. Wang, S. Perera, T. A. Tun, T. Aung, L. Schmetterer, A. H. Thiéry, and M. J. Girard, “A deep learning approach to denoise optical coherence tomography images of the optic nerve head,” Sci. Rep. 9(1), 14454 (2019).
[Crossref]

C. S. Lee, A. J. Tyring, Y. Wu, S. Xiao, A. S. Rokem, N. P. DeRuyter, Q. Q. Zhang, A. Tufail, R. K. Wang, and A. Y. Lee, “Generating retinal flow maps from structural optical coherence tomography with artificial intelligence,” Sci. Rep. 9(1), 5694 (2019).
[Crossref]

Science (1)

D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, and C. A. Puliafito, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991).
[Crossref]

Z. Med. Phys. (1)

A. Maier, C. Syben, T. Lasser, and C. Riess, “A gentle introduction to deep learning in medical image processing,” Z. Med. Phys. 29(2), 86–101 (2019).
[Crossref]

Other (18)

Y. L. Zhang, Y. P. Tian, Y. Kong, B. N. Zhong, and Y. Fu, “Residual Dense Network for Image Super-Resolution,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (IEEE, 2018), pp. 2472–2481.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Nevada, USA, 2016), pp. 770–778.

O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention (Springer, 2015), pp. 234–241.

A. Maier, S. Steidl, V. Christlein, and J. Hornegger, Medical Imaging Systems: An Introductory Guide, vol. 11111 (Springer, 2018).

P. Isola, J. Y. Zhu, T. H. Zhou, and A. A. Efros, “Image-to-Image Translation with Conditional Adversarial Networks,” in 30th IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2017), pp. 5967–5976.

J. Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks,” in 2017 IEEE International Conference on Computer Vision (IEEE, 2017), pp. 2242–2251.

D. G. Lowe, “Object recognition from local scale-invariant features,” in Proc. Seventh IEEE Int. Conf. on Computer Vision, vol. 2 (IEEE, 1999), pp. 1150–1157.

J. Kim, J. K. Lee, and K. M. Lee, “Accurate Image Super-Resolution Using Very Deep Convolutional Networks,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2016), pp. 1646–1654.

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems (Nevada, USA, 2012), pp. 1097–1105.

S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” arXiv preprint arXiv:1502.03167 (2015).

K. M. He, X. Y. Zhang, S. Q. Ren, and J. Sun, “Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification,” in 2015 IEEE International Conference on Computer Vision (IEEE, 2015), pp. 1026–1034.

D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980 (2014).

X. J. Mao, C. H. Shen, and Y. B. Yang, “Image Restoration Using Very Deep Convolutional Encoder-Decoder Networks with Symmetric Skip Connections,” in Advances in Neural Information Processing Systems, vol. 29, D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, eds. (Neural Information Processing Systems, 2016).

G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger, “Densely Connected Convolutional Networks,” in 30th IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2017), pp. 2261–2269.

M. Haris, G. Shakhnarovich, and N. Ukita, “Deep Back-Projection Networks for Super-Resolution,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (IEEE, 2018), pp. 1664–1673.

Y. Tai, J. Yang, X. M. Liu, and C. Y. Xu, “MemNet: A Persistent Memory Network for Image Restoration,” in 2017 IEEE International Conference on Computer Vision (IEEE, 2017), pp. 4549–4557.

I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative Adversarial Nets,” in Advances in Neural Information Processing Systems, vol. 27, Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, eds. (Neural Information Processing Systems, 2014).

A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” arXiv preprint arXiv:1511.06434 (2015).



Figures (10)

Fig. 1. A schematic diagram of the deep learning-based optical coherence tomography angiography pipeline.
Fig. 2. The structure of the DnCNN for OCTA reconstruction. (Conv: convolutional layer; R: ReLU layer; B: BN layer.)
Fig. 3. The structure of the U-Net for OCTA reconstruction. (Conv: convolutional layer; LR: Leaky ReLU layer; B: BN layer; Max-Pool: max-pooling layer; Deconv: deconvolution layer.)
Fig. 4. A schematic diagram of the residual connection and the dense connection. (Conv: convolutional layer; R: ReLU layer.)
Fig. 5. The structure of the RDN for OCTA reconstruction. (Conv: convolutional layer; R: ReLU layer.)
Fig. 6. The structure of the Pix2Pix GAN for OCTA reconstruction. (Conv: convolutional layer; LR: Leaky ReLU layer; I: instance normalization layer.)
Fig. 7. Reconstructed cross-sectional angiograms via different algorithms. For visual comparison, all results shown are the same region-of-interest (ROI) ($337 \times 371$) extracted from the original cross-sectional angiogram. (a) Ground-truth image obtained from the SSAPGA algorithm with 48 consecutive B-scans as input; (b)-(f) angiograms reconstructed by the corresponding OCTA algorithms with 4 consecutive B-scans as input.
Fig. 8. Reconstructed MIP enface angiograms via different algorithms. For visual comparison, an ROI is labeled by a rectangular box, and a 2.5-fold magnified image of the ROI is provided to the right of each original image. (a) Ground-truth image obtained via the SSAPGA algorithm with 48 consecutive B-scans as input; (b)-(f) angiograms reconstructed by the corresponding OCTA algorithms with 4 consecutive B-scans as input.
Fig. 9. Reconstructed MIP enface angiograms under AWGN at different noise levels. (a) Ground-truth image obtained via the SSAPGA algorithm with 48 consecutive B-scans as input; (b) original reconstructed angiogram of the RDN without AWGN; (c)-(f) reconstructed angiograms of the RDN under different AWGN levels.
Fig. 10. Reconstructed MIP enface angiograms via different algorithms. For visual comparison, two ROIs are labeled by rectangular boxes, and the corresponding 2.5-fold magnified images of the ROIs are provided on both sides of the figure. (a) Ground-truth image obtained from the SSAPGA algorithm with 48 consecutive B-scans as input; (b) angiogram reconstructed by the basic U-Net model; (c) angiogram reconstructed by the U-Net model with phase information fusion.

Tables (5)

Table 1. The average PSNR(dB)/SSIM/R results of the cross-sectional angiograms with different algorithms and protocols (the best results are highlighted).

Table 2. The PSNR(dB)/SSIM/R results of the MIP enface angiogram with different algorithms and protocols (the best results are highlighted).

Table 3. The PSNR(dB)/SSIM/R results of the MIP enface angiograms reconstructed by the 4-input RDN under different noise levels.

Table 4. The PSNR(dB)/SSIM/R results of the MIP enface angiogram with different improvement schemes (the best results are highlighted).

Table 5. The PSNR(dB)/SSIM/R results of the ROIs from the MIP enface angiogram with merged phase information (the increase relative to the basic U-Net model is also provided for each metric).

Equations (12)

Equations on this page are rendered with MathJax.

$$S(x_l, z_l, t) = A(x_l, z_l, t)\, e^{i \Phi(x_l, z_l, t)},$$

$$\mathrm{SSAPG}(x_l, z_l, t) = \frac{1}{R_l - 1} \frac{1}{M} \left| \sum_{m=1}^{M} \sum_{r=1}^{R_l - 1} \frac{2 A_m(x_l, z_l, t + (r-1)\Delta t)\, A_m(x_l, z_l, t + r \Delta t)\, e^{j \rho\, \mathrm{PG}_m(x_l, z_l, t + (r-1)\Delta t)}}{\left[ A_m(x_l, z_l, t + (r-1)\Delta t) \right]^2 + \left[ A_m(x_l, z_l, t + r \Delta t) \right]^2} \right|$$

$$\mathrm{PG}(x_l, z_l, t + (r-1)\Delta t) = \frac{d \big( \Delta \Phi(x_l, z_l, t + (r-1)\Delta t) \big)}{d z_l}$$

$$PSNR = 10 \lg \left( \frac{MAX_y^2}{MSE} \right),$$

$$MSE = \frac{\sum_{H, W} (z - y)^2}{H \times W}.$$

$$L(\Theta) = \frac{1}{P} \sum_{p=1}^{P} \left\| F(x_p; \Theta) - y_p \right\|_2^2.$$

$$v_l = T(v_{l-1}) + v_{l-3}.$$

$$v_l = T\big( [v_0, v_1, \ldots, v_{l-1}] \big).$$

$$L_{P2P}(G, D) = \mathbb{E}_{x, y}[\log D(x, y)] + \mathbb{E}_{x}[\log(1 - D(x, G(x)))] + \lambda\, \mathbb{E}_{x, y}\big[ \| y - G(x) \|_2 \big],$$

$$SSIM(y, z) = \frac{(2 \mu_y \mu_z + C_1)(2 \sigma_{zy} + C_2)}{(\mu_y^2 + \mu_z^2 + C_1)(\sigma_y^2 + \sigma_z^2 + C_2)},$$

$$R(y, z) = \frac{\sigma_{yz}}{\sigma_z \sigma_y},$$

$$L(\Theta) = \frac{1}{P} \sum_{p=1}^{P} \left| F(x_p; \Theta) - y_p \right|.$$
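The three evaluation metrics above can be computed directly from their definitions. The snippet below is a minimal NumPy sketch, not the authors' implementation: the function names are illustrative, and the SSIM stabilization constants C1 and C2 are the commonly used defaults, assumed here because the paper does not state them. Practical SSIM implementations also typically use local sliding windows rather than the single global window shown.

```python
import numpy as np

def psnr(y, z, max_y=255.0):
    # PSNR = 10 * lg(MAX_y^2 / MSE), with MSE averaged over the H x W image.
    mse = np.mean((np.asarray(z, float) - np.asarray(y, float)) ** 2)
    return 10.0 * np.log10(max_y ** 2 / mse)

def ssim(y, z, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    # Global (single-window) SSIM per the formula above; c1 and c2 are the
    # usual stabilization constants for 8-bit images (an assumption here).
    y = np.asarray(y, float)
    z = np.asarray(z, float)
    mu_y, mu_z = y.mean(), z.mean()
    var_y, var_z = y.var(), z.var()
    cov_zy = np.mean((z - mu_z) * (y - mu_y))
    return ((2 * mu_y * mu_z + c1) * (2 * cov_zy + c2)) / (
        (mu_y ** 2 + mu_z ** 2 + c1) * (var_y + var_z + c2)
    )

def pearson_r(y, z):
    # R = sigma_yz / (sigma_y * sigma_z): the Pearson correlation coefficient.
    y = np.asarray(y, float).ravel()
    z = np.asarray(z, float).ravel()
    return np.mean((y - y.mean()) * (z - z.mean())) / (y.std() * z.std())
```

As a sanity check, SSIM and R both evaluate to 1 for identical images, and a uniform offset of one gray level against an 8-bit reference gives PSNR = 20 lg(255) ≈ 48.13 dB.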
