Abstract

The adaptive optics (AO) technique has been integrated into confocal scanning laser ophthalmoscopy (SLO) to obtain near diffraction-limited, high-resolution retinal images. However, the quality of AOSLO images is degraded by various sources of noise and by fixational eye movements. To improve image quality and remove distortions in AOSLO images, multi-frame averaging is usually employed, which relies on accurate image registration. The goal of image registration is to find the optimal transformation that best aligns the input image sequence. However, current methods for AOSLO image registration have obvious shortcomings owing to the limitations of their transformation models. In this paper, we first established a retinal motion model using Taylor series and polynomial expansion. We then derived the polynomial transformation model and provided its closed-form solution for consecutive frame-to-frame registration of AOSLO retinal images, which allows more general retinal motions, such as scale changes, shearing, and rotation, to be considered. The experimental results demonstrated that higher-order polynomial transformation models achieve more accurate registration, and that the fourth-order polynomial transformation model is preferred for efficient registration with satisfactory computational complexity. In addition, the AKAZE feature detection method was adopted and improved to achieve more accurate image registration, and a new strategy was validated to exclude unsuccessfully registered regions and thereby improve the robustness of image registration.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement






Figures (9)

Fig. 1.
Fig. 1. Retinal images acquired in a healthy 30-year-old male. From left to right: (a) the reference frame, (b) the distorted frame, (c) and (d) the images registered by the patch-based method and by the frame-to-frame method based on a second-order polynomial transformation model, respectively. Zoomed-in regions in blue boxes show distorted areas that cannot be registered by these methods, while zoomed-in regions in red boxes show distortions newly introduced by the poor registration. The scale bar is 50 µm.
Fig. 2.
Fig. 2. The NCC and SSIM values of one dataset. From top to bottom: (a-c) the NCC results, (d-f) the SSIM results. The sub-lines in the blue and pink boxes in the first column are enlarged in the middle and last columns, respectively.
Fig. 3.
Fig. 3. The averaged standard deviations of the NCC and SSIM metrics over all datasets for Tables 1 and 2.
Fig. 4.
Fig. 4. The averaged registration performance obtained with different numbers of point pairs and different transformation models. (a) NCC results and (b) SSIM results.
Fig. 5.
Fig. 5. An unsuccessful registration case due to the non-uniform distribution of matched point pairs. The retinal images were acquired from a 38-year-old healthy female. (a) The reference frame, (b) the query frame, (c) the matched point pairs obtained with the original AKAZE method, (d) the image registered by the patch-based method, (e) the image registered by the proposed method using the matched point pairs in (c), (f) the matched point pairs obtained with the improved AKAZE method, and (g) the image registered by the proposed method using the matched point pairs in (f). The red dot denotes a tracked cone, and the green box is added to evaluate the relative position of the tracked cone. The scale bar is 50 µm.
Fig. 6.
Fig. 6. An example validating the effectiveness of the proposed maximum valid area method. The retinal images were acquired from a 28-year-old male. (a) The reference image, (b) the query frame, (c) the image registered by the patch-based method, (d) the image registered by the proposed method, (e) the matched point pairs obtained with the improved AKAZE method, (f) the distribution of matched keypoints in the reference image, (g) a diagram of the maximum valid region, (h) the difference between the calculated maximum valid region (blue box) and the real valid region (red box, selected by a human rater), and (i) the segmented valid image of (d). The red dot denotes a tracked cone, and the green box is added to evaluate the relative position of the tracked cone. The scale bar is 50 µm.
Fig. 7.
Fig. 7. Image registration with micro-saccade motions. (a) The reference image; (b) the distorted image; (c-d) the images registered by the patch-based and the proposed methods, respectively; (e) the comparison of the maximum valid region (blue box) and the real valid region (red box); (f) the area segmented by the MVA strategy; (g-h) the averaged residual values of lines in the horizontal and vertical directions, respectively. The scale bar is 50 µm.
Fig. 8.
Fig. 8. The lines (from 250 to 300) extracted from the images in Fig. 7. From top to bottom: the reference image, the query image, the image registered by the patch-based method, and the image registered by the proposed method, respectively.
Fig. 9.
Fig. 9. Comparison of averaged images obtained with different methods. (a) The reference image, (b) the averaged image obtained by the patch-based method, (c) the averaged image obtained by the proposed method, and (d) the horizontal intersecting lines at the same position in (b-c). The scale bar is 50 µm.

Tables (5)

Table 1. The Mean Values (Mean) and Standard Deviations (Stds) of Normalized Cross-correlation (NCC) Values of Ten Datasets. The Largest Mean and The Smallest Std in Each Column Are Highlighted in Bold.

Table 2. The Mean Values (Mean) and Standard Deviations (Stds) of Structural Similarity Index (SSIM) Values of Ten Datasets. The Largest Mean and The Smallest Std in Each Column Are Highlighted in Bold.

Table 3. Guidelines to Choose the Most Suitable Transformation Model According to the Number of Detected Point Pairs.
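Table 3's guidance follows from the model's degrees of freedom: an nth-order two-dimensional polynomial model has DoF = (n + 2)(n + 1)/2 coefficients per coordinate, so the number of matched point pairs bounds the usable order. A minimal sketch of such a rule of thumb (the `margin` safety factor and `max_order` cap are illustrative assumptions, not the paper's actual thresholds):

```python
def dof(n):
    """Number of coefficients per coordinate of an nth-order 2-D polynomial model."""
    return (n + 2) * (n + 1) // 2

def highest_feasible_order(num_pairs, max_order=4, margin=2.0):
    """Pick the highest order whose DoF, padded by a safety margin,
    is still covered by the available matched point pairs."""
    best = 0
    for n in range(1, max_order + 1):
        if dof(n) * margin <= num_pairs:
            best = n
    return best
```

For example, a fourth-order model has 15 coefficients per coordinate, so with a margin of 2 it would require at least 30 well-spread point pairs.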

Table 4. The Image Contrasts of Averaged Images of Different Datasets. The Largest One in Each Column Is Highlighted in Bold.

Table 5. The Mean (Mean) and Standard Deviation (Std) Values of the Polynomial Transformation Parameters. The Largest Value in Each Column Is Highlighted in Bold.

Equations (18)

$$p_x(t) = x_0 + \dot{p}_x(0)\,t + \frac{1}{2}\ddot{p}_x(0)\,t^2,$$
$$p_y(t) = y_0 + \dot{p}_y(0)\,t + \frac{1}{2}\ddot{p}_y(0)\,t^2,$$
$$t_p = \frac{y_p}{V_{S,y}} + \frac{x_p}{V_{S,x}},$$
$$p_x(t) = x_0 + \dot{p}_x(0)\,t + \frac{1}{2}\ddot{p}_x(0)\,t^2 + \cdots + \frac{1}{n!}\,p_x^{(n)}(0)\,t^n,$$
$$p_y(t) = y_0 + \dot{p}_y(0)\,t + \frac{1}{2}\ddot{p}_y(0)\,t^2 + \cdots + \frac{1}{n!}\,p_y^{(n)}(0)\,t^n.$$
$$x_p = x_0 + \sum_{j=1}^{n} K_j(x_p, y_p),$$
$$y_p = y_0 + \sum_{j=1}^{n} M_j(x_p, y_p),$$
$$K_j(x, y) = \sum_{\substack{u \ge 0,\, v \ge 0 \\ u + v = j}} k_{u,v}\, x^u y^v,$$
$$M_j(x, y) = \sum_{\substack{u \ge 0,\, v \ge 0 \\ u + v = j}} m_{u,v}\, x^u y^v.$$
$$x_r = \sum_{j=1}^{n} K_j(x_q, y_q) + k_{0,0},$$
$$y_r = \sum_{j=1}^{n} M_j(x_q, y_q) + m_{0,0},$$
$$\begin{pmatrix} x_r \\ y_r \end{pmatrix} = \begin{pmatrix} k_{0,0} & k_{1,0} & k_{0,1} & k_{1,1} & \cdots & k_{n,0} & \cdots & k_{0,n} \\ m_{0,0} & m_{1,0} & m_{0,1} & m_{1,1} & \cdots & m_{n,0} & \cdots & m_{0,n} \end{pmatrix} \begin{pmatrix} 1 & x_q & y_q & x_q y_q & \cdots & x_q^n & \cdots & y_q^n \end{pmatrix}^{T},$$
$$R = \begin{bmatrix} x_{r,1} & x_{r,2} & \cdots & x_{r,w} \\ y_{r,1} & y_{r,2} & \cdots & y_{r,w} \end{bmatrix},$$
$$Q = \begin{bmatrix} 1 & x_{q,1} & y_{q,1} & x_{q,1} y_{q,1} & \cdots & x_{q,1}^n & \cdots & y_{q,1}^n \\ 1 & x_{q,2} & y_{q,2} & x_{q,2} y_{q,2} & \cdots & x_{q,2}^n & \cdots & y_{q,2}^n \\ \vdots & \vdots & \vdots & \vdots & & \vdots & & \vdots \\ 1 & x_{q,w} & y_{q,w} & x_{q,w} y_{q,w} & \cdots & x_{q,w}^n & \cdots & y_{q,w}^n \end{bmatrix}^{T},$$
$$P = R\, Q^{T} \left( Q Q^{T} \right)^{-1},$$
$$\mathrm{DoF} = \frac{(n+2)(n+1)}{2},$$
$$B_i = 10 \cdot 2^{\sigma_i},$$
$$B_i = 10 \cdot 2^{c \sigma_i}.$$
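As an illustration of the closed-form least-squares solution P = R Qᵀ(Q Qᵀ)⁻¹ above, an nth-order polynomial transformation can be estimated from matched point pairs in a few lines of NumPy. This is a minimal sketch, not the authors' implementation; the monomial ordering and function names are my own assumptions.

```python
import numpy as np

def monomials(x, y, n):
    """Stack the monomials x^u * y^v with u + v <= n (constant term first)
    for each point; any fixed ordering works as long as it is consistent."""
    cols = [np.ones_like(x)]
    for j in range(1, n + 1):
        for u in range(j, -1, -1):
            cols.append(x**u * y**(j - u))
    return np.stack(cols)  # shape: (DoF, num_points)

def fit_polynomial_transform(q_pts, r_pts, n):
    """Closed-form fit of the 2 x DoF coefficient matrix P mapping query
    coordinates to reference coordinates: P = R Q^T (Q Q^T)^{-1}."""
    Q = monomials(q_pts[:, 0], q_pts[:, 1], n)  # DoF x w
    R = r_pts.T                                 # 2 x w
    return R @ Q.T @ np.linalg.inv(Q @ Q.T)

def apply_transform(P, pts, n):
    """Map query points through the fitted polynomial model."""
    return (P @ monomials(pts[:, 0], pts[:, 1], n)).T
```

With n = 1 the model reduces to an affine transform; for Q Qᵀ to be well conditioned, at least DoF well-spread matched point pairs are needed, which is why a non-uniform keypoint distribution can make the registration fail.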