Abstract

We apply a novel computational technique, the map-seeking circuit algorithm, to estimate the motion of the retina of the eye from a sequence of frames of scanning laser ophthalmoscope data. We also present a scheme to dewarp and co-add frames of retinal image data, given the estimated motion. The motion estimation and dewarping techniques are applied to data collected with an adaptive optics scanning laser ophthalmoscope.

© 2006 Optical Society of America

References

  1. A. Roorda, F. Romero-Borja, W. J. Donnelly, T. J. Hebert, H. Queener, and M. C. W. Campbell, "Adaptive optics scanning laser ophthalmoscopy," Opt. Express 10, 405-412 (2002).
    [PubMed]
  2. J. Liang, D. R. Williams, and D. Miller, "Supernormal vision and high-resolution retinal imaging through adaptive optics," J. Opt. Soc. Am. A 14, 2884-2892 (1997).
    [CrossRef]
  3. R. H. Webb, G. W. Hughes, and F. C. Delori, "Confocal scanning laser ophthalmoscope," Appl. Opt. 26, 1492-1499 (1987).
    [CrossRef] [PubMed]
  4. J. B. Mulligan, "Recovery of motion parameters from distortions in scanned images," in Proceedings of the NASA Image Registration Workshop (IRW97) (NASA Goddard Space Flight Center, MD, 1997).
  5. S. B. Stevenson and A. Roorda, "Correcting for miniature eye movements in high resolution scanning laser ophthalmoscopy," in Ophthalmic Technologies XV, F. Manns, P. Soderberg, and A. Ho, eds., Proc. SPIE 5688A, 145-151 (2005).
    [CrossRef]
  6. D. P. Wornson, G. W. Hughes, et al., "Fundus tracking with the scanning laser ophthalmoscope," Appl. Opt. 26, 1500-1504 (1987).
    [CrossRef] [PubMed]
  7. N. J. O'Connor, D. U. Bartsch, et al., "Fluorescent infrared scanning-laser ophthalmoscope for three-dimensional visualization: automatic random-eye-motion correction and deconvolution," Appl. Opt. 37, 2021-2033 (1998).
    [CrossRef]
  8. E. Decastro, G. Cristini, et al., "Compensation of random eye motion in television ophthalmoscopy: preliminary results," IEEE Trans. Med. Imaging 6, 74-81 (1987).
    [CrossRef]
  9. A. V. Cideciyan, "Registration of ocular fundus images: an algorithm using cross-correlation of triple invariant image descriptors," IEEE Eng. Med. Biol. Mag. 14, 52-58 (1995).
    [CrossRef]
  10. J. Modersitzki, Numerical Methods for Image Registration (Oxford University Press, 2004).
  11. D. W. Arathorn, Map-Seeking Circuits in Visual Cognition: A Computational Mechanism for Biological and Machine Vision (Stanford University Press, 2002).
  12. D. W. Arathorn, "Computation in higher visual cortices: map-seeking circuit theory and application to machine vision," in Proceedings of the IEEE Applied Imagery Pattern Recognition Workshop (2004), pp. 73-78.
  13. D. W. Arathorn, "From wolves hunting elk to Rubik's cubes: are the cortices compositional/decompositional engines?" in Proceedings of the AAAI Symposium on Compositional Connectionism (2004), pp. 1-5.
  14. D. W. Arathorn, "Memory-driven visual attention: an emergent behavior of map-seeking circuits," in Neurobiology of Attention, L. Itti, G. Rees, and J. Tsotsos, eds. (Academic Press/Elsevier, 2005).
  15. D. W. Arathorn, "A cortically plausible inverse problem solving method applied to recognizing static and kinematic 3-D objects," in Proceedings of the Neural Information Processing Systems (NIPS) Workshop (2005).
  16. D. W. Arathorn and T. Gedeon, "Convergence in map finding circuits," preprint (2004).
  17. S. A. Harker, T. Gedeon, and C. R. Vogel, "A multilinear optimization problem associated with correspondence maximization," preprint (2005).
  18. http://www.math.montana.edu/~vogel/Vision/graphics/
  19. J. A. Martin and A. Roorda, "Direct and non-invasive assessment of parafoveal capillary leukocyte velocity," Ophthalmology (in press).
  20. T. N. Cornsweet and H. D. Crane, "Accurate two-dimensional eye tracker using first and fourth Purkinje images," J. Opt. Soc. Am. 63, 921-928 (1973).
    [CrossRef] [PubMed]

Supplementary Material (2)

» Media 1: AVI (3011 KB)     
» Media 2: AVI (3424 KB)     

Figures (5)

Fig. 1.

Illustration of transformation effects. Under the transformation T, straight lines in the rectangular grid on the left map to curved lines on the right. Under the inverse transformation T⁻¹, equispaced grid points on the right (blue dots) map back to non-equispaced points on the left (red dots).
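As a concrete illustration of this inverse-mapping idea, the sketch below resamples a raw frame onto a regular grid by evaluating it at the points x' = T⁻¹x with bilinear interpolation. This is only a minimal sketch, not the authors' implementation: the per-row displacements row_dx, row_dy are hypothetical motion estimates, and the sign convention for the shifts depends on how the motion X(t) is defined.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def dewarp_frame(raw, row_dx, row_dy):
        # raw: 2-D array, one raster frame; row_dx, row_dy: per-row displacement
        # estimates (in pixels) accumulated while each raster row was scanned.
        n_rows, n_cols = raw.shape
        rows, cols = np.meshgrid(np.arange(n_rows), np.arange(n_cols), indexing="ij")
        # For each point of the regular output grid, look up where T^{-1} sends it
        # in the warped raw frame (here: subtract the per-row displacement).
        src_rows = rows - row_dy[:, None]
        src_cols = cols - row_dx[:, None]
        # Bilinear interpolation at the generally non-equispaced source points.
        return map_coordinates(raw, [src_rows, src_cols], order=1, mode="nearest")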

Fig. 2.

Sample frame from the raw video clip SLD-AR.avi. This clip consists of 24 image frames and the file size is 3.1 MB. The image size is 350 × 350 pixels, or 1.02 × 1.02 degrees, or 300 × 300 microns. The fovea is located 400 microns up and to the left of the frame. [Media 1]

Fig. 3.

Horizontal and vertical motion estimates obtained from AOSLO data. One pixel corresponds to 0.17 minutes of arc, or 0.88 microns of planar distance across the retina. The 0.8-second duration of the motion record corresponds to 24 frames of AOSLO data.
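These conversion factors are consistent with the frame geometry quoted in Fig. 2; as a quick arithmetic check:

    \( \frac{1.02 \times 60 \ \text{arcmin}}{350 \ \text{pixels}} \approx 0.17 \ \text{arcmin/pixel}, \qquad \frac{24 \ \text{frames}}{0.8 \ \text{s}} = 30 \ \text{frames/s}. \)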

Fig. 4.

Sample frame from dewarped video clip SLD-AR-dewarp.avi. This clip consists of 24 image frames and the file size is 3.5 MB. The image statistics are the same as in Fig. 2. [Media 2]

Fig. 5.

Raw image (top) and co-added image (bottom) obtained from AOSLO data. Image statistics are the same as in Fig. 2. Note the honeycomb structure, known as the cone mosaic, in the co-added image.
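A minimal sketch of the co-adding step, assuming each dewarped frame has an estimated whole-frame translation (dy, dx) relative to a chosen reference frame; the function names and the use of pure translations are illustrative assumptions, not the authors' implementation.

    import numpy as np
    from scipy.ndimage import shift as translate

    def coadd(dewarped_frames, offsets):
        # dewarped_frames: list of 2-D arrays already dewarped within each frame.
        # offsets: list of (dy, dx) motion estimates of each frame relative to
        # the reference frame, in pixels.
        acc = np.zeros_like(dewarped_frames[0], dtype=float)
        for frame, (dy, dx) in zip(dewarped_frames, offsets):
            # Undo the estimated frame-to-frame motion, then accumulate.
            acc += translate(frame, (-dy, -dx), order=1, mode="nearest")
        # Averaging N registered frames suppresses noise roughly by sqrt(N),
        # which is what makes the cone mosaic visible in the co-added image.
        return acc / len(dewarped_frames)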

Equations (13)

Equations on this page are rendered with MathJax.

\( d(t) = E(r(t) + X(t)). \)
\( d_i = E(r(t_i) + X(t_i)) + \eta_i, \)
\( X(t) = X(t_0) + (t - t_0)\,v, \)
\( E(x(t_i + \tau_f)) = E(x(t_i) + \tau_f v) + \mathrm{noise}. \)
\( \mathrm{corr}(E, E')_{k,\ell} = \sum_i \sum_j E(i+k,\, j+\ell)\, E'(i, j). \)
\( T_{k,\ell}\, E(i, j) = E(i+k,\, j+\ell). \)
\( T_{k,\ell} = T^{(2)}_{\ell}\, T^{(1)}_{k}. \)
\( \langle E, E' \rangle = \sum_i \sum_j E(i, j)\, E'(i, j), \)
\( \mathrm{corr}(k, \ell) = \langle T^{(2)}_{\ell} T^{(1)}_{k} E,\, E' \rangle. \)
\( \mathrm{corr}(g^{(1)}, g^{(2)}) = \Big\langle \Big( \sum_{\ell} g^{(2)}_{\ell} T^{(2)}_{\ell} \Big) \Big( \sum_{k} g^{(1)}_{k} T^{(1)}_{k} \Big) E,\, E' \Big\rangle. \)
\( E'(x) = E(x'), \quad \text{where } x' = T^{-1} x. \)
\( \frac{1}{T} \int_0^T X_{\mathrm{true}}(t)\, dt, \)
\( \frac{1}{N} \sum_{n=1}^{N} X(t + n\tau_s) - X_{\mathrm{bias}}(t) \)
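To make the notation above concrete, the following sketch (illustrative only, and using cyclic shifts via np.roll rather than the boundary handling of the paper) computes the composed-translation correlation corr(k, ℓ) = ⟨T_ℓ^(2) T_k^(1) E, E′⟩ and the superposition score corr(g^(1), g^(2)) that a map-seeking circuit evaluates while it prunes the translation weights g^(1), g^(2).

    import numpy as np

    def T1(E, k):
        # Horizontal translation T^(1)_k (cyclic here, for simplicity).
        return np.roll(E, k, axis=1)

    def T2(E, l):
        # Vertical translation T^(2)_l.
        return np.roll(E, l, axis=0)

    def corr_surface(E, E_prime, shifts):
        # Exhaustive corr(k, l) = <T2_l T1_k E, E_prime> over all candidate shifts.
        return np.array([[np.sum(T2(T1(E, k), l) * E_prime) for k in shifts]
                         for l in shifts])

    def superposition_corr(E, E_prime, g1, g2, shifts):
        # corr(g1, g2) = <(sum_l g2[l] T2_l)(sum_k g1[k] T1_k) E, E_prime>:
        # a single inner product against two weighted superpositions of maps,
        # evaluated instead of filling in the entire corr(k, l) surface.
        inner = sum(w * T1(E, k) for w, k in zip(g1, shifts))
        outer = sum(w * T2(inner, l) for w, l in zip(g2, shifts))
        return np.sum(outer * E_prime)

With g^(1) and g^(2) set to indicator vectors, superposition_corr reduces to a single entry of corr_surface; the map-seeking iteration starts from uniform weights and drives them toward such indicators.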
