
Retinal motion estimation in adaptive optics scanning laser ophthalmoscopy

Open Access

Abstract

We apply a novel computational technique known as the map-seeking circuit algorithm to estimate the motion of the retina of the eye from a sequence of frames of data from a scanning laser ophthalmoscope. We also present a scheme to dewarp and co-add frames of retinal image data, given the estimated motion. The motion estimation and dewarping techniques are applied to data collected with an adaptive optics scanning laser ophthalmoscope (AOSLO).

©2006 Optical Society of America
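As a rough illustration of the processing the abstract describes (a minimal sketch only, not the authors' map-seeking circuit implementation), the NumPy code below estimates a whole-frame translation between each frame and a reference by FFT cross-correlation, registers the frames, and averages the stack to co-add them. The function names are invented for this sketch, and a single shift per frame is a simplification: the dewarping step in the paper also corrects motion that occurs within a frame during the raster scan.

# Minimal sketch: whole-frame registration and co-adding of SLO frames.
# (Illustrative only; the paper's method uses the map-seeking circuit and
# corrects within-frame warping, which this sketch ignores.)
import numpy as np

def estimate_shift(frame, reference):
    """Integer-pixel (row, col) shift that best aligns `frame` to `reference`."""
    # Circular cross-correlation via FFT; the peak location is the shift.
    xcorr = np.fft.ifft2(np.fft.fft2(reference) * np.conj(np.fft.fft2(frame))).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # Map peaks in the upper half of the range to negative shifts.
    return tuple(p - n if p > n // 2 else p for p, n in zip(peak, xcorr.shape))

def register_and_coadd(frames):
    """Shift every frame onto the first frame and average the stack."""
    registered = [frames[0].astype(float)]
    for frame in frames[1:]:
        dy, dx = estimate_shift(frame, frames[0])
        registered.append(np.roll(frame.astype(float), (dy, dx), axis=(0, 1)))
    return np.mean(registered, axis=0)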


Supplementary Material (2)

Media 1: AVI (3011 KB)     
Media 2: AVI (3424 KB)     



Figures (5)

Fig. 1. Illustration of transformation effects. Under the transformation T, straight lines in the rectangular grid on the left map to curved lines on the right. Under the inverse transformation T⁻¹, equispaced grid points on the right (blue dots) map back to non-equispaced points on the left (red dots).
Fig. 2. Sample frame from the raw video clip SLD-AR.avi. This clip consists of 24 image frames and the file size is 3.1 MB. The image size is 350 × 350 pixels, or 1.02 × 1.02 degrees, or 300 × 300 microns. The fovea is located 400 microns up and to the left of the frame. [Media 1]
Fig. 3. Horizontal and vertical motion estimates obtained from AOSLO data. One pixel corresponds to 0.17 minutes of arc, or 0.88 microns of planar distance across the retina. The 0.8-second duration of the motion corresponds to 24 frames of AOSLO data.
Fig. 4. Sample frame from the dewarped video clip SLD-AR-dewarp.avi. This clip consists of 24 image frames and the file size is 3.5 MB. The image statistics are the same as in Fig. 2. [Media 2]
Fig. 5. Raw image (top) and co-added image (bottom) obtained from AOSLO data. Image statistics are the same as in Fig. 2. Note the honeycomb structure, known as the cone mosaic, in the co-added image.
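
Figure 1 suggests the standard backward-mapping view of dewarping: each pixel of the corrected image, defined on a regular grid, is sent through the inverse transformation T⁻¹ to a generally non-integer location in the raw frame, and the intensity there is obtained by interpolation. The sketch below illustrates that idea under the simplifying assumption of one displacement per scan row (the displacement field itself would come from the estimated eye motion); it is not the paper's implementation, and the names are invented.

# Backward-mapping dewarp sketch: pull each output pixel from the raw frame
# at its inverse-mapped location using bilinear interpolation.
import numpy as np

def dewarp_frame(raw, dy, dx):
    """Dewarp `raw` given per-row displacements dy[i], dx[i] (assumed model)."""
    rows, cols = raw.shape
    out = np.zeros_like(raw, dtype=float)
    jj = np.arange(cols)
    for i in range(rows):
        # Inverse map: where in the raw frame does output row i come from?
        y = np.clip(i + dy[i], 0.0, rows - 1.001)
        x = np.clip(jj + dx[i], 0.0, cols - 1.001)
        y0, x0 = int(y), x.astype(int)
        fy, fx = y - y0, x - x0
        # Bilinear interpolation between the four neighbouring raw pixels.
        out[i, :] = ((1 - fy) * (1 - fx) * raw[y0, x0]
                     + (1 - fy) * fx * raw[y0, x0 + 1]
                     + fy * (1 - fx) * raw[y0 + 1, x0]
                     + fy * fx * raw[y0 + 1, x0 + 1])
    return out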

Equations (13)


(1)  $d(t) = E\bigl(r(t) + X(t)\bigr).$
(2)  $d_i = E\bigl(r(t_i) + X(t_i)\bigr) + \eta_i,$
(3)  $X(t) = X(t_0) + (t - t_0)\,v,$
(4)  $E\bigl(x(t_i + \tau_f)\bigr) = E\bigl(x(t_i) + \tau_f v\bigr) + \mathrm{noise}.$
(5)  $\operatorname{corr}(E, E')_{k,\ell} = \sum_i \sum_j E(i+k,\, j+\ell)\, E'(i, j).$
(6)  $T_{k,\ell}\, E(i, j) = E(i+k,\, j+\ell).$
(7)  $T_{k,\ell} = T_\ell^{(2)} T_k^{(1)}.$
(8)  $\langle E, E' \rangle = \sum_i \sum_j E(i, j)\, E'(i, j),$
(9)  $\operatorname{corr}(k, \ell) = \bigl\langle T_\ell^{(2)} T_k^{(1)} E,\; E' \bigr\rangle.$
(10) $\operatorname{corr}\bigl(g^{(1)}, g^{(2)}\bigr) = \Bigl\langle \Bigl(\textstyle\sum_\ell g_\ell^{(2)} T_\ell^{(2)}\Bigr)\Bigl(\textstyle\sum_k g_k^{(1)} T_k^{(1)}\Bigr) E,\; E' \Bigr\rangle.$
(11) $E'(x) = E(x'), \quad \text{where } x' = T^{-1} x.$
(12) $\frac{1}{T}\int_0^T X_{\mathrm{true}}(t)\, dt,$
(13) $\frac{1}{N}\sum_{n=1}^{N} X(t + n\tau_s) \approx X_{\mathrm{bias}}(t)$
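
To make Eqs. (5)-(10) concrete, the sketch below builds the separable shift operators T_k^(1) and T_ℓ^(2) and evaluates the superposed correlation of Eq. (10); with indicator weight vectors it collapses to the single correlation entry of Eq. (9). This is an illustrative dense implementation under assumed conventions (periodic shifts, the sign convention of Eq. (6)), not the authors' map-seeking circuit code.

# Separable shift operators and the superposed correlation of Eq. (10).
import numpy as np

def shift1(E, k):
    """Shift operator T_k^(1): (T_k^(1) E)(i, j) = E(i + k, j)."""
    return np.roll(E, -k, axis=0)

def shift2(E, l):
    """Shift operator T_l^(2): (T_l^(2) E)(i, j) = E(i, j + l)."""
    return np.roll(E, -l, axis=1)

def superposed_corr(g1, g2, shifts, E, Eprime):
    """corr(g1, g2) = < (sum_l g2_l T_l^(2)) (sum_k g1_k T_k^(1)) E, E' >."""
    inner = sum(g * shift1(E, k) for g, k in zip(g1, shifts))
    outer = sum(g * shift2(inner, l) for g, l in zip(g2, shifts))
    return np.sum(outer * Eprime)           # <A, B> = sum_ij A(i,j) B(i,j)

# With indicator weights the superposition reduces to corr(k, l) of Eq. (9).
rng = np.random.default_rng(0)
E = rng.random((32, 32))
Eprime = shift2(shift1(E, 3), 5)            # E' = T_{3,5} E, so corr peaks there
shifts = list(range(-8, 9))
g1 = np.array([1.0 if k == 3 else 0.0 for k in shifts])
g2 = np.array([1.0 if l == 5 else 0.0 for l in shifts])
single = np.sum(shift2(shift1(E, 3), 5) * Eprime)
assert np.isclose(superposed_corr(g1, g2, shifts, E, Eprime), single)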