
High-resolution, ultrafast, wide-field retinal eye-tracking for enhanced quantification of fixational and saccadic motion


Abstract

We introduce a novel, noninvasive retinal eye-tracking system capable of detecting eye displacements with an angular resolution of 0.039 arcmin and a maximum velocity of 300°/s across an 8° span. Our system is based on a confocal retinal imaging module similar to a scanning laser ophthalmoscope. It utilizes a 2D MEMS scanner, ensuring high image frame acquisition frequencies of up to 1.24 kHz. In contrast with leading eye-tracking technology, we measure eye displacements by collecting the observed spatial excursions for all times corresponding to a full acquisition cycle, thus obviating the need for both a baseline reference frame and absolute spatial calibration. Using this approach, we demonstrate the precise measurement of eye movements with magnitudes exceeding the spatial extent of a single frame, which is not possible using existing image-based retinal trackers. We describe our retinal tracker and its tracking algorithms, and assess the performance of our system using programmed artificial eye movements. We also demonstrate the clinical capabilities of our system with in vivo subjects by detecting microsaccades with angular extents as small as 0.028°. The rich kinematic ocular data provided by our system, with its exquisite degree of accuracy and extended dynamic range, opens new and exciting avenues in retinal imaging and clinical neuroscience. Several subtle features of ocular motion, such as saccadic dysfunction, fixation instability, and abnormal smooth pursuit, can be readily extracted and inferred from the measured retinal trajectories, thus offering a promising tool for identifying biomarkers of neurodegenerative diseases associated with these ocular symptoms.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

The human eye is an optical instrument in constant motion. Even during stable fixation, eye movements exhibit a broad range of magnitudes and frequencies [1,2]. Early research on eye movement was pioneered by Hering [3] and Lamare [4] at the end of the 19th century. Emerging techniques such as mechanical recording and photography were subsequently replaced by suction caps [5], scleral search coils [6], and the dual Purkinje image eye tracker [7]. Notably, the scleral search coil approach has an excellent signal-to-noise ratio (SNR) and a typical resolution of 0.25 arcmin [2,6]. However, it is a highly invasive method involving the use of topical anesthesia and specialized contact lenses. Currently, owing to their noninvasive nature and ease of use, the most popular eye trackers are video-based devices that utilize anterior eye features. Their typical tracking accuracy ranges within 18–30 arcmin [8,9].

An alternative approach to eye tracking consists of following the path of the eye fundus instead of monitoring the anterior segment. An early fundus eye tracker used a light spot scanned across the optic disc. Because blood vessels in the disc are less reflective than the myelinated nerve fibers, eye motion in the horizontal direction was retrieved by recording the returning light intensity along the scanning line [10]. An improved version of this idea was later developed by Ferguson, who proposed using a dithered light beam in a confocal reflectometer setup to generate error signals occurring with eye movement in both lateral directions [11,12]. The reported accuracy of this retinal tracker was 3 arcmin [13]. Confocal reflectometry was further incorporated into the modern scanning laser ophthalmoscope (SLO) to correct for motion artifacts. The SLO is nowadays routinely used in state-of-the-art optical coherence tomography (OCT) and adaptive optics (AO) systems [13–19] for patient alignment, and adapting it to eye tracking was a natural step. Image-based fundus tracking was developed following the introduction of the SLO [20,21]. This approach consists of measuring eye displacements based on the differences between consecutive pairs of SLO images. State-of-the-art SLO systems are designed for high-quality retinal imaging and provide wide-field, densely sampled images. A major disadvantage of the SLO is its inherently limited sampling rate, which is typically 10–30 frames per second (fps). One way to overcome this limitation is to estimate eye motion using only a sample region of a full frame, otherwise known as a subframe, thus lowering acquisition times and speeding up the computation of eye displacements. This concept is introduced in the tracking SLO (TSLO) system [22–30]. Here, sampling strip-shaped subframes results in increased eye trace acquisition rates of 960 Hz (1,920 Hz in offline, post-processing mode) [25]. Combined with post-processing digital alignment, subframe usage allows for optical stabilization at the level of 0.2 arcmin and eye motion detection accuracy of 0.04–0.05 arcmin [30]. An ingenious implementation using azimuthally symmetric subframes further allows for the isotropic retrieval of motion in the transverse plane [31,32]. A common feature of image-based tracking systems is their use of a reference frame, the baseline to which all subsequent frames are compared. The performance of the system is critically dependent on an adequate choice of reference frame, itself subject to distortion by motion artifacts [33,34]. More significantly, a stationary reference frame sets the upper limit for the range of measurable eye displacements to the spatial extent of the reference frame and limits the detectable velocities to tens of degrees per second [25,30,35].

Accurately detecting and quantifying eye movements constitutes the basis of active stabilization for all in vivo eye imaging applications [13–19]. Tracking eye motion also plays a central role in numerous other research and technology fields, including psychology [36–39], physiology [40] and, more recently, virtual reality and entertainment [41]. In neuroscience, the ability to measure the subtle features of eye displacements is of the utmost significance. The control of eye movement is broadly represented in the cortical and subcortical structures of the brain, the brainstem, and the cerebellum. Anomalous ocular motion is thus intrinsically related to altered brain structure [42,43]. Saccadic dysfunction, fixation instability, and abnormal smooth pursuit, for example, provide reliable quantitative indicators of neurodegenerative diseases such as Parkinson’s disease [44,45] and Alzheimer’s disease [46]. Increased fixation instability and altered kinematic saccadic parameters are known quantitative biomarkers of multiple sclerosis [47,48]. In a recent clinical study, microsaccades have been shown to provide objective measurements of multiple sclerosis disability level and disease worsening [35]. Oculomotor abnormalities are therefore a sensitive biomarker for diagnosis and disease progression forecasting [49]. However, oculomotor changes of neurological significance are often subtle, and their precise measurement poses a challenge beyond the capabilities of existing diagnostic devices.

In this work, we introduce a fast eye tracker capable of registering 1,240 retinal images per second while achieving a retinal displacement estimation accuracy of 0.039 arcmin root mean squared error (RMSE) over a large dynamic range of displacements. A key feature of this device, which we name the FreezEye Tracker (FET), is its functional independence from a reference frame. Consequently, the range of measured displacements is not bounded by the size of the acquired frames. Moreover, the eye displacement calculations are performed with inherent immunity to the accumulation of tracking error, which is achieved by an algorithm using the concept of Key Frames (KFs). In the upcoming sections, we describe in detail the optical design of our device and the engineering of its quantitative algorithms, including an explanation of the KF concept, and we assess its tracking performance and accuracy. We demonstrate the capabilities of the FET by showing examples of saccadic and fixational eye movements. The performance of the FET in terms of accuracy and dynamic range makes it a tool well suited for clinical diagnostics in ophthalmology and neurology.

2. Methodology

2.1 Optical setup

The FET’s design consists of a retinal scanner with confocal detection inspired by the SLO. In order to achieve kilohertz frame acquisition rates, we acquire relatively small images with 4,432 pixels per frame. The schematic diagram of the FET optical setup is shown in Fig. 1. The illumination source is a 785-nm laser diode (LP785-SAV50, Thorlabs, USA) coupled with a single-mode fiber and collimated to a beam diameter of 0.7 mm (1/e²) by an aspherical lens CL. The pellicle beam splitter BS reflects the beam and directs it onto a 2D scanning mirror with a 1-mm microelectromechanical systems (MEMS) based active aperture (VC3141/5/48.4, VarioS 2D microscanner, Fraunhofer IPMS). After reflecting off the scanning mirror, the beam passes through a 4f telescope composed of lenses L4 and L3. The telescope conjugates the scanner’s aperture with a pair of galvanometric mirrors FET PS (GVS002, Thorlabs, USA), which steer the position of the scanning pattern to the selected region of interest in the retina (for example images, refer to Fig. 2). The conjugate plane of the MEMS scanning mirror is then imaged onto the eye pupil plane by a second 4f telescope composed of lenses L2 and L1. Lens L2 is mounted on the translation stage MS for the correction of spherical refractive error. The beam reflected off the retina retraces the same path, is de-scanned by the MEMS mirror, passes through the pellicle beam splitter, and is collected by lens L5. The pinhole PH in the focal plane of L5 is conjugate with the retinal plane and rejects out-of-focus light. The signal from the retina is detected by the avalanche photodiode APD (A-Cube-S1500-01, Laser Components, Germany) and is processed by a PC. System synchronization and data acquisition are performed using custom software engineered in the LabVIEW environment (National Instruments, USA).


Fig. 1. Schematic diagram of the optical design of the FET. LD—laser diode, CL—collimating lens, BS—pellicle beam splitter, L1-L9 achromatic doublet lenses, MEMS 2D—two-axis resonant MEMS scanning mirror, FET PS—positioning galvanometric mirrors, D—variable iris aperture, PH—pinhole, APD—avalanche photodiode, HIL—halogen lamp, BST—Badal stage, AES—artificial eye scanner set, AEO—artificial eye objective, AET—artificial eye test target. Stars and circles denote conjugated plane pairs. Inset A: FT1, FT2—targets for static fixation and saccadic measurements, respectively. The diameter of FT1 subtends 1.5°. The diameters of individual targets in FT2 subtend 0.6° and their variable baseline separation range is 1–8°. Inset B: artificial eye for system testing and calibration, described in subsection 2.3.



Fig. 2. Examples of different tracking features in the human retina in vivo (1-15) and in the artificial eye (I-III). Images 1–2: part of an optic nerve, images 3–4: fovea; images 5–15: retinal vasculature with different sizes. Images I–III in the red frame were acquired using an artificial eye (see description in subsection 2.3) and are shown here for visual comparison with images of the living eye. Angular extent of scale bars is 1°.


A critical element of our design for ultrafast image acquisition is the MEMS 2D scanning mirror. Its maximum operating scanning frequencies are 20 kHz and 620 Hz in the fast and slow axes, respectively. The maximum achievable frame rate of 1,240 fps derives from the shortest scanning half-period possible in the slow axis. This scanning frequency is achievable for an aperture size of 1 mm, with larger apertures requiring longer acquisition times. This tradeoff poses a design challenge requiring a compromise between maximum scanning angle, beam size at the cornea, and the throughput parameter T, defined as the ratio of the scanning mirror aperture to the diameter of the beam reflected by the retina [50]. In our design, the relay optics L1–L4 serve the purpose of balancing the system’s magnification. Ideally, for $T \ge 1$, no light reflected off the eye is lost at the mirror aperture. The returning light exits the eye pupil with its full aperture in the range 4–7 mm [51]. Assuming scotopic measuring conditions with a 7-mm pupil, a magnification of 1/7 yields $T = 1$. However, this case would require a beam 4.9 mm in diameter entering the eye, which would drastically reduce the scanning angles and, in turn, the lateral resolution of the images [52].

In order to achieve maximum MEMS scanner deflections, the optical scanning angles were set to ±4.64° and ±4.29° in the fast and slow axes, respectively. For our studies, we set the FOV to 3.37° × 3.24°, as measured using the USAF 1951 test target. The imaging beam diameter at the cornea was 1.96 mm. The calculated on-axis beam diameter on the retina in this case was 8.5 µm. In our setup, the design throughput was $T = 0.7$ for a 4-mm pupil and $T = 0.4$ for a 7-mm pupil. By design, the confocality of the system was traded off in order to gain sensitivity by using a 100-µm pinhole. The Airy disc diameter at the detection was 36.4 µm and the times-diffraction-limited (TDL) number of the optical system was 2.75 [50].
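These throughput figures follow directly from the geometry described above. The short sketch below is a minimal numerical check, assuming that the relay magnification m maps the eye pupil onto the 1-mm MEMS aperture and magnifies the 0.7-mm source beam by 1/m on its way to the cornea; it reproduces the quoted values:

```python
MEMS_APERTURE_MM = 1.0   # active aperture of the 2D MEMS scanner
SOURCE_BEAM_MM = 0.7     # collimated beam diameter after lens CL

def throughput(pupil_mm: float, m: float) -> float:
    """T = MEMS aperture / diameter of the returning beam at the mirror plane."""
    return MEMS_APERTURE_MM / (m * pupil_mm)

m = SOURCE_BEAM_MM / 1.96                 # magnification implied by the 1.96-mm corneal beam
print(round(throughput(4.0, m), 1))       # -> 0.7 for a 4-mm pupil
print(round(throughput(7.0, m), 1))       # -> 0.4 for a 7-mm pupil
print(round(throughput(7.0, 1 / 7), 1))   # -> 1.0, the ideal case discussed above
```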

In order to perform the experiments with human subjects, the system was equipped with a fixation path, separated by a dichroic mirror (DM) from the imaging/tracking path. A halogen lamp was used to illuminate the target (Fig. 1, Inset A) projected onto the retina through a Badal optometer setup [53]. The Badal setup consisted of lenses L7–L8 and mirrors M3–M4, which were mounted on a movable stage BST to correct the subject’s spherical refractive error without magnifying the target. This feature allowed the angular size of the target to remain constant regardless of the subject’s refraction.

Additionally, the system is prepared to be merged with other imaging modalities, such as SLO and/or OCT, that can provide high-resolution imaging of the subject’s retina and additional information such as cyclotorsion measurement [54]. Adding a module with an extra pair of scanners to the imaging modalities, connected to the FET via a dichroic mirror inserted, for example, between M5 and L6, opens up the possibility of actively correcting the acquired images of the eye.

2.2 Algorithms

2.2.1 Retinal motion tracking algorithm

The MEMS scanner sweeps the beam along the Lissajous scanning pattern shown in Fig. 3(a). Acquired raw images are then re-sampled to produce the uniformly sampled rectangular images shown in Fig. 3(b). This step is performed using the matrix-vector multiplication:

$${\bf f = M} \cdot {{\bf f}^L}$$
where ${\bf f}$ is an $R \times R$ rectangular image organized in a single row-by-row vector of size ${R^2}$, ${\bf M}$ is the sparse resampling matrix of size ${R^2} \times K$, and ${{\bf f}^L} = [{f_1^L,f_2^L, \ldots ,f_K^L} ]$ is the intensity data vector acquired by the APD on the Lissajous scanning path, $f_i^L$ is a single APD reading, $i \in [{1,2, \ldots ,K} ]$, and K is the number of data points per frame. The matrix ${\bf M}$ is constructed from the Lissajous coordinates so that each row of ${\bf M}$ has s non-zero elements at indices that correspond to the s-closest distances between the Lissajous coordinates and the coordinates in the resulting image.
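To make the construction of ${\bf M}$ concrete, the sketch below builds it with a nearest-neighbor search over the Lissajous coordinates. The inverse-distance weighting of the s closest samples is an assumption (the paper does not specify the weights), and the scan coordinates are assumed to be pre-scaled to the pixel grid of the output image:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.spatial import cKDTree

def build_resampling_matrix(lissajous_xy, R, s=4):
    """Sparse R^2-by-K matrix mapping K Lissajous samples onto an R x R grid.

    lissajous_xy: (K, 2) scan coordinates, assumed scaled to [0, R).
    Each row holds s non-zero, inverse-distance weights at the indices of the
    s samples closest to the corresponding output pixel (a hypothetical choice).
    """
    K = lissajous_xy.shape[0]
    tree = cKDTree(lissajous_xy)
    gy, gx = np.mgrid[0:R, 0:R]                    # pixel centers, row by row
    grid = np.column_stack([gx.ravel() + 0.5, gy.ravel() + 0.5])
    dist, idx = tree.query(grid, k=s)              # s nearest samples per pixel
    w = 1.0 / np.maximum(dist, 1e-9)               # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)              # each row sums to one
    rows = np.repeat(np.arange(R * R), s)
    return csr_matrix((w.ravel(), (rows, idx.ravel())), shape=(R * R, K))

# The rectangular frame is then simply f = M @ f_L (Eq. 1), as a length-R^2 vector.
```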


Fig. 3. Retinal trajectory reconstruction algorithm and saccade extraction. See text for details. a) Raw data acquired along the Lissajous scanning pattern distorted by retinal motion. b) Image frame created by re-sampling the data to a rectangular grid with equidistant pixels without motion compensation. c) Image frame corrected for motion artifacts. d) Calculation of displacement between consecutive frames using image correlation. e) Displacement calculated for a number of previously acquired frames. f) Trajectory recovery using N-back algorithm. g) Trajectory correction using Key Frames. h) Saccade detection using eye motion velocity and calculation of saccade magnitude.


The algorithm that reconstructs the retinal trajectory operates in two stages. In the first stage, the retinal trajectory is estimated by the N-back algorithm, in which each new frame is aligned with N previous frames, as shown in Fig. 3(c–f). This stage consists of two iterations: the trajectory estimate from the first iteration of the N-back algorithm is used to remove motion artifacts from the frames used in the second iteration. In the second stage, estimation errors in the trajectory reconstructed in the first stage are corrected.

In the N-back algorithm, each new retinal trajectory point is calculated from displacements measured between the most recently acquired frame and N previous frames. In the simplest case, N=1, the trajectory point computed from only one previous frame is given by:

$${{\bf t}_m} = {{\bf t}_{m - 1}} + {{\bf p}_{m,m - 1}}$$
where ${{\bf t}_m} = \{{{x_m},{y_m}} \}$ is the trajectory point in Euclidean space, m is the frame index, and ${{\bf p}_{a,b}}$ denotes the displacement between frames a and b. We estimate ${{\bf p}_{a,b}}$ by minimizing a functional criterion $D({{{\bf f}_a},{{\bf f}_b}} )$ which quantifies the quality of the registration of frames ${{\bf f}_a}$ and ${{\bf f}_b}$. In particular, we use the enhanced correlation coefficient (ECC) [55] criterion, which readily provides sub-pixel precision. However, alternative criteria can be implemented interchangeably based on hardware and computation latency requirements.
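For illustration, a translation-only ECC registration of this kind can be written with OpenCV, whose findTransformECC function implements the criterion of Ref. [55] and returns both the sub-pixel shift and the achieved correlation value. This is a sketch assuming OpenCV ≥ 4.1, not the FET implementation itself:

```python
import cv2
import numpy as np

def estimate_displacement(f_a, f_b):
    """Estimate the displacement p_ab between frames f_a and f_b.

    Maximizes the ECC criterion [55] over pure translations and returns the
    (dx, dy) shift together with the achieved correlation value D, which can
    be tested against the 0.8 acceptance threshold used in the text.
    """
    warp = np.eye(2, 3, dtype=np.float32)                     # identity translation
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-5)
    try:
        D, warp = cv2.findTransformECC(f_a.astype(np.float32),
                                       f_b.astype(np.float32),
                                       warp, cv2.MOTION_TRANSLATION,
                                       criteria, None, 5)
    except cv2.error:                                         # failed to converge
        return np.zeros(2), 0.0                               # pair gets zero weight
    return warp[:, 2].copy(), D                               # sub-pixel (dx, dy), ECC value
```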

In order to reduce the effect of stochastic noise on the trajectory point calculation, Eq. (2) can be applied to N previously acquired frames and the results averaged. This defines the full version of the N-back algorithm, where the resulting point is the averaged position calculated from displacements measured using frames from an empirically chosen subset B, with some acquired up to half a second earlier:

$${{\bf t}_m} = \frac{{\sum\limits_{n \in B} {{w_{m,n}}({{{\bf t}_{m - n}} + {{\bf p}_{m,m - n}}} )} }}{{\sum\limits_{n \in B} {{w_{m,n}}} }}$$
where n is the index of frames counted backwards from the newly acquired frame, with n=1 for the frame directly preceding it. If the calculated criterion $D({{{\bf f}_m},{{\bf f}_n}} )$ falls below a threshold, set at 0.8 in our experiments, the corresponding weighting coefficient ${w_{m,n}}$ is set to zero. A drop in the criterion value can result from low SNR (e.g., caused by a blink or accidental vignetting of the scanning beam) or from a displacement that is impossible to calculate, either due to a lack of satisfactory retinal features or because the movement exceeds the size of the frame. It is worth emphasizing that even if the criterion $D({{{\bf f}_m},{{\bf f}_n}} )$ cannot be calculated for a given pair of distant frames that image essentially different retinal regions, the retinal position will still be estimated based on the remaining collection of frames with coinciding regions.
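Putting Eq. (3) and the weighting rule together, one N-back step can be sketched as follows. The register callable stands for any displacement estimator returning the shift and the criterion D (such as the ECC sketch above); uniform weights are an assumption, since the paper only specifies that w_{m,n} is zeroed below the threshold:

```python
import numpy as np

def nback_point(traj, frames, new_frame, B, register, d_min=0.8):
    """Next trajectory point t_m from Eq. (3), averaged over the subset B.

    traj:   list of previous 2D positions t_0 .. t_{m-1} (numpy arrays),
            assumed non-empty; frames: the corresponding frames.
    register(f_a, f_b) -> (p_ab, D), e.g. the ECC sketch above.
    """
    m = len(traj)
    acc, wsum = np.zeros(2), 0.0
    for n in B:                          # e.g. B2 = {1, 2, 4, ..., 512}
        if n > m:
            continue                     # not enough history yet
        p, D = register(new_frame, frames[m - n])
        if D < d_min:
            continue                     # blink, vignetting, or no overlap: w = 0
        acc += traj[m - n] + p           # position implied by this historical frame
        wsum += 1.0
    return acc / wsum if wsum else traj[-1].copy()
```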

The above procedure can be repeated to take into account the retinal motion, which may geometrically distort the frames acquired during high-velocity motion. For example, for a velocity of 1000°/s and a 3°-wide frame, the last frame line will be displaced by almost one third of the frame width with respect to the first line. Therefore, the velocity of retinal motion is estimated from the first run of the N-back algorithm and a geometrical correction is applied to each frame: shear mapping in the case of horizontal motion and scaling in the case of vertical motion. The N-back algorithm is then run a second time with the criterion $D({{{\bf f}_m},{{\bf f}_n}} )$ calculated using frames corrected for geometrical distortion. We have empirically selected the subsets ${B_1} = \{{1,2} \}$ and ${B_2} = \{{1,2,4,8,16,32,64,128,256,512} \}$ of historical frame indices for the first and second runs of the N-back algorithm, respectively.
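The geometric correction can be sketched as a single affine resampling per frame. The mapping below assumes a line-by-line, top-to-bottom effective scan and small per-line shifts; the sign conventions depend on the actual scan direction:

```python
import numpy as np
from scipy.ndimage import affine_transform

def undo_intraframe_distortion(frame, vx_deg_s, vy_deg_s, frame_period_s, fov_deg):
    """Correct a frame distorted by retinal motion during the scan.

    Horizontal motion (vx) is undone by a shear that shifts each scan line in x;
    vertical motion (vy) by a scaling of the line spacing, as described above.
    Velocities come from the first N-back run. Assumes a square R x R frame.
    """
    R = frame.shape[0]
    px_per_deg = R / fov_deg
    line_time = frame_period_s / R                      # time to acquire one line
    shear = vx_deg_s * line_time * px_per_deg           # x-shift per line, pixels
    scale = 1.0 + vy_deg_s * line_time * px_per_deg     # stretch of line spacing
    # Inverse map: output pixel (r, c) samples the input at (scale*r, c + shear*r).
    matrix = np.array([[scale, 0.0],
                       [shear, 1.0]])
    return affine_transform(frame, matrix, order=1, mode="nearest")
```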

The alignment of frames described above is prone to small errors in ${{\bf p}_{a,b}}$, which propagate into the trajectory points ${{\bf t}_m}$ over time according to Eqs. (2)–(3). Due to the recursive nature of these equations, the trajectory estimation error can be modeled as a random walk process and therefore, for an ideally calibrated system, has a non-stationary zero-mean error with a variance that increases linearly with time (a drift). In order to suppress the drift, we use the fact that the eye returns from time to time to the same location, so the error accumulated by Eqs. (2)–(3) can be corrected by a new displacement calculation. This technique introduces Key Frames (KFs), a subset of all frames in the acquired dataset for which translations ${\bf p}_{a,b}^{KF}$ can be calculated (please refer to Fig. 3(g)). This means that the frames ${{\bf f}_a}$ and ${{\bf f}_b}$ correspond to closely spaced locations on the retina, whereas the time separation between them is unimportant. Next, the algorithm calculates corrections to the KF positions that minimize the error in the calculated ${\bf p}_{a,b}^{KF}$ (see the red arrow in Fig. 3(g)). These corrections are performed using the multidimensional scaling (MDS) mathematical framework described in Ref. [56]. In other words, a distance matrix ${{\bf P}^{KF}}$ is constructed with the norms of the displacements, $|{{\bf p}_{a,b}^{KF}} |$, between those KFs for which they can be calculated. The error minimization for the trajectory ${{\bf T}^{KF}}$ is performed with the use of a stress function $\sigma$ with respect to the KF positions, ${\hat{{\bf T}}^{KF}} = [{\hat{{\bf t}}_1^{KF},\hat{{\bf t}}_2^{KF}, \ldots } ]$:

$$\sigma ({{{\hat{{\bf T}}}^{KF}}} ) = \sum\limits_{a < b} {w_{a,b}}{({{\bf P}_{a,b}^{KF} - |{\hat{{\bf t}}_a^{KF} - \hat{{\bf t}}_b^{KF}} |} )^2}$$
where ${\bf P}_{a,b}^{KF} = |{{\bf p}_{a,b}^{KF}} |$ is the distance computed between the a-th and the b-th KF, and ${w_{a,b}}$ is a weighting coefficient indicating missing values, as described earlier. The KF-trajectory ${{\bf T}^{KF}}$ has an inherently low sampling rate; however, its error is zero-mean and stationary. Since the KF-trajectory has missing values for non-overlapping frames, the final retinal trajectory ${{\bf T}^{FET}}$ is estimated by casting the N-back trajectory ${{\bf T}^{N - back}}$ onto the KF-trajectory ${{\bf T}^{KF}}$, using linear interpolation for trajectory points in between KFs, as depicted by the yellow arrows in Fig. 3(g).
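A compact way to see the KF correction is as a direct minimization of the stress in Eq. (4). The sketch below uses plain gradient descent, with the weights masking missing displacements; Ref. [56] describes the full MDS (SMACOF) machinery actually suited to this problem:

```python
import numpy as np

def refine_kf_positions(T0, P, W, iters=2000, lr=0.01):
    """Minimize the weighted stress of Eq. (4) over the KF positions.

    T0: (n, 2) initial KF positions taken from the N-back trajectory.
    P:  (n, n) symmetric matrix of measured inter-KF distances |p_ab|.
    W:  (n, n) weights, zero wherever a displacement could not be computed.
    """
    T = T0.astype(float).copy()
    for _ in range(iters):
        diff = T[:, None, :] - T[None, :, :]      # pairwise position vectors
        dist = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(dist, 1.0)               # avoid division by zero
        resid = W * (dist - P)                    # signed distance errors
        np.fill_diagonal(resid, 0.0)
        grad = (resid / dist)[..., None] * diff   # stress gradient, up to a constant
        T -= lr * grad.sum(axis=1)
    return T
```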

2.2.2 Eye blink detection

In most cases, the effect of blinks, as well as of other incidents leading to a low value of the criterion $D({{{\bf f}_m},{{\bf f}_n}} )$ that quantifies the quality of frame registration, has little significance for the trajectory correction in the KF algorithm. It is incorporated into the distance matrix ${{\bf P}^{KF}}$ through the weighting coefficients ${w_{a,b}}$, as described in the previous section. During eye blinks, the overall intensity of the frames drops significantly. Therefore, for the saccade detection algorithm, we remove the frames that correspond to blinks by thresholding the measured mean frame intensity.
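The intensity thresholding itself is nearly a one-liner. The fraction of the median used below is an assumed value, since the text does not state the threshold:

```python
import numpy as np

def blink_mask(frames, rel_threshold=0.5):
    """Flag frames whose mean intensity falls below a fraction of the median.

    The 0.5 fraction is a hypothetical choice; the paper only states that
    blink frames are removed by thresholding the mean frame intensity.
    """
    mean_intensity = np.array([f.mean() for f in frames])
    return mean_intensity < rel_threshold * np.median(mean_intensity)
```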

2.2.3 Saccade detection and quantification procedure

The design of our saccade detection algorithm is based on the velocity thresholding principle reported in Ref. [57]. In our implementation, the x- and y-components of the detected retinal trajectory ${{\bf T}^{KF}}$ are denoised using a 21-point moving average filter spanning 17 ms. The first and second derivatives are computed on the x- and y-trajectory components separately using the finite difference method, and are then used to calculate the magnitudes of the absolute velocity and acceleration.

Saccades are first identified by points of local maxima of the absolute velocity, shown as an orange solid circle in Fig. 3(h). The initial boundaries of each saccade are determined on the opposite sides of the detected velocity maximum as the two sample points with a velocity below the threshold value, empirically set to 1.24°/s. This value allows the elimination of random noise and associated spurious peak velocities mimicking saccades. Both the starting and ending boundaries of the saccade are then expanded by 12 ms to compensate for saccade trimming due to velocity thresholding. The final boundaries are depicted by the green and red solid circles in Fig. 3(h) and Figs. 5–6.
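The detection steps above translate almost directly into code. The following sketch follows the stated parameters (21-point smoothing, 1.24°/s threshold, 12-ms boundary expansion) at the 1,240-Hz frame rate; the peak-finding details are our own choice:

```python
import numpy as np
from scipy.signal import find_peaks

FS = 1240.0                                     # frame rate, Hz

def detect_saccades(x, y, v_thresh=1.24, pad_ms=12.0, win=21):
    """Velocity-threshold saccade detection on a KF-corrected trajectory.

    x, y: trajectory components in degrees, sampled at FS.
    Returns a list of (start, peak, end) sample indices.
    """
    kernel = np.ones(win) / win                 # 21-point (~17 ms) moving average
    xs, ys = np.convolve(x, kernel, "same"), np.convolve(y, kernel, "same")
    vx, vy = np.gradient(xs, 1.0 / FS), np.gradient(ys, 1.0 / FS)
    speed = np.hypot(vx, vy)                    # absolute velocity, deg/s
    peaks, _ = find_peaks(speed, height=v_thresh)
    pad = int(round(pad_ms / 1000.0 * FS))      # 12-ms boundary expansion
    saccades = []
    for p in peaks:
        start, end = p, p
        while start > 0 and speed[start] >= v_thresh:
            start -= 1                          # walk back to a sub-threshold sample
        while end < len(speed) - 1 and speed[end] >= v_thresh:
            end += 1                            # walk forward to a sub-threshold sample
        saccades.append((max(start - pad, 0), p, min(end + pad, len(speed) - 1)))
    return saccades
```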

The saccade magnitude is calculated using the Euclidean distance given by the denoised x, y coordinates. Next, the upper and lower boundaries are set to the maximum and minimum values and expanded into bands with a width equal to 5% of the extrema. The average values of the points located in these bands are calculated, and the total saccade magnitude is computed as the difference between them. This procedure is visually depicted in Fig. 3(h), with the bands marked with grey rectangles.
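A sketch of this magnitude computation, interpreting the band width as 5% of the peak-to-peak excursion (one possible reading of “5% of the extrema”) and measuring the Euclidean distance from the saccade start point:

```python
import numpy as np

def saccade_magnitude(x, y, start, end, band_frac=0.05):
    """Saccade magnitude via the band-averaging procedure described above."""
    r = np.hypot(x[start:end + 1] - x[start], y[start:end + 1] - y[start])
    lo, hi = r.min(), r.max()
    width = band_frac * (hi - lo)               # assumed 5%-of-range band width
    upper = r[r >= hi - width].mean()           # mean of points in the upper band
    lower = r[r <= lo + width].mean()           # mean of points in the lower band
    return upper - lower
```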

2.3 Artificial eye experiments

In order to evaluate the FET system’s tracking performance, a simplified artificial eye composed of a set of X-Y galvanometric scanners (AES), an imaging lens (AEO), and a test target (AET) was installed in the system, as shown in Fig. 1, inset B. The imaging lens is identical to L1 and is arranged in a 4f system in order to preserve the scanning angles. The scanners were positioned in the eye pupil plane, and a voltage-to-optical-angle calibration was performed using the USAF 1951 test target. For tracking validation, an in vivo human eye fundus image was obtained with a scanning laser ophthalmoscope, printed, and used as the test target. A visual comparison of the artificial eye images and the in vivo human eye images is shown in Fig. 2. By steering the AES with a known voltage, we introduce controlled movements of the retinal image for calibration. The AES control circuit provides feedback outputs used to monitor the actual position of the galvanometric mirrors and compare it with the FET measurements. The AES is programmed to mimic a diversity of eye movements. Back-and-forth saccade sequences, for example, were generated using the model described in Ref. [58]. Eight 20-second sequences of 200 horizontal back-and-forth saccades and eight 20-second sequences of 200 vertical back-and-forth saccades were imaged. The measurement times of the sequences were chosen arbitrarily, resulting in 25,000 collected FET image frames per sequence. The magnitudes of the saccades during each sequence were constant and spanned the range of 1–8° in 1° increments. Furthermore, six waveforms of fixational eye movements, 20 s each, were generated according to the model described in Ref. [59]. The artificial eye was programmed to move according to these waveforms during FET frame acquisition.

2.4 Human eye experiments

For our in vivo measurements, we enlisted three healthy subjects (age group 25–40 years) with emmetropic vision and no reported or diagnosed fixation problems. The study adhered to the tenets of the Declaration of Helsinki. After the nature of the study was explained to all the participants, they gave their informed consent. The non-scanning beam power measured at the pupil plane was approximately 100 µW, which is significantly below the safety exposure limits [60].

The experiments were conducted in a dimly lit room. The measured eye was chosen at random for each subject. Neither mydriatic nor cycloplegic drugs were used. Each subject was directed to place their head in a chin-rest mounted in front of the device, and the line of sight of their eye was aligned with the optical axis of the instrument [61]. Subjects were allowed to blink during the measurement as needed. After each measurement, the subjects were asked to withdraw their head from the chin-rest and rest for at least one minute.

2.4.1 Fixations

The first experimental goal was to demonstrate our system’s capability to image and detect in vivo fixational eye movements. For this purpose, subjects were directed to focus their sight on the center of a fixation target consisting of a cross-hair and bull’s-eye combination with a diameter subtending a visual angle of 1.5°. This target is shown as target FT1 in Fig. 1(A) [62]. The target was projected onto the subjects’ retina via a Badal system, which allows for the correction of defocus without altering the angular magnification of the target. The subjective far point was found by optically moving the target away from the eye to the last position before the subject perceived a “just noticeable” blur [63]. Initially, in such an alignment, the system images the subject’s fovea. By using the pair of FET positioning galvanometric mirrors, the operator moves the scanning pattern across different retinal regions. Once the scanning pattern is positioned on the desired retinal region, subjects are directed to blink and then fixate on the target for 20 s while the measurement is acquired. The experiment is repeated three times using different retinal features: the fovea, the optic nerve, and retinal vessels. Typical examples of retinal features are shown in Fig. 2.

2.4.2 Saccades

In the second part of the experiment, subjects were directed to continuously switch their gaze between two fixation points separated by a known angular distance ranging from 1° to 8°, increasing gradually in steps of 1°. Each fixation point has an angular extent of 0.6°. The goal of this experiment is to register and detect the saccades corresponding to the angular separation between the fixation points. This target is shown as target FT2 in Fig. 1(A) [62]. For each subject, we select a vascularized area in the retina as the region of interest for imaging and tracking. Before the measurements, subjects are instructed to blink and then perform the periodic saccades with the aid of a regular auditory metronome set to 70 beats per minute. Each measurement lasted 20 s and was repeated twice for each angular separation of the fixation points.

3. Results and discussion

3.1 Tracking algorithm evaluation

According to Eqs. (2)–(3), the N-back tracking algorithm accumulates errors in the registration of each pair of frames over time. A full model of the error for the ${{\bf T}^{N - back}}$ trajectory must include sources of error representing different inputs to the algorithm, namely, errors introduced during stable fixation and during fixation periods in between saccades, when eye displacements and velocities are relatively small, and errors introduced during the larger excursions of saccades. The former can be modeled as a random walk process with a constant proportionality coefficient, because in the velocity range characteristic of fixation the distortions of frames due to eye motion are negligible and the frame overlap is high. The error increases significantly during saccades because the N-back frames at higher velocities start to show geometrical distortions and the overlapping areas become smaller. As a result, the error of the system is not stationary, and its variance increases with time. In our experiments, the error was measured as the difference between the true position returned by the galvanometric scanner feedback monitor (see subsection 2.3) and the position estimated by the algorithm. A typical accumulation of the root squared error (RSE) for the experiment with saccades 4° in magnitude is shown in Fig. 4. The increase of accumulated RSE due to the intersaccadic parts of the trajectories is interleaved with abrupt changes occurring during the saccades, clearly illustrating the non-stationary nature of the ${{\bf T}^{N - back}}$ error. This error model is not valid for the trajectory ${{\bf T}^{KF}}$, estimated by minimizing Eq. (4) and subsequently sub-sampling its result over ${{\bf T}^{N - back}}$. Here, the accumulated errors are corrected by the MDS procedure (see Ref. [56]) and a stationary error is hypothesized.


Fig. 4. Root squared error (RSE) time series for the N-back (blue) and Key Frames (green) trajectories. These error values derive from the 4° saccade experiment. Only the first 3.25 s are shown for clarity.


The augmented Dickey–Fuller (ADF) test with trend adjustment rejects the hypothesis that ${{\bf T}^{KF}}$ is a non-stationary process, with $DF_{37} = -2.870$ and $p = 0.047$, compared with the test for ${{\bf T}^{N - back}}$, which yields $DF_{37} = -1.259$ and $p = 0.647$. The results of the tests indicate that ${{\bf T}^{KF}}$ is free from accumulative tracking error, and thus a figure for RMSE can be reported for the whole trajectory. Figure 4 shows the error time series of ${{\bf T}^{KF}}$ and ${{\bf T}^{N - back}}$ for a 4° saccade experiment. Note how the increasing N-back error is eliminated once the KF correction is performed.
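For reference, the same trend-adjusted stationarity check can be reproduced with statsmodels; err_kf below is an assumed variable holding the trajectory-error time series:

```python
from statsmodels.tsa.stattools import adfuller

# regression="ct" includes a constant and a linear trend in the test regression,
# matching the trend-adjusted ADF test reported above.
stat, pvalue, *_ = adfuller(err_kf, regression="ct")
print(f"ADF statistic = {stat:.3f}, p = {pvalue:.3f}")  # p < 0.05 rejects a unit root
```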

Two strategies for choosing the KFs were tested. In the first strategy, the KFs are selected at fixed intervals (20 ms, 80 ms, and 160 ms). This strategy is most effective for fixation experiments, when the motion amplitude is small compared to the frame size and the probability that the frames will overlap is high. In this case, the typical (minimum) RMSE for trajectories is 0.039 arcmin for a KF interval of 20 ms and 0.045 arcmin for intervals of 80 and 160 ms. In the experiments with forced saccades, acceptable results are achieved by taking a single KF from a fixation period that occurs between saccades. The KF is chosen from the time between velocity peaks (i.e., after the end of or in between the saccades). The interval between the KFs in this strategy is almost 900 ms; therefore, the achieved RMSE is higher and increases from 0.36 arcmin for a 1° saccade to 5 arcmin for an 8° saccade. For in vivo eye movement data, both strategies for choosing the KFs are combined.

At the current stage of the device development, all the computations are implemented in the C language and run on a multithreaded CPU. The trajectory reconstruction is performed in post-processing. Processing 20 s of eye motion recording, with 25,000 frames and 1,000 KFs, requires approximately one minute for the N-back stage and three minutes for the calculation of KF displacements followed by the MDS trajectory optimization. Our preliminary experiments show that the N-back part of the algorithm can be performed in less than 100 µs per frame when implemented on a graphics processing unit (GPU), which makes a real-time implementation feasible.

3.2 Human eye experiments

Figure 5 illustrates a typical saccade of 4° in magnitude from the experiment described in subsection 2.4.2. In this case, retinal vessels were chosen as the tracking features. Selected frames are shown corresponding to the solid blue circles in the saccadic plot. Solid green and red circles represent the start and end of the saccade, respectively. For clarity of presentation, the start of the saccade is moved to the origin of the coordinate system.


Fig. 5. Typical saccade magnitude plot. Solid blue circles and their numbers correspond to the acquired FET images representing moving vascular features in the retina during the saccade. The yellow arrows show the direction of motion. The green and red solid circles correspond to the beginning and end of the saccade, respectively. The depicted saccade comes from the series of saccades shown in Fig. 6, shaded in blue.


The saccade data shown in Fig. 5 is a typical example from a series of voluntary back-and-forth saccades performed by subject 1 in the second part of the experiment. The complete series is presented in Fig. 6, where the saccade from Fig. 5 is shaded in blue.


Fig. 6. Typical example of the x- and y-retinal coordinates during a 20-s series of back-and-forth horizontal saccades between fixation targets with an angular separation of 4° for subject 1. The region shaded in blue indicates the saccade presented in Fig. 5. Green/red solid circles mark the starts/ends of saccades. The green-shaded area marks a gap in the trajectory points due to a blink. The areas shaded in grey correspond to the position and angular size of the fixation targets.


The green region indicates a gap in the trajectory due to a blink. One can notice that the trajectory reconstruction is not affected by the partial loss of data during the blink. Green and red solid circles represent the detected starts and ends of the saccades, respectively. Horizontal grey-shaded stripes mark the angular size of the fixation targets, also shown to scale on the right side of the plot. The time between the detected saccades corresponds well with the metronome setting, and the majority of the saccade magnitudes fall within the range of the fixation targets.

Follow-up correcting saccades, with saccadic undershoot or overshoot, were observed in all the trajectories. Because the task involved performing horizontal saccades (x-coordinate), the vertical component (y-coordinate) is small in comparison with the horizontal component. Nevertheless, the correlation between the x- and y-retinal trajectories is clearly evident, which indicates that the eye motion deviates from a horizontal line during the saccade.

Motion parameters such as velocity and acceleration can be readily calculated from the saccade trajectories. Figure 7 summarizes the displacement angular magnitudes, velocities, and accelerations of all 42 saccades from both 4° forced-saccade experiments on subject 1, with a distinction between temporal-nasal and nasal-temporal directions. A clear asymmetry can be observed between the directions, especially in the acceleration plots. Correcting saccades, which are comparatively much smaller in magnitude, are not shown in Fig. 7.


Fig. 7. Angular magnitude, velocity, and acceleration plots of all 42 detected saccades for both measurements from subject 1 for a fixation target separation of 4°. Orange lines correspond to temporal-nasal saccades, while blue lines correspond to nasal-temporal saccades.


Figures 8(a) and (d) depict the vertical and horizontal eye-motion components, respectively. In this plot, the motion components are plotted on separate axes with different scales. By combining both components, the x–y retinal trajectory is readily obtained; it is shown with color-coded velocity values in panel (c). Panel (b) is a projection of the horizontal and vertical trajectories from panels (a) and (d) in the form of a retinal position density map. The contours of the fixation targets in panel (b) are shown to scale. For further examples and results, please refer to Supplementary Materials Visualization 1, Visualization 2, Visualization 3, Visualization 4, Visualization 5, Visualization 6, Visualization 7, and Visualization 8, which show videos of FET imaging and retrieved trajectories during 2, 4, 6 and 8° saccades performed by the subjects.


Fig. 8. Example of a retinal trajectory obtained during a 20-s voluntary saccades experiment. Panels (a) and (d) show the y- and x-trajectories of the retina over time, respectively. Panel (c) shows the entire x–y trajectory. Panel (b) is the retinal position density map with up-to-scale contour of the fixation target. The region shaded in green indicates a blink. For further visualizations and examples, please refer to Visualization 1, Visualization 2, Visualization 3, Visualization 4, Visualization 5, Visualization 6, Visualization 7, and Visualization 8.


Figure 9 shows typical results of the in vivo fixation experiments for subject 1. Figures 9(a) and (d) show the vertical and horizontal eye motion components, respectively. Green-shaded regions represent blinks that occurred during the measurement. Figure 9(b) shows the retinal position density map for this measurement, with the target drawn to scale in the same panel. Although typical eye drift and microsaccades can be easily identified, it is clear that most of the time the eye is focused on a definite region, likely the center of the fixation target. These eye excursions are clearly visible in the individual x- and y-trajectories, as well as in panel (c), which shows the entire detected x–y retinal trajectory with color-coded velocity. For further examples and results, please refer to Supplementary Materials Visualization 9, Visualization 10, Visualization 11, and Visualization 12, which show videos of FET imaging and retrieved trajectories during fixation periods. In these examples, the tracking was performed on retinal vessels, the optic disc, larger retinal vessels near the optic disc, and the fovea, respectively.


Fig. 9. Example of a retinal trajectory obtained during a 20-s fixation experiment. (a) and (d) show the y- and x-trajectories of the retina, respectively. (c) shows the whole x–y trajectory. (b) retinal position density map. The regions shaded in green indicate blinks. See Visualization 9, Visualization 10, Visualization 11, and Visualization 12 in Supplementary Materials.


A summary of the detected saccades and microsaccades from all the measurements performed in this study is presented in Fig. 10 in the form of a saccadic main sequence [64]. We removed outliers by fitting a main sequence formula as proposed by Baloh [65] and by further visual inspection of the data points in the sequence. The final plot consists of 5,159 points, each one corresponding to a saccade or microsaccade.


Fig. 10. A main sequence of 5,159 saccades and microsaccades detected from 57 measurements performed in vivo during a period of 1,140 s. The inset shows the smallest microsaccade detected in this study, with an angular magnitude of 0.028°. The dashed rectangle shows the range of detected saccades reported in [35].


The results shown in Fig. 10 are in good agreement with previously published results obtained using different instrumentation [2,35]. Notably, our tracking system extends the range of detectable microsaccade magnitudes and increases the accuracy of their measurement. The inset of Fig. 10 shows the smallest microsaccade detected in this study, with a magnitude of 0.028°.

The results of this study demonstrate our system’s capability for the accurate reconstruction of retinal motion with a maximum angular resolution of 0.039 arcmin RMSE and a temporal resolution of up to 790 µs. Further parametric characterization of eye motion, including intersaccadic intervals and their statistical distribution and the number and duration of fixations, is currently being conducted in a clinical setting with a more statistically significant population.

4. Conclusions

We have demonstrated a novel, noninvasive eye-tracking system capable of detecting retinal displacements as small as 0.028° with an angular resolution of 0.039 arcmin and a maximum velocity of 300°/s across an angular span as wide as 8°. Our tracking algorithms quantify eye displacements using the shifts of a subset of frames in a sequence spanning the full acquisition cycle, obviating the need for a single reference frame and allowing for the precise measurement of eye movements exceeding the spatial extent of single acquired frames. Therefore, our system overcomes the limitation on maximum detectable saccadic magnitude and velocity characteristic of current image-based retinal trackers and allows the detection of finer features of eye motion, enabling new, promising opportunities in retinal imaging and clinical neuroscience. Furthermore, our system offers the ability to perform precise measurements of both microsaccades occurring during fixation and large saccades, without the need for any additional external imaging devices such as a wide-field SLO. The subtle features of saccadic dysfunction, fixation instability, and abnormal smooth pursuit can be readily extracted and quantified in deeper detail, thus offering a promising tool set for the early identification of biomarkers of neurodegenerative diseases. Moreover, the FET can be readily combined with other eye imaging modalities, such as SLO or OCT, to provide eye motion correction without major hardware changes to these modalities.

Funding

Fundacja na rzecz Nauki Polskiej (POIR.04.04.00-00-2070/16-00).

Acknowledgments

The project “FreezEYE Tracker – ultrafast system for image stabilization in biomedical imaging” was conducted within the TEAM TECH Programme of the Foundation for Polish Science, co-financed by the European Union under the European Regional Development Fund.

We would like to thank Carlos López-Mariscal, PhD for valuable help during manuscript preparation.

Disclosures

AM2M Ltd. L.P: MM (E), MN (I, E), KD (I, E), AS (I), MS (I).

References

1. S. Martinez-Conde, S. L. Macknik, and D. H. Hubel, “The role of fixational eye movements in visual perception,” Nat. Rev. Neurosci. 5(3), 229–240 (2004). [CrossRef]  

2. S. Martinez-Conde, J. Otero-Millan, and S. L. Macknik, “The impact of microsaccades on vision: towards a unified theory of saccadic function,” Nat. Rev. Neurosci. 14(2), 83–96 (2013). [CrossRef]  

3. N. J. Wade and B. W. Tatler, “Did Javal measure eye movements during reading,” J. Eye Mov. Res. 2, 1–7 (2009). [CrossRef]  

4. N. J. Wade, “How Were Eye Movements Recorded Before Yarbus?” Perception 44(8-9), 851–883 (2015). [CrossRef]  

5. A. L. Yarbus, “Methods,” in Eye Movements and Vision (Plenum Press, 1967), pp. 5–58. [CrossRef]  

6. D. A. Robinson, “A method of measuring eye movement using a scleral search coil in a magnetic field,” IEEE Trans. Bio-Med. Electron. 10(4), 137–145 (1963). [CrossRef]  

7. T. N. Cornsweet and H. D. Crane, “Accurate two-dimensional eye tracker using first and fourth Purkinje images,” J. Opt. Soc. Am. 63(8), 921–928 (1973). [CrossRef]  

8. SR Research Ltd., “EyeLink 1000 User Manual,” http://sr-research.jp/support/EyeLink%201000%20User%20Manual%201.5.0.pdf

9. Tobii Technology Inc., “Tobii Pro Spectrum Product Description,” https://www.tobiipro.com/siteassets/tobii-pro/product-descriptions/tobii-pro-spectrum-product-description.pdf/?v=2.0%0A

10. T. N. Cornsweet, “New Technique for the Measurement of Small Eye Movements,” J. Opt. Soc. Am. 48(11), 808–811 (1958). [CrossRef]  

11. R. D. Ferguson, “Servo tracking system utilizing phase-sensitive detection of reflectance variations,” U.S. patent 5,767,941 (1998).

12. R. D. Ferguson, “Servo tracking system utilizing phase-sensitive detection of reflectance variations,” U.S. patent 5,943,115 (1999).

13. D. X. Hammer, R. D. Ferguson, J. C. Magill, M. A. White, A. E. Elsner, and R. H. Webb, “Image stabilization for scanning laser ophthalmoscopy,” Opt. Express 10(26), 1542–1549 (2002). [CrossRef]  

14. D. X. Hammer, R. D. Ferguson, J. C. Magill, M. A. White, A. E. Elsner, and R. H. Webb, “Compact scanning laser ophthalmoscope with high-speed retinal tracker,” Appl. Opt. 42(22), 4621–4632 (2003). [CrossRef]  

15. R. D. Ferguson, D. X. Hammer, L. A. Paunescu, S. Beaton, and J. S. Schuman, “Tracking optical coherence tomography,” Opt. Lett. 29(18), 2139–2141 (2004). [CrossRef]  

16. D. X. Hammer, R. D. Ferguson, J. C. Magill, L. A. Paunescu, S. Beaton, H. Ishikawa, G. Wollstein, and J. S. Schuman, “Active retinal tracker for clinical optical coherence tomography systems,” J. Biomed. Opt. 10(2), 024038 (2005). [CrossRef]  

17. D. X. Hammer, R. D. Ferguson, C. E. Bigelow, N. V. Iftimia, T. E. Ustun, and S. A. Burns, “Adaptive optics scanning laser ophthalmoscope for stabilized retinal imaging,” Opt. Express 14(8), 3354–3367 (2006). [CrossRef]  

18. S. A. Burns, R. Tumbar, A. E. Elsner, D. Ferguson, and D. X. Hammer, “Large-field-of-view, modular, stabilized, adaptive-optics-based scanning laser ophthalmoscope,” J. Opt. Soc. Am. A 24(5), 1313–1326 (2007). [CrossRef]  

19. O. P. Kocaoglu, R. D. Ferguson, R. S. Jonnal, Z. Liu, Q. Wang, D. X. Hammer, and D. T. Miller, “Adaptive optics optical coherence tomography with dynamic retinal tracking,” Biomed. Opt. Express 5(7), 2262–2284 (2014). [CrossRef]  

20. R. H. Webb and G. W. Hughes, “Scanning laser ophthalmoscope,” IEEE Trans. Biomed. Eng. BME-28(7), 488–492 (1981). [CrossRef]  

21. D. P. Wornson, G. W. Hughes, and R. H. Webb, “Fundus tracking with the scanning laser ophthalmoscope,” Appl. Opt. 26(8), 1500–1504 (1987). [CrossRef]  

22. M. Stetter, R. A. Sendtner, and G. T. Timberlake, “A novel method for measuring saccade profiles using the scanning laser ophthalmoscope,” Vision Res. 36(13), 1987–1994 (1996). [CrossRef]  

23. J. B. Mulligan, “Recovery of motion parameters from distortions in scanned images,” in Proceedings of the Image Registration Workshop, J. Le Moigne, ed. (NASA Goddard Space Flight Center, 1997), pp. 281–292.

24. D. W. Arathorn, Q. Yang, C. R. Vogel, Y. Zhang, P. Tiruveedhula, and A. Roorda, “Retinally stabilized cone-targeted stimulus delivery,” Opt. Express 15(21), 13731–13744 (2007). [CrossRef]  

25. C. K. Sheehy, Q. Yang, D. W. Arathorn, P. Tiruveedhula, J. F. de Boer, and A. Roorda, “High-speed, image-based eye tracking with a scanning laser ophthalmoscope,” Biomed. Opt. Express 3(10), 2611–2622 (2012). [CrossRef]  

26. K. V. Vienola, B. Braaf, C. K. Sheehy, Q. Yang, P. Tiruveedhula, D. W. Arathorn, J. F. de Boer, and A. Roorda, “Real-time eye motion compensation for OCT imaging with tracking SLO,” Biomed. Opt. Express 3(11), 2950–2963 (2012). [CrossRef]  

27. B. Braaf, K. V. Vienola, C. K. Sheehy, Q. Yang, K. A. Vermeer, P. Tiruveedhula, D. W. Arathorn, A. Roorda, and J. F. de Boer, “Real-time eye motion correction in phase-resolved OCT angiography with tracking SLO,” Biomed. Opt. Express 4(1), 51–65 (2013). [CrossRef]  

28. C. K. Sheehy, P. Tiruveedhula, R. Sabesan, and A. Roorda, “Active eye-tracking for an adaptive optics scanning laser ophthalmoscope,” Biomed. Opt. Express 6(7), 2412–2423 (2015). [CrossRef]  

29. S. B. Stevenson, C. K. Sheehy, and A. Roorda, “Binocular eye tracking with the tracking scanning laser ophthalmoscope,” Vision Res. 118, 98–104 (2016). [CrossRef]  

30. Q. Yang, J. Zhang, K. Nozato, K. Saito, D. R. Williams, A. Roorda, and E. A. Rossi, “Closed-loop optical stabilization and digital image registration in adaptive optics scanning light ophthalmoscopy,” Biomed. Opt. Express 5(9), 3174–3191 (2014). [CrossRef]  

31. M. Damodaran, K. V. Vienola, B. Braaf, K. A. Vermeer, and J. F. de Boer, “Digital micromirror device based ophthalmoscope with concentric circle scanning,” Biomed. Opt. Express 8(5), 2766–2780 (2017). [CrossRef]  

32. K. V. Vienola, M. Damodaran, B. Braaf, K. A. Vermeer, and J. F. de Boer, “In vivo retinal imaging for fixational eye motion detection using a high-speed digital micromirror device (DMD)-based ophthalmoscope,” Biomed. Opt. Express 9(2), 591–602 (2018). [CrossRef]  

33. M. Azimipour, R. J. Zawadzki, I. Gorczynska, J. Migacz, J. S. Werner, and R. S. Jonnal, “Intraframe motion correction for raster-scanned adaptive optics images using strip-based cross-correlation lag biases,” PLoS One 13(10), e0206052 (2018). [CrossRef]  

34. A. E. Salmon, R. F. Cooper, C. S. Langlo, A. Baghaie, A. Dubra, and J. Carroll, “An automated reference frame selection (ARFS) algorithm for cone imaging with adaptive optics scanning light ophthalmoscopy,” Trans. Vis. Sci. Tech. 6(2), 9–15 (2017). [CrossRef]  

35. C. K. Sheehy, E. S. Bensinger, A. Romeo, L. Rani, N. Stepien-Bernabe, B. Shi, Z. Helft, N. Putnam, C. Cordano, J. M. Gelfand, R. Bove, S. B. Stevenson, and A. J. Green, “Fixational microsaccades: A quantitative and objective measure of disability in multiple sclerosis,” Mult. Scler. J., 1352458519894712 (2020).

36. A. T. Duchowski, “A breadth-first survey of eye-tracking applications,” Behav. Res. Methods, Instruments. Comput. 34(4), 455–470 (2002). [CrossRef]  

37. J. Otero-Millan, A. Serra, R. J. Leigh, X. G. Troncoso, S. L. Macknik, and S. Martinez-Conde, “Distinctive features of saccadic intrusions and microsaccades in progressive supranuclear palsy,” J. Neurosci. 31(12), 4379–4387 (2011). [CrossRef]  

38. J. Otero-Millan, R. Schneider, R. J. Leigh, S. L. Macknik, and S. Martinez-Conde, “Saccades during attempted fixation in Parkinsonian disorders and recessive ataxia: from microsaccades to square-wave jerks,” PLoS One 8(3), e58535 (2013). [CrossRef]  

39. J. Kerr-Gaffney, A. Harrison, and K. Tchanturia, “Eye-tracking research in eating disorders: A systematic review,” Int. J. Eat. Disord. 52(1), 3–27 (2019). [CrossRef]  

40. M. Rolfs, “Microsaccades: small steps on a long way,” Vision Res. 49(20), 2415–2441 (2009). [CrossRef]  

41. C. Viviane, K. Peter, and K. Sabine, “Eye tracking in virtual reality,” J. Eye Mov. Res. 12(1), 1–8 (2019). [CrossRef]  

42. M. R. MacAskill and T. J. Anderson, “Eye movements in neurodegenerative diseases,” Curr. Opin. Neurol. 29(1), 61–68 (2016).

43. P. J. Benson, S. A. Beedie, E. Shephard, I. Giegling, D. Rujescu, and D. St. Clair, “Simple viewing tests can detect eye movement abnormalities that distinguish schizophrenia cases from controls with exceptional accuracy,” Biol. Psychiatry 72(9), 716–724 (2012).

44. C. C. Wu, B. Cao, V. Dali, C. Gagliardi, O. J. Barthelemy, R. D. Salazar, M. Pomplun, A. Cronin-Golomb, and A. Yazdanbakhsh, “Eye movement control during visual pursuit in Parkinson’s disease,” PeerJ 6, e5442 (2018).

45. G. T. Gitchel, P. A. Wetzel, and M. S. Baron, “Pervasive ocular tremor in patients with Parkinson disease,” Arch. Neurol. 69(8), 1011–1017 (2012).

46. W. A. Fletcher and J. A. Sharpe, “Saccadic eye movement dysfunction in Alzheimer’s disease,” Ann. Neurol. 20(4), 464–471 (1986).

47. R. M. Mallery, P. Poolman, M. J. Thurtell, J. M. Full, J. Ledolter, D. Kimbrough, E. M. Frohman, T. C. Frohman, and R. H. Kardon, “Visual fixation instability in multiple sclerosis measured using SLO-OCT,” Invest. Ophthalmol. Visual Sci. 59(1), 196–201 (2018).

48. J. A. Bijvank, L. J. Van Rijn, L. J. Balk, H. S. Tan, B. M. J. Uitdehaag, and A. Petzold, “Diagnosing and quantifying a common deficit in multiple sclerosis: Internuclear ophthalmoplegia,” Neurology 92(20), e2299–e2308 (2019).

49. R. Rodríguez-Labrada, Y. Vázquez-Mojena, and L. Velázquez-Pérez, “Eye movement abnormalities in neurodegenerative diseases,” in Eye Motility (IntechOpen, 2019).

50. F. LaRocca, A.-H. Dhalla, M. P. Kelly, S. Farsiu, and J. A. Izatt, “Optimization of confocal scanning laser ophthalmoscope design,” J. Biomed. Opt. 18(7), 076015 (2013).

51. A. B. Watson and J. I. Yellott, “A unified formula for light-adapted pupil size,” J. Vis. 12(10), 12 (2012).

52. W. J. Donnelly III and A. Roorda, “Optimal pupil size in the human eye for axial resolution,” J. Opt. Soc. Am. A 20(11), 2010–2015 (2003).

53. W. N. Charman and G. Heron, “Fluctuations in accommodation: a review,” Ophthalmic Physiol. Opt. 8(2), 153–164 (1988).

54. F. Lengwiler, D. Rappoport, G. P. Jaggi, K. Landau, and G. L. Traber, “Reliability of cyclotorsion measurements using scanning laser ophthalmoscopy imaging in healthy subjects: the CySLO study,” Br. J. Ophthalmol. 102(4), 535–538 (2018).

55. G. D. Evangelidis and E. Z. Psarakis, “Parametric image alignment using enhanced correlation coefficient maximization,” IEEE Trans. Pattern Anal. Mach. Intell. 30(10), 1858–1865 (2008).

56. I. Borg and P. Groenen, Modern Multidimensional Scaling: Theory and Applications (Springer, 1997).

57. R. Engbert and R. Kliegl, “Microsaccades uncover the orientation of covert attention,” Vision Res. 43(9), 1035–1045 (2003).

58. W. Dai, I. Selesnick, J. R. Rizzo, J. Rucker, and T. Hudson, “A parametric model for saccadic eye movement,” in 2016 IEEE Signal Processing in Medicine and Biology Symposium (SPMB) (IEEE, 2016), pp. 1–6.

59. R. Engbert, K. Mergenthaler, P. Sinn, and A. Pikovsky, “An integrated model of fixational eye movements and microsaccades,” Proc. Natl. Acad. Sci. 108(39), E765–E770 (2011).

60. F. C. Delori, R. H. Webb, D. H. Sliney, and American National Standards Institute, “Maximum permissible exposures for ocular safety (ANSI 2000), with emphasis on ophthalmic devices,” J. Opt. Soc. Am. A 24(5), 1250–1265 (2007).

61. M. Nowakowski, M. Sheehan, D. Neal, and A. V. Goncharov, “Investigation of the isoplanatic patch and wavefront aberration along the pupillary axis compared to the line of sight in the eye,” Biomed. Opt. Express 3(2), 240–258 (2012).

62. L. Thaler, A. C. Schütz, M. A. Goodale, and K. R. Gegenfurtner, “What is the best fixation target? The effect of target shape on stability of fixational eye movements,” Vision Res. 76, 31–42 (2013).

63. D. A. Atchison, S. W. Fisher, C. A. Pedersen, and P. G. Ridall, “Noticeable, troublesome and objectionable limits of blur,” Vision Res. 45(15), 1967–1974 (2005).

64. A. T. Bahill, M. R. Clark, and L. Stark, “The main sequence, a tool for studying human eye movements,” Math. Biosci. 24(3-4), 191–204 (1975).

65. R. W. Baloh, A. W. Sills, W. E. Kumley, and V. Honrubia, “Quantitative measurement of saccade amplitude, duration, and velocity,” Neurology 25(11), 1065 (1975).

Supplementary Material (12)

Visualization 1: Video showing the retinal trajectory reconstruction. Subject 1 performing 2° saccades for 20 seconds, with the first 5 seconds shown in slow motion.
Visualization 2: Video showing the retinal trajectory reconstruction. Subject 2 performing 2° saccades for 20 seconds, with the first 5 seconds shown in slow motion.
Visualization 3: Video showing the retinal trajectory reconstruction. Subject 1 performing 4° saccades for 20 seconds, with the first 5 seconds shown in slow motion.
Visualization 4: Video showing the retinal trajectory reconstruction. Subject 2 performing 4° saccades for 20 seconds, with the first 5 seconds shown in slow motion.
Visualization 5: Video showing the retinal trajectory reconstruction. Subject 3 performing 6° saccades for 20 seconds, with the first 5 seconds shown in slow motion.
Visualization 6: Video showing the retinal trajectory reconstruction. Subject 2 performing 6° saccades for 20 seconds, with the first 5 seconds shown in slow motion.
Visualization 7: Video showing the retinal trajectory reconstruction. Subject 1 performing 8° saccades for 20 seconds, with the first 5 seconds shown in slow motion.
Visualization 8: Video showing the retinal trajectory reconstruction. Subject 3 performing 8° saccades for 20 seconds, with the first 5 seconds shown in slow motion.
Visualization 9: Video showing the retinal trajectory reconstruction. Subject 1 fixating on a target for 20 seconds, with the first 5 seconds shown in slow motion. Imaging retinal vessels.
Visualization 10: Video showing the retinal trajectory reconstruction. Subject 3 fixating on a target for 20 seconds, with the first 5 seconds shown in slow motion. Imaging the optic nerve.
Visualization 11: Video showing the retinal trajectory reconstruction. Subject 2 fixating on a target for 20 seconds, with the first 5 seconds shown in slow motion. Imaging retinal vessels close to the optic nerve.
Visualization 12: Video showing the retinal trajectory reconstruction. Subject 1 fixating on a target for 20 seconds, with the first 5 seconds shown in slow motion. Imaging the fovea.



Figures (10)

Fig. 1. Schematic diagram of the optical design of the FET. LD—laser diode, CL—collimating lens, BS—pellicle beam splitter, L1–L9—achromatic doublet lenses, MEMS 2D—two-axis resonant MEMS scanning mirror, FET PS—positioning galvanometric mirrors, D—variable iris aperture, PH—pinhole, APD—avalanche photodiode, HIL—halogen lamp, BST—Badal stage, AES—artificial eye scanner set, AEO—artificial eye objective, AET—artificial eye test target. Stars and circles denote conjugated plane pairs. Inset A: FT1, FT2—targets for static fixation and saccadic measurements, respectively. The diameter of FT1 subtends 1.5°. The diameters of the individual targets in FT2 subtend 0.6°, and their variable baseline separation ranges from 1° to 8°. Inset B: artificial eye for system testing and calibration, described in subsection 2.3.

Fig. 2. Examples of different tracking features in the human retina in vivo (1–15) and in the artificial eye (I–III). Images 1–2: part of an optic nerve; images 3–4: fovea; images 5–15: retinal vasculature of various sizes. Images I–III in the red frame were acquired using an artificial eye (see description in subsection 2.3) and are shown for visual comparison with images of the living eye. The angular extent of the scale bars is 1°.

Fig. 3. Retinal trajectory reconstruction algorithm and saccade extraction; see text for details (a minimal code sketch of the displacement step follows this figure list). a) Raw data acquired along the Lissajous scanning pattern, distorted by retinal motion. b) Image frame created by re-sampling the data to a rectangular grid with equidistant pixels, without motion compensation. c) Image frame corrected for motion artifacts. d) Calculation of the displacement between consecutive frames using image correlation. e) Displacements calculated for a number of previously acquired frames. f) Trajectory recovery using the N-back algorithm. g) Trajectory correction using Key Frames. h) Saccade detection using eye motion velocity, and calculation of saccade magnitude.

Fig. 4. Root square error (RSE) time series for the N-back (blue) and Key Frames (green) trajectories. These error values derive from the 4° saccade experiment. Only the first 3.25 s are shown for clarity.

Fig. 5. Typical saccade magnitude plot. Solid blue circles and their numbers correspond to the acquired FET images representing moving vascular features in the retina during the saccade. The yellow arrows show the direction of motion. The green and red solid circles correspond to the beginning and end of the saccade, respectively. The depicted saccade comes from the series of saccades shown in Fig. 6, shaded in blue.

Fig. 6. Typical example of the x- and y-coordinates of the retina during a 20-s series of back-and-forth horizontal saccades between fixation targets with an angular separation of 4°, for subject 1. The region shaded in blue indicates the saccade presented in Fig. 5. Green/red solid circles mark the starts/ends of saccades. The green-shaded area marks a gap in the trajectory points due to a blink. The area shaded in grey corresponds to the position and angular size of the fixation targets.

Fig. 7. Angular magnitude, velocity, and acceleration plots of all 42 detected saccades from both measurements of subject 1, for a fixation target separation of 4°. Orange lines correspond to temporal-nasal saccades; blue lines correspond to nasal-temporal saccades.

Fig. 8. Example of a retinal trajectory obtained during a 20-s voluntary saccade experiment. Panels (a) and (d) show the y- and x-trajectories of the retina over time, respectively. Panel (c) shows the entire x–y trajectory. Panel (b) is the retinal position density map with an up-to-scale contour of the fixation target. The region shaded in green indicates a blink. For further examples, see Visualization 1 through Visualization 8.

Fig. 9. Example of a retinal trajectory obtained during a 20-s fixation experiment. Panels (a) and (d) show the y- and x-trajectories of the retina, respectively. Panel (c) shows the whole x–y trajectory. Panel (b) is the retinal position density map. The regions shaded in green indicate blinks. See Visualization 9 through Visualization 12 in the Supplementary Material.

Fig. 10. A main sequence of 5,159 saccades and microsaccades detected from 57 measurements performed in vivo over a period of 1,140 s. The inset shows the smallest microsaccade detected in this study, with an angular magnitude of 0.028°. The dashed rectangle shows the range of detected saccades reported in [35].
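
The displacement step of Fig. 3d correlates consecutive frames to extract their relative shift. As a rough illustration only (the sketch below is not the authors' implementation; in practice one would expect subpixel refinement or parametric alignment in the spirit of [55]), the following minimal Python function estimates an integer-pixel translation between two frames from the peak of their FFT-based cross-correlation. The function name and the pure-translation assumption are ours.

```python
import numpy as np

def frame_displacement(frame_a, frame_b):
    """Integer-pixel shift of retinal features from frame_a to frame_b,
    estimated from the peak of the FFT-based cross-correlation."""
    # Cross-power spectrum; its inverse FFT peaks at the relative shift.
    cross = np.conj(np.fft.fft2(frame_a)) * np.fft.fft2(frame_b)
    corr = np.fft.ifft2(cross).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    # The correlation is circular: wrap shifts beyond half the frame size
    # into negative displacements.
    dims = np.array(corr.shape)
    mask = peak > dims // 2
    peak[mask] -= dims[mask]
    return peak  # (dy, dx) displacement in pixels
```

Applied to the motion-corrected, resampled frames of Fig. 3c, successive calls to such a function yield the pairwise displacements that feed the trajectory recovery of Fig. 3e–f.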

Equations (4)


(1) \( f = M f_L \)

(2) \( \mathbf{t}_m = \mathbf{t}_{m-1} + \mathbf{p}_{m,\,m-1} \)

(3) \( \mathbf{t}_m = \dfrac{\sum_{n \in B} w_{m,n} \left( \mathbf{t}_{m-n} + \mathbf{p}_{m,\,m-n} \right)}{\sum_{n \in B} w_{m,n}} \)

(4) \( \sigma\left( \hat{T}^{\mathrm{KF}} \right) = \sum_{a<b} w_{a,b} \left( P_{a,b}^{\mathrm{KF}} - \left| \hat{\mathbf{t}}_a^{\mathrm{KF}} - \hat{\mathbf{t}}_b^{\mathrm{KF}} \right| \right)^2 \)
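
Equation (2) simply chains consecutive displacements, so measurement errors accumulate along the trajectory; Eq. (3) generalizes it to the N-back estimate, a weighted average over a set B of look-back offsets. The following is a minimal Python sketch of Eq. (3) under assumed data structures (the dictionaries p and w keyed by frame pairs are our naming, not the paper's):

```python
import numpy as np

def n_back_trajectory(p, w, num_frames, n_back):
    """Recover retinal positions t_m via the weighted N-back average of
    Eq. (3); with n_back == 1 this reduces to the chaining of Eq. (2).

    p : dict, (m, m - n) -> 2-vector displacement between frames m and m - n
    w : dict, (m, m - n) -> scalar weight (e.g. correlation quality)
    """
    t = np.zeros((num_frames, 2))  # t_0 anchors the trajectory at the origin
    for m in range(1, num_frames):
        num, den = np.zeros(2), 0.0
        for n in range(1, min(n_back, m) + 1):
            key = (m, m - n)
            if key in p:  # pairs rejected by the correlation step are skipped
                num += w[key] * (t[m - n] + np.asarray(p[key]))
                den += w[key]
        t[m] = num / den if den > 0 else t[m - 1]  # hold position if no data
    return t
```

Equation (4) is then the stress function of multidimensional scaling [56]: the Key Frames positions \( \hat{\mathbf{t}}^{\mathrm{KF}} \) are chosen to minimize the weighted squared mismatch between the measured pairwise displacement magnitudes \( P_{a,b}^{\mathrm{KF}} \) and the inter-frame distances implied by the estimated positions, which corrects accumulated drift without requiring a fixed reference frame.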