As a proof of concept we apply a technique called SLODAR as implemented in astronomy to the human eye. The technique uses single exposures of angularly separated “stars” on a Hartmann-Shack sensor to determine a profile of aberration strength localised in altitude in astronomy, or path length into the eye in our application. We report on the success of this process with both model and real human eyes. There are similarities and significant differences between the astronomy and vision applications.
© 2008 Optical Society of America
In astronomy, adaptive optics is a well-accepted technology, with millions of dollars spent on removing the changing effects of the turbulent atmosphere to achieve diffraction-limited imagery. The use of adaptive optics in vision science, for the study of the human eye’s function and for retinal imaging, is also becoming well accepted in research, but its application is still in its infancy compared with the reach attained in astronomy. Its potential significance for disease diagnosis, optical correction, and physiological understanding is marked. The field of adaptive optics includes both wavefront sensing and wavefront correction, and so far most of its application to vision science and ophthalmology has been with the former. This paper proposes the use of Slope Detection and Ranging (SLODAR), a wavefront sensing technique developed in astronomy but yet to be used in vision science. The use of SLODAR in vision science will allow the determination of the origin of aberrations in the eye, and hence reveal more about ocular optical structure.
Of the components of the eye, the cornea has the greatest refractive power and as a result produces some of the largest aberrations. Other optical elements within the eye produce further aberrations that either add to or compensate for those generated at the cornea. He et al. 1, Artal et al. 2, and Wang et al. 3 measured the aberrations produced by the cornea and by the complete eye using corneal topography and a wavefront sensor; internal aberrations were calculated by subtracting the former measurement from the latter. Artal et al. 4 and more recently Dubinin et al. 5 measured the internal aberrations with a Hartmann-Shack sensor by submerging the eye in water to eliminate the effects of the cornea. The aberrations generated by the posterior cornea and crystalline lens usually partly compensated for the aberrations produced by the anterior cornea. Recent work has attempted to model the structure and performance of the crystalline lens6–8. Some information is obtained through alternative methods such as MRI9, but there remains a need for fast, in-situ measurement of the contributions within the structure of the eye to the total aberration. These attempts to quantify the internal structure and operation of the eye are at the forefront of activity in vision – the needs for and benefits of this knowledge have been well documented by others.10
Several methods exist in astronomy to measure the severity and location of turbulence and the consequent wave aberrations. Modal tomography, as suggested by Ragazzoni et al. 11, is one technique available in astronomy to calculate the aberrations produced at certain altitudes above the telescope; it uses several laser guide stars to retrieve the three-dimensional distribution of the perturbing layers tomographically. Goncharov et al. 12 implemented a modified version of Ragazzoni et al.’s technique that takes into account the refractive power of the layers within the eye to determine the aberrations at fixed planes. Two other methods that may be applied without changing existing optical systems are Scintillation Detection and Ranging (SCIDAR) and Slope Detection and Ranging (SLODAR). They employ imaging or manipulation of the pupil field, rather than of the image formed by the telescope, in conjunction with binary or multiple star scenes, to effectively triangulate the altitudes of turbulent layers and determine their motion and severity. In astronomy both are restricted in their use by the brightness of the stars in the area of interest. SCIDAR has been adopted at many observatories around the world13,14. It relies on the scintillation imposed by intervening phase screens on two angularly separated sources to extract the correlation profiles along those paths that suffer similar aberrations somewhere on their way to the telescope’s pupil. By conjugating the instrument to a plane below the pupil, one obtains a stronger scintillation pattern and becomes sensitive to variations close to the pupil. This refinement is called Generalised SCIDAR14.
SLODAR has been successfully employed by Wilson15, Johnston et al.13, and by Goodwin, Jenkins and Lambert16,17. The pupil is subdivided into an array of lenslets, with a slightly different image of the binary sources formed by each lenslet. With reference to Fig. 1, if star A and star B are considered in lenslets i and j, then movement common to both stars is a result of tip-tilt in the locality of the lenslet. Each lenslet can have a different tip-tilt as an approximation to the wavefront across the pupil: this is the operation of the traditional Hartmann-Shack wavefront sensor. The existence of dual star images, and slightly different wavefronts at the sensor from each, allows monitoring of movement between stars A and B in lenslet i, and of relative movements between star A in lenslet i and star B in lenslet j, and so on. By correlating these movements, the lenslet combinations that exhibit common motion can be determined, and hence triangulated to the region in space where light from star A and star B shares a common aberration (circled in Fig. 1). This is the extent of the analysis in astronomy, whereby the changing layers of turbulence are described in terms of altitude above the pupil and density of aberration structure, and the speed of motion by temporal correlation, for both analysis and correction.
It should be noted that this is not the only adoption of terminology from astronomy to vision. Observations away from the line of sight exhibit a different aggregate aberration to that found on the line of sight18, and so Dubinin et al. 5 examined the concept of an isoplanatic patch, used regularly in astronomy to assess the reduction in imaging resolution caused by turbulence, to characterize the best average correction over a range of angles in vision. The eye exhibits aberration at different depths due to the structure of the refracting elements, so the wavefront observed from an angle is the projected combination of these aberrations. The variation of the aberration with angle is anisoplanatism. SLODAR is a proven technique for assessing the location and strength of the aberration with altitude. We employ it here with respect to depth in the eye.
2. Experimental overview
When using SLODAR with the eye there is no predetermined star spacing, since the stars are created by illumination of the retina and are not confined to those that happen to be suitably placed in the region of the sky during the observation period. This is a significant advantage from an algorithmic perspective. Any and many arrangements of angular separation of the point sources on the retina can be used, allowing the aberrations at specific layers within the eye to be calculated. Fig. 1 shows the application of SLODAR to the human eye and its refracting surfaces, such as the tear film/anterior cornea, posterior cornea, anterior lens and posterior lens, and variations due to the gradient-index profile of the lens. A probe beam pattern is imaged into the eye (not shown) and is scattered from the retina to provide the necessary point-source stars. The linear spacing of these on the retina is shown on the vertical axis, and at this separation the resolvable surfaces, set by the intersection of the rays, number three within the domain of the crystalline lens and one at the cornea. As the layer spacing is a function of the angle between the “stars”, the equally resolvable altitudes, or more aptly depths, occur at uneven spacings, as illustrated by the vertical lines. These will in fact be surfaces that are not necessarily planar but are described by the intersection of the rays at increments in optical path length. The optical path length accounts for the refractive index of the media through which the rays travel on their way to each lenslet. Moving the surface to which the Hartmann-Shack is conjugated (where the principal rays cross) is the recently proposed technique of Generalised SLODAR16.
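The triangulation of layer depth can be illustrated with a small sketch. In the parallel-ray (astronomy) limit, a star separation θ and lenslet pitch w place layer k at h_k = k·w/θ; the function name and the pupil/pitch numbers below are ours for illustration only, and in the eye the converging rays and refractive indices make the true surfaces unevenly spaced and curved.

```python
import math

def layer_depths(lenslet_pitch_m, star_separation_rad, n_lenslets):
    """Resolvable layer altitudes for plain SLODAR with parallel rays.

    Layer k is triangulated where rays from the two stars, offset by k
    lenslets, intersect: h_k = k * w / theta.  (In the eye the rays are
    fan shaped, so the actual surfaces are unevenly spaced and curved;
    this parallel-ray formula is the astronomy limit.)
    """
    w, theta = lenslet_pitch_m, star_separation_rad
    return [k * w / theta for k in range(n_lenslets)]

# Illustrative numbers only: a 5.6 mm pupil sampled by 14 lenslets
# (0.4 mm pitch) and a 5.8 degree star separation.
depths = layer_depths(0.4e-3, math.radians(5.8), 14)
```

With these assumed numbers the first resolvable layer falls roughly 4 mm behind the conjugate plane, a spacing commensurate with ocular dimensions.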
Another advantage of applying SLODAR to vision is that the necessary information may be collected in a single exposure, effectively capturing a frozen eye structure. This structure will change as the subject performs a visual task. Still within this “frozen” time, which is currently accepted to be of the order of 30 ms19, multiple snapshots of the eye may be used with differently spaced “stars”, or different conjugate frames, to obtain finer resolution in depth across each surface. This mix of angular separations of the “stars” and conjugated depths, while the eye does not have time to change, is possible with active optics in both the illumination system that creates the “stars” and the relay to the Hartmann-Shack sensor. Throughout this paper we refer to the point sources scattering at the subject’s retina as “stars” to use the terminology originating in the astronomical application.
3. Analysis of the data
As with any Hartmann-Shack system, the first processing step is the determination of the image centroids corresponding to each lenslet because these are related to the slope of the wavefront in that local region of the plane to which the lenslet array is conjugated. Usually this plane is the pupil of the subject. For each possible star the approximation to the wavefront originating at that star may be determined from the centroid position, and could be represented by a modal decomposition into Zernike polynomial coefficients.
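As a minimal sketch of this first step, assuming a simple thresholded centre-of-mass estimator (the actual Matlab implementation may differ):

```python
import numpy as np

def spot_centroid(subimage, threshold=0.0):
    """Centre-of-mass centroid of one lenslet subimage, in pixels.

    The centroid offset from the lenslet's optical axis is proportional
    to the mean wavefront slope over that lenslet.
    """
    img = np.asarray(subimage, dtype=float)
    img = np.where(img > threshold, img - threshold, 0.0)  # crude noise floor
    total = img.sum()
    if total == 0:
        raise ValueError("no signal in subimage")
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return (xs * img).sum() / total, (ys * img).sum() / total

# A synthetic spot at (x, y) = (4, 3) in a 9x9 window:
spot = np.zeros((9, 9))
spot[3, 4] = 1.0
cx, cy = spot_centroid(spot)
```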
The SLODAR algorithm yields an equivalent to turbulent layer strength as a function of triangulated altitude, or more aptly, optical path length into the eye. The elements of the eye, while changeable, are nowhere near as volatile as eddy formation and dissipation in atmospheric turbulent layers, so there should be more information to be gleaned from the algorithm in this case than the equivalent turbulence strength C_n^2, which is derived from the structure function of the bulk turbulence refractive index variation, D_n(r⃗1, r⃗2) = ⟨[n(r⃗1) − n(r⃗2)]²⟩. In astronomy, while inhomogeneous media are considered, the structure function is assumed to be an isotropic function of the separation between any two points in the turbulent volume, and hence D_n(r) = ⟨[n(r⃗1) − n(r⃗1 + r⃗)]²⟩. The covariance of centroids can then be related to the power spectral density, Φ_n(κ), which in turn is proportional to the cumulative C_n^2 by fluid-dynamics models, allowing the C_n^2 parameter to be resolved into layers of different strength at different heights.
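For reference, under the standard Kolmogorov model (our addition; the text invokes fluid-dynamics models only generically) the isotropic structure function and the refractive-index power spectral density are related to C_n^2 by:

```latex
D_n(\vec{r}) = \left\langle \left[ n(\vec{r}_1) - n(\vec{r}_1 + \vec{r}) \right]^2 \right\rangle
             = C_n^2 \, r^{2/3},
\qquad
\Phi_n(\kappa) = 0.033 \, C_n^2 \, \kappa^{-11/3}.
```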
We apply the method of Wilson15 as in astronomy to identify the depth location of the aberrating elements. The centroids are transformed into the coordinate space parallel and perpendicular to the interstellar axis joining the “stars”. The spatial arrangements of centroids from different stars are cross-correlated, and those of the same stars are auto-correlated. The spatial Fourier transform of the cross-correlation yields the cross-power spectrum and, of the auto-correlation, the power spectrum. The use of a noise-compensation factor in the denominator of the subsequent division of the spatial cross-power spectrum by the spatial power spectrum, prior to inverse Fourier transforming, yields a deconvolution result whose radial deviation from the origin is related to the strength of the aberration at each depth layer. A cut along the direction of the interstellar axis reveals the required C_n^2 profile.
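The pipeline just described can be sketched as follows, assuming the centroid slopes have already been arranged on the lenslet grid; `slodar_profile` and the noise constant are illustrative names of ours, not the authors' code:

```python
import numpy as np

def slodar_profile(slopes_a, slopes_b, noise=1e-3):
    """Wiener-deconvolved SLODAR response (a sketch of Wilson's method).

    slopes_a, slopes_b: 2-D arrays of centroid slopes (one value per
    lenslet) for the two stars, resolved parallel to the interstellar
    axis.  Returns the 2-D deconvolution result; a cut through its
    centre along the interstellar axis gives the depth profile.
    """
    A = np.fft.fft2(slopes_a)
    B = np.fft.fft2(slopes_b)
    cross_power = A * np.conj(B)     # spectrum of the cross-correlation
    auto_power = A * np.conj(A)      # spectrum of the auto-correlation
    result = np.fft.ifft2(cross_power / (auto_power + noise))
    return np.fft.fftshift(result.real)

# Toy check: identical slope maps deconvolve to a single central peak,
# i.e. all common motion is attributed to the conjugate (zero-depth) layer.
rng = np.random.default_rng(0)
s = rng.standard_normal((14, 14))
prof2d = slodar_profile(s, s)
cut = prof2d[7, :]   # horizontal cut through the centre row
```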
This is performed on one frame, or for each of several frames. Unlike the case in astronomy, where the SLODAR process runs over many thousands of frames to statistically average the phase structure, we have a limited number of frames to work with for any observation, but do not expect large variation in the phase structure. Nevertheless there are other noise sources in the data acquisition, including centroiding noise, so a measure of averaging is helpful. The limited number of observations has implications when attempting to normalise the cross-correlation for the number of baselines (lenslet separations) possible in the data, O(i,j) 15. There are fewer baselines that determine contributions at deeper layers than there are combinations of lenslet spacings carrying information for the closer layers, so the deeper layers are very susceptible to noise effects and can ill-condition the process. For the results presented here we chose not to normalise for the number of baselines, so a bias is present in the result towards those layers closest to the conjugated plane. To test the implications of this bias we induced an artificial layer in the data at the depth of the seventh resolvable layer using a 14×14 lenslet grid. The resultant C_n^2 peak is reduced to two-thirds of that obtained if the layer were instead inserted at the pupil.
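The falling baseline count can be sketched with a simplified 1-D count along the interstellar axis (our simplification; the full normalisation O(i,j) counts 2-D lenslet pair offsets):

```python
def baselines_per_layer(n_lenslets):
    """Number of lenslet pairs available at each separation (1-D count).

    A separation of k lenslets triangulates layer k; only n - k such
    pairs exist, so deep layers are measured by few baselines and are
    correspondingly noise-prone.
    """
    n = n_lenslets
    return {k: n - k for k in range(n)}

counts = baselines_per_layer(14)
```

For the 14-lenslet case, layer 1 is sampled by 13 baselines but layer 13 by only one, which is why the deepest layers can ill-condition the deconvolution.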
To compensate for the small number of frames in each dataset, we suggest an optional further step in the analysis involving an iterative method resembling the Ayers-Dainty20 algorithm to improve the deconvolved depth profile. The restriction of confining the 2-D FFT of the deconvolution result to the known lenslets is employed in one domain, and a weak positivity constraint is enforced on the result domain when iterating between the 2-D FFT and profile domains. The refractive strength cannot physically take negative values. Ten to twenty iterations are sufficient to obtain convergence for this dataset.
In astronomy the cumulative C_n^2 from this profile is normalised using other information such as an estimate of the Fried parameter (r_o) or the variance of the phase. The Fried parameter r_o and the isoplanatic angle θ_o are both dependent on the cumulative C_n^2 profile over the height/depth range, so if these are estimated by another method, the aberration strength at each layer can be quantified. In astronomy these quantities might be measured by the likes of a Differential Image Motion Monitor (DIMM), which can be implemented on the Hartmann-Shack centroids directly17. However, the C_n^2(h) profile has a different meaning in our application as there is no immediate power-law arrangement based on turbulent eddy development and flow; rather, a more deterministic aberration exists at each layer. The spatial phase change in each layer is deterministic, so it will depend on the coordinates in the layer being described, r⃗1 and r⃗2, rather than simply the distance of separation, r; unlike the turbulence, the phase change may not be considered isotropic, and hence r_o will now be a function of the mean angle of the stars from the optic axis. The Fried parameter (r_o) is the aperture width over which the mean square phase variance is 1 rad². This is perhaps the best way to estimate an r_o equivalent, using the difference in the wavefront reconstruction from each star, and indeed this is the focus of recent work by Dubinin et al. 5 We make no use of this other than to let the reader compare the severity of the aberrations with that experienced in astronomy.
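One hedged way to form an astronomy-style equivalent is to invert Noll's piston-removed Kolmogorov result, σ² = 1.0299 (D/r_o)^(5/3); the eye's phase structure is not Kolmogorov, so this gives only a severity scale for comparison, and the example variance below is illustrative rather than taken from the paper's data:

```python
def d_over_r0(phase_variance_rad2):
    """Aperture-to-Fried-parameter ratio from piston-removed variance.

    Uses Noll's Kolmogorov result sigma^2 = 1.0299 * (D/r0)**(5/3).
    In the eye the phase structure is deterministic and anisotropic,
    so this yields only an astronomy-style equivalent for comparison.
    """
    return (phase_variance_rad2 / 1.0299) ** (3.0 / 5.0)

# A hypothetical residual phase variance of 5.7 rad^2:
ratio = d_over_r0(5.7)
```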
4. Experimental arrangement
To demonstrate applying SLODAR to the eye, several experiments on a single-lens model eye were conducted using a 543 nm HeNe laser illuminating through a HoloEye liquid crystal spatial light modulator (SLM) and a relay lens system to generate the stars on the retina. The optical system is illustrated in Fig. 2. The SLM uses phase-modulated diffractive elements as variable-focal-length lenses to control the positioning of the point sources on the retina. Assessment of the confinement of the “star”, and hence whether it functions as a point source, is done by observing the images formed by the lenslets of the Hartmann-Shack sensor on return of the scattered light from the retina. A power of 0.31 µW was measured at the cornea of the eye with the phase-only diffractive function displayed on the SLM. This 0.78 W m⁻² exposure is well within the IEC 60825-1:2001 thermal Maximum Permissible Exposure (MPE) of 10 W m⁻² and is within the photochemical restrictions for prolonged (of the order of 100 s) continuous use at the same star location should it be used in vivo, indicating that there is latitude for increased signal should it be required. The Hartmann-Shack sensor samples over a 5.6 mm diameter pupil with 14×14 lenslet coverage, with an exposure time of 1/60th s. The centroids of the images are determined in Matlab for each star set. When the angular separation of the stars is small the two sets of star images are intertwined, but as the star separation increases the images occupy different regions of the image plane. It is possible to illuminate one star after the other and make use of the full capability of the centroiding process provided the eye is comparatively static between exposures. It is also possible to move the plane to which the Hartmann-Shack lenslet array is conjugated, and so perform generalised SLODAR for finer depth resolution.
We chose three such datasets using a model eye with the array conjugated to the pupil and at 5 mm optical path length differences either side of this plane.
The resulting profiles from these generalised SLODAR experiments on a model eye are given in Fig. 3. The model eye has a plano-convex polymethacrylate lens with an anterior radius of curvature of 7.58 mm, an anterior asphericity Q = −0.229, a thickness of 4.5 mm, and an assumed refractive index of 1.49. The stars are given a 0.002 radian separation prior to the lens system shown in Fig. 2, which magnifies the angle 22 times to give a 5.8 degree separation. The lenslet array is centred on the principal ray for each star, but we have shifted the plane to which the Hartmann-Shack sensor is conjugated by 5 mm in optical path length between the two profiles shown in Fig. 3. It is evident that the shift moves the profiles across resolvable layers, and the location of the refractive surfaces can be determined from this shift. The shift has moved the contribution of the anterior surface of the lens in the model eye out from under the null correlation peak at depth 0, now beginning to appear at the 1st layer (blue line), and the contribution of the posterior surface of the lens, seen around the −5th to −6th layers (red line), has similarly moved forward to the −4th to −5th layers (blue line). Since the single optic of the model eye is homogeneous between its front (convex) and back (planar) surfaces, we neither expect nor observe any significant aberration strength in the intervening layers.
It is our intention to change to a non-visible wavelength for testing this apparatus on a human eye, but to confirm the effectiveness of SLODAR we have access to datasets of a single subject taken with a commercial COAS-HD aberrometer (Wavefront Sciences, Albuquerque). These allowed greater separation of the sources, and as such potential for finer resolution of the refraction within the depth of the eye. These exposures were taken at dissimilar times, one star (angle) at a time; they therefore do not carry simultaneous information about the state of the eye, and may introduce slightly different conjugation depths. The instrument was conjugated to the anterior cornea, but the principal rays from each star crossed at the pupil, so in essence the apparatus is as ray-traced in Fig. 4, though the Hartmann-Shack had a slightly defocused PSF to work with – at the f-number of the lenslets this is barely detectable, so we treat the experiment as if the whole apparatus were conjugated to the pupil. This assumption is in keeping with our definition that the conjugation plane is where the principal rays intersect. The data for each star were referenced by angle to the optic axis at the detector of (0, 3.84), (7.12, 3.84), (20.55, 3.84), and (−20.55, 3.84) degrees in (horizontal, vertical) to the line of sight. In the analysis this places each combination of stars so that the interstellar axis is horizontal with respect to the lenslets. A single dataset (0, 0) is acquired on the line of sight, perpendicular to the axis of the other datasets. A graphic illustrating the two-dimensional angular arrangement of these data is shown in Fig. 4. The subset of lenslets used in this experiment spatially samples the phase surface with a 44×44 grid centred on and perpendicular to the principal ray. We have chosen not to correct for the minor change in angle of the detector at each observation. For each dataset a single exposure is acquired.
Figure 4 ray-traces the scenario for the (0, 3.84) and (−20.55, 3.84) angle datasets and shows where the intersection planes are expected in the SLODAR depth profile. In order to quantify the usefulness of SLODAR in this application, we need to sample the refractive elements in finer detail. Thus Fig. 4 illustrates the ray-trace obtainable from the twenty-degree-separated stars, but the reader should bear in mind that the resolvable surfaces, illustrated by the red dotted surfaces or layers shown at the intersection of the rays, are for a 15×15 lenslet array, and we have restricted the number of rays for clarity. For the COAS-HD 44×44 array there will be two intervening surfaces between each of those shown. The available signal will therefore be spread over many more layers, with the crystalline lens, for example, expected to be examined by 18 to 21 layers, the non-refracting aqueous humour by the six closest layers to the pupil, and the cornea by 5 to 6 depth layers on the opposite side of the conjugation layer. To aid understanding of the results we have superimposed on Figs. 5–7 the expected locations of the surfaces in the eye, and expect to see aberration strength between pairs of these, with no strength in the aqueous and vitreous humour regions.
Figures 5–7 show the processed profiles derived from the radius of motion of the centroids, without compensation for longer baselines, and using a Wiener deconvolution correction factor of 0.001 inside a 20-iteration modified Ayers-Dainty algorithm. All profiles are cut from the deconvolution result (see Fig. 8(a), discussed later) along the horizontal interstellar axis, except for Fig. 6(a), which is a vertical cut because the stars are separated vertically in that case. In Fig. 6 the progression from (a) to (b) to (c) doubles the star separation at each stage, with the aberration strength being sampled by twice as many layers as in the previous profile. The implication of spreading the limited signal over more layers is that the layers are easily swamped by noise in the correlation process. However, there is reasonable evidence in the profiles for the presence and relative strength of the aberration at each layer. We believe these results give sufficient impetus for further study and refinement of SLODAR as a valuable assessment tool for vision.
The order of the stars used in the cross-correlation defines whether the depth on the x-axis of each profile is positive inside or outside the eye, and hence whether the corneal contribution appears on the left or right of the zero-depth “layer”, or conjugation depth. We adopt the convention of taking the leftmost star as the first argument of the cross-correlation. This allows us to compare the two profiles in Fig. 5(a) and (b), which arise from stars separated from the optic axis by the same angle but on different sides of the eye, i.e. (−20.55, 3.84) and (0, 3.84) in (a) and (0, 3.84) and (20.55, 3.84) in (b). These would be expected to be coarsely symmetrical, but the profiles show subtle differences due to the different structure examined on either side of the eye. Further improvement in depth resolution is seen in Fig. 7(a) for a 27 degree separation between the stars and in Fig. 7(b) for a 41 degree separation. The finest resolution in depth arises from the (−20.55, 3.84) and (20.55, 3.84) dataset shown in Fig. 7(b). At this separation of 41 degrees there are very few baselines that examine the corneal contribution or the rear surface of the lens. The normalisation O(i,j) would compensate for this if it could be applied; instead the prevalence of the cornea is diminished and we gain the chance to examine the crystalline lens in greater detail. For comparison with astronomy, the Fried parameter at the largest separation yields a D/r_o of 2.8, using the unit phase variance of the first three orders of the Zernike polynomial expansion of the wavefronts from each star.
The refractive power of the cornea is twice that of the lens21, and as was identified by Goncharov12, the refractive power it exhibits is a dominant effect in any reconstruction. We expect a significant power of the crystalline lens to be associated with its posterior and anterior surfaces. The remaining power within the crystalline lens is of great interest but significantly lower than these other components. Therefore wider star separations or changes in the conjugation depth are required to improve the signal corresponding to the internal layers sampling the lens. Thus, unlike astronomy, dominant features are present in the correlations, and even in the autocorrelations. This is seen easily in Fig. 8(b), where the elliptical “plateau” arises from correlations across the rays that sample the cornea. Refer again to Fig. 4, where it is evident that the outer rays arriving at the Hartmann-Shack lenslets do not participate in correlations involving the cornea because they do not intersect any other rays within this region. The corneal power is hence confined to those rays that do cross within the region of the cornea, and is seen as a plateau-like feature whose size reduces as the angular separation of the stars increases, and whose shape is the spatial autocorrelation of the region where the rays do intersect at the cornea. (For large separations of the stars there are very few rays sampling the cornea, as is borne out by the reduced strength of the cornea in Fig. 7(b), for example.) Figures 8(c) and 8(d) show the similar effect in the cross-correlation for the model eye experiment for the 0 mm and 5 mm changes in conjugation plane; this feature takes a different location in the correlation results for each reconjugation. Figure 8(a) shows how the reduction by the SLODAR algorithm renders the depth profile.
More combinations of the star separations are possible from the COAS-HD datasets. For example, combining the datasets at (7.12, 3.84) and (0, 0) specifies an interstellar axis that is neither horizontal nor vertical but at ~29 degrees to the horizontal lenslet rows. With reference to a result such as Fig. 8(a), where the profile (displayed in Fig. 6(b)) is cut horizontally through the centre, this profile would instead be cut at 29 degrees through the centre. Consequently each depth “layer” is 1.14 times the optical path length spacing of Fig. 6(b) and so acquires one more sampling layer within the lens. Interpolation artifacts would result when determining the profile at this angle, so such combinations are not shown here, but they are valid data with which to refine the depth estimate.
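The geometry of this oblique combination can be checked with a few lines; the function below is our illustration, returning the axis orientation and the ratio of the oblique axis length to its horizontal projection (which is where the factor of 1.14 arises):

```python
import math

def interstellar_axis(star1_deg, star2_deg):
    """Orientation of the interstellar axis joining two stars and the
    ratio of the oblique separation to its horizontal projection.

    star coordinates are (horizontal, vertical) angles in degrees.
    """
    dx = star2_deg[0] - star1_deg[0]
    dy = star2_deg[1] - star1_deg[1]
    angle = math.degrees(math.atan2(dy, dx))
    return angle, 1.0 / math.cos(math.radians(angle))

# The (0, 0) and (7.12, 3.84) degree dataset pair from the text:
angle, factor = interstellar_axis((0.0, 0.0), (7.12, 3.84))
```

For this pair the axis lies at about 28.3 degrees to the lenslet rows and the oblique-to-horizontal ratio is about 1.14, consistent with the values quoted above.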
There are a number of considerations related to the process and the analysis of results. The first refers to Fig. 4, where the scenario is ray-traced through the human eye, rather than the traditional scenario experienced in astronomy where the stars are an effectively infinite distance from the turbulence volume. In astronomy, rays arriving at each lenslet can be considered parallel throughout the turbulence volume, resulting in resolvable altitudes that are evenly spaced. In the case of the eye the rays are still fan-shaped when they arrive at the anterior lens, and hence the points of intersection of the rays are not evenly spaced within a layer; the layers are not planar, but curved to fit these intersections.
Surfaces formed by the intersection of these rays will not coincide with the actual surfaces of the refractive elements, nor with the lenticular structure of the crystalline lens, and so contributions from these will be spread over a few layers, as collected in Figs. 5–7. Indeed the fine structure within the depths of the lens will not be immediately resolvable, so effects local to each layer will be integrated. The ability to ray-trace a model eye to determine the optical path lengths and, accordingly, the surfaces where the rays intersect, means that a “layer” spread function might be determined for each surface22. With these as a model, a least-squares fit to each layer would allow the inclusion of data from many observations, and hence a more resolved profile than is possible from a single observation. The model would allow us to rule out attribution to those depths that are expected to have no refractive elements, and to expand the information about the aberration at or around each surface by calculation of the “voxel” spread function as a function of depth and spatial extent. It is envisaged that the spread functions could be scaled to fit a particular subject’s eye after a first pass of the data as performed here, using the easily identified locations of the refractive surfaces. The fit to these spread functions can incorporate diffractive processes that have been shown to be important in the astronomy algorithm16. There is much research to undertake in this area.
The mapping to “voxel” spread functions is the basis for a tomographic reconstruction. Such a reconstruction would account well for all the feature energy that is not confined to the interstellar axis in Fig. 8. The results presented here have not accounted for the absence of this energy from the profile, yet the features are substantial, and their absence from the cut profile is a principal reason for the variations of the profile. At a minimum, when the orientation of the interstellar axis is not exactly known, it would be sensible to perform a Radon transform of the data around this axis and sum along a narrow angular range, for example ±3 degrees, to consolidate the impact; but the substantial features present in the result would still only be captured by tomography. Off-axis structure is not observed in astronomy data because of the isotropic phase structure and the statistical averaging over large numbers of observations.
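A crude stand-in for that narrow Radon-transform sum is sketched below; it samples nearest pixels along rays within ±3 degrees of the assumed axis and averages them into a 1-D profile (our simplification, with bilinear interpolation and proper Radon weighting omitted):

```python
import numpy as np

def wedge_profile(result2d, axis_deg=0.0, half_width_deg=3.0, n_angles=7):
    """Consolidate a 2-D SLODAR result into a 1-D profile by averaging
    nearest-pixel samples along rays within +/- half_width_deg of the
    interstellar axis.
    """
    img = np.asarray(result2d, dtype=float)
    cy, cx = (img.shape[0] - 1) / 2.0, (img.shape[1] - 1) / 2.0
    max_r = int(min(cx, cy))
    radii = np.arange(-max_r, max_r + 1)
    profile = np.zeros(radii.size)
    for ang in np.linspace(axis_deg - half_width_deg,
                           axis_deg + half_width_deg, n_angles):
        c, s = np.cos(np.radians(ang)), np.sin(np.radians(ang))
        for i, r in enumerate(radii):
            y = int(round(cy + r * s))
            x = int(round(cx + r * c))
            profile[i] += img[y, x]
    return radii, profile / n_angles

# Toy map: a single off-centre feature close to the horizontal axis.
m = np.zeros((15, 15)); m[7, 10] = 1.0
radii, prof = wedge_profile(m, axis_deg=0.0)
```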
The anterior elements or surfaces alter both the entry and exit paths of rays, causing a discrepancy between the assumed and actual positions of the beacons or “stars” on the retina, and hence in the actual depth of each layer. This is the so-called tilt anisoplanatism found when using laser guide stars in astronomy, and as in astronomy the only true assessment of it is through the use of a natural reference star. In vision we suggest this reference could be a feature on the retina such as a blood vessel. Causing the illumination to enter over the entirety of the pupil, as we do in the system of Fig. 2, will minimise the effects of tilt anisoplanatism. The illumination system can be adaptive so as to achieve the smallest stars observed back at the wavefront sensor. In the COAS-HD data the illumination is a narrow beam through the centre of the pupil. In either case, recall that the lenslets of the Hartmann-Shack sensor have a very large f-number and correspondingly a large depth of field, which makes the sensor blind to the effects of higher-order aberrations on the path of the illumination to the retina and to the depth on the retinal surface from which the reflection originates.
On exit of the scattered light we are of course attempting to detect the deviation caused by the intervening media. Aberrations between the wavefront sensor and the supposed conjugate plane can be localised and corrected using multiple observation angles, under the assumption that the angular average gives the true structure. This is relied upon in all the “inverse” ray-tracing23, propagation24, and maximum-likelihood algorithms10. We have not combined these observations in this proof of concept because the data were taken at disparate times. The inclusion of many SLODAR observations, each taken in rapid succession, is the focus of alternative algorithms16 including tomography.
A complementary improvement could be achieved using data from instruments that assess the anterior corneal surface. Knowledge of this surface could be incorporated to resolve, or to remove, information at this depth. Known contributions may be removed in the data reduction, or compensated optically on exit from the eye before reaching the Hartmann-Shack sensor by adaptive optics in the sensing path; either would improve the analysis of the remaining components of interest. Iterative correction may be undertaken with adaptive optics in this fashion, based on the evolving estimate of the surface under correction.
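Removing a known contribution in the data reduction amounts to subtracting the slope pattern that the known surface would produce from the measured Hartmann-Shack slopes. The sketch below, a hypothetical illustration with only three low-order modes (tip, tilt, defocus) and illustrative function names, shows the idea under the assumption that the known surface is expressed as modal coefficients (e.g. from corneal topography):

```python
import numpy as np

def slope_influence(xy):
    """Slope influence matrix for three modes (tilt-x, tilt-y, defocus)
    at lenslet positions xy of shape (n, 2) on a normalised pupil.
    A defocus wavefront x^2 + y^2 has slopes (2x, 2y)."""
    n = xy.shape[0]
    sx = np.column_stack([np.ones(n), np.zeros(n), 2 * xy[:, 0]])
    sy = np.column_stack([np.zeros(n), np.ones(n), 2 * xy[:, 1]])
    return np.vstack([sx, sy])   # stacked x- then y-slopes, (2n, 3)

def remove_known(slopes, xy, coeffs):
    """Subtract the slopes predicted for known modal coefficients
    (e.g. a topography-derived corneal estimate) from measured ones."""
    return slopes - slope_influence(xy) @ coeffs
```

A real reduction would use a full Zernike-derivative basis over the sampled pupil, but the subtraction step is the same linear operation.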
If the eye is measured while engaged in a visual task, many other useful quantities can be established. Temporal cross-correlation, as suggested by Wilson13, can be used to determine the movement of the refractive elements, and hence the analogues of the Greenwood frequency and coherence time for the eye's structure in AO modelling and correction. Indeed, temporal changes in the eye during accommodation tasks will yield further data sets to aid the assessment. Temporal spectral analysis of the centroid motion is also expected to reveal trends attributable to the cardiac (sinus) rhythm, tear-film thinning, and the like; removing these from the sequences will give a better estimate of the other elements. This is akin to removing the so-called dome turbulence in astronomy by temporal filtering16.
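The removal of a narrow-band physiological component from a centroid time series can be sketched as a simple spectral notch. This is an illustrative numpy sketch, not the paper's method: the cardiac frequency used below is an assumed example value, and a practical implementation would estimate it from the spectrum first.

```python
import numpy as np

def notch_filter(series, fs, f0, half_bw=0.2):
    """Suppress a narrow band (e.g. an assumed ~1.2 Hz cardiac
    component) in a centroid time series by zeroing FFT bins within
    half_bw Hz of f0, before further SLODAR analysis."""
    spec = np.fft.rfft(series)
    freqs = np.fft.rfftfreq(len(series), d=1.0 / fs)
    spec[np.abs(freqs - f0) <= half_bw] = 0.0
    return np.fft.irfft(spec, n=len(series))
```

Slowly varying tear-film trends could be handled the same way with a high-pass variant, zeroing the lowest non-DC bins instead.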
The application of the SLODAR turbulence-profiling technique to the human eye determines the location and severity of aberrations as a function of depth. The technique has been applied, with little modification, to a model eye and to a real subject, and hence confirmed both with a dedicated setup and with commercial wavefront-sensor data. The resulting profiles allow immediate comparison between the astronomy and vision cases, but many more developments are possible in the latter than are possible or practical in astronomy, because factors such as the light level, wavelength, and angular spacing can be controlled. With further algorithmic development, the deterministic aberrations at each layer, rather than just their statistical strength, may be determined. The SLODAR method provides this profile information in a single exposure, or a limited set of exposures, taken while the eye's movement is effectively “frozen”; repeating such measurements allows evaluation of changes in the aberrations and eye structure while the subject undertakes visual tasks. As such, SLODAR has the potential to be a useful technique in vision science.
This study was supported by a QUT IHBI research seeding grant. We would also like to thank Ankit Mathur of the School of Optometry, QUT, for the COAS-HD aberrometer data.
References and Links
1. J. C. He, J. Gwiazda, F. Thorn, and R. Held, “Wave-front aberration in the anterior corneal surface and the whole eye,” J. Opt. Soc. Am. A 20, 1155–1163 (2003), http://www.opticsinfobase.org/abstract.cfm?URI=josaa-20-7-1155. [CrossRef]
2. P. Artal and A. Guirao, “Contributions of the cornea and the lens to the aberrations of the human eye,” Opt. Lett. 23, 1713–1715 (1998), http://www.opticsinfobase.org/abstract.cfm?URI=ol-23-21-1713. [CrossRef]
3. W. Wang, Z.-Q. Wang, Y. Wang, and T. Zuo, “Optical aberration of the cornea and the crystalline lens,” Optik 117, 399–404 (2006). [CrossRef]
4. P. Artal, A. Guirao, E. Berrio, and D. R. Williams, “Compensation of corneal aberration by the internal optics in the human eye,” J. Vis. 1, 1–8 (2001), http://www.journalofvision.org/1/1/1/Artal-2001-jov-1-1-1.pdf. [CrossRef]
5. A. Dubinin, T. Cherezova, A. Belyakov, and A. Kudyashov, “Human retina imaging: widening of high resolution area,” J. Mod. Opt. 55, 671–681 (2008), http://dx.doi.org/10.1080/09500340701467710. [CrossRef]
6. G. Smith, D. Atchison, and B. Pierscionek, “Modeling the power of the aging human eye,” J. Opt. Soc. Am. A 9, 2111–2117 (1992), http://www.opticsinfobase.org/abstract.cfm?URI=josaa-9-12-2111. [CrossRef]
7. A. V. Goncharov and C. Dainty, “Wide-field schematic eye models with gradient-index lens,” J. Opt. Soc. Am. A 24, 2157–2174 (2007), http://www.opticsinfobase.org/abstract.cfm?URI=josaa-24-8-2157. [CrossRef]
8. R. Navarro, F. Palos, and L. Gonzalez, “Adaptive model of the gradient index of the human lens. I. Formulation and model of aging ex vivo lenses,” J. Opt. Soc. Am. A 24, 2175–2185 (2007), http://www.opticsinfobase.org/abstract.cfm?URI=josaa-24-8-2175. [CrossRef]
9. D. A. Atchison, “Anterior corneal and internal contributions to peripheral aberrations of human eyes,” J. Opt. Soc. Am. A 21, 355–359 (2004), http://www.opticsinfobase.org/abstract.cfm?URI=josaa-21-3-355. [CrossRef]
10. J. A. Sakamoto, H. H. Barrett, and A. V. Goncharov, “Inverse optical design of the human eye using likelihood methods and wavefront sensing,” Opt. Express 16, 304–314 (2008), http://www.opticsinfobase.org/abstract.cfm?URI=oe-16-1-304. [CrossRef]
11. R. Ragazzoni, E. Marchetti, and F. Rigaut, “Modal tomography for adaptive optics,” Astron. Astrophys. 342, L53–L56 (1999).
12. A. S. Goncharov, A. V. Larichev, N. G. Iroshnikov, I. Y. Yu., and S. A. Gorbunov, “Modal tomography of aberration of the human eye,” Laser Phys. 16, 1689–1695 (2006). [CrossRef]
13. R. A. Johnston, J. L. Mohr, P. L. Cottrell, and R. G. Lane, “A Bread-Board SCIDAR system at Mount John,” presented at the Image and Vision Computing ’04, Akaroa, New Zealand, 21–23 Nov. 2004.
14. V. A. Kluckers, N. J. Wooder, T. W. Nicholls, M. J. Adcock, I. Munro, and J. C. Dainty, “Profiling of atmospheric turbulence strength and velocity using a generalised SCIDAR technique,” Astron. Astrophys. Suppl. Ser. 130, 141–155 (1998). [CrossRef]
15. R. W. Wilson, “SLODAR: measuring optical turbulence altitude with a Shack-Hartmann wavefront sensor,” Mon. Not. R. Astron. Soc. 337, 103–108 (2002), http://www.blackwell-synergy.com/doi/abs/10.1046/j.1365-8711.2002.05847.x. [CrossRef]
16. M. Goodwin, C. Jenkins, and A. Lambert, “Improved detection of atmospheric turbulence with SLODAR,” Opt. Express 15, 14844–14860 (2007), http://www.opticsinfobase.org/abstract.cfm?URI=oe-15-22-14844. [CrossRef]
17. A. Lambert, C. Jenkins, and M. Goodwin, “Turbulence profiling using extended objects for Slope Detection and Ranging (SLODAR),” Proc. SPIE 6316 (2006), http://spie.org/x648.xml?product_id=682428. [CrossRef]
18. A. Mathur, D. A. Atchison, and D. H. Scott, “Ocular aberrations in the peripheral visual field,” Opt. Lett. 33, 863–865 (2008), http://www.opticsinfobase.org/abstract.cfm?URI=ol-33-8-863. [CrossRef]
19. L. Diaz-Santana, C. Torti, I. Munro, P. Gasson, and C. Dainty, “Benefit of higher closed-loop bandwidths in ocular adaptive optics,” Opt. Express 11, 2597–2605 (2003), http://www.opticsinfobase.org/abstract.cfm?URI=oe-11-20-2597.
20. G. R. Ayers and J. C. Dainty, “Iterative blind deconvolution method and its applications,” Opt. Lett. 13, 547–549 (1988), http://www.opticsinfobase.org/abstract.cfm?URI=ol-13-7-547. [CrossRef]
21. D. A. Atchison and G. Smith, Optics of the Human Eye (Butterworth-Heinemann, Oxford, 2000).
22. T. Butterley, R. W. Wilson, and M. Sarazin, “Determination of the profile of atmospheric optical turbulence strength from SLODAR data,” Mon. Not. R. Astron. Soc. 369, 835–845 (2006), http://www.blackwell-synergy.com/doi/abs/10.1111/j.1365-2966.2006.10337.x. [CrossRef]
23. A.V. Goncharov, M. Nowakowski, M.T. Sheehan, and C. Dainty, “Reconstruction of the optical system of the human eye with reverse ray-tracing,” Opt. Express 16, 1692–1703 (2008), http://www.opticsinfobase.org/abstract.cfm?URI=oe-16-3-1692. [CrossRef]
24. J. Rouarch, J. Espinosa, J. J. Miret, D. Mas, J. Perez, and C. Illueca, “Propagation and phase reconstruction of ocular wavefronts with SAR techniques,” J. Mod. Opt. 55, 717–725 (2008), http://dx.doi.org/10.1080/09500340701470011. [CrossRef]