The adaptive optics scanning laser ophthalmoscope has been fitted with three light sources of different wavelengths to allow simultaneous or separate imaging with any combination of one, two or three wavelengths. The source wavelengths used are 532 nm, 658 nm and 840 nm. Typically the instrument is used in dual-frame mode, performing imaging at 840 nm and precisely coincident retinal stimulation at one of the visible wavelengths. Instrument set-up and single-detector image capture are described. Simultaneous multi-wavelength imaging in the living human retina is demonstrated. The chromatic aberrations of the human eye lead to lateral and axial shifts, as well as magnification differences in the image, from one wavelength to another. Measurement of these chromatic effects is described for instrument characterization purposes.
©2006 Optical Society of America
The adaptive optics scanning laser ophthalmoscope (AOSLO) has been used to obtain high resolution images of the human retina, revealing features such as cone photoreceptors, blood vessels and capillaries and leukocytes flowing inside them, and details of the nerve fiber layer [1, 2]. Recently, the AOSLO has been modified to allow presentation of dynamic visual stimuli by modulation of the illumination source . Adaptive optics corrected stimuli may therefore be viewed by the subject while the location of this stimulus on the retina is simultaneously recorded in the image, so that any ambiguity of stimulus incidence is completely removed.
The use of multiple wavelengths in a scanning laser ophthalmoscope (SLO) has been described previously . As most SLOs are monochromatic, it is expected that the combination of multiple wavelength images will provide more information than a monochromatic image, in a manner analogous to color versus grayscale fundus photographs. It was found that the combination of three monochromatic images improved visibility and distinction of retinal features such as the nerve fiber layer, blood vessels and choroid. In addition, the versatility of the system was improved by having three separate wavelength channels in which intensity, gain and aperture could be adjusted independently.
Although true-color imaging would not be possible in the AOSLO due to the absence of a blue-wavelength source, the introduction of three independent illumination channels to the AOSLO increases the versatility and information content of the images in a similar fashion to that described in Ref. . The superior resolution of the AOSLO compared to a conventional SLO, however, means that we see a greater level of detail in retinal tissues. We may therefore hope, for example, to extract information simultaneously from the resolved cone photoreceptor mosaic and from leukocytes contained within retinal capillaries. It was previously found that the cone photoreceptor mosaic is best viewed under low-coherence infrared illumination, which reduces speckle artifacts , while leukocytes in blood are best viewed under green (532 nm) illumination . Dual-wavelength AOSLO imaging has recently been demonstrated for fluorescence imaging, where a series of low signal-to-noise fluorescence frames were registered and coadded by using a simultaneously acquired high signal-to-noise reflectance image as a guide .
Superposition of retinal images recorded in different wavelengths is possible with our instrument, but is not the focus of our current research and so is not explored in this article. Most important for our applications is the capacity to modulate each laser beam independently at high frequency (50 MHz). This introduces the possibility of presenting complex, patterned stimuli to the retina in visible light while simultaneously imaging the response to these stimuli in the infrared, where the light is less apparent and more comfortable for the subject. It is in this dual-frame simultaneous imaging and stimulation capacity that the multi-wavelength AOSLO is typically employed, allowing a host of interesting experiments to be performed.
It is well known that the eye exhibits chromatic aberration, which causes different wavelengths of light to focus at different lateral and axial points of the retina [7–11]. Chromatic aberrations can be separated into longitudinal chromatic aberration (LCA) and transverse chromatic aberration (TCA) components, where the former describes the focus difference between wavelengths and the latter causes a lateral shift of features between wavelengths. We can measure focus and lateral shifts between dual-wavelength image pairs to calculate the chromatic aberration of the eye. Chromatic aberration is also known to cause a chromatic difference in magnification (CDM) between different wavelengths [12, 13]. The effect of CDM, however, is very small (<1 % between 400 and 700 nm) and negligible in terms of our imaging results. The need to compensate chromatic aberrations is application dependent. TCA may be eliminated by imaging along the achromatic axis. Elimination of LCA would require axial repositioning of the sources and the addition of a second detector to the instrument, if simultaneous LCA-free imaging were desired. This has been demonstrated elsewhere in a multi-wavelength AOSLO . For our simultaneous imaging/stimulation experiments, however, elimination of LCA is typically not necessary. Should we wish for stimulation and imaging to occur in identical planes (e.g. for microperimetry or other precision stimulus applications), this can be achieved by repositioning the sources, at the cost of visible image quality. This article will concentrate on describing the technological aspects of the multi-wavelength AOSLO, addressing the chromatic aberration issues that multi-wavelength imaging implies, and demonstrating in vivo dual-frame imaging. Use of the instrument in various vision science applications is ongoing and will not be described here.
2. Instrument set-up and image capture
The AOSLO set-up has been described previously . The modifications made to the instrument for this article involve the light delivery portion of the instrument and the scanning and frame grabbing method. Explanation of the instrument set-up will therefore concentrate on describing these parts of the instrument, with a brief review of the wavefront sensing and wavefront compensation components.
Figure 1 is a schematic representation of the AOSLO set-up. The instrument occupies a 1.5 square meter area on the optical bench.
2.1 Light delivery
Green light is provided by a diode pumped frequency doubled Nd:YAG crystal laser at 532 nm (CrystaLaser, Reno, NV). Red light comes from a laser diode (Hitachi HL6501MG) at 658 nm. Infrared light comes from a broadband superluminescent diode (Broadlighter S840, Superlum, Russia) centered at 840 nm with bandwidth of about 50 nm. The maximum power we used for imaging in either of the visible wavelengths was 20 µW, while we may use up to 200 µW of infrared light. For dual-frame imaging, the gain of the PMT was set to optimize image brightness in the visible channel, and the infrared source power was then reduced to match the intensity of the infrared image to that of the visible image. Power levels at all wavelengths were at least a factor of 10 below limits recommended by ANSI .
Each source is coupled into a single-mode optical fiber whose tip acts as a point source, and is collimated by an f=18.4 mm aspheric lens. The collimation lens is mounted on a micrometric axial translation stage to allow fine positioning of the lens to achieve best collimation, while the fiber end is mounted in a rotational plate which may be rotated to change the polarization state of the beam. The plate is rotated to give maximum power output from the source into the system. The collimated light is focused by an f=150 mm achromatic doublet onto the entrance window of an acousto-optic modulator (AOM) which modulates the beam. The AOM diffracts the light into multiple orders. The fiber end plate, collimating lens and f=150 mm focusing lens are mounted on a swivel mount so that they may be rotated to direct the first-order beam into the instrument. The AOM is mounted on a five-axis aligner which allows fine positioning of the AOM so as to achieve maximum deflection into the first order. The zeroth-order beam is therefore positioned off axis. After leaving the AOM, the beam travels through a second polarizer mounted on a rotational plate. Rotating this and the fiber tip gives complete control over the polarization state of the beam. In practice, the second polarizer is used to attenuate the beam to give a suitable intensity level in the final image. A second f=150 mm achromatic doublet completes the telescope around the AOM. The beam then leaves its individual light delivery arm and must be combined with the other two wavelength paths to form one collinear beam.
The green (532 nm) laser is chosen to be the primary beam for alignment. On exiting its light delivery arm described above, it encounters a dichroic filter at 45° from which it is reflected through the entrance pupil and then runs through the beam path of the system. The red (658 nm) laser is next to be aligned. Its light delivery arm sits alongside the green arm. On exiting the red delivery arm, the beam hits a silver mirror at 45° and a cold mirror at 45° in a periscope arrangement that directs the beam onto the back face of the dichroic filter. The red beam is reflected by the cold mirror and transmitted through the dichroic filter into the artificial pupil. The two mirrors in the periscope arrangement are mounted on tip tilt platforms which may be adjusted to steer the red beam to follow precisely the same path as the green laser. The third arm for alignment is the infrared (840 nm) arm. On exiting its delivery arm, the infrared beam encounters a pair of silver mirrors at 45° in a periscope set-up. Fine adjustments to mirror positions are made using tip tilt platform mirror mounts to steer the beam into alignment with the green and red path. The infrared beam transmits through the cold mirror and the dichroic filter to form a collinear path with the green and red lasers. Precise alignment of the three beams is performed by viewing two beams at a time at accessible retinal conjugate planes along the beam path. A final adjustment of the three source positions is performed while viewing the histogram of intensity values in the image received by the PMT. Each source position is finely adjusted to maximize the light that passes through the confocal pinhole.
The artificial pupil through which the combined beam is transmitted is an aperture which sets the entrance pupil diameter. Mirror telescopes then relay the entrance pupil to the deformable mirror, the horizontal scanner, the vertical scanner and then on to the eye.
Zemax software (Zemax, Focus Software, Tucson, AZ) is used to simulate the instrument set-up and set the alignment which minimizes aberrations in the system. Any remaining defocus or astigmatism inherent to the system is corrected with a trial lens placed in front of the eye before imaging.
Correct alignment of the instrument aims to make all three sources conjugate with the pinhole and the focused spot on the retina, so that optimal AO correction occurs for all three wavelengths at once, giving identical best resolution in all wavelengths, though in different planes due to LCA. Lateral and axial resolution of the AOSLO are discussed in previous articles by our group [15, 16].
A custom GUI interface has been developed to control the three AOMs. It allows fast switching between different wavelengths and control of stimulus delivery in all wavelength channels.
2.2 Wavefront sensing and compensation and light detection
A single wavelength is used for wavefront sensing and compensation, and the second wavelength is switched on once the correction has been fixed. Since all sources are conjugate with both the pinhole and the focused spot on the retina, the best AO correction at 532 nm is also the best correction at both 658 nm and 840 nm, and so the AO loop can be closed in any wavelength. This is a valid assumption since it has been shown that the only aberration that depends on wavelength is defocus .
Wavefront sensing and compensation components of the instrument have been described in detail elsewhere . Briefly, wavefront aberrations are measured with the same light as is used to form the image. A beamsplitter directs a small portion of this light to the Shack-Hartmann wavefront sensor (SHWFS) while the remainder passes through to the detector for imaging. The SHWFS uses a square lenslet array with 0.4 mm square apertures and focal length 24 mm. A digital CCD camera detects the focused spots, and aberrations are fit to a 10th order Zernike polynomial.
Wavefront compensation is performed by a 37 channel deformable mirror (Xinetics, Andover, MA) in a pupil conjugate plane. Aberrations are corrected on ingoing and outgoing paths, on the ingoing path to focus the light to a compact spot on the retina and on the outgoing path to refocus the diffusely reflected light from the eye through the confocal pinhole. Wavefront correction is controlled using custom C++ and Labview software in a low bandwidth closed loop.
The detection arm contains a confocal pinhole onto which the 3.5 mm descanned beam returning from the retina is focused with an f=100 mm collector lens. The pinhole diameter selected is application dependent; pinhole diameters ranging from 50 µm to 150 µm may be used. The light which passes through the pinhole falls on a photomultiplier tube module (H7422-20, Hamamatsu, Japan) and a transimpedance amplifier which detects and amplifies the signal, converts current to voltage, and passes it to the analog input of the frame grabbing board.
2.3 Raster scanning and frame grabbing
Scanning is carried out using a resonant scanner–galvanometric scanner combination (Electro-Optics Products Corp, Flushing Meadows, NY). The beam is scanned at 16 kHz in a sinusoidal pattern by the resonant scanner, which is coupled to the galvanometric scanner that operates in a sawtooth pattern at 1/525th of the horizontal scan frequency. This gives frames of 525 lines at about 30 frames/second. Scanning is performed in pupil conjugate planes so that the scanning beam does not move at its pivot point in the pupil plane. Scan amplitude determines field size. Adjusting the scan amplitude according to a precisely calibrated grid in the retinal plane allows a choice of field size from 1° × 1° to 3° × 3° for a 5.81 mm pupil (the maximum pupil size in our system, equal to the beam diameter at the entrance pupil; the ultimate limit on pupil size is the aperture of the deformable mirror).
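As a quick sanity check on the scan timing quoted above, the frame rate follows directly from the resonant line rate and the number of lines per frame (only the 16 kHz line rate and 525 lines/frame come from the text; the rest is simple arithmetic):

```python
# Scan timing for the resonant/galvanometric scanner pair described above.
RESONANT_FREQ_HZ = 16_000   # horizontal (resonant) scan frequency, from the text
LINES_PER_FRAME = 525       # vertical (galvanometric) sawtooth period, in lines

frame_rate_hz = RESONANT_FREQ_HZ / LINES_PER_FRAME
line_period_us = 1e6 / RESONANT_FREQ_HZ

print(f"frame rate: {frame_rate_hz:.2f} frames/s")  # 30.48 frames/s, i.e. "about 30"
print(f"line period: {line_period_us:.2f} us")      # 62.50 us per sinusoidal line cycle
```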
The horizontal scan acts as master timer for the system. A digital hsync signal is generated from the analog output of the resonant scanner. The vsync signal is calculated from the hsync signal. These hsync and vsync signals are used by the frame grabbing board to define a frame.
The frame grabbing board (GenesisLC, Matrox, Montreal, Canada) takes three input signals from the instrument, named hsync, vsync and signal. One full cycle of the horizontal scan occurs between consecutive hsyncs. Since the horizontal scan is sinusoidal, this full cycle includes a forward section, a turnaround section and a return section. In the previous implementation of this instrument, the frame grabber only digitized pixels during the linear forward-going portion of the horizontal scan. In the new configuration of the instrument described here, both the forward and return portions are used. During the forward section, one wavelength of illumination is switched on by that illumination arm’s AOM (e.g. red). The “on” laser may or may not contain stimuli within that “on” period. During the return portion, the first source is switched off and another source is switched on (e.g. infrared). Hence we capture two interleaved images in a quasi-simultaneous manner, i.e. these two frames are captured within the same time period as a single (forward-path-only) frame using the previous frame grabbing configuration. The 30 Hz frame rate of the AOSLO is therefore maintained. Displaying the two frames as they are seen by the frame grabber shows a pair of “book-matched” images, with the forward path on the left, the turnaround section in the center and the return path on the right, which appears left-right reversed compared to the forward path image [see Fig. 2(a)].
The book-matched videos are separated in post-processing into two frames containing only the linear portions of the scans (forward and return) using the following steps: i) the return path image is left-right reversed to match the forward image so that the pairs of frames of different wavelengths may be compared, ii) the sinusoidal distortion caused by the nonuniform velocity of the horizontal scanner is corrected, and iii) distortions in each frame caused by eye movements are corrected using custom software [18, 19]. All video processing steps are performed using custom software written in C++ and Matlab. In Fig. 2, the left-hand image was captured at 658 nm, while the right-hand image was simultaneously recorded under 840 nm illumination. Figure 2(a) shows the raw images as viewed in real time, while Fig. 2(b) shows the separated, processed image pair, where 20 frames were averaged to reduce noise.
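Steps i) and ii) can be sketched in a few lines of array code. This is an illustrative reconstruction, not the authors’ C++/Matlab software; the turnaround fraction and digitized scan fraction are placeholder assumptions, since the actual values are instrument settings not given in the text:

```python
import numpy as np

def split_book_matched(raw, turnaround_frac=0.10):
    """Split a raw 'book-matched' frame into forward and return images.

    raw: 2D array (lines x pixels) as delivered by the frame grabber, with
    the forward scan on the left, the turnaround in the center and the
    return scan on the right. turnaround_frac is an assumed fraction of
    each half-line discarded around the scan reversal.
    """
    w = raw.shape[1]
    half = w // 2
    skip = int(half * turnaround_frac)
    forward = raw[:, :half - skip]
    ret = raw[:, half + skip:]
    return forward, ret[:, ::-1]  # step i): left-right reverse the return image

def desinusoid(img, scan_frac=0.8):
    """Step ii): resample one image onto a grid linear in scan angle.

    Pixels are clocked uniformly in time, but the resonant scanner position
    follows sin(t); scan_frac is the (assumed) fraction of the half-cycle
    that was digitized, centered on the most linear part of the sinusoid.
    """
    n = img.shape[1]
    t = np.linspace(-scan_frac * np.pi / 2, scan_frac * np.pi / 2, n)
    pos = np.sin(t)                            # actual (nonlinear) scan positions
    uniform = np.linspace(pos[0], pos[-1], n)  # desired linear positions
    out = np.empty(img.shape, dtype=float)
    for i, row in enumerate(img):
        out[i] = np.interp(uniform, pos, row)
    return out
```

Step iii), the eye-movement correction, relies on the registration software of Refs. [18, 19] and is not reproduced here.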
2.4 Imaging protocol
Informed consent was obtained from the subjects after we explained the nature and possible complications of the study. Our experiment was approved by the University of California, Berkeley Committee for the Protection of Human Subjects. As in previous studies with the AOSLO [1–3, 5], subjects were aligned and stabilized in the system using a dental impression plate mounted on x-y-z translation stages. The pupil was dilated using a drop of 2.5 % phenylephrine hydrochloride and accommodation was paralyzed using a drop of 1 % tropicamide. The subject fixated on an area of the raster scan specified by the operator. A single wavelength was used for wavefront sensing and correction, with a 5.81 mm pupil. Wavefront sensing was first carried out to measure the subject’s refractive error. Trial lenses were used to correct the subject’s refractive error as accurately as possible before wavefront correction by the deformable mirror (DM) was implemented. This reduced the error that needed to be corrected by the DM, reserving the stroke of the DM for correction of higher order aberrations as well as for axial scanning. Once the refractive error was at a suitable value, wavefront correction was performed by closing the adaptive optics loop. After a few iterations (typically fewer than 10), the best correction was reached and fixed to give a static correction during movie capture. The second laser was switched on once the static correction had been fixed. To perform through-focusing during movies, defocus was added to the deformable mirror shape while the rest of the wavefront correction remained static. The static correction was updated regularly (or whenever the image appearance began to degrade) throughout the imaging sessions. While dynamic closed-loop AO correction has demonstrated advantages for retinal imaging , static correction of cyclopleged eyes with regular updates also shows significant improvements in retinal image quality .
Figure 2 shows images recorded simultaneously in the human retina in vivo in two different wavelengths, 658 nm and 840 nm. The 658 nm image is focused in the plane of the blood vessels, where flow is visible, while the 840 nm image is focused on the cone photoreceptor mosaic, and the vessels are seen as a shadow.
Figure 3 shows the live movie from which the static images of Fig. 2 were generated. The movie of Fig. 3 steps through focus while imaging simultaneously at 658 nm (left-hand image) and 840 nm (right-hand image). In each image, we travel upwards from the cone photoreceptor layer, through the focus of the blood vessels, where flow is visible, and into the nerve fiber layer. Longitudinal chromatic aberration causes the different wavelengths to focus at different depths in the retina, so this sequence of features is seen first in the 658 nm image, then in the 840 nm image, with a 0.366 D focus offset between the two for this subject (EAR).
3.2 Chromatic aberrations
Longitudinal chromatic aberrations (LCA) were calculated using fly-through movies similar to the one shown in Fig. 3. To record the fly-through movies, one source was switched off while the AO loop was closed using the remaining wavelength source. The AO loop will close to best focus on the photoreceptor layer (i.e. the most highly reflective retinal layer), while the second wavelength image will be focused on another layer. Imposing defocus on the DM changes the plane of focus of the image. Fly-through movies were recorded with DM defocus steps of 0.025 D being added every second over a 2.5 D range. Each movie was then analyzed frame by frame to find the mean intensity of each frame. Mean intensity versus frame number for each movie was plotted, and fitted with a Gaussian curve [see Fig. 4(a)]. We assume that the plane of peak intensity corresponds to the plane of best focus on the photoreceptor layer, since it is the most highly reflective layer near the fovea. Therefore, the focal distance, measured in diopters, between the positions of the Gaussian intensity peaks at 658 nm and 840 nm is equal to the LCA. To confirm the assumption that the plane of highest intensity is that in which the cone photoreceptors are in best focus, we looked at the normalized power spectra of the images to check that the dominant frequency in the highest intensity frames corresponded to the cone spacing. We also verified that the signal corresponding to the cone photoreceptors in the power spectra was strongest at the frame of highest intensity. The power spectra of the movie frames at both 658 nm and 840 nm were plotted [Fig. 4(a)]. In order to produce power spectra with detectable features, images of good signal to noise ratio were required. Since the focus is stepped every second, 30 frames are recorded at each focal position. These 30 frames may be stabilized and averaged to obtain high signal to noise ratio images . 
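The intensity-based LCA estimate can be sketched as follows. This is a schematic reconstruction of the analysis, not the original code; the peak positions and curve parameters in the synthetic fly-through data are invented for illustration only:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, offset):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + offset

def best_focus_diopters(defocus_d, mean_intensity):
    """Fit a Gaussian to mean frame intensity vs. DM defocus and return the
    peak position (in diopters), taken as best focus on the photoreceptors."""
    p0 = [float(np.ptp(mean_intensity)),
          defocus_d[np.argmax(mean_intensity)],
          0.2,
          float(mean_intensity.min())]
    popt, _ = curve_fit(gaussian, defocus_d, mean_intensity, p0=p0)
    return popt[1]

# Synthetic fly-through: 0.025 D steps over a 2.5 D range (step and range
# from the text); the two peak positions below are hypothetical.
defocus = np.arange(0, 2.5, 0.025)
i_658 = gaussian(defocus, 1.0, 0.90, 0.15, 0.1)
i_840 = gaussian(defocus, 1.0, 1.27, 0.15, 0.1)

# LCA = separation of the two Gaussian intensity peaks, in diopters.
lca = best_focus_diopters(defocus, i_840) - best_focus_diopters(defocus, i_658)
print(f"LCA(658 -> 840 nm) = {lca:.3f} D")  # ~0.37 D for this synthetic example
```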
The power spectra of images of clearly defined cone photoreceptors showed Yellott’s ring surrounding the center, of radius equal to the cone photoreceptor spacing frequency in cycles/degree  [Fig. 4(b)]. Frames recorded at focal planes further from the best focal plane on the cones show a weaker Yellott’s ring or no ring at all [Fig. 4(c)]. An annulus was defined surrounding Yellott’s ring, of maximum and minimum radius covering the expected cone spacing range at the relevant eccentricity, and the values inside the annulus were integrated. As expected, it was found that the plane with the peak signal at the frequency of the cone photoreceptors coincided with the plane of the intensity peak [Fig. 4(a)]. We therefore calculated the LCA values solely from the intensity curves for the rest of our analysis. LCA for three subjects is given in Table 1. After accounting for the different wavelength ranges, these values compare favorably with those found in references [7, 11] at similar wavelengths, and the range of LCA values across the 3 subjects is similar to the range found in reference . The errors quoted correspond to the 95 % confidence bounds of the fitted curve.
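The annulus check on Yellott’s ring can be sketched as a power-spectrum energy ratio. This is an illustrative implementation; the annulus bounds and the sampling density are assumptions supplied by the caller:

```python
import numpy as np

def yellott_ring_energy(img, r_min_cpd, r_max_cpd, pixels_per_degree):
    """Fraction of power-spectrum energy inside an annulus spanning the
    expected cone-spacing frequency range (r_min_cpd..r_max_cpd, in
    cycles/degree), for a square image sampled at pixels_per_degree.
    Comparing this metric across fly-through frames indicates which focal
    plane shows the cone mosaic best."""
    n = img.shape[0]
    # 2D power spectrum with the DC term suppressed
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean()))) ** 2
    # radial frequency of every spectrum pixel, in cycles/degree
    freqs = np.fft.fftshift(np.fft.fftfreq(n, d=1.0 / pixels_per_degree))
    fy, fx = np.meshgrid(freqs, freqs, indexing="ij")
    radius = np.hypot(fx, fy)
    mask = (radius >= r_min_cpd) & (radius <= r_max_cpd)
    return spectrum[mask].sum() / spectrum.sum()
```

For example, a frame dominated by a cone-like grating at 60 cycles/degree scores near 1 for an annulus around 60 cpd and near 0 for an annulus elsewhere.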
Transverse chromatic aberrations (TCA) were measured by an objective method. For calibration purposes, two-wavelength images of a model eye with a piece of white card placed at the retinal position were recorded and correlated to find the coordinates of the maximum correlation point. The model eye was carefully aligned along its achromatic axis to eliminate any chromatic aberration that it might itself present. The model eye lens was positioned by using the spot pattern on the wavefront sensor to detect the edges and then centering it on micrometer stages. The paper in the retinal plane was marked with a crosshair pattern which could be matched up to a crosshair pattern created in the scanning beam, and therefore written on the image, by the AOM modulation. This ensured that the beam was not tilted with respect to the optical axis of the lens. With chromatic aberration in the model eye removed in this way, any offset of the maximum correlation point from zero corresponded to chromatic aberration of the system, which could be subtracted uniformly from every image pair recorded in the real eye. Dual-wavelength image pairs recorded in the human eye in vivo at 3° temporal to the fovea were also correlated. The shift between the maximum correlation position for the human-eye pair and that for the model-eye images is equal to the TCA, measured in image pixels, which may then be converted into degrees. Images were taken with a 1.5 degree field size and were 480 pixels high, giving 320 pixels/degree; each pixel therefore represents 11.25 arcseconds. For image pairs where retinal features such as cones were visible in both wavelengths, we obtained reliable correlations. However, the presence of LCA meant that often, while one wavelength showed an image of cone photoreceptors, for example, the other wavelength would be far from its best focal plane, making correlation of the pair difficult.
Steps were taken to ensure the reliability of the correlations. Correlations were performed on series of many consecutive frames, and the results averaged. The standard deviation of the correlation maximum position was monitored. Thresholding was imposed to remove values from the average that were too far from the expected maximum correlation position. For the three subjects imaged, sample values of TCA magnitude are shown in Table 1. These values are within the range of those predicted by , taking into account the fact that we did not control for line of sight, and TCA is known to change significantly with pupil position (see Fig. 5). The large error in these values arises from the uncertainty in the correlation of images of different planes, as explained previously, but also from the fact that the image position within the frame has an inherent uncertainty of +/- 1 pixel, due both to fluctuation of the frame borders caused by electronic fluctuation of the hsync signal and to rounding errors introduced by the dewarping process. As each pixel is equivalent to 11.25 arcsec, the errors shown represent, at most, 4 pixels out of the overall image.
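A minimal sketch of the correlation-and-thresholding procedure described above, assuming FFT-based cross-correlation and the 11.25 arcsec/pixel scale quoted in the text (the outlier threshold is a placeholder value):

```python
import numpy as np

ARCSEC_PER_PIXEL = 11.25  # 1.5 deg field, 480 pixels -> 320 pixels/degree (from the text)

def correlation_shift(img_a, img_b):
    """Return the (dy, dx) offset of img_a relative to img_b, in pixels,
    from the peak of their FFT-based cross-correlation."""
    a = img_a - img_a.mean()
    b = img_b - img_b.mean()
    xcorr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    h, w = xcorr.shape
    if dy > h / 2:   # wrap offsets larger than half the frame to negative shifts
        dy -= h
    if dx > w / 2:
        dx -= w
    return float(dy), float(dx)

def mean_tca_arcsec(shifts_px, system_offset_px, max_dev_px=3.0):
    """Average per-frame correlation shifts, discard outliers far from the
    median (the thresholding step above), subtract the system offset measured
    with the model eye, and convert pixels to arcseconds."""
    shifts = np.asarray(shifts_px, dtype=float)
    median = np.median(shifts, axis=0)
    keep = np.all(np.abs(shifts - median) <= max_dev_px, axis=1)
    tca_px = shifts[keep].mean(axis=0) - np.asarray(system_offset_px, dtype=float)
    return tca_px * ARCSEC_PER_PIXEL
```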
We investigated the influence of pupil position by measuring TCA at different locations across the pupil diameter (see Fig. 5). Images may be recorded at all locations along the pupil diameter that allow a wavefront correction to be performed. Once the spots at the edges of the wavefront pattern cease to be visible on the wavefront sensor, the illumination beam is hitting the edge of the pupil and no AO correction can be made. We measured TCA at 12 locations across the central pupil diameter in 0.2 mm steps. The pupil position was changed by moving the bite bar on its micrometer stage. TCA was found to vary linearly across the central pupil, with a variation of over 100 arc seconds from edge to edge. The achromatic axis (i.e. the location of zero TCA) is situated temporal to the center of the pupil. In practice, the bite bar that is used for head positioning in our AOSLO instrument aids correct pupil centering and repeatable TCA measurement, although eye movements of up to 300 µm still appear to be present in the data. Should a more precise control of TCA be required, for example if we should wish to fix our line of sight along the achromatic axis, pupil tracking could eventually be implemented.
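The location of the achromatic axis follows from a straight-line fit to the TCA-versus-pupil-position data. The numbers below are hypothetical, chosen only to match the scale quoted in the text (12 positions in 0.2 mm steps, roughly 100 arcsec of variation edge to edge):

```python
import numpy as np

def achromatic_axis_offset(pupil_pos_mm, tca_arcsec):
    """Fit the (approximately linear) TCA vs. pupil position data and return
    the pupil offset at which TCA crosses zero, i.e. where the achromatic
    axis intersects the pupil."""
    slope, intercept = np.polyfit(pupil_pos_mm, tca_arcsec, 1)
    return -intercept / slope

# Hypothetical measurement: 12 positions in 0.2 mm steps across the central
# pupil, ~100 arcsec total variation edge to edge (scale from the text).
pos = np.arange(12) * 0.2 - 1.1        # mm, centered on the pupil
tca = 45.0 * pos + 15.0                # assumed linear trend (placeholder)
print(f"achromatic axis at {achromatic_axis_offset(pos, tca):+.2f} mm")
```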
Chromatic difference of magnification (CDM) causes features in different wavelength images to be magnified with respect to one another. Previous studies [12, 13] have shown that CDM is <1 % between 400 nm and 700 nm, and by calculation from [12, 13] we find that CDM in our system is <0.5 % maximum (i.e. between 532 nm and 840 nm). Were the CDM larger, we could evaluate it between the two wavelength images by varying the magnification of one of the images and correlating the magnified image with its non-magnified pair. By progressing iteratively through magnification values, we could find the value that gave the best correlation between the image pair. However, a change of <0.5 % is too small for us to detect since our minimum magnification step is dictated by pixel number and size in the image. CDM is therefore too small to have an appreciable effect on our images.
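The iterative magnification search described above could be sketched as follows. This is illustrative only (as noted, the real CDM is below the detectable step size), and scipy.ndimage.zoom is used as a stand-in for whatever resampling routine one might choose:

```python
import numpy as np
from scipy.ndimage import zoom

def best_magnification(ref, other, mags):
    """Rescale `other` by each candidate magnification, center-crop to match
    `ref`, correlate the pair, and return the magnification giving the
    highest correlation coefficient."""
    best, best_score = None, -np.inf
    n = ref.shape[0]
    for m in mags:
        scaled = zoom(other, m, order=1)   # bilinear resampling about the corner
        s = scaled.shape[0]
        if s < n:                          # skip magnifications smaller than the frame
            continue
        off = (s - n) // 2
        crop = scaled[off:off + n, off:off + n]
        score = np.corrcoef(ref.ravel(), crop.ravel())[0, 1]
        if score > best_score:
            best, best_score = m, score
    return best
```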
4. Discussion and conclusions
Multi-wavelength imaging using the AOSLO has been demonstrated in the living human eye. A variety of new experiments are possible using this set-up, in the fields of cone microperimetry, visual acuity measurement and blood oximetry, to name a few examples.
Chromatic aberration of the human eye is seen to cause shifts in lateral position and longitudinal focus between the different wavelength images. We are capable of measuring these aberrations with our instrument so that they may be taken into account when analyzing images. We wish to point out that measuring chromatic aberrations is not intended as an application of the AOSLO, since this can be performed more simply and accurately by other methods [7–13]; however, we must be able to measure these aberrations should we wish to compensate for them in a specific experiment. Imaging simultaneously in identical planes with two different wavelengths would require i) that the vergence of each source be adjusted to correct for chromatic aberration and ii) that we employ separate detectors with their pinhole positions optimized for their respective wavelengths. In general, however, the applications we wish to perform with this instrument will involve imaging in one wavelength while using the other as a stimulus, thereby allowing us to use a single detector. In this case, we may close the AO loop using only the imaging wavelength, to give best focus in the imaging channel, and use either a slightly out-of-focus stimulus or adjust the stimulus channel to correct for chromatic aberration in the ingoing direction only. Alternatively, we may close the AO loop on the stimulus channel to present AO-corrected stimuli to the subject and image in a plane other than the photoreceptor plane.
This work is funded by National Institutes of Health Bioengineering Research Partnership Grant EY014375 and by the National Science Foundation Science and Technology Center for Adaptive Optics, managed by the University of California at Santa Cruz under cooperative agreement AST-9876783.
References and links
1. A. Roorda, F. Romero-Borja, W. Donnelly, III, H. Queener, T. Hebert, and M. Campbell, “Adaptive optics scanning laser ophthalmoscopy,” Opt. Express 10, 405–412 (2002). [PubMed]
3. S. Poonja, S. Patel, L. Henry, and A. Roorda, “Dynamic visual stimulus presentation in an adaptive optics scanning laser ophthalmoscope,” J. Refract. Surg. 21, 575–580 (2005).
6. D. C. Gray, W. Merigan, J. I. Wolfing, B. P. Gee, J. Porter, A. Dubra, T. H. Twietmeyer, K. Ahmad, R. Tumbar, F. Reinholz, and D. R. Williams, “In vivo fluorescence imaging of primate retinal ganglion cells and retinal pigment epithelial cells,” Opt. Express 14, 7144–7158 (2006). [CrossRef] [PubMed]
10. C. Wildsoet, D. A. Atchison, and M. J. Collins, “Longitudinal chromatic aberration as a function of refractive error,” Clin. Exper. Optom. 76, 119–122 (1993). [CrossRef]
11. E. Fernandez, A. Unterhuber, B. Povazay, B. Hermann, P. Artal, and W. Drexler, “Chromatic aberration correction of the human eye for retinal imaging in the near infrared,” Opt. Express 14, 6213–6225 (2006). [CrossRef] [PubMed]
12. X. Zhang, L. N. Thibos, and A. Bradley, “Relation between the chromatic difference of refraction and the chromatic difference of magnification for the reduced eye,” Optom. Vis. Sci. 68, 456–458 (1991). [CrossRef]
13. X. Zhang, A. Bradley, and L. N. Thibos, “Experimental determination of the chromatic difference of magnification of the human eye and the location of the anterior nodal point,” J. Opt. Soc. Am. A 10, 213–220 (1993). [CrossRef] [PubMed]
14. American National Standards Institute, Safe Use of Lasers, ANSI Z136.1-1993 (ANSI, New York, 1993); and American National Standard for the Safe Use of Lasers, ANSI Z136.1 (Laser Institute of America, Orlando, FL, 2000).
16. K. Venkateswaran, F. Romero-Borja, and A. Roorda, “Theoretical modeling and evaluation of the axial resolution of the Adaptive Optics Scanning Laser Ophthalmoscope,” J. Biomed. Opt. 9, 132–138 (2004). [CrossRef] [PubMed]
17. S. Marcos, S. A. Burns, E. Moreno-Barriuso, and R. Navarro, “A new approach to the study of ocular chromatic aberrations,” Vision Res. 39, 4309–4323 (1999). [CrossRef]
18. S. B. Stevenson and A. Roorda, “Correcting for miniature eye movements in high resolution scanning laser ophthalmoscopy,” in Ophthalmic Technologies XV, F. Manns, P. Soderberg, and A. Ho, eds., Proc. SPIE 5688A, 145–151 (2005). [CrossRef]
20. H. Hofer, L. Chen, G. -Y. Yoon, B. Singer, Y. Yamauchi, and D. R. Williams, “Improvement in retinal image quality with dynamic correction of the eye’s aberrations,” Opt. Express 8, 631–643 (2001). [CrossRef] [PubMed]
22. J. I. Yellott, Jr., “Spectral analysis of spatial sampling by photoreceptors: Topological disorder prevents aliasing,” Vision Res. 22, 1205–1210 (1982). [CrossRef]