We describe what we believe to be the first wave-front measurements of the human eye at a sampling rate of 300 Hz, obtained with a custom Hartmann–Shack wave-front sensor that uses complementary metal-oxide semiconductor (CMOS) technology. This sensor has been developed to replace the standard charge-coupled device (CCD) cameras and the slow software image processing that are normally used to reconstruct the wave front from the focal-plane image of a lenslet array. We describe the sensor’s principle of operation and characterize its performance with static wave fronts. The system has been used to measure human-eye wave-front aberrations with a bandwidth of 300 Hz, approximately an order of magnitude faster than standard software-based solutions. Finally, we discuss the measured data and consider further improvements to the system.
© 2003 Optical Society of America
1. Introduction
Adaptive optical (AO) systems originated in astronomy, where they cancel out the wave-front aberrations induced by the atmosphere when astronomical objects are imaged. These systems consist of a wave-front sensor, a processing unit, and an actuator, often a deformable mirror. When the bandwidth of the closed-loop system is larger than the bandwidth of the aberrations, the imaging can be enhanced up to the diffraction limit of the optical system. In the human eye, different physiological effects cause wave-front aberrations with a temporal spectrum of up to 80 Hz and thereby reduce the resolution of nonadaptive retinal imaging systems, e.g., confocal microscopes for glaucoma diagnosis. A common wave-front sensor suitable for real-time systems is the Hartmann–Shack sensor, which was first used by Liang et al. to measure the eye’s wave-front aberrations. Liang et al. later achieved the first improvement of optical performance with an AO system on the living human eye. Prieto et al. gave a detailed performance analysis of the Hartmann–Shack sensor for human-eye aberration measurements. The bandwidth of current AO systems in ophthalmology is limited to approximately 30 Hz, mostly by the charge-coupled device (CCD) camera acquisition of the focal-plane image from the lenslet array and by the subsequent software processing of this image [5, 6]. However, when the power spectrum density of the eye’s aberrations is measured at a large frame rate, considerable aberration power is present up to approximately 70 Hz, which requires a fast wave-front sensor with at least double that frame rate. Diaz-Santana et al. recently showed the benefits of a high closed-loop bandwidth for ocular measurements. We therefore developed a custom complementary metal-oxide semiconductor (CMOS) imager with on-chip processing to overcome this speed limitation and reach a bandwidth that is large enough to capture the complete temporal spectrum of the wave-front aberrations in the human eye.
The need for lower-cost adaptive optics has motivated the development of integrated wave-front sensors, which include photodetection and spot-position measurement on a single chip. Droste and Bille and de Lima Monteiro succeeded in developing integrated wave-front sensors with different technologies, but neither wave-front sensor has been sensitive enough for ocular-aberration measurements. De Lima Monteiro’s wave-front sensor, based on quad-cell photodetectors, can be used only in applications with light powers above 1 µW per spot. Droste’s sensor measured focal-spot positions at 1 nW per focal spot, which is still not sufficient for ocular-aberration measurements. The sensor presented here uses the photodetector type with the largest quantum efficiency available in the chosen process technology, together with position-sensitive detectors of decreased noise sensitivity. The sensor therefore allowed measurements at continuous light powers of 100 pW per spot at λ=680 nm with a 400 µm×400 µm lenslet array, a light level that is safely applicable to the human eye at the employed wavelength.
2. Hartmann–Shack wave-front sensors
The Hartmann–Shack sensor is a common noninterferometric wave-front sensor; it is suitable for real-time applications because it does not contain any movable parts. The sensor consists of a microlens array and a focal-plane detector that captures the spot pattern from the lenslet array (see Fig. 1 for a schematic of the sensor and a focal-plane image). If the wave front has an average local gradient ∇W(xi,j, yi,j) over the area of lenslet (i, j), the associated spot moves laterally in the focal plane by (Δxi,j, Δyi,j) according to geometrical optics:

(Δxi,j, Δyi,j) = f ∇W(xi,j, yi,j),    (1)

where f is the focal length of the lenslets. The intensity distribution of each spot is the diffraction pattern of the lenslet; i.e., the width of the spot is approximately λf/D. The spot displacements have to be determined to reconstruct the wave front, e.g., by a least-squares fit of the measured spot displacements to a set of orthogonal polynomials, usually Zernike polynomials. Finally, an actuator, usually an adaptive mirror, is used in a closed loop to cancel out the wave-front aberrations, achieve a plane wave front, and reach the diffraction limit.
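The least-squares fit of the measured spot displacements to Zernike polynomials can be sketched as follows. This is a minimal illustration: the derivative matrix G, its sampling points, and the toy single-mode data are assumptions for demonstration, not the paper's actual basis set.

```python
import numpy as np

# Least-squares wave-front reconstruction from measured spot displacements.
# G is the matrix of Zernike-polynomial x- and y-derivatives sampled at the
# lenslet centers (hypothetical here), s the stacked spot displacements,
# and f the lenslet focal length.
def reconstruct_zernike(G, s, f):
    """Return the Zernike coefficients that best explain the slopes s/f."""
    slopes = s / f                        # spot displacement -> local gradient
    c, *_ = np.linalg.lstsq(G, slopes, rcond=None)
    return c

# Toy check with a single mode sampled at 4 lenslets:
G = np.array([[1.0], [2.0], [3.0], [4.0]])   # d(mode)/dx at 4 sample points
f = 0.053                                    # 53-mm lenslet focal length
c_true = np.array([0.5])
s = f * (G @ c_true)                         # synthetic spot displacements
print(reconstruct_zernike(G, s, f))          # recovers ~[0.5]
```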
The standard solution to determine the spot displacements relies on the image processing of the focal-plane image, captured with a CCD camera. The bandwidth of reported systems is currently limited to some tens of hertz . To cancel out the complete temporal spectrum of aberrations, the sensor should offer a frame rate of at least some hundreds of hertz.
3. CMOS-based wave-front sensor
To achieve the necessary frame rate for real-time adaptive optics, a sensor has been developed that includes photodetectors and image processing on a single chip, based on an industrial submicrometer CMOS process technology (AMS 0.35 µm). The core of this application-specific integrated circuit (ASIC) contains a detector array to track the position of 8×8 focal spots with a resolution of 17 µm. The detector array matches rectangular microlens arrays with 400-µm pitch.
The maximum laser power safely applicable to the human eye amounts to only ~1 W/m² in the red and near-infrared spectrum. Because the reflectivity of the human retina is of the order of only ~1%, we can expect the focal spot at each subaperture to have a power of only 1.6 nW. Since the quantum efficiency of the chosen CMOS photodetectors amounts to 40%, the signal processing has to deal with detected light powers in the range of a few hundred picowatts. Integrating sensors can deal with a small incident photon flux but usually require long integration times at low light levels. We have therefore developed a sensor that works in constant current mode.
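The power budget quoted above follows directly from the stated numbers; a quick arithmetic check:

```python
# Back-of-the-envelope power budget per subaperture, using the values
# quoted in the text.
irradiance = 1.0                 # W/m^2, maximum safe irradiance (red/NIR)
reflectivity = 0.01              # ~1% retinal reflectivity
pitch = 400e-6                   # m, lenslet pitch
subaperture_area = pitch ** 2    # 1.6e-7 m^2 per lenslet
spot_power = irradiance * reflectivity * subaperture_area
print(spot_power)                # -> 1.6e-09, i.e., 1.6 nW per focal spot
```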
Each detector encodes the spot position in two 21-bit words. All detectors are read out sequentially by addressing them and reading the bit vector that encodes the position. An arbitrary readout pattern can be chosen, e.g., a circular subset of the detector array. When only a subset is read, the external frame rate can be increased, whereas the intrinsic bandwidth of the circuitry is limited to 5.8 kHz. Figure 2 shows a general overview of the chip architecture and the architecture of the spot detectors, each of which consists of a matrix of 21×21 passive pixels, the position-sensitive detectors (PSDs), and two multiplexer rows.
3.1. Photodetectors
Silicon shows a large absorption of photons at energies up to the near infrared (λ<1200 nm). The electron–hole pairs that are generated in the silicon bulk through the absorption of a photon have to be separated by the electric field of a pn junction. In standard CMOS process technologies with a p-type bulk, three passive photodiode types are available for photodetection: the junction between an n+ diffusion zone and the silicon substrate, between a p+ diffusion zone and an n-well, and between an n-well and the p-type substrate. These photodetectors are characterized mainly by their spectral quantum efficiency η(λ) and their parasitic junction capacitance per area cj. A small parasitic capacitance is important for the photocurrent readout electronics. In the chosen process technology the n-well/substrate photodiode has the largest quantum efficiency at red and infrared wavelengths (η=40% at 680 nm) and a parasitic capacitance that is ~10 times smaller than that of the diffusion diodes. A total of 28,224 pixels has been implemented, each with an area of 17 µm×17 µm. We chose a lenslet array with a 53-mm focal length and expect the focal spot to have a diffraction-limited size of FWHM=λf/D=83 µm at a wavelength of 630 nm.
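The quoted diffraction-limited spot width follows from the stated lenslet parameters:

```python
# Diffraction-limited spot width lambda*f/D for the quoted lenslet array.
lam = 630e-9       # m, wavelength
f = 53e-3          # m, lenslet focal length
D = 400e-6         # m, lenslet aperture (pitch)
fwhm = lam * f / D
print(round(fwhm * 1e6))   # -> 83 um, as stated in the text
```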
In the chosen process technology the dark current of the photodetectors is 0.03 fA/µm² (from silicon foundry AMS), so each detector produces a total dark current of 2.6 pA. At a quantum efficiency of 40%, the light power of each focal spot must therefore exceed 6.5 pW to achieve a reasonable signal-to-noise ratio. This rough estimate agrees with the lowest light level of 20 pW per spot at which we were able to measure.
3.2. Analog image processing
In standard Hartmann–Shack sensors the spot positions are derived from the focal-plane image, normally acquired with a CCD camera. First, spots have to be detected in the gray-scale image gi,j, and second, the position of each individual spot has to be calculated from the gray-scale values. In software image processing of Hartmann–Shack patterns a single spot is located first, e.g., by searching for the largest gray-scale value in the image, and the other spots are searched for in its vicinity. Once a spot is detected, its position has to be calculated. The position of the maximum gray-scale value on the subaperture A,

(x̂, ŷ) = arg max(i,j)∈A gi,j,    (2)

could be used for this purpose, but the maximum is sensitive to noise. The position estimate preferred in software image processing is the centroid of the gray-scale values in an area A,

x̂ = Σ(i,j)∈A xi,j gi,j / Σ(i,j)∈A gi,j,   ŷ = Σ(i,j)∈A yi,j gi,j / Σ(i,j)∈A gi,j.    (3)

The area has to be chosen such that the position estimate is unbiased; a biased estimate occurs when large background illumination is present.
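The centroid estimate of Eq. (3), including a background threshold to avoid the bias mentioned above, can be sketched as follows. The threshold handling is an assumed, common variant, not the paper's specific implementation.

```python
import numpy as np

# Thresholded centroid of a subaperture gray-scale image, as used in
# software Hartmann-Shack processing. Subtracting a threshold guards
# against the bias introduced by uniform background illumination.
def spot_centroid(g, threshold=0.0):
    g = np.clip(g - threshold, 0.0, None)    # remove background pedestal
    total = g.sum()
    ys, xs = np.indices(g.shape)
    return (xs * g).sum() / total, (ys * g).sum() / total

# A 5x5 spot centered at pixel (2, 2) on a constant background of 1:
g = np.ones((5, 5))
g[2, 2] = 10.0
print(spot_centroid(g, threshold=1.0))   # -> (2.0, 2.0)
```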
Because in our case each lenslet is associated with an individual spot detector on the sensor chip, the dynamic range of the wave-front derivative is limited to |∂W/∂x| < A/2f, corresponding to spot displacements of ±200 µm in our case. For a lenslet array with a 53-mm focal length the total dynamic range for defocus is only 1.34 dpt. When subjects with large defocus have to be measured, a precompensation should be added, e.g., through a Badal optometer, which is not yet included in our experimental setup. The use of a lenslet array with a smaller focal length is an alternative, with the drawback of reduced sensor resolution.
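The per-lenslet limits quoted above follow from the detector pitch and focal length:

```python
# Dynamic range of one lenslet: the spot must stay within its own
# detector, i.e., |dW/dx| < A/(2f).
A = 400e-6     # m, detector/lenslet pitch
f = 53e-3      # m, lenslet focal length
max_displacement = A / 2      # maximum spot displacement: 200 um
max_slope = A / (2 * f)       # maximum local wave-front slope: ~3.8 mrad
print(round(max_displacement * 1e6), round(max_slope * 1e3, 2))
```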
A few position-sensitive detectors realizable in hardware approximate the above functional estimators. In two dimensions the simplest is probably the quad cell, which consists of a large-area photodiode divided into four parts with separate contacts and photocurrents. Drawbacks of this concept are the nonlinearity of the position measurement and the large parasitic capacitance resulting from the large photodiode area, which decreases the bandwidth at small photocurrents. A pixelized version of the quad cell circumvents these nonlinearities and approximates Eq. (3) very well. Photocurrents from pixel rows and columns are added together by simple connection of the photodiode cathodes to a common net,

Ix,i = Σ(k,l)∈Mx,i Ik,l,   Iy,j = Σ(k,l)∈My,j Ik,l,    (4)

where the sets Mx,i and My,j contain the indices of the pixels connected to row i and column j, in our case in a chessboard-like manner. The two sets have to be disjoint, because a photocurrent can flow only in either the x or the y direction. Alternatively, the pixels may contain two photodiodes, one for the row and one for the column currents. The photocurrents are fed into a line of resistors, forming two output currents, IL,x to the left and IR,x to the right. When all contributing currents from both sides of the resistive network are added up, it can be shown that the position estimate

x̂ = L IR,x / (IL,x + IR,x),    (5)

where L is the length of the resistive line, is equivalent to the centroiding function given by Eq. (3). Nevertheless, an integrated CMOS sensor with a resistive-line network would require on-chip amplification and analog-to-digital conversion of the very small photocurrents.
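The equivalence between the resistive-line readout and the centroid can be demonstrated with an idealized numerical model. The linear current division below is an assumed ideal model of a uniform resistive line, not a circuit simulation.

```python
import numpy as np

# Idealized resistive-line readout: each column photocurrent injected at
# normalized position x divides linearly between the two line ends, so
# the end currents encode the centroid of the current distribution.
def resistive_line(currents):
    n = len(currents)
    x = np.arange(n) / (n - 1)           # normalized node positions 0..1
    I_R = np.sum(currents * x)           # current flowing to the right end
    I_L = np.sum(currents * (1 - x))     # current flowing to the left end
    return I_R / (I_L + I_R)             # position estimate in [0, 1]

I = np.array([0.0, 1.0, 4.0, 1.0, 0.0])  # spot centered on node 2
est = resistive_line(I)
centroid = np.sum(np.arange(5) * I) / I.sum() / 4   # normalized centroid
print(est, centroid)                     # both 0.5
```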
A circuit that establishes a direct binary representation of the spot position and works on the column and row photocurrents given by Eq. (4) is the winner-take-all (WTA) circuit (see the cited references for details about the circuitry). The WTA circuit receives n input photocurrents through n neurons, which are interconnected through a common sense net with an attached current sink Isrc. All neurons interact through this common net. From the input photocurrents the circuit establishes an output current vector

Iout,i = Isrc δi,W,    (6)

where all entries are zero, except at the position W of the largest input photocurrent, such that the position estimate is the position of the winning neuron,

(x̂, ŷ) = (xW, yW).    (7)
Because the output of the WTA circuit is binary, there is no longer a need for analog-to-digital conversion prior to readout.
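The behaviour of a simple WTA stage can be modelled in a few lines; this is a behavioural sketch of the measurement principle, not the analog circuit itself.

```python
import numpy as np

# Behavioural model of a simple winner-take-all stage: the output vector
# is zero everywhere except at the neuron receiving the largest input
# photocurrent, so the readout is already a binary position code.
def wta(photocurrents, i_src=1.0):
    out = np.zeros_like(photocurrents)
    out[np.argmax(photocurrents)] = i_src   # winning neuron sinks I_src
    return out

row_currents = np.array([0.1, 0.3, 2.0, 0.4, 0.2])
print(wta(row_currents))          # -> [0. 0. 1. 0. 0.], winner at index 2
```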
3.3. Focal-spot position measurement
To circumvent the limitations of the simple WTA circuit with respect to impulsive noise and the limitations of the resistive-line network with respect to the sensitivity to a constant background, we implemented a new topology for the WTA circuit in the Hartmann–Shack sensor ASIC (see Fig. 3 for a schematic).
As a first step, the WTA circuit with n neurons is split into w groups, each with n/w neurons. The number of groups w is chosen such that w multiplied by the pixel pitch p equals the full width at half-maximum (FWHM) of the focal spot, i.e., w=FWHM/p. The neurons Ni are connected in an interdigital manner to w current sinks Isrc,k instead of one; i.e., neuron Ni is connected to sink Isrc,(i mod w).
This circuit establishes an output current vector of the form

Iout,i = Σk=1…w Isrc,k δi,Wk,    (8)

with the w largest input photocurrents at positions xW,k. This output current vector thus contains w independent spot-position measurements, which reduces the variance of the position measurement by σ²(x) → σ²(x)/w. To make the averaging robust against outliers, the median of the positions of the winning neurons is taken as the position estimate:

x̂ = median(xW,1, xW,2, …, xW,w).    (9)
To control the number of winning neurons and to adapt the position measurement to smaller spot sizes, e.g., when a lenslet array with a larger focal length is used, a network of resistors was connected to the individual sense-nets of the w WTA circuits. When the resistances between the sense-nets are reduced, a WTA circuit with a much larger input current can inhibit a second one and thus reduce the number w, which corresponds to a smaller spot size. In the case of zero resistance, this network acts like a simple WTA circuit and finds the largest input photocurrent.
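The interdigitated grouping and the median over the group winners can be sketched as a behavioural model. The modulo group assignment and the synthetic Gaussian spot below are assumptions for illustration.

```python
import numpy as np

# Behavioural model of the interdigitated WTA stage: n neurons are split
# into w groups (neuron i feeds sink i mod w), each group reports its own
# winner, and the median of the w winner positions is the spot estimate.
def median_wta_position(photocurrents, w):
    n = len(photocurrents)
    winners = []
    for k in range(w):                       # one WTA per current sink
        idx = np.arange(k, n, w)             # interdigitated group k
        winners.append(idx[np.argmax(photocurrents[idx])])
    return np.median(winners)

rng = np.random.default_rng(0)
x = np.arange(21)                            # one 21-pixel detector row
spot = np.exp(-((x - 10.0) ** 2) / 8.0)      # spot centered at pixel 10
noisy = spot + 0.05 * rng.standard_normal(21)
print(median_wta_position(noisy, w=5))       # close to 10
```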
3.4. Sensor chip integration
The Hartmann–Shack sensor ASIC has been implemented in the Austria Micro Systems 0.35-µm process technology, which provides three metal and two polysilicon layers. The complete core of the ASIC is a full-custom design in order to reach a large fill factor (80%). The third metal layer available in this process technology is used to shield the devices from incident light and also serves as a supply rail.
The resulting die has an overall size of 4.01 mm×4.01 mm (shown in Fig. 4). The chip has been packaged in an 84-pin ceramic package and has a typical power consumption of only 0.41 mW.
4. Measurements
The sensor has been characterized with respect to quantum efficiency, spot-tracking resolution and linearity, and wave-front accuracy. After these measurements, an optical setup was used to capture human-eye wave-front aberrations.
A data-acquisition card (ME-2600, Meilhaus Electronics) has been used to read the spot positions from the chip. For sufficiently fast readout, a custom Linux device driver addresses all detectors sequentially through the data-acquisition card and reads the bit vector with the position data in four cycles. The analog bias voltages that control the resistive-ring network are also provided by the data-acquisition card. The system reached an external frame rate of 1.2 kHz for a subset of 4×4 detectors and of 300 Hz for all 8×8 detectors, including the wave-front reconstruction through the least-squares fit. To use the full intrinsic bandwidth of the sensor, we propose a solution based on a field-programmable gate array or a digital signal processor instead of the data-acquisition card.
4.1. Spot tracking
The achievable wave-front resolution is directly related to the position resolution of the individual spot detectors. For a position resolution of 17 µm and a wave-front reconstruction with 14 terms, an rms wave-front reconstruction error of ~λ/5 can be expected.
To measure the spot-tracking accuracy at different spot powers, the Hartmann–Shack sensor was mounted on an x–y translation stage that could be moved at speeds of up to 100 mm/s. This movement corresponds to a bandwidth of B=v/p, where p is the pixel pitch of 17 µm. A laser diode (λ=680 nm) illuminated the lenslet array (53-mm focal length) with variable power through a neutral-density filter. The position of the spot on a single detector could be sampled at up to 19.2 kHz. Table 1 shows the spot-position uncertainty at different spot powers with the simple WTA circuit and with the resistive-ring network of WTA circuits. The simple WTA circuit was emulated by setting the resistors in the resistive-ring network to the lowest available resistance.
As expected from simulations, the use of the resistive-ring network reduced the spot-position uncertainty by a factor of 2.24 and increased the bandwidth. Spot tracking at light levels of tens of picowatts was possible only in this mode of operation. Figure 5 shows two spot-tracking examples, one at a very low and one at a very high spot velocity, the latter corresponding to a large bandwidth. At very high velocities there is a slight increase in the spot-position uncertainty.
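The velocity-to-bandwidth relation B = v/p used in these tests can be checked with the quoted numbers:

```python
# Spot-velocity-to-bandwidth conversion for the tracking tests:
# a spot moving at v across pixels of pitch p exercises a bandwidth B = v/p.
v = 100e-3        # m/s, maximum translation-stage speed
p = 17e-6         # m, pixel pitch
B = v / p
print(round(B))   # -> 5882 Hz, of the order of the 5.8-kHz intrinsic bandwidth
```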
4.2. Static wave-front measurements
To verify the achievable accuracy, a number of static and dynamic wave-front measurements were conducted. Arbitrary defocus values were added to a plane wave front by displacement of one lens of a telescope by a distance Δz. The added defocus D is related to the focal length f through D = Δz/f². We used a lens with a 14-cm focal length, which results in adding one diopter of defocus for every 1.96 cm of displacement. The measured defocus standard deviation was 0.16 dpt, or 0.06 µm at a 3.2-mm pupil, which is close to the theoretical limit of 0.10 dpt.
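The displacement-to-defocus conversion D = Δz/f² reproduces the quoted 1.96 cm per diopter:

```python
# Defocus added by displacing one telescope lens by dz: D = dz / f^2.
f = 0.14                          # m, telescope lens focal length
dz_per_diopter = 1.0 * f ** 2     # displacement that adds 1 dpt of defocus
print(round(dz_per_diopter * 100, 2))   # -> 1.96 cm per diopter, as stated
```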
The temporal standard deviation of the lowest 14 Zernike terms of a static wave front was evaluated from a measured wave-front time series. The results at a rather large and at a very small laser spot power are shown in Fig. 6. The temporal noise decreases with the square root of the spot power, σ(C) ∝ 1/√J. The lowest usable spot power is 20 pW per spot. The achievable wave-front accuracy was better for Zernike terms with small azimuthal order, such as defocus and third-order spherical aberration.
4.3. Human-eye wave-front measurements
Once the static wave-front measurements had assured us that the sensor is able to measure at safely applicable light levels, we set up an optical system for human-eye wave-front measurements. The setup, depicted in Fig. 7 and shown in Fig. 8, used as its light source a fiber-coupled superluminescent diode (SLD) centered at 780 nm with an approximate bandwidth of 12 nm. Spatial noise introduced by speckle is substantially reduced by the short coherence length of the SLD, although a further reduction may be possible by insertion of a scanning mirror. The beam is sent through a polarizing beam splitter into the eye after it has been ensured with a photometer that the safely applicable power of 50 µW is set at the light source. The beam reflected from the retina crosses the beam splitter and passes through a telescope with lenses L1 and L2 without magnification. A pinhole ensures that all reflections other than retinal ones are excluded from reaching the wave-front sensor. The Hartmann–Shack sensor itself consisted of a microlens array with 400 µm×400 µm square lenslets with a 53-mm focal length, of which 8×8 were used. Prior to the measurements with the custom wave-front sensor, a CCD camera was placed in the focal plane and a subject was trained for fixation.
First, the spot pattern was visualized and analyzed with a software tool to ensure that a reasonable spot pattern was available and to measure the pupil size of the subject (5.8 mm). The eyes were measured under normal conditions in a dark room for natural pupil dilation. After the preliminary adjustments, the CCD camera was replaced with the CMOS-based custom wave-front sensor. The data from the sensor chip were recorded at a 300-Hz repetition rate for a few seconds to capture 1000 frames.
The Zernike coefficients of each measurement were recorded and evaluated offline; in a closed-loop AO system they could be used directly to drive an actuator. Exemplary wave-front time series and wave-front rms errors are visualized in Figs. 9 and 10. Measurements of the temporal spectrum of the human-eye wave-front aberrations were conducted by Hofer et al. with a repetition rate of 24.6 Hz. Diaz-Santana et al. recently reached 240 Hz with a high-speed CCD camera and achieved similar results. We were able to measure at a repetition rate of 300 Hz with the CMOS-based wave-front sensor. The time series of the different aberration terms have been used to calculate their power spectrum density through a discrete 256-point Fourier transform. The sampling rate allows the power spectrum density to be calculated up to the Nyquist frequency of 150 Hz. From the power spectral analysis we found the same drop in the power spectrum of approximately 4 dB per octave previously described by Hofer et al. This drop continues up to ~70 Hz, where, after a total drop of ~30 dB, the noise level of the sensor is reached. Examples of the power spectrum density can be found in Fig. 11.
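The power-spectrum-density calculation described above can be sketched as follows. The synthetic coefficient trace and the normalization convention are assumptions for illustration; only the 300-Hz rate and 256-point transform come from the text.

```python
import numpy as np

# Power-spectrum-density estimate of a Zernike-coefficient time series
# via a discrete 256-point Fourier transform, as described in the text.
fs = 300.0                          # Hz, sampling rate of the sensor
n = 256                             # transform length
t = np.arange(1000) / fs
# synthetic "aberration" trace: a slow 2-Hz oscillation plus white noise
c = 0.1 * np.sin(2 * np.pi * 2.0 * t) + 0.01 * np.random.randn(t.size)

seg = c[:n] - c[:n].mean()          # remove the DC component
psd = np.abs(np.fft.rfft(seg)) ** 2 / (fs * n)
freqs = np.fft.rfftfreq(n, d=1 / fs)
print(round(freqs[-1]))             # -> 150, the Nyquist frequency
```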
A small, fast, real-time-capable wave-front sensor is a key device for building small and cost-effective adaptive optical (AO) systems. We integrated image acquisition and processing into a single CMOS-based wave-front sensor and thus reached frame rates approximately an order of magnitude larger than standard software-based systems allow. The device measures the positions of 8×8 focal spots from a lenslet array with a sensitivity that is large enough for ophthalmic applications. The defocus term has been measured with an accuracy of 0.16 dpt, close to the theoretical limit of 0.10 dpt at a spot-tracking resolution of 17 µm.
Initial measurements of the human eye show the feasibility of using the sensor in retinal imaging systems, although the chip architecture limits the dynamic range of the aberrations and determines the geometry of the lenslet array. We recorded wave-front time series with a 3-ms sampling period and used the data to calculate the power spectrum density of different Zernike terms. Hofer et al., measuring with a 24-Hz frame rate, reported a drop of approximately 4 dB per octave. Diaz-Santana et al. observed ocular aberrations up to 30 Hz with a 240-Hz frame rate and a similar drop rate. In our measurements at a 300-Hz frame rate, aberrations could be detected up to approximately 70 Hz.
A drawback of the present sensor is still the limited size and resolution, which will be addressed in the next generation of the sensor. The sensor will also be included in a closed-loop AO system to determine stability and noise characteristics. Currently a new readout system is being developed, based on a field-programmable gate array, which enables use of the full intrinsic bandwidth of the sensor.
References and links
1. J. Hardy, “Active optics: a new technology for the control of light,” Proc. IEEE 66, 651–697 (1978). [CrossRef]
2. J. Liang, B. Grimm, S. Goelz, and J. Bille, “Objective measurements of wave aberrations of the human eye with the use of a Hartmann–Shack wave-front sensor,” J. Opt. Soc. Am. A 11, (1994).
3. J. Liang, D. Williams, and D. T. Miller, “Supernormal vision and high-resolution retinal imaging through adaptive optics,” J. Opt. Soc. Am. A 14, 2884–2892 (1997). [CrossRef]
4. P. Prieto, F. Vargas-Martin, S. Goelz, and P. Artal, “Analysis of the performance of the Hartmann–Shack sensor in the human eye,” J. Opt. Soc. Am. A 17, 1388–1398 (2000). [CrossRef]
5. H. Hofer, L. Chen, G. Yoon, B. Singer, Y. Yamauchi, and D. Williams, “Improvement in retinal imaging quality with dynamic correction of the eye’s aberrations,” Opt. Express 8, 631–643 (2001), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-8-11-631. [CrossRef]
6. E. Fernandez, I. Iglesias, and P. Artal, “Closed-loop adaptive optics in the human eye,” Opt. Lett. 26, (2001). [CrossRef]
7. L. Diaz-Santana, C. Torti, I. Munro, P. Gasson, and C. Dainty, “Benefit of higher closed-loop bandwidths in ocular adaptive optics,” Opt. Express 11, 2597–2605 (2003), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-11-20-2597. [CrossRef]
8. D. Droste and J. Bille, “An ASIC for Hartmann–Shack wavefront detection,” IEEE J. Solid-State Circuits 37, 173–182 (2002). [CrossRef]
9. D. de Lima Monteiro, CMOS-Based Integrated Wavefront Sensor (Delft University, Delft, The Netherlands, 2002).
10. D. Malacara, Optical Shop Testing (Wiley, New York, 1992).
11. H. Hofer, P. Artal, B. Singer, J. Aragon, and D. Williams, “Dynamics of the eye’s wave aberration,” J. Opt. Soc. Am. A 18, 497–506 (2001). [CrossRef]
12. M. Tartagni and P. Perona, “Computing centroids in current-mode technique,” Electron. Lett. 29, 1811–1813 (1993). [CrossRef]
13. J. Lazzaro, S. Ryckebusch, M. Mahowald, and C. Mead, “Winner-take-all networks of O(n) complexity,” in Advances in Neural Information Processing Systems, D. S. Touretzky, ed. (Morgan Kaufmann, San Mateo, Calif., 1989), pp. 703–711.
14. T. Nirmaier, G. Pudasaini, and J. Bille, “CMOS-based wavefront sensor for real-time adaptive optics,” in Proceedings of the 13th IEEE Real Time Conference (IEEE, New York, 2003).