A proposal to dynamically compensate the chromatic aberration of a programmable phase Fresnel lens displayed on a liquid crystal device and working under broadband illumination is presented. It is based on time multiplexing a set of lenses, designed with a common focal length for different wavelengths, and a tunable spectral filter that makes each sublens work almost monochromatically. The tunable filter and the sublens displayed by the spatial light modulator are synchronized. The whole set of sublenses is displayed within the integration time of the sensor. As a result, the central-order focalization has a unique location at the focal plane, common to all selected wavelengths. Transversal chromatic aberration of the polychromatic point spread function is reduced by properly adjusting the pupil size of each sublens. Longitudinal chromatic aberration is compensated by making the depth-of-focus curves coincident for the selected wavelengths. Experimental results are in very good agreement with theory.
© 2006 Optical Society of America
Although diffractive lenses offer a large collecting aperture, light weight, ease of replication, and flexible design for aberration correction in comparison with conventional refractive lenses, their usefulness in broadband imaging systems has so far been very limited because of their severe chromatic aberration. The focal length of a phase diffractive lens is given by f(λ) = (λ0/λ) f0, where λ is the illumination wavelength and f0 is the focal length for the design wavelength λ0. Some solutions, based on hybrid diffractive-refractive configurations consisting of a number of properly separated lenses, have been proposed to obtain dispersion-compensated imaging systems and correlators [1, 2]. Electronically addressed spatial light modulators (SLMs), such as liquid-crystal pixelated displays, have been effectively used to generate programmable diffractive optical components. Concerning phase Fresnel lenses, they are capable of dynamically changing their focal length when the proper phase function is addressed onto a well characterized display. Under polychromatic illumination, such as that obtained from a white-light source, a diffractive lens encoded on an SLM shows the expected severe chromatic aberration, which quickly degrades the image.
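To make the magnitude of this dispersion concrete, the focal-length scaling f(λ) = (λ0/λ) f0 can be evaluated numerically. The sketch below is illustrative only; the design values λ0 = 550 nm and f0 = 1.25 m are assumptions chosen for the example.

```python
def diffractive_focal(lam, lam0=550e-9, f0=1.25):
    """Focal length f(lam) = (lam0 / lam) * f0 of a phase diffractive lens
    designed for wavelength lam0 (m) with focal length f0 (m)."""
    return (lam0 / lam) * f0

# Across the visible band the focus moves by tens of centimetres:
# 450 nm focuses roughly 28 cm beyond the 550 nm focal plane.
shift = diffractive_focal(450e-9) - diffractive_focal(550e-9)
```

The strong 1/λ dependence is what a single static diffractive lens cannot escape, and what motivates the time-multiplexed design below.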
Very recently, efforts to obtain an achromatic phase Fresnel lens have resulted in several spatial multiplexing schemes that design a diffractive lens with the same focal length at a number of discrete wavelengths of the visible spectrum [4, 5]. The resulting lens is programmed to be displayed on a liquid crystal device that works in a phase-only regime. Márquez et al. proposed a spatial multiplexing scheme optimized for three wavelengths in the red, green and blue regions. Under illumination with the three wavelengths, the multiplexed lenses produce multiple monochromatic focalizations in the central order, although three of these focalizations are more efficient than the others and coincide in the same plane, which is the design focal plane. They proposed various spatial multiplexing schemes and reproduced them experimentally. Among the proposals, the random multiplexing scheme showed the best point spread function (PSF) results, with no sidelobes and the possibility of finely tuning the chromaticity of the focal spot by changing the relative weights of the channels corresponding to the selected wavelengths. In our former work we filtered the spectral band that illuminates each subaperture so that it works nearly monochromatically. Multiple focalizations under broadband illumination are then eliminated, and a common focal plane is obtained where the different focusing wavefronts add with temporally incoherent superposition. Moreover, fine control of the transversal chromaticity of the PSF and of the depth of focus is achieved by properly adjusting the pupil size of each sublens. We proposed two different geometries for the spatial multiplexing of the set of phase Fresnel sublenses. One of them was based on a mosaic color filter placed against a multilens with a mosaic aperture. The other geometry was based on a rotating multisector aperture that introduced both spatial and time integration.
It used a color filter and a multilens aperture, both divided into multiple circular sectors, which rotated in synchronism. For both the mosaic and the rotating-aperture schemes, our simulation results showed a significant improvement in the compensation of both transversal and longitudinal chromatic aberrations. Regarding practical aspects, the two schemes have complementary advantages. Despite the promising simulation results, we also pointed out the technical difficulties of carrying out the experiments. The mosaic color filter, which is similar to a Bayer-patterned filter attached to the camera sensor in digital photography, was not available and cannot easily be manufactured by the authors. For the rotating-aperture scheme, it is not trivial to achieve good synchronism between the rotating multisector filter and the time-variant phase distribution displayed by the SLM.
In this work, we overcome the difficulties arising from spatial multiplexing by using exclusively time integration of sublenses for the chromatic compensation of SLM-encoded diffractive lenses operating under broadband illumination. Such time multiplexing of sublenses can be achieved with a white light source and a tunable spectral filter that transmits a sequence of narrow bands centred at the selected wavelengths for which the sublenses have been programmed. A liquid crystal tunable filter is similar to an interference filter, but the wavelengths it transmits are electronically controllable. The required synchronism between the tunable filter and the phase distribution displayed by the SLM can be achieved under computer control. A rotating spectral filter can alternatively be used for time integration only; in contrast with the rotating aperture of our former proposal, we propose a filter that rotates around an axis shifted from the optical axis. All the channels or sublenses need to be displayed within the integration time of the sensor; otherwise chromatic distortions could be noticed and the whole system would not be completely effective. We have implemented the system and present the experimental results in the next sections.
2. Design of the set of sublenses
Let us consider a converging lens l, placed at the plane of rectangular coordinates (x, y), with a quadratic phase function given by

l(x, y) = exp[−iπ(x² + y²)/(λ f0)],    (1)
where f0 is the design focal length for the wavelength λ. Let us consider that this lens function is sampled with a sampling period given by the pixel spacing (or pixel pitch). The lens is displayed on an M×M pixel array SLM, with square pixel pitch Δ and a fill factor less than unity. For the sake of simplicity, the active area of a pixel is represented by a rectangle of dimensions Δx′, Δy′, that is, by rect(x/Δx′, y/Δy′), although other pixel shape functions could alternatively be taken into consideration. In general, Δx′ < Δ and Δy′ < Δ. Under broadband illumination, a quasimonochromatic filter selects a narrow bandwidth centred at λ. The lens of Eq. (1) has a circular aperture of radius Ri and is active only during a time Ti. Taking into account all these circumstances, the sublens li(x, y, t) can be expressed by

li(x, y, t) = τi(Δλ) rect[(t − ti)/Ti] circ(r/Ri) {[exp(−iπ(x² + y²)/(λi f0)) Σ_{m,n} δ(x − mΔ, y − nΔ)] ⊗ rect(x/Δx′, y/Δy′)},  with r = √(x² + y²),    (2)
where a rect function, centred on the instant ti, limits the time Ti during which the sublens is active; τi(Δλ) is the amplitude transmittance of the quasimonochromatic filter that selects a narrow bandwidth around λi (consequently, τi(Δλ) ≈ 0 except for Δλ = |λ − λi| ≈ 0); the circ function corresponds to a circular pupil of radius Ri; the summation corresponds to a 2D comb function that establishes the positions of the sampling points; and the convolution (⊗) with the active area of the pixel ensures a uniform single value throughout its extension. We assume that the sublens pattern reaches, at most, the Nyquist frequency at the circular contour of the aperture and, consequently, no secondary lenses appear. This implies that the focal length f0 has to be longer than or equal to the reference focal fr, which depends on the sampling period Δ, the number of samples M and the wavelength λi:

fr = MΔ²/λi.    (3)
This condition sets the strongest constraint for the shortest wavelength. Consequently, we assume that Ri ≤ R = MΔ/2, where R is the maximum radius of the circular aperture placed against the SLM screen. In particular, we demonstrated in a previous work the advantages of taking a sublens pupil that meets the condition Ri²/λi = constant (Fig. 1), which is the condition to obtain PSF profiles of the same maximum height for all the wavelengths λi. This condition was referred to as the PSFI-condition.
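The sampling constraint above can be sketched numerically. The snippet below builds the sampled, wrapped quadratic phase of Eq. (1) over a circular pupil (the spatial part of a sublens, omitting the pixel fill-factor convolution, the temporal window and τi), and checks the Nyquist bound, here assumed in the form fr = MΔ²/λ, which follows from requiring the local fringe frequency of the lens, R/(λ f0), not to exceed 1/(2Δ) at the aperture edge R = MΔ/2. All parameter values are illustrative.

```python
import numpy as np

def reference_focal(M, pitch, lam):
    """Shortest admissible focal length f_r = M * pitch**2 / lam (Nyquist
    reached at the edge of the full aperture R = M * pitch / 2)."""
    return M * pitch**2 / lam

def fresnel_sublens_phase(M, pitch, lam, f0, radius):
    """Sampled quadratic phase of a Fresnel sublens, wrapped to [0, 2*pi),
    zeroed outside the circular pupil of radius 'radius' (the circ term)."""
    if f0 < reference_focal(M, pitch, lam):
        raise ValueError("f0 below the reference focal: aliasing would occur")
    c = (np.arange(M) - M // 2) * pitch           # pixel-centre coordinates
    x, y = np.meshgrid(c, c)
    r2 = x**2 + y**2
    phase = np.mod(-np.pi * r2 / (lam * f0), 2 * np.pi)   # wrapped phase
    return phase * (r2 <= radius**2)              # mask to the sublens pupil
```

Since fr grows as 1/λ, a design focal f0 that satisfies the bound at the shortest selected wavelength satisfies it across the whole set.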
The time-variant lens L(x, y, t) displayed on the SLM can be described by
where εi represents the time necessary to refresh both the spectral filter and the lens displayed on the SLM from sublens li to li+1. The sublenses are successively displayed, thus satisfying the following relationships between the temporal constants:
The sequence of sublenses, represented by TS in Eq. (5), lasts a time TS = Σ_{i=1}^{N} (Ti + εi). Although the refreshing time εi could depend on the sublens, we will assume that it is approximately constant (ε) for all the sublenses of the set. Since the whole set of sublenses has to be displayed during the integration time of the sensor T0, the condition T0 ≥ TS = Nε + Σ_{i=1}^{N} Ti has to be satisfied. In such a case, L(x, y, t) periodically repeats identical sequences of N sublenses designed to have a unique focal plane for all the selected wavelengths. In this way, the strong chromatic aberration of a diffractive lens can be compensated. Thus, during the time Ti, the phase Fresnel lens function of Eq. (1) is displayed on the SLM while a uniform plane wave of wavelength λi impinges on the aperture of radius Ri. We calculate the Fresnel propagation of the amplitude distribution from the plane behind the lens to the focal plane following a procedure similar to that applied in our previous work. In good approximation, the amplitude of the central order in the focal plane, U00i, is

U00i(x, y) ≈ [πRi²/(λi f0)] [2J1(2π di r)/(2π di r)] ⊗ rect(x/Δx′, y/Δy′),  with r = √(x² + y²),    (6)
where di = Ri/(λi f0). Equation (6) is the convolution of a wavelength-dependent term with a wavelength-independent rectangle function. The variation of the central lobe of U00i with λ can therefore be analyzed through the variation of the first term of Eq. (6) with λ. The width of the central lobe of the Bessel J1 function in Eq. (6) is 1.22λi f0/Ri and its height is weighted by the preceding factor. Although the focal plane is the same for all λi, the central lobe of U00i shows different sizes and heights. As mentioned above, the PSFI-condition (Ri²/λi = constant) is the condition to obtain PSF profiles of the same maximum height for all the wavelengths λi. We demonstrated in our previous work that, if this condition is fulfilled, the dispersion is compensated along the optical axis, with depths of focus coincident for all the wavelengths, and the transversal dispersion is almost compensated in the focal plane.
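This behaviour admits a quick numerical check. The sketch below assumes the PSFI-condition has the form Ri²/λi = constant, so that the on-axis amplitude factor πRi²/(λi f0) is the same for every sublens; the wavelengths and maximum aperture are illustrative values.

```python
import numpy as np

def psfi_radii(lams, R_max):
    """Pupil radii satisfying R_i**2 / lam_i = constant, scaled so that
    the longest wavelength uses the full aperture R_max (m)."""
    lams = np.asarray(lams, dtype=float)
    return R_max * np.sqrt(lams / lams.max())

lams = np.array([500e-9, 550e-9, 600e-9, 650e-9])   # selected wavelengths (m)
R = psfi_radii(lams, R_max=0.01)        # 10 mm maximum radius (illustrative)
height = np.pi * R**2 / lams            # on-axis amplitude factor (per unit f0)
width = 1.22 * lams / R                 # central-lobe width (per unit f0)
# 'height' is constant across wavelengths; 'width' still grows as sqrt(lam).
```

The check makes the trade-off explicit: equalizing the PSF heights leaves a residual, slow sqrt(λ) growth of the central-lobe width, which is why the transversal dispersion is described as "almost" compensated.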
In comparison with the two spatial multiplexing schemes (mosaic and rotating multisector apertures) proposed in our former work, the time multiplexing scheme proposed here has some clear advantages. Since each sublens is allowed to occupy the whole aperture of the display (limited only by the radius Ri), the time multiplexing scheme combines two complementary advantages of the mosaic and rotating multisector apertures. It leads to distant first diffraction orders, which improves the field of view of the imaging system, as in the case of the rotating multisector aperture. In addition, it produces a sharp and narrow central lobe of the PSF, as in the case of the mosaic aperture.
However, two more aspects have to be considered in order to have a really effective achromatic lens. First, the total refreshing time needs to be much shorter than the time spent displaying sublenses, that is, Nε ≪ Σ_{i=1}^{N} Ti. Otherwise, the result would be a rather noisy signal. Second, to ensure a white focal spot, the chromaticity of the focal spot must be finely tunable. This must be optimized for color imaging tasks such as white balance and analysis of the color content conveyed by the system. The problem is not trivial because it depends at least on the following factors: the wavelength sampling; the efficiency of the spectral filter and the bandwidth for each selected wavelength; the efficiency and the phase and amplitude modulation configuration of the liquid crystal display for each selected wavelength; the spectral power distribution of the white light source; and the spectral sensitivity of the sensor. In fact, Márquez et al. already dealt with this problem and proposed adjusting the relative weights of the spatial multiplexing for the various wavelengths. In our case of time multiplexing, we propose to consider the time Ti for which the sublens li is displayed as another degree of freedom and to adjust it as the weight of the sublens within the sequence of N sublenses.
Concerning the time-varying spectral filter, placed either at the entrance or the exit pupil of the imaging system, we consider two different possibilities. One of them is a liquid crystal tunable filter, whose transmittance is electronically controllable, providing rapid and vibrationless selection of a wavelength and its bandwidth. The other one, less expensive but involving technical drawbacks, is a rotating multisector spectral filter whose center is shifted from the optical axis. In this second case, Ti is the time during which the rotating filter sector with transmittance at λi covers the whole aperture of the liquid crystal display (Fig. 2). For sublenses of different weights, the corresponding sectors will accordingly have different angular amplitudes in the design of the multisector spectral filter.
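For the rotating filter, the angular width of each spectral sector follows directly from the sublens weights. A minimal sketch (the unequal weights are the illustrative values used later in the experimental section):

```python
def sector_angles(weights):
    """Angular width (degrees) of each spectral sector of the rotating
    multisector filter, proportional to the display weight of its sublens."""
    total = float(sum(weights))
    return [360.0 * w / total for w in weights]

equal = sector_angles([1, 1, 1, 1])          # four 90-degree sectors
tuned = sector_angles([1, 1, 0.2, 0.8])      # unequal chromatic weights
```

Because each sector subtends an angle proportional to its weight, one full rotation of the filter automatically realizes the desired display times Ti.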
As an example, let us consider a sensor with a video rate of 20 frames per second, a liquid crystal display working as an SLM capable of displaying 180 frames per second, and a tunable spectral filter capable of switching from one wavelength to another at a maximum rate similar to that of the SLM. Let us assume three selected wavelengths in the blue, green and red regions of the visible spectrum (λB = 450 nm, λG = 550 nm, λR = 650 nm). The integration time of the sensor is T0 = 1/20 = 0.05 s, which is nine times the frame period of the SLM at its maximum rate. At this maximum rate, the SLM needs ε = (1/180) s to refresh the image. If we take the integration time T0 equal to the duration TS of the sublens sequence and display each sublens with the same weight, the result is Ti = (2/180) s. The ratio Ti/ε = 2 should yield an acceptable signal with relatively low noise.
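The timing budget of this worked example can be written as a small helper. This is a sketch that simply enforces T0 = TS = Nε + ΣTi and apportions the display times according to optional weights; the function name is our own.

```python
def display_times(T0, eps, N, weights=None):
    """Apportion the sensor integration time T0 (s) among N sublenses.

    Each sublens costs one refresh interval eps (s) plus its display time
    T_i, so that N * eps + sum(T_i) == T0.  Optional weights set the
    relative lifetimes T_i within the sequence.
    """
    weights = weights if weights is not None else [1.0] * N
    budget = T0 - N * eps                 # time left for actual display
    if budget <= 0:
        raise ValueError("refresh overhead exceeds the integration time")
    total = float(sum(weights))
    return [budget * w / total for w in weights]

# The example above: T0 = 1/20 s, eps = 1/180 s, N = 3 equal weights
# gives T_i = 2/180 s for every sublens, i.e. T_i / eps = 2.
```

The same helper covers the unequal-weight case discussed later, since the weights enter only through the final proportional split.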
3. Experimental results
We have experimentally evaluated the feasibility of the proposal. To this end, we aligned the following components on an optical bench:
A white light source: a xenon arc lamp. The light spot from the lamp was imaged using a macro-telephoto lens and filtered by a pinhole of 25 microns clear aperture to obtain a point source. The white light beam was collimated using an achromatic doublet of 100mm focal length so that a plane wave illuminated the programmable diffractive lens displayed on the SLM.
A tunable spectral filter: a VariSpec (CRI) liquid crystal tunable filter, with 20mm clear aperture and controlled by computer, was used in the experiment. It provided vibrationless selection of any wavelength in the visible range (400nm–720nm) with a fixed 10nm bandwidth. According to the technical specifications provided by the manufacturer, the response time, i.e., the time it takes to switch from one wavelength to another, was typically 50ms to 150ms. This characteristic did not allow us to reach a video rate for a reliable display of the sequence of the set of sublenses. The tunable filter also acts as a linear polarizer and, for this reason, it can replace one polarizer in the conventional setup with the SLM. The transmission efficiency of the filter was wavelength dependent: lower for short wavelengths and higher for long wavelengths.
A transmissive liquid crystal display working as an SLM: a model XGA2 (CRL) twisted-nematic liquid crystal display was used in the experiment. The device was characterized using a laser beam of 543nm, and the configuration corresponding to maximum phase modulation (1.2π) with minimum coupled amplitude modulation for this wavelength (Fig. 3) was selected to display the whole set of sublenses. Further technical specifications of interest are given in Table 1. Other optical components, such as a half-wave plate, were used to control the orientation of the polarized light at the entrance of the SLM.
A CCD monochrome camera placed at the focal plane of the time-multiplexed lens: 12-bit dynamic range, pixel pitch of 6.45µm, and software to control the image acquisition with high flexibility in setting the integration time (from a few milliseconds to several seconds).
The design focal length was f0 = 1250mm. This value is the longest reference focal, corresponding to the short extreme of the visible spectrum (400nm). We displayed sublenses corresponding to λ = 500, 550, 600, 650 nm. We made an additional assumption for the depths of the phase modulation at these wavelengths [Fig. 3(b)]: they were linear extensions of the plot obtained for λ = 543nm. This assumption was not acceptable for λ ≤ 480nm, for which the behaviour of the SLM showed significant deviations.
Since a complete 2π phase modulation could not be achieved in any case, pixels whose assigned phase value was out of range were reassigned a phase value within the range according to the minimum Euclidean distance (MED) principle [9,10]. The camera was placed at the focal plane of design for all the wavelengths and captured each PSF separately. In each case, the integration time of the camera was adjusted just to the level of saturation. The captured PSF was in fact the result of convolving the Bessel function of Eq. (6) with the magnified image of the point source. The magnification was approximately ×12 because it corresponds to the ratio of the focal lengths of the programmable lens (1250mm) and the collimating lens (100mm).
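The MED reassignment can be sketched as a nearest-neighbour search in the complex plane. The modulation curve below is hypothetical (unit amplitude, phase limited to [0, 1.2π], loosely mimicking the configuration of Fig. 3); the real display has a measured, amplitude-coupled modulation curve, so this is a sketch of the principle rather than of the actual device.

```python
import numpy as np

def med_reassign(desired_phase, achievable):
    """Replace each desired unit phasor exp(1j*phi) by the achievable complex
    modulation value at minimum Euclidean distance in the complex plane."""
    desired = np.exp(1j * np.asarray(desired_phase, dtype=float))
    dist = np.abs(desired[..., None] - achievable)   # |target - candidate|
    return achievable[np.argmin(dist, axis=-1)]

# Hypothetical modulation curve: 256 addressable levels, phase span 1.2*pi.
curve = np.exp(1j * np.linspace(0.0, 1.2 * np.pi, 256))
out = med_reassign([0.3, 2.0, 1.9 * np.pi], curve)   # last phase is out of range
```

An out-of-range phase such as 1.9π is mapped to the nearer end of the achievable arc (here, phase 0, since 2π and 0 coincide on the unit circle), which is the wrap-around behaviour the Euclidean metric naturally provides.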
The cross-section curves of the experimental PSF obtained for each wavelength in the focal plane of design are represented in Fig. 4(a). Each of them can be compared with the theoretical plot for λ = 550nm (solid black line) obtained for the PSFI-condition by computing the convolution of the Bessel function with the ×12 magnified image of the point source and adding the background level.
From this figure, two clear results emerge: first, the transversal chromatic aberration is compensated; second, there is very good agreement between theoretical and experimental results. For the sake of comparison, we programmed a lens with maximum aperture R and the same focal plane as before, but only for the wavelength λ = 550nm. We kept this lens constant during the successive illumination with the set of four wavelengths. Figure 4(b) shows the cross-section curves of the experimental PSF obtained in this case in the focal plane (which is a design focal plane only for λ = 550nm). As can be seen, only the light of wavelength λ = 550nm focuses in this plane. The rest of the wavelengths contributed very blurred, low-intensity patterns due to the strong, non-compensated chromatic aberration. Figures 4(c) and 4(d) show 2D pseudocolored images of the focal spots corresponding to the respective figures above [Figs. 4(a) and 4(b)]. They have been obtained by representing the four PSF images in the R, G, B channels according to the proportions (R, G, B) = (PSF500nm, PSF550nm, 0.2 PSF600nm + 0.8 PSF650nm) and merging them for visualization. In the case of Fig. 4(c), a spot close to white is obtained. The relative weights (1, 1, 0.2, 0.8) can be maintained, to a first approximation, to compute the lifetimes Ti of the sublenses within the total sequence time TS. These numbers must be considered just as an illustrative example of the feasibility of the procedure. Depending on the set of selected wavelengths, many different combinations of the relative weights can be found, and they should be more finely tuned to obtain a white spot. Finally, Figs. 4(e) and 4(f) show the distribution of intensity along the optical axis for each wavelength. In Fig. 4(e), the time-multiplexed programmable lens provides a nearly common depth of focus for the four wavelengths. Thus, it achieves a high compensation of longitudinal chromatic aberration.
In contrast, the lens programmed just for the wavelength λ = 550nm, with maximum aperture R and no compensation by time multiplexing, yields a distribution of intensity along the axis that varies strongly with wavelength, as mentioned above, in accordance with the expression f(λ) = (λ0/λ) f0. The results of Figs. 4(b), 4(d), and 4(f) are consistent with the high chromatic aberration, typical of diffractive lenses, that motivated this work.
We have proposed a method to dynamically compensate the chromatic aberration of a programmable phase Fresnel lens displayed on a liquid crystal device and working under broadband illumination. The proposal is based on time multiplexing a set of lenses, designed with a common focal plane for different wavelengths, and a tunable spectral filter that makes each sublens work almost monochromatically. The tunable filter and the sublens displayed by the SLM have to be synchronized. The sequence of sublenses is displayed within the integration time of the sensor and periodically repeated. The result has a central-order focalization uniquely located at the focal plane that is common to all the selected wavelengths. Transversal chromatic aberration of the polychromatic PSF has been reduced by properly adjusting the pupil size of each sublens according to the PSFI-condition. This condition also makes the depth-of-focus curves coincident for the selected wavelengths. Experimental results have been obtained by working with broadband white light, a tunable spectral filter and a programmable liquid crystal device working in a mostly-phase regime. The analysis of the results shows very good agreement with theoretical predictions in both transversal and longitudinal compensation of chromatic aberration. By a proper choice of relative weights, the achromaticity of the focal spot can be adjusted. These weights can be used to determine the lifetime of each sublens within the time taken to display the total sequence. Although the technical features of the devices used in the current experiment, particularly the tunable filter and the SLM, did not allow us to display the sequence of sublenses within the video rate, the feasibility of the method has nevertheless been demonstrated. The rapid technological progress of liquid crystal based optoelectronic devices will most likely render these difficulties minor in the near future.
It would then be possible to sample the visible spectrum at 20 or more selected wavelengths, to display the corresponding set of sublenses at higher rates, and thus to obtain a nearly full compensation of chromatic aberration in practice.
This research has been funded by the Spanish Ministerio de Educación y Ciencia and FEDER (project DPI2003-03931).
References and links
1. D. Faklis and G. M. Morris, “Broadband imaging with holographic lenses,” Opt. Eng. 28, 592–598 (1989).
2. P. Andrés, V. Climent, J. Lancis, G. Mínguez-Vega, E. Tajahuerce, and A. W. Lohmann, “All-incoherent dispersion- compensated optical correlator,” Opt. Lett. 24, 1331–1333 (1999). [CrossRef]
3. V. Laude, “Twisted-nematic liquid-crystal pixelated active lens,” Opt. Commun. 153, 134–152 (1998). [CrossRef]
6. J. Bescós, J. H. Altamirano, J. Santamaria, and A. Plaza, “Apodizing filters in colour imaging,” J. Optics (Paris) 17, 91–96 (1986). [CrossRef]
7. D. M. Cottrell, J. A. Davis, T. R. Hedman, and R. A. Lilly, “Multiple imaging phase-encoded optical elements, written as programmable spatial light modulators,” Appl. Opt. 29, 2505–2509 (1990). [CrossRef] [PubMed]
10. I. Moreno, J. Campos, C. Gorecki, and M. J. Yzuel, “Effects of amplitude and phase mismatching errors in the generation of a kinoform for pattern recognition,” Jpn. J. Appl. Phys. 34, 6423–6432 (1995). [CrossRef]