Phase-only light modulation shows great promise for many imaging applications, including future projection displays. While images can be formed efficiently by avoiding per-pixel attenuation of light, most projection efforts utilizing phase-only modulators are based on holographic principles, which rely on the interference of coherent laser light and a Fourier lens. Limitations of this type of approach include scaling to higher power as well as visible artifacts such as speckle and image noise.
We propose an alternative approach: operating the spatial phase modulator with broadband illumination by treating it as a programmable freeform lens. We describe a simple optimization approach for generating phase modulation patterns, or freeform lenses, that, when illuminated by a collimated, broadband light source, project a pre-defined caustic image onto a designated image plane. The optimization procedure is based on a simple geometric-optics image formation model and can be implemented in a computationally efficient manner. We perform simulations and show early experimental results suggesting that an implementation on a phase-only modulator can create structured light fields suitable, for example, for efficient illumination of a spatial light modulator (SLM) within a traditional projector. In an alternative application, the algorithm provides a fast way to compute geometries for static freeform lens manufacturing.
© 2015 Optical Society of America
In this work we propose to use phase-only spatial light modulation combined with broadband illumination for image formation. We achieve this by treating the spatial phase modulator as a programmable freeform lens, and devising a simple and computationally efficient optimization procedure to derive a lens surface or modulation pattern that forms a caustic representing a predefined target image when illuminated by a collimated, broadband light source. Our research draws from a number of different research fields, including holography and goal-based caustics.
1.1. Holographic displays
Early holographic image formation models have been adapted to create digital holograms. Most of the common approaches require coherent light, which has several disadvantages. Coherent light can result in high-resolution artifacts, including screen speckle and diffraction on structures such as the discrete pixel grid of an SLM. Using broadband light sources for illumination, on the other hand, can eliminate screen speckle, but is not feasible for holography. Phase patterns used in holography typically contain high spatial frequencies, whereas low-frequency phase modulation patterns would help in mitigating diffraction artifacts. Finally, scaling holography-based approaches cost-efficiently to high power is currently not feasible, due both to their incompatibility with broadband light sources and to the poor beam quality of high-power laser diodes.
1.2. Freeform lenses
Recently, there has been strong interest in freeform lens design, both for general lighting applications and to generate images from caustics. In the latter application, we can distinguish between discrete optimization methods that work on a pixelated version of the problem, and those that optimize for continuous surfaces without obvious pixel structures (e.g. [6, 7, 8]). The current state of the art defines an optimization problem on the gradients of the lens surface, which then have to be integrated into a height field. In addition to low computational performance, this leads to a tension between satisfying a data term (the target caustic image) and maintaining the integrability of the gradient field.
Our goal is to derive an efficient algorithm for computing freeform lens patterns for dynamic phase modulation on an SLM that produce images when illuminated with non-coherent, broadband light. While diffraction will occur off the SLM (or off any small, pixelated grid), we expect the resulting diffraction artifacts to be averaged out by the broadband nature of the illumination, resulting in a small amount of blur that can be modeled and compensated for. In our work we derive a simple and efficient formulation in which we optimize directly for the phase function (i.e. the shape of the wavefront in the lens plane) without the need for a subsequent integration step. This is made possible by a new parameterization of the problem that allows us to express the optimization directly in the lens plane rather than the image plane.
2. Freeform lensing
2.1. Phase modulation image formation
To derive the image formation model for a phase modulator, we consider the geometry shown in Fig. 1: a lens plane and an image plane (screen) are parallel to each other at focal distance f. Collimated light is incident at the lens plane from the normal direction, but a phase modulator in the lens plane distorts the phase of the light, resulting in a curved phase function p(x) which corresponds to a local deflection of the light rays.
With the paraxial approximation sin ϕ ≈ ϕ, we obtain the following equation for the mapping between a point x on the lens plane and the corresponding point u on the image plane:

u(x) = x + f ∇p(x). (1)
Using the above geometric mapping, we derive the intensity change associated with this distortion as follows. Let dx be a differential area on the lens plane, and let du = m(x) · dx be the differential area of the corresponding region in the image plane, where m(·) is a spatially varying magnification factor. The intensity on the image plane is then given as

i(u(x)) = i0(x) / m(x), (2)

where i0(x) denotes the intensity of the collimated light incident on the lens plane.
The magnification factor m(·) can be expressed in terms of the derivatives of the mapping between the lens and image planes (also compare Fig. 2):

m(x) = det(∂u/∂x) ≈ 1 + f Δp(x), (3)

where the first-order approximation drops terms that are quadratic in f.
This yields the following expression for the intensity distribution in the image plane:

i(u(x)) = i0(x) / (1 + f Δp(x)). (4)
In other words, the magnification m, and therefore the intensity i(u) on the image plane, can be computed directly from the Laplacian of the scalar phase function in the lens plane.
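As an illustrative sketch of this geometric forward model, the image-plane intensity can be predicted from the discrete Laplacian of the phase. The uniform incident intensity i0, the unit pixel pitch, and the periodic boundary handling are simplifying assumptions for the example, not details of the authors' implementation:

```python
import numpy as np

def caustic_intensity(p, f=1.0, i0=1.0):
    """Geometric-optics forward model: predict the image-plane intensity
    from the Laplacian of the lens-plane phase, i(u(x)) = i0 / (1 + f*lap p).
    Uses a 5-point stencil with periodic boundaries and unit pixel pitch."""
    lap = (np.roll(p, 1, axis=0) + np.roll(p, -1, axis=0) +
           np.roll(p, 1, axis=1) + np.roll(p, -1, axis=1) - 4.0 * p)
    return i0 / (1.0 + f * lap)
```

A flat phase (zero Laplacian) leaves the collimated beam undeflected, so the model returns the uniform incident intensity everywhere.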
2.2. Optimization problem
While it is possible to directly turn the image formation model from Eq. (4) into an optimization problem, we found that we can achieve better convergence by first linearizing the equation with a first-order Taylor approximation, which yields Algorithm 1. The algorithm is initialized with the target image intensity. From this, the first phase pattern is computed, which in turn is used to warp the original target image intensity, providing a distorted intensity image for use in the next iteration.
After discretization of i(·) and p(·) into pixels, the phase update corresponds to solving a linear least squares problem with a discrete Laplace operator as the system matrix. We can solve this positive semi-definite system using a number of different algorithms, including Conjugate Gradient, BiCGSTAB, and Quasi-Minimal Residual (QMR). The image warp corresponds to a texture mapping operation and can be implemented on a GPU. We implemented a non-optimized prototype of the algorithm in the Matlab programming environment using QMR as the least squares solver. Table 1 shows run times for Algorithm 1 on a selection of artificial and natural test images at different resolutions, executed on a single core of a mobile Intel Core i7 clocked at 1.9 GHz with 8 GByte of memory. We note that, due to the continuous nature of the resulting lens surfaces, phase resolutions as low as 128 × 64 are sufficient for applications such as structured illumination in a projector. We also note that the algorithm could, with slight modifications, be rewritten as a convolution in the Fourier domain, which would shorten computation times by orders of magnitude for single-threaded CPU implementations, with even further speed-ups on parallel hardware such as GPUs. With these improvements, computation at, for example, 1920 × 1080 resolution will be possible at video frame rates. In addition, both the contrast of the caustic image and its sharpness (effective resolution) benefit from higher working resolutions.
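As an illustrative sketch (not the authors' Matlab prototype), a single phase-update step can be posed as a Poisson-type solve. Here we use SciPy's Conjugate Gradient on a Dirichlet-boundary Laplacian and assume a uniform incident intensity equal to the mean of the target; the paper's prototype uses QMR instead:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def laplacian_2d(n):
    """5-point discrete Laplacian on an n x n grid with Dirichlet boundaries."""
    d = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))
    I = sp.identity(n)
    return sp.kron(I, d) + sp.kron(d, I)

def solve_phase(i_target, f=1.0):
    """One phase-update step: solve f * lap(p) = i0/i - 1 for the phase p,
    where i0 is the (assumed uniform) incident intensity. We negate the
    system so CG sees a symmetric positive definite matrix."""
    n = i_target.shape[0]
    i0 = i_target.mean()                     # uniform illumination assumption
    rhs = (i0 / i_target - 1.0).ravel() / f
    L = laplacian_2d(n)
    p, info = cg(-L, -rhs)                   # -L is symmetric positive definite
    assert info == 0, "CG did not converge"
    return p.reshape(n, n)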
The progression of this algorithm is depicted in Fig. 3. We show the undistorted target image, from which we optimize an initial phase function. Using this phase function, we update the target image in the lens plane by backward warping the image-plane target. This process increasingly distorts the target image for the modulator plane as the phase function converges. The backward warping step implies a non-convex objective function, but we empirically find that we achieve convergence in only a small number of iterations (5–10).
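The backward warping step amounts to a bilinear texture lookup along the lens-to-image mapping. This minimal Python version, with pixel units and the focal scaling folded into f, is an illustration rather than the GPU implementation described above:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_target(i_target, p, f=1.0):
    """Backward-warp the image-plane target into the lens plane: each
    lens-plane pixel x samples the target at its destination
    u(x) = x + f * grad p(x), using bilinear interpolation."""
    gy, gx = np.gradient(p)
    ny, nx = i_target.shape
    yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
    coords = np.array([yy + f * gy, xx + f * gx])
    return map_coordinates(i_target, coords, order=1, mode='nearest')
```

With a flat phase the mapping is the identity and the target is returned unchanged, which makes the routine easy to sanity-check.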
3. Simulation results
We evaluate the performance of our algorithm using two different simulation techniques: a common computer graphics ray tracer, and a wavefront model based on the Huygens–Fresnel principle that simulates diffraction effects at a spectral resolution of 5 nm.
3.1. Ray tracer simulation
For the ray tracer simulation we use the LuxRender framework, an unbiased, physically based rendering engine for the Blender modeling tool. The setup of the simulation is straightforward: the freeform lens is imported as a mesh, and material properties are set to mimic a physical lens manufactured from acrylic.
A distant spot light provides approximately collimated illumination, a white surface with Lambertian reflectance properties serves as screen. The linear, high dynamic range data output from the simulation is tone mapped for display. The results (see Fig. 4) visually match the target well.
3.2. Physical optics simulation
To analyze possible diffraction effects that cannot be modeled in a ray tracer based on geometric optics principles, we perform a wave optics simulation based on the Huygens–Fresnel principle. We compute a freeform lens surface for a binary test image (see Fig. 5) and illuminate it in simulation with light from a common 3-LED (RGB) white light source (see Fig. 6, dotted line) in 5 nm steps. We integrate over the spectrum using the luminous efficiency of the LED and the spectral sensitivity curves of the CIE color matching functions (see Fig. 6, solid line), as well as a 3×3 transformation matrix and a 2.2 gamma, to map tristimulus values to display/print RGB primaries for each LED die and for the combined white light source (see Fig. 7). As expected, the wavefront simulation reveals chromatic aberrations within the pattern, as well as diffraction off the edge of the modulator; both can be (partially) mitigated, for example, by computing separate lens surfaces for each of R, G and B.
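As a minimal sketch of one scalar-diffraction propagation step, the following uses the angular spectrum method, a standard scalar wave-optics technique; the authors' exact simulation code is not specified, so parameter names and the band-limiting choice here are assumptions:

```python
import numpy as np

def angular_spectrum_propagate(u0, wavelength, z, dx):
    """Propagate a sampled scalar field u0 a distance z using the angular
    spectrum method: filter the field's spatial-frequency spectrum with the
    free-space transfer function exp(i*kz*z), dropping evanescent waves."""
    ny, nx = u0.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)   # band-limit: keep propagating modes
    return np.fft.ifft2(np.fft.fft2(u0) * H)
```

Repeating this per 5 nm wavelength sample and summing intensities reproduces the kind of broadband integration described above; a uniform plane wave simply acquires a global phase, which provides a convenient correctness check.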
4. Experimental results
In addition to the simulations, we report on early experimental results using the computed freeform lenses in a static (acrylic, physical lens) and programmable (dynamically addressable phase modulator) fashion.
4.1. Static lenses
For refractive lens surfaces, the phase function p(x) is converted to a geometric model describing the lens shape. We design a lens that is flat on one side and has a freeform height field h(x) on the other side. In the (x, z) plane, the deflection angle ϕ is related to the incident (θi) and exitant (θo) angles at the height field by Snell's law:

n sin θi = sin θo, with ϕ = θo − θi, (5)

where n is the refractive index of the lens material.

Figure 8 shows a prototype of a 3D printed (42 μm resolution) lens. Improved results and longer focal lengths can be achieved using other fabrication methods.
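Under the thin-element approximation, the conversion from phase to lens shape reduces to a scaling: an optical path difference p (in length units) corresponds to a height h with p = (n − 1) h. The following one-liner illustrates this simplified conversion (the paraxial limit of the Snell's-law relation above); the acrylic index n = 1.49 is a typical value, assumed here:

```python
import numpy as np

def phase_to_height(p_opd, n=1.49):
    """Thin-element approximation: convert an optical path difference map
    (length units) to a lens height field via OPD = (n - 1) * h.
    n = 1.49 is a typical refractive index for acrylic (assumption)."""
    return p_opd / (n - 1.0)
```

For steep surface slopes the full Snell's-law relation should be used instead of this paraxial shortcut.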
4.2. Implementation on spatial light modulators
The phase function p(x) can be directly implemented on a phase-only modulator; in our experiment we use an LCoS-based SLM with a pixel pitch of 8.0 μm and a maximum phase retardation of 2π (the PLUTO SLM by HOLOEYE Photonics AG). Since most high-contrast images, for focal lengths reasonably far away from the modulator, require a lens thickness of multiple wavelengths, we wrap the phase from Fig. 3 at multiples of 2π, comparable to the grooves of a Fresnel lens (see Fig. 9, left). A broadband, white LED spot light provides collimated light on the reflective phase modulator, and we observe the resulting image on a small Lambertian screen (Fig. 9, right).
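The wrapping step itself is a simple modulo operation, sketched below; because the wrapped pattern differs from the smooth one only by multiples of 2π, the modulated wavefront is unchanged:

```python
import numpy as np

def wrap_phase(p):
    """Wrap a smooth phase profile into [0, 2*pi), Fresnel-lens style, so it
    fits a modulator with a maximum retardation of 2*pi. The complex
    transmittance exp(i*p) is preserved exactly."""
    return np.mod(p, 2.0 * np.pi)
```

Note that this equivalence holds only at the design wavelength; for broadband light the 2π wrap is exact at one wavelength and approximate elsewhere, which contributes to the chromatic effects discussed above.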
We introduce a novel, computationally inexpensive method to compute freeform lenses and propose a new implementation for applications requiring dynamic updates. Wavefront and ray-tracer simulations, as well as experiments, show promising results. However, several improvements in computation time, contrast of the resulting caustic images, and sharpness are possible. An implementation of the algorithm on the GPU will allow for shorter runtimes, making more iterations and higher working resolutions possible; we anticipate that this will further improve the contrast and sharpness of the results. Our current implementation of the algorithm produces smooth phase patterns. As part of future work, we plan to investigate whether allowing steep gradients in the phase pattern (e.g. sharp ridges and valleys) would lead to higher-contrast results for certain images. The wavefront simulation results were computed for three separate phase patterns for the spectra of red, green and blue LEDs, then combined into one white light field, an implementation common in projection systems. The experiments, however, due to limited hardware availability, were performed using a single broadband LED for both the physical lens and the phase modulator implementation. Using separate red, green and blue LEDs in the experiments would produce images with higher contrast and sharper edges. Finally, better light collimation optics will further improve the results.
Research reported in this publication was supported by MTT Innovation Inc., NSERC, and the King Abdullah University of Science and Technology (KAUST).
References and links
1. L. Lesem, P. Hirsch, and J. Jordan, “The kinoform: a new wavefront reconstruction device,” IBM J. Res. Dev. 13, 150–155 (1969). [CrossRef]
3. G. Damberg, H. Seetzen, G. Ward, W. Heidrich, and L. Whitehead, “3.2: High dynamic range projection systems,” in “SID Symposium Digest of Technical Papers,” (Wiley Online Library, 2007), vol. 38, pp. 4–7.
4. M. Berry, “Oriental magic mirrors and the Laplacian image,” Eur. J. Phys. 27, 109 (2006). [CrossRef]
5. M. Papas, W. Jarosz, W. Jakob, S. Rusinkiewicz, W. Matusik, and T. Weyrich, “Goal-based caustics,” in “Computer Graphics Forum,” (Wiley Online Library, 2011), vol. 30, pp. 503–511.
6. T. Kiser, M. Eigensatz, M. M. Nguyen, P. Bompas, and M. Pauly, Architectural Caustics - Controlling Light with Geometry (Springer, 2013).
7. Y. Schwartzburg, R. Testuz, A. Tagliasacchi, and M. Pauly, “High-contrast computational caustic design,” ACM T. Graphic. 33, 74 (2014). [CrossRef]
8. Y. Yue, K. Iwasaki, B.-Y. Chen, Y. Dobashi, and T. Nishita, “Pixel art with refracted light by rearrangeable sticks,” in “Computer Graphics Forum,” (Wiley Online Library, 2012), vol. 31, pp. 575–582.
9. Y. Ohno, “Color rendering and luminous efficacy of white LED spectra,” in “Optical Science and Technology, the SPIE 49th Annual Meeting,” (International Society for Optics and Photonics, 2004), pp. 88–98.