Abstract

Phase-only light modulation shows great promise for many imaging applications, including future projection displays. While images can be formed efficiently by avoiding per-pixel attenuation of light, most projection efforts utilizing phase-only modulators are based on holographic principles, which rely on interference of coherent laser light and a Fourier lens. Limitations of this type of approach include scaling to higher power as well as visible artifacts such as speckle and image noise.

We propose an alternative approach: operating the spatial phase modulator with broadband illumination by treating it as a programmable freeform lens. We describe a simple optimization approach for generating phase modulation patterns or freeform lenses that, when illuminated by a collimated, broadband light source, project a pre-defined caustic image onto a designated image plane. The optimization procedure is based on a simple geometric-optics image formation model and can be implemented in a computationally efficient manner. We perform simulations and show early experimental results suggesting that the implementation on a phase-only modulator can create structured light fields suitable, for example, for efficient illumination of a spatial light modulator (SLM) within a traditional projector. In an alternative application, the algorithm provides a fast way to compute geometries for static, freeform lens manufacturing.

© 2015 Optical Society of America

1. Introduction

In this work we propose to use phase-only spatial light modulation combined with broadband illumination for image formation. We achieve this by treating the spatial phase modulator as a programmable freeform lens, and devising a simple and computationally efficient optimization procedure to derive a lens surface or modulation pattern that will form a caustic representing a predefined target image when illuminated by a collimated, broadband light source. Our research draws from a number of different research fields including holography and goal-based caustics.

1.1. Holographic displays

Early holographic image formation models [1] have been adapted to create digital holograms [2]. Most of the common approaches require coherent light, which has several disadvantages. Coherent light can result in high-resolution artifacts, including screen speckle and diffraction on structures such as the discrete pixel grid of an SLM. On the other hand, using broadband light sources for illumination can eliminate screen speckle, but is not feasible for holography. Phase patterns used in holography typically contain high spatial frequencies, while low-frequency phase modulation patterns would help in mitigating diffraction artifacts. Finally, scaling holography-based approaches cost-efficiently to high power is currently not feasible due to their incompatibility with broadband light sources as well as the poor beam quality of high-power diode lasers.

1.2. Freeform lenses

Recently, there has been strong interest in freeform lens design, both for general lighting applications and to generate images from caustics [4]. In the latter application, we can distinguish between discrete optimization methods that work on a pixelated version of the problem (e.g. [5]), and those that optimize for continuous surfaces without obvious pixel structures (e.g. [6, 7, 8]). The current state of the art [8] defines an optimization problem on the gradients of the lens surface, which then have to be integrated into a height field. In addition to low computational performance, this leads to a tension between satisfying a data term (the target caustic image) and maintaining the integrability of the gradient field.

Our goal is to derive an efficient algorithm to compute freeform lens patterns for dynamic phase modulation on an SLM that produces images when illuminated with non-coherent, broadband light. While diffraction will occur off the SLM (or off any small, pixelated grid), we expect the resulting diffraction artifacts to be averaged out by the broadband nature of the illumination, resulting in a small amount of blur that can be modeled and compensated for [3]. In our work we derive a simple and efficient formulation in which we optimize directly for the phase function (i.e. the shape of the wavefront in the lens plane) without the need for a subsequent integration step. This is made possible by a new parameterization of the problem that allows us to express the optimization directly in the lens plane rather than the image plane.

2. Freeform lensing

2.1. Phase modulation image formation

To derive the image formation model for a phase modulator, we consider the geometry shown in Fig. 1: a lens plane and an image plane (screen) are parallel to each other at focal distance f. Collimated light is incident at the lens plane from the normal direction, but a phase modulator in the lens plane distorts the phase of the light, resulting in a curved phase function p(x) which corresponds to a local deflection of the light rays.

Fig. 1 Geometry for image formation model: Phase modulation in lens plane at focal distance f from image plane resulting in curvature of the wavefront (phase function p(x)).

With the paraxial approximation sin ϕ ≈ ϕ, we obtain the following equation for the mapping between position x on the lens plane and u on the image plane:

\[ u(x) \approx x + f \, \nabla p(x). \tag{1} \]

Using the above geometric mapping, we derive the intensity change associated with this distortion as follows. Let dx be a differential area on the lens plane, and let du = m(x) · dx be the differential area of the corresponding region in the image plane, where m(.) is a spatially varying magnification factor. The intensity on the image plane is then given as

\[ i(u(x)) = \frac{dx}{du} \, i_0 = \frac{1}{m(x)} \, i_0, \tag{2} \]
where i0 is the intensity of the collimated light incident at the lens plane. In the following we set i0 = 1 for simplicity of notation.

The magnification factor m(.) can be expressed in terms of the derivatives of the mapping between the lens and image planes (also compare Fig. 2):

\[ m(x) = \left( \partial_x u(x) \right) \times \left( \partial_y u(x) \right) \approx 1 + f \, \nabla^2 p(x). \tag{3} \]

This yields the following expression for the intensity distribution in the image plane:

\[ i\left( x + f \, \nabla p(x) \right) = \frac{1}{1 + f \, \nabla^2 p(x)}. \tag{4} \]

In other words, the magnification m, and therefore the intensity i(u) on the image plane, can be directly computed from the Laplacian of the scalar phase function in the lens plane.
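This relationship is easy to check numerically. The sketch below evaluates the forward model of Eq. (4) with finite differences; unit pixel spacing, and f expressed in the same pixel units, are assumptions of this illustration:

```python
import numpy as np

def caustic_intensity(p, f=1.0):
    """Geometric-optics forward model of Eq. (4): the image intensity is
    the reciprocal of 1 + f * Laplacian(p), evaluated with finite
    differences on a unit pixel grid."""
    py, px = np.gradient(p)                           # wavefront slopes
    lap = np.gradient(py, axis=0) + np.gradient(px, axis=1)
    return 1.0 / (1.0 + f * lap)
```

For instance, a quadratic phase p = a(x² + y²)/2 has constant Laplacian 2a, so the model predicts a uniform intensity 1/(1 + 2af): a simple lens uniformly concentrating or spreading light.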

Fig. 2 Intensity change due to distortion of a differential area dx.

2.2. Optimization problem

While it is possible to directly turn the image formation model from Eq. (4) into an optimization problem, we found that we can achieve better convergence by first linearizing the equation with a first-order Taylor approximation, which yields

\[ i\left( x + f \, \nabla p(x) \right) \approx 1 - f \, \nabla^2 p(x), \tag{5} \]
where the left-hand side can be interpreted as a warped image i_p(x) = i(x + f · ∇p(x)), for which the target intensity i(u) in the image plane has been warped backwards onto the lens plane using the geometric distortion u(x) produced by a known phase function p(x). With this parameterization, the continuous least-squares optimization problem for determining the desired phase function becomes
\[ \hat{p}(x) = \operatorname*{arg\,min}_{p(x)} \int \left( i_p(x) - 1 + f \, \nabla^2 p(x) \right)^2 dx. \tag{6} \]
This problem can be solved by iterating between updates to the phase function and updates to the warped image, as shown in Algorithm 1. The algorithm is initialized with the target image intensity. From this, the first phase pattern is computed, which in turn is used to warp the original target image intensity to provide a distorted intensity image for use in the next iteration.
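A minimal, non-optimized sketch of Algorithm 1 in Python; NumPy/SciPy stand in for the Matlab prototype, Conjugate Gradient replaces QMR, and unit pixel spacing (with f in pixel units) is an assumption of this sketch:

```python
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.sparse import diags, eye, kron
from scipy.sparse.linalg import cg

def neg_laplacian(h, w):
    """Negated 5-point Laplacian with Neumann boundaries (positive
    semi-definite, as required by Conjugate Gradient)."""
    def d2(n):
        main = 2.0 * np.ones(n)
        main[0] = main[-1] = 1.0          # Neumann boundary rows
        off = -np.ones(n - 1)
        return diags([off, main, off], [-1, 0, 1])
    return (kron(eye(h), d2(w)) + kron(d2(h), eye(w))).tocsr()

def freeform_phase(target, f=1.0, iters=5):
    """Algorithm 1: alternate phase updates (a linear least-squares solve
    with the Laplacian as system matrix) and backward warps of the target."""
    h, w = target.shape
    i0 = target / target.mean()           # unit mean: flux conservation
    A = neg_laplacian(h, w)
    ip = i0.copy()
    p = np.zeros(h * w)
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    for _ in range(iters):
        rhs = ((ip - 1.0) / f).ravel()    # -(1 - ip)/f for the negated system
        rhs -= rhs.mean()                 # compatibility with the singular matrix
        p, _ = cg(A, rhs, x0=p, maxiter=500)
        py, px = np.gradient(p.reshape(h, w))
        # backward warp: sample the target at u(x) = x + f * grad p(x)
        ip = map_coordinates(i0, [yy + f * py, xx + f * px],
                             order=1, mode='nearest')
    return p.reshape(h, w)
```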


Algorithm 1. Freeform lens optimization

After discretization of i(.) and p(.) into pixels, the phase update corresponds to solving a linear least-squares problem with a discrete Laplace operator as the system matrix. We can solve this positive semi-definite system using a number of different algorithms, including Conjugate Gradient, BiCGSTAB, and Quasi-Minimal Residual (QMR). The image warp corresponds to a texture mapping operation and can be implemented on a GPU. We implement a non-optimized prototype of the algorithm in the Matlab programming environment using QMR as the least-squares solver. Table 1 shows run times for Algorithm 1 and a selection of artificial and natural test images at different resolutions. It was executed on a single core of a mobile Intel Core i7 clocked at 1.9 GHz with 8 GByte of memory. We note that, due to the continuous nature of the resulting lens surfaces, computing the phase at resolutions as low as 128 × 64 is sufficient for applications such as structured illumination in a projector. We also note that the algorithm could, with slight modifications, be rewritten as a convolution in the Fourier domain, which would result in orders-of-magnitude shorter computation times for single-threaded CPU implementations and even further speed-ups on parallel hardware such as GPUs. With these improvements, computations at, for example, 1920 × 1080 resolution will be possible at video frame rates. In addition, both the contrast and the sharpness (effective resolution) of the resulting caustic image benefit from higher working resolutions.
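The Fourier-domain variant mentioned above can be illustrated with a discrete cosine transform, which diagonalizes the Neumann-boundary Laplacian; this is our sketch of the idea, not the authors' implementation:

```python
import numpy as np
from scipy.fft import dctn, idctn

def poisson_dct(b):
    """Solve Laplacian(p) = b with Neumann boundaries in O(n log n) via
    a type-II discrete cosine transform, whose basis vectors are the
    eigenvectors of the Neumann Laplacian."""
    h, w = b.shape
    B = dctn(b, type=2, norm='ortho')
    # eigenvalues of the 1D Neumann Laplacian: 2 cos(pi k / n) - 2
    ky = 2.0 * np.cos(np.pi * np.arange(h) / h) - 2.0
    kx = 2.0 * np.cos(np.pi * np.arange(w) / w) - 2.0
    denom = ky[:, None] + kx[None, :]
    denom[0, 0] = 1.0                     # avoid division by zero at DC
    P = B / denom
    P[0, 0] = 0.0                         # fix the free additive constant
    return idctn(P, type=2, norm='ortho')
```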


Table 1. Run times of Algorithm 1 using five iterations for a set of different test images and image resolutions.

The progression of this algorithm is depicted in Fig. 3. We show the undistorted target image, from which we optimize an initial phase function. Using this phase function, we update the target image in the lens plane by backward warping the image-plane target. This process increasingly distorts the target image for the modulator plane as the phase function converges. The backward warping step implies a non-convex objective function, but we empirically find that we achieve convergence in only a small number of iterations (5–10).

Fig. 3 Algorithm progression for six iterations: the target i gets progressively distorted by backward warping onto the lens plane (i_p^(k)) as the phase function p^(k) converges towards a solution. The 3D graphic depicts the final lens height field.

3. Simulation results

We evaluate the performance of our algorithm using two different simulation techniques: a common computer graphics ray tracer, and a wavefront model based on Huygens' principle that simulates diffraction effects at a spectral resolution of 5 nm.

3.1. Ray tracer simulation

For the ray tracer simulation we use the LuxRender framework, an unbiased, physically-based rendering engine for the Blender tool. The setup of the simulation is quite straightforward: the freeform lens is imported as a mesh, and material properties are set to mimic a physical lens manufactured out of acrylic.

A distant spot light provides approximately collimated illumination, a white surface with Lambertian reflectance properties serves as screen. The linear, high dynamic range data output from the simulation is tone mapped for display. The results (see Fig. 4) visually match the target well.

Fig. 4 LuxRender simulation results of a caustic image caused by an acrylic freeform lens. The inset shows the absolute intensity difference between the simulated and original image, where the original image is encoded in the interval [0–1], and 0 in the difference map (green) means no difference. There are three possible sources of error: reflections off the edges of the physically thick lens (vertical and horizontal lines), misalignment and scaling of the output relative to the original (manual alignment), and the nature of the light source (not perfectly collimated).

3.2. Physical optics simulation

To analyze possible diffraction effects that cannot be modeled in a ray tracer based on geometric optics principles, we perform a wave optics simulation based on Huygens' principle. We compute a freeform lens surface for a binary test image (see Fig. 5) and illuminate it in simulation with light from a common 3-LED (RGB) white light source (see Fig. 6, dotted line) in 5 nm steps. We integrate over the spectrum using the luminous efficiency of the LED and the spectral sensitivity curves of the CIE color matching functions (see Fig. 6, solid line), as well as a 3×3 transformation matrix and a 2.2 gamma, to map tristimulus values to display/print RGB primaries for each LED die and for the combined white light source (see Fig. 7). As expected, the wavefront simulation reveals chromatic aberrations within the pattern and diffraction off the edge of the modulator, which can be (partially) mitigated, for example, by computing separate lens surfaces for each of R, G, and B.
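The spectral integration step can be sketched as follows; all input arrays (the sampled spectrum, resampled color matching functions, and the XYZ-to-RGB matrix) are placeholders for the CIE 1931 tables and the display characterization used here:

```python
import numpy as np

def spectrum_to_rgb(wl, spd, cmfs, M, gamma=2.2):
    """Integrate a spectral power distribution `spd`, sampled at uniform
    wavelength steps `wl` (e.g. every 5 nm), against color matching
    functions `cmfs` (3 rows: x-bar, y-bar, z-bar), then map the XYZ
    tristimulus values to display RGB with a 3x3 matrix `M` and encode
    with a 2.2 gamma."""
    dw = wl[1] - wl[0]
    xyz = cmfs @ spd * dw                 # Riemann sum over the spectrum
    rgb = np.clip(M @ xyz, 0.0, None)     # discard out-of-gamut negatives
    rgb = rgb / max(rgb.max(), 1e-12)     # normalize before gamma encoding
    return rgb ** (1.0 / gamma)
```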

Fig. 5 Binary test pattern (left) and resulting lens height field (right) used in the wave optics simulation.

Fig. 6 Spectra of standard white 3-LED (RGB) [9] (dotted graph) and the CIE standard observer color matching functions (solid graph) used in the wave optics simulation.

Fig. 7 Wave optics simulation for a test lens using standard white 3-LED (RGB) spectra. The simulation was performed at 5 nm intervals and mapped to an RGB color space for print.

4. Experimental results

In addition to the simulations, we report on early experimental results using the computed freeform lenses in a static (acrylic, physical lens) and programmable (dynamically addressable phase modulator) fashion.

4.1. Static lenses

For refractive lens surfaces, the phase function p(x) is converted to a geometric model describing the lens shape. We design a lens that is flat on one side and has a freeform height field h(x) on the other side. In the (x, z) plane, the deflection angle ϕ is related to the incident (θ_i) and exitant (θ_o) angles at the height field as follows:

\[ \frac{\partial p(x)}{\partial x} \approx \phi = \theta_o - \theta_i. \tag{7} \]
The analogous relationship holds in the (y, z) plane. In addition, the lens material has a refractive index of n. Using Snell's law, and again the paraxial approximation, we obtain
\[ \frac{1}{n} = \frac{\sin \theta_i}{\sin \theta_o} \approx \frac{\theta_i}{\theta_o}. \tag{8} \]

Using Eqs. (7) and (8), as well as θ_i ≈ ∂h(x)/∂x, we can derive the lens shape as

\[ h(x) = h_0 + \frac{1}{n - 1} \, p(x), \tag{9} \]
where h0 is a base thickness for the lens. Figure 8 shows a prototype of a 3D printed (42 μm resolution) lens. Improved results and longer focal lengths can be achieved using other fabrication methods [7].
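The conversion from phase to lens shape is a direct transcription of the relation h(x) = h0 + p(x)/(n−1) above; n = 1.49 (an approximate value for acrylic/PMMA) and the base thickness are illustrative assumptions:

```python
import numpy as np

def phase_to_height(p, n=1.49, h0=2.0):
    """Convert the optimized phase function p(x) into a refractive
    freeform height field: h(x) = h0 + p(x) / (n - 1). n = 1.49 is an
    approximate refractive index for acrylic; h0 is the base thickness,
    in the same units as p."""
    return h0 + p / (n - 1.0)
```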

Fig. 8 3D printed refractive lens (left), broadband LED spotlight and rear-projection screen with image (right). Differences from the simulation results in Fig. 4 partially stem from increased beam divergence compared to an ideal light source, as well as limitations of the manufacturing process (3D printing).

4.2. Implementation on spatial light modulators

The phase function p(x) can be directly implemented on a phase-only modulator: in our experiment, an LCoS-based SLM (PLUTO, HOLOEYE Photonics AG) with a pixel pitch of 8.0 μm and a maximum phase retardation of 2π. Since most high-contrast images, for focal lengths reasonably far from the modulator, require lens thicknesses of multiple wavelengths, we wrap the phase pattern from Fig. 3 at multiples of 2π, comparable to the grooves of a Fresnel lens (see Fig. 9, left). A broadband white LED spotlight provides collimated light on the reflective phase modulator, and we observe the resulting image on a small Lambertian screen (Fig. 9, right).
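The phase wrapping step can be sketched in one line; the input is assumed to be the phase function expressed in radians:

```python
import numpy as np

def wrap_phase(p):
    """Wrap the phase function at multiples of 2*pi (analogous to the
    grooves of a Fresnel lens) so it fits a modulator whose maximum
    retardation is 2*pi."""
    return np.mod(p, 2.0 * np.pi)
```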

Fig. 9 Left: Wrapped phase function p(x) from Fig. 3. Right: Experimental test set-up using a phase modulator, a partially collimated, low-power white LED light source and a small front projection screen. Light from the LED reflects off the phase modulator at an angle of 5 degrees and onto a projection screen at the focal distance of our lens computation. The resulting caustic image resembles the target image, but appears slightly blurry, distorted and at lower contrast compared to the simulations. This can be attributed to the broadband nature of the light (the image would be sharper with separate lenses for red, green and blue LED light) and the degree of collimation of the incoming beam.

5. Discussion

We introduce a novel, computationally inexpensive method to compute freeform lenses and propose a new implementation for applications requiring dynamic updates. Wavefront and ray-tracer simulations as well as experiments show promising results. However, several improvements in computation time, contrast of the resulting caustic images, and sharpness are possible. A GPU implementation of the algorithm will shorten its runtime, making more iterations and higher working resolutions possible; we anticipate this will further improve the contrast and sharpness of the results. Our current implementation produces smooth phase patterns. As part of future work, we plan to investigate whether allowing steep gradients in the phase pattern (e.g. sharp ridges and valleys) would lead to higher contrast results for certain images. The wavefront simulation results were computed for three separate phase patterns for the spectra of red, green, and blue LEDs, then combined into one white light field, an implementation common in projection systems. Due to hardware availability, however, the experiments were performed using a single broadband LED for both the physical lens and the phase modulator implementation. Using separate red, green, and blue LEDs for illumination in the experiments would produce images with higher contrast and sharper edges. Finally, better light collimation optics will further improve the results.

Acknowledgments

Research reported in this publication was supported by MTT Innovation Inc., NSERC, and the King Abdullah University of Science and Technology (KAUST).

References and links

1. L. Lesem, P. Hirsch, and J. Jordan, "The kinoform: a new wavefront reconstruction device," IBM J. Res. Dev. 13, 150–155 (1969).

2. P. R. Haugen, H. Bartelt, and S. K. Case, "Image formation by multifacet holograms," Appl. Opt. 22, 2822–2829 (1983).

3. G. Damberg, H. Seetzen, G. Ward, W. Heidrich, and L. Whitehead, "3.2: High dynamic range projection systems," in SID Symposium Digest of Technical Papers (Wiley Online Library, 2007), vol. 38, pp. 4–7.

4. M. Berry, "Oriental magic mirrors and the Laplacian image," Eur. J. Phys. 27, 109 (2006).

5. M. Papas, W. Jarosz, W. Jakob, S. Rusinkiewicz, W. Matusik, and T. Weyrich, "Goal-based caustics," in Computer Graphics Forum (Wiley Online Library, 2011), vol. 30, pp. 503–511.

6. T. Kiser, M. Eigensatz, M. M. Nguyen, P. Bompas, and M. Pauly, Architectural Caustics - Controlling Light with Geometry (Springer, 2013).

7. Y. Schwartzburg, R. Testuz, A. Tagliasacchi, and M. Pauly, "High-contrast computational caustic design," ACM T. Graphic. 33, 74 (2014).

8. Y. Yue, K. Iwasaki, B.-Y. Chen, Y. Dobashi, and T. Nishita, "Pixel art with refracted light by rearrangeable sticks," in Computer Graphics Forum (Wiley Online Library, 2012), vol. 31, pp. 575–582.

9. Y. Ohno, "Color rendering and luminous efficacy of white LED spectra," in Optical Science and Technology, the SPIE 49th Annual Meeting (International Society for Optics and Photonics, 2004), pp. 88–98.
