Images displayed by holographic methods on phase-only light modulators inevitably suffer from speckle noise. It is partly caused by multiple uncontrolled interferences between laser light rays that form adjacent pixels of the image while being in random phase states. In this work an experimental proof of concept of an almost speckle-free projection method is presented, which introduces a spatial separation of the image pixels and thus eliminates the spurious interferences. A single displayed sub-frame consists of separated light spots with a very low intensity error. Sub-frames with different sampling offsets are then displayed sequentially to produce a non-fragmented final color image.
© 2013 Optical Society of America
In the domain of portable display devices a strong trend toward miniaturization and high energy efficiency is observed throughout the world. One of its results is the recent development of numerous laser pico-projection solutions, based on beam scanning and on holography [2,3]. This work focuses on the holographic approach, since the beam-scanning technique has persistent difficulties with laser eye-safety and strong image flickering. Holographic projection, on the other hand, provides stable images formed by strongly divergent wavefronts, which ensures completely safe operation.
From the point of view of energy efficiency, the use of 8-bit addressable Spatial Light Modulators (SLM) based on the LCoS (Liquid Crystal on Silicon) technology allows one, in principle, to project a color image with almost 100% efficiency. This is the result of a very simple optical setup and of image formation based on the diffractive redirection of light instead of selective absorption, which causes light leakage and a drop in the contrast ratio. Additionally, the use of laser light sources gives a good overall wall-plug efficiency, at the price of inevitable speckle noise.
The removal of speckle in laser projection is under intense investigation due to the high demand of the display industry for high-quality lens-less imaging. Numerous methods have been proposed, involving moving parts, multi-mode dispersion, and purely electronic optimization, the last aimed mainly at minimizing the specific noise resulting from the phase-only modulation of light. The method presented here belongs to the last category.
The image projected on a screen is composed of pixels, or image points, which are formed by the redirection (diffraction) of light by the SLM under coherent illumination. The uncontrolled interference between closely spaced object points leads to an unwanted variation of intensity due to the random phase states of the overlapping light fields. This common disadvantage of coherent light is illustrated in Fig. 1, which shows the result of a theoretical addition of two wavefronts forming two closely packed object points. Three cases are shown: incoherent illumination in Fig. 1(b), coherent illumination with a uniform phase of all light rays forming the object points in Fig. 1(c), and a constructive interference of the light rays forming object point 1 with those forming object point 2 in Fig. 1(d). If we consider two fields U1 and U2 forming object points 1 and 2 spaced N = 1 pixel apart, then Figs. 1(b)–1(d) show the intensity of the following summed fields, respectively:
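As a minimal numerical illustration of these three cases (an added sketch, not part of the original analysis), consider two unit-amplitude fields overlapping at a single point: incoherent addition always gives the sum of the intensities, while coherent addition swings between complete cancellation and four-fold enhancement depending on the relative phase.

```python
import numpy as np

# Two unit-amplitude fields overlapping at one image point.
U1, U2 = 1.0 + 0j, 1.0 + 0j

# Incoherent addition: the intensities simply sum, independent of phase.
I_incoherent = abs(U1) ** 2 + abs(U2) ** 2        # always 2.0

# Coherent addition for a range of relative phases phi:
phi = np.linspace(0.0, 2.0 * np.pi, 181)
I_coherent = np.abs(U1 + U2 * np.exp(1j * phi)) ** 2

# I = 2 + 2*cos(phi): anywhere from 0 (destructive) to 4 (constructive).
print(I_incoherent, I_coherent.min(), I_coherent.max())
```

A uniform phase (Fig. 1(c)) corresponds to phi = 0, i.e. the four-fold constructive case; only the incoherent sum is insensitive to phi.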
These intensity variations are responsible for a significant part of the speckle noise in projected images. As a solution, Takaki addressed the SLM with a controlled set of micro-lenses, which formed an array of highly spatially separated light spots. This previous method will be referred to as the Lens Array Method (LAM). The Airy disks surrounding the object points were separated enough to avoid uncontrolled interferences, which yielded a very low intensity error. This is supported by the analytical results in Fig. 2, which shows the addition of light fields when the object points are separated by 2 pixels (hence N = 2) and by 3 pixels (N = 3).
One can see that for N = 3 the summed fields in Figs. 2(b)–2(d) are practically the same, which proves that introducing a spatial separation between object points eliminates the influence of the phase relations on the final intensity distribution on the projection screen.
The LAM method provided good results, but the limited aperture of the micro-lenses caused a loss of resolution by increasing the size of the light spots. Moreover, some energy leaked when non-white pixels were displayed.
Here a simpler method is proposed, which uses under-sampled input bitmaps (where only every N-th pixel is preserved) and calculates the holograms with the standard Gerchberg-Saxton (G-S) algorithm. Figure 3 schematically shows the difference in the formation of 4 exemplary adjacent object points on the projection screen with the classic method (Random Phase Integration, RPI - without pixel separation, hence N = 1), with the LAM method by Takaki, and with the method proposed here (with a pixel separation of 4 pixels, hence N = 4). In the RPI approach (Fig. 3, left) the entire surface of the phase hologram forms an array of pixels on the screen. Because the pixels are closely packed, some spurious interference occurs between them, which increases the noise. In the LAM method based on a set of micro-lenses displayed on the SLM (Fig. 3, center), the image points are separated, which eliminates the unwanted interferences, but the small apertures of the lenses cause a larger diffractive spread of the light spots, contributing to a lower imaging resolution. The proposed PSM (Pixel Separation Method, Fig. 3, right) exploits the whole aperture of the SLM, which increases the effective aperture and decreases the size of the object points, hence improving the image resolution. Moreover, the iterative G-S calculation of the phase holograms utilizes the whole light energy for image formation, which makes the method especially energy-efficient.
3. The algorithm of hologram calculation
The goal of the algorithm is to calculate a set of sub-holograms that, when displayed one by one on the SLM, reconstruct a 2-D intensity distribution of the USAF pattern on a distant projection screen. As the first step of the proposed algorithm, the input bitmap is split into a set of sub-images in which only every N-th pixel is kept, while all the other pixels are set to black. This operation is equivalent to under-sampling the input image. To illustrate this process, Fig. 4 shows an exemplary central part of an input bitmap (USAF pattern), split into a set of 4 sub-images (N = 2 is taken for clarity). The pixels belonging to the subsequent sub-frames are intentionally colored blue, green, red and white for the sake of this illustration.
The under-sampled amplitude distribution AN,n,m can be represented with Eq. (2), where U is the amplitude of the original USAF pattern, N [px] is the pixel separation and n, m are the sampling offsets in the x and y directions, respectively.
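The under-sampling of Eq. (2) amounts to keeping every N-th pixel at offset (n, m) and zeroing the rest. A minimal sketch follows (the function and variable names are illustrative, not taken from the paper's in-house software):

```python
import numpy as np

def undersample(U, N):
    """Split amplitude U into N*N sub-frames: sub-frame (n, m) keeps
    every N-th pixel at offset (n, m); all other pixels are set to zero."""
    subs = []
    for n in range(N):
        for m in range(N):
            A = np.zeros_like(U)
            A[n::N, m::N] = U[n::N, m::N]
            subs.append(A)
    return subs

# The N*N sub-frames partition the image: their sum restores U exactly.
U = np.arange(1, 17, dtype=float).reshape(4, 4)
subs = undersample(U, 2)
print(np.allclose(sum(subs), U))   # True
```

For N = 2 this produces the 4 sub-images of Fig. 4; for the optimal N = 5 discussed later, 25 sub-frames are generated per input bitmap.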
Next, the Gerchberg-Saxton algorithm is performed with each pixelated sub-frame (AN,n,m) taken as the input amplitude, one after another. The procedure is carried out with in-house software in 3 iterations and takes approx. 1 s on an average CPU (Central Processing Unit). Therefore the calculation of the entire set of sub-holograms for a full frame takes approx. 25 s. A random initial phase distribution (φrandom) is used, with an additional rolling (circular shift) of the matrix for improved randomization. The size of the calculation matrix is 2048 by 2048 pixels, which is optimal for the FFT (Fast Fourier Transform) routine used. The sampling pitch is 8 µm and the wavelength is set to 671 nm (i.e. the same as the wavelength of the laser used in the experiment). In the next step the iterated phase is multiplied by the phase factor of a converging lens. The focal length f of the lens is established for a given projection distance based on Eq. (3):
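The in-house implementation is not reproduced in the text; the sketch below shows a generic Gerchberg-Saxton loop for a Fourier-type phase hologram under the stated conditions (random initial phase, 3 iterations, phase-only constraint on the SLM plane). The converging-lens phase factor of Eq. (3) would be added to the returned phase afterwards and is omitted here; the small matrix size is for illustration only.

```python
import numpy as np

def gerchberg_saxton(target_amp, iterations=3, seed=0):
    """Generic G-S phase retrieval: free phase in the image plane,
    unit-amplitude (phase-only) constraint on the SLM plane."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amp.shape)  # phi_random
    for _ in range(iterations):
        image = target_amp * np.exp(1j * phase)   # impose target amplitude
        slm = np.fft.ifft2(image)                 # back to the SLM plane
        slm = np.exp(1j * np.angle(slm))          # keep the phase only
        phase = np.angle(np.fft.fft2(slm))        # propagate to the image
    return np.angle(slm)

# Hologram for a small separated-spot sub-frame (every 4th pixel bright):
target = np.zeros((64, 64))
target[::4, ::4] = 1.0
holo = gerchberg_saxton(target)
```

Because the dark pixels of the sub-frame carry a zero amplitude constraint, the iterations steer the light into the separated bright spots rather than discarding it, which is the source of the method's energy efficiency.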
4. Numerical simulation
A numerical experiment was conducted prior to the optical experiment in order to verify the feasibility of the pixel separation concept in minimizing the speckle noise caused by the unwanted interference of light fields forming adjacent object points. The input bitmap representing the USAF pattern had a size of 2048 by 2048 pixels. It was under-sampled with a chosen value of N between 1 and 10. Then the collection of N² sub-holograms was calculated with the G-S algorithm in 3 iterations. The resulting phase holograms were numerically reconstructed by calculating the field propagated to a distance of z = 200 mm. The illumination beam was a Gaussian wavefront calculated by taking into account the distance ri and the core diameter of the fiber used (4.6 µm). The propagation was computed with the Modified Convolution Method. In the last step the N² diffracted fields were added incoherently to create the final reconstructed image. In this way the output intensity pattern IN can be described with Eq. (4), where Pz is the operator of Fresnel propagation over a distance z, UIL is the complex amplitude of the illuminating Gaussian beam, G is the operator performing the G-S phase optimization and k = 2π/λ.
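The paper's Modified Convolution Method is not reproduced here; as a stand-in, the sketch below applies the standard band-limited angular spectrum kernel, which plays the same role as the propagation operator Pz in Eq. (4) (grid size and beam width are illustrative).

```python
import numpy as np

def angular_spectrum(U0, z, lam, dx):
    """Propagate complex field U0 over distance z with the angular
    spectrum kernel (a generic stand-in for the paper's Modified
    Convolution Method; evanescent components are discarded)."""
    n = U0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / lam ** 2 - FX ** 2 - FY ** 2
    H = np.where(arg > 0, np.exp(2j * np.pi * z * np.sqrt(np.abs(arg))), 0)
    return np.fft.ifft2(np.fft.fft2(U0) * H)

# Illustrative grid: 8 um sampling, 671 nm, Gaussian input beam.
x = np.linspace(-1.0, 1.0, 256)
U0 = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / 0.1).astype(complex)
U1 = angular_spectrum(U0, z=0.2, lam=671e-9, dx=8e-6)
# Free-space propagation conserves energy within the propagating band.
print(np.isclose((np.abs(U0) ** 2).sum(), (np.abs(U1) ** 2).sum()))
```

The incoherent summation of Eq. (4) would then add the squared moduli |P_z{...}|² of the N² propagated fields, never their complex amplitudes.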
The final integrated field was then examined in terms of the speckle noise contrast in a chosen region of the largest square of the USAF pattern. The contrast was calculated as the standard deviation of the intensity divided by the average intensity. The above simulation was performed 10 times for values of N from 1 to 10 and the results are presented in Fig. 5.
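The noise metric takes a few lines to implement (an added sketch; fully developed speckle, whose intensity is exponentially distributed, gives a contrast close to 1, which sets the scale for the percentage values reported below):

```python
import numpy as np

def speckle_contrast(I):
    """Noise contrast used throughout the paper: std(I) / mean(I)
    over a nominally uniform bright region."""
    I = np.asarray(I, dtype=float)
    return float(I.std() / I.mean())

# Fully developed speckle has exponential intensity statistics and C ~ 1:
rng = np.random.default_rng(0)
c = speckle_contrast(rng.exponential(1.0, (512, 512)))
print(round(c, 1))   # 1.0
```

A speckle contrast of 5% thus corresponds to a twenty-fold suppression relative to a fully developed speckle field.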
In Fig. 5 the noise contrast of the proposed method is compared to that of the previous RPI method, which was based on a full-frame integration of sub-holograms with different random initial phases. For numbers of integrated frames (i.e. N²) greater than 16 the image quality is sufficient for most projection systems (speckle contrast below 5%), while the classic method requires the integration of at least 100 sub-frames to reach this goal. This makes the proposed method at least 6 times as efficient, although meeting the demanding 1% criterion of high-end systems will be difficult. The comparison of the speckle contrast with the improvement ratio is given in Table 1. Based on the numerical simulations, the proposed pixel separation method gives a 3.3 times lower noise contrast for the optimal value of N = 5 (i.e. 25 integrated sub-frames). In the best case the pixel separation method yields a speckle contrast below 2.3%, which is acceptable for most of today's commercial applications.
In order to confirm the optimistic conclusions drawn from numerical simulations, an experiment was performed on a breadboard optical table. The scheme of the optical setup built for the experiment is presented in Fig. 6.
Color projection requires the presence of three laser beams in the primary colors (red, green and blue). The beams from the lasers pass through the polarization control section (a set of half-wave plates) and are then coupled into Thorlabs single-mode fibers (P1-630A-FC-2, P1-460A-FC-2, P1-405A-FC-2). The three fiber ends are used as quasi-point sources for the illumination of the SLM through a 50-50 non-polarizing beam splitter (BS). The polarization control is used to achieve a linear polarization state of the illumination beams, parallel to one of the sides of the Holoeye Pluto SLM, as suggested by the manufacturer. This orientation ensures the smallest depolarization and a phase-only modulation. The beam splitter is used in order to achieve normal incidence of the beam on the surface of the SLM. The divergent beams that pass through the BS are reflected from the SLM and reach the CMOS (Complementary Metal-Oxide Semiconductor) matrix of a Canon EOS 5D Mark II camera. The patch of light at the plane of the camera is larger than the matrix itself; nevertheless, this wide-spread field is simply the non-diffracted light (i.e. the zero diffractive order), which is not of interest here. The useful image is formed in the center of the zero-order light patch and is entirely captured by the bare CMOS matrix. The presence of the mentioned zero-order light in the background slightly lowers the contrast of the projected image, but its influence is not obstructive and it fades with distance (due to the divergent wave). Electronic shutters of the laser beams were used to synchronize the color of the illumination with the holograms displayed on the SLM for the particular wavelength (i.e. time-sequential illumination).
The optical head used for the projection, shown in Fig. 7, is extremely simple and can be considered for future implementation in handheld devices after further miniaturization. Future development will include the use of smaller and more powerful light sources and a quasi-planar illumination.
A set of frames projected with the red beam is shown in Fig. 8. Every picture shows a fragmented reproduction of the central part of the input bitmap with a highly uniform intensity of the bright spots (i.e. with a low error). This is the result of the spatial separation of the light spots, which is approx. 80 µm. According to Eq. (5), this distance is approximately 4 times the Airy disk diameter D for this distance and the aperture of the SLM.
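A back-of-the-envelope check of this ratio can be sketched as follows (added here; the aperture value A assumes the full width of a 1920-pixel, 8 µm pitch panel and is not stated explicitly in the text):

```python
# Airy disk diameter on the screen: D = 2.44 * lambda * z / A.
lam = 671e-9          # red wavelength [m]
z = 0.2               # projection distance [m]
A = 1920 * 8e-6       # assumed full SLM aperture [m]

D = 2.44 * lam * z / A
print(round(D * 1e6, 1))   # ~21.3 um, so 4*D is approx. 85 um,
                           # consistent with the observed ~80 um spacing
```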
In the next step the calculated sub-frames are displayed one after another by addressing the SLM with the pre-calculated sequence of sub-holograms. For N = 5, a total of N² = 25 frames is integrated during a 0.5 s exposure of the digital camera at a sensitivity of ISO 100. The low sensitivity is used in order to minimize the intrinsic noise of the CMOS matrix. The optimal value of N = 5 was chosen based on the numerical simulations as a trade-off between a pixel separation large enough to suppress the interferences and the computational effort needed to calculate and display 25 holograms per full frame. Figure 9 shows an exemplary captured image of the USAF test pattern. The picture was taken at a distance of z = 200 mm from the SLM.
The USAF pattern allows the assessment of the projection resolution, which was measured at approx. 500 line pairs. The resolution was compromised by the non-uniform, Gaussian-shaped illumination of the SLM, which is a consequence of the extreme simplification of the optical setup. The precision of the measurement was additionally lowered by a minor defocus of the captured pictures caused by the very shallow depth of field. Nevertheless, the result shows that the method can be used for high-definition projection purposes.
The drawback of this projection technique, based on a pixelated phase modulator, is the inevitable presence of additional ghost images associated with the higher diffractive orders (seen at the sides of Fig. 9). Nevertheless, their presence can be limited when SLMs with a smaller pixel pitch are used in the future.
The contrast of the image is acceptable (although slightly limited by the background non-diffracted light), with a measured value of 7:1. This ratio should increase with the improved fill factors of future SLMs or with increasing projection distance. In this experiment the estimated diffraction efficiency is 65%; hence approx. 35% of the light reaching the plane of analysis remains in the zero order and in the higher diffractive orders.
Importantly, the speckle noise in the projected image is almost completely eliminated. To demonstrate this, Fig. 10 shows a magnified element of the USAF pattern, as captured in the experiment. The measured speckle contrast is 2.16%, calculated as the standard deviation of the intensity divided by the average intensity in the bright region of the biggest square of the USAF image. The results of the proposed method are compared with the classic holographic projection, based on a sequential display of pre-calculated holograms with different initial random phases (RPI), which is equivalent to a rolling diffuser in optical experiments. In order to reach a similar level of noise with the RPI method, one has to integrate at least 100 frames, which means approx. 4 times the computational effort. This makes the proposed method 4 times more efficient. Moreover, the classic projection without pixel separation and with the integration of only 25 frames gives a poor result of 7.36% and a highly visible speckle pattern.
In the classic method based on the time-integration of randomized holograms, the speckle noise is clearly visible and slowly decreases as the inverse square root of the number of integrated sub-frames. Unfortunately the noise remains visible and obstructive to the eye when, for some reason, the exposure is interrupted. In contrast, the proposed method ensures that the speckle noise is never seen on the projection screen, because each displayed sub-frame has a very low intensity error. This unique feature of the proposed method is important from the point of view of human physiology and takes full advantage of the persistence of vision (afterimage) of the human eye.
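The inverse-square-root scaling of the time-integration approach can be checked numerically (a sketch using synthetic exponential speckle, not the experimental data):

```python
import numpy as np

rng = np.random.default_rng(0)

def contrast(I):
    return float(I.std() / I.mean())

# 100 statistically independent synthetic speckle patterns:
patterns = rng.exponential(1.0, size=(100, 256, 256))

for M in (1, 25, 100):
    avg = patterns[:M].mean(axis=0)      # time-integration of M sub-frames
    print(M, round(contrast(avg), 2))    # contrast falls roughly as 1/sqrt(M)
```

This reproduces the orders of magnitude quoted above: integrating 25 fully speckled sub-frames leaves a contrast near 20%, while 100 sub-frames are needed to approach 10%, whereas each PSM sub-frame starts with almost no intensity error at all.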
Figure 11 shows further experimental monochromatic projections of the Lenna test image. The input bitmaps have a size of 1024 by 1024 pixels or 2048 by 2048 pixels (the latter utilizing the full Nyquist frequency). In the latter case the energy of the illuminating beam is spread over a larger area, so the image brightness is compromised and, as a consequence, the background field is more visible. In the former case the useful image dominates over the background. The red beam alone is used in order to show the contrast and noise advantages of the proposed method more distinctly.
In order to show the feasibility of the method for color display, we use time-sequential switching of the illuminating lasers with electronic shutters. The content displayed on the SLM is matched to the wavelength of the currently open laser.
The sub-holograms for the green and blue light (532 nm and 445 nm, respectively) are calculated analogously to those for the red light described above. The display of a full-color frame (i.e. the exposure time) for N = 5 takes 1.5 s (0.5 s per primary color); therefore the average exposure time of a single sub-hologram is 20 ms (500 ms / N²). The experimentally captured images are presented in Fig. 12 and exhibit a quality that might be accepted by future consumer electronics products.
The presented method of combining pixel separation with time-integration allows a significant speckle suppression, which finally enables holographic projection to provide the quality needed for industrial applications in portable displays. The optical experiment was supported by theory and numerical simulations, with a small discrepancy in the noise contrast that can be attributed to the low-pass filtering performed by the digital camera. The computation method is well suited to massively parallel processing on modern graphics boards. The main disadvantage from the practical point of view is the necessity of using phase modulators with 8-bit addressing and hundreds of frames per second for real-time operation. Such devices are still awaited. Currently available SLMs require an integration time of approximately 500 ms, which is approximately 10 times too slow for the human eye. On the other hand, currently available ferroelectric SLMs could be used for a proof of concept of the real-time integration, but their binary phase modulation would decrease the efficiency of the projection approximately twofold. Nevertheless, this work shows that the proposed projection method has properties superior to current methods [4,15,16] and will work successfully once the appropriate hardware is available. LCoS micro-displays designed for fast switching in the color field sequential mode currently achieve frame rates in the range of 480 Hz with a pixel pitch of 3.74 µm. They will potentially allow projection with pixel separation at a speed of 30 frames per second, which is acceptable for handheld projector displays.
This work was supported by the Polish National Center for Research and Development, project “CULPA”, under agreement LIDER/013/469/L-4/12/NCBR/2013 and by the Warsaw University of Technology through the young researcher's grant from the Dean of the Faculty of Physics.
References and links
1. E. Buckley, “Real-time error diffusion for signal-to-noise ratio improvement in a holographic projection system,” J. Disp. Technol. 7, 70–76 (2011). [CrossRef]
2. H. Nakayama, N. Takada, Y. Ichihashi, S. Awazu, T. Shimobaba, N. Masuda, and T. Ito, “Real-time color electroholography using multiple graphics processing units and multiple high-definition liquid-crystal display panels,” Appl. Opt. 49(31), 5993–5996 (2010). [CrossRef]
3. E. Buckley, “Holographic laser projection,” J. Disp. Technol. 7(3), 135–140 (2011). [CrossRef]
5. Y. Kuratomi, K. Sekiya, H. Satoh, T. Tomiyama, T. Kawakami, B. Katagiri, Y. Suzuki, and T. Uchida, “Speckle reduction mechanism in laser rear projection displays using a small moving diffuser,” J. Opt. Soc. Am. A 27(8), 1812–1817 (2010). [CrossRef] [PubMed]
6. J. G. Manni and J. W. Goodman, “Versatile method for achieving 1% speckle contrast in large-venue laser projection displays using a stationary multimode optical fiber,” Opt. Express 20(10), 11288–11315 (2012). [CrossRef] [PubMed]
7. M. Makowski, I. Ducin, M. Sypek, A. Siemion, A. Siemion, J. Suszek, and A. Kolodziejczyk, “Color image projection based on Fourier holograms,” Opt. Lett. 35(8), 1227–1229 (2010). [CrossRef] [PubMed]
9. R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of the phase from image and diffraction plane pictures,” Optik 35, 237–246 (1972).
10. T. Peter, F. Wyrowski, and O. Bryngdahl, “Importance of initial distribution for iterative calculation of quantized diffractive elements,” J. Mod. Opt. 40(4), 591–600 (1993). [CrossRef]
11. A. Czerwiński, K. Kakarenko, M. Sypek, M. Makowski, I. Ducin, J. Suszek, A. Kolodziejczyk, and J. Bomba, “Modeling of the optical system illuminated by quasi-monochromatic spatially incoherent light: New numerical approach,” Opt. Lett. 37(22), 4723–4725 (2012). [CrossRef] [PubMed]
12. M. Sypek, “Light propagation in the Fresnel region. New numerical approach,” Opt. Commun. 116(1-3), 43–48 (1995). [CrossRef]
13. M. Oikawa, T. Shimobaba, T. Yoda, H. Nakayama, A. Shiraki, N. Masuda, and T. Ito, “Time-division color electroholography using one-chip RGB LED and synchronizing controller,” Opt. Express 19(13), 12008–12013 (2011). [CrossRef] [PubMed]
14. T. D. Wilkinson, “Ferroelectric liquid crystal over silicon devices,” Liq. Cryst. Today 21(2), 34–41 (2012). [CrossRef]
15. M. Makowski, M. Sypek, A. Kolodziejczyk, G. Mikuła, and J. Suszek, “Iterative design of multi-plane holograms: experiments and applications,” Opt. Eng. 46(4), 045802 (2007). [CrossRef]
16. M. Makowski, I. Ducin, K. Kakarenko, A. Kolodziejczyk, A. Siemion, A. Siemion, J. Suszek, M. Sypek, and D. Wojnowski, “Efficient image projection by Fourier electroholography,” Opt. Lett. 36(16), 3018–3020 (2011). [CrossRef] [PubMed]