## Abstract

The multiplexing encoding method is proposed and demonstrated for accurately reconstructing colorful images using a single phase-only spatial light modulator (SLM). The method encodes the light waves at different wavelengths into one pure-phase hologram simultaneously based on analytic formulas. The three-dimensional (3D) images can be reconstructed clearly when the light waves at the different wavelengths illuminate the encoded hologram. Numerical simulations and optical experiments for 2D and 3D colorful images are performed. The results show that colorful reconstructed images with high quality are achieved successfully. The proposed multiplexing method is a simple and fast encoding approach, and the resulting system is small and compact. It is expected to be used for realizing full-color 3D holographic display in the future.

© 2014 Optical Society of America

## 1. Introduction

Holographic display, which can provide the full parallax and depth information of a 3D scene without any special eyewear, is one of the research hotspots in the 3D display field. Normally, to realize full-color holographic display, computer-generated holograms (CGHs) for different wavelengths are calculated and loaded into optoelectronic devices to reconstruct 3D images by time-multiplexing or space-multiplexing methods.

Holographic display still faces many problems, such as low image quality, the long time required to process huge amounts of data, the limited space-bandwidth product of optoelectronic devices, and complicated systems. As is well known, coherent illumination causes speckle noise, which degrades the image quality. Furthermore, commercial optoelectronic devices can modulate the amplitude of the light wave, its phase, or both dependently, whereas they cannot modulate the amplitude and the phase independently and simultaneously. Phase-only optoelectronic devices, such as phase-only spatial light modulators (SLMs), which have a higher diffraction efficiency than amplitude-only SLMs, are the most commonly used. Nevertheless, the quality of the image reconstructed by a kinoform (phase-only CGH) is degraded because the amplitude information of the light is lost. Although a random initial phase can make the amplitude statistically uniform [1], the reconstructed image is then accompanied by speckle noise. To generate low-noise kinoforms, optimization algorithms such as the Gerchberg-Saxton (GS) algorithm [1, 2], the multi-plane iterative algorithm (MPIA) [3, 4], the simulated annealing algorithm (SAA) [5], and the dynamic-pseudorandom-phase (DPP) algorithm [6] have been proposed to encode partial amplitude information into the phase part. However, these methods are quite time-consuming.

The other problem is the complexity and size of the display system when a full-color 3D holographic image is reconstructed. Employing three SLMs to load the CGHs of the red, green, and blue components of a color image separately is one of the most frequently used methods [7, 8]; its utilization of luminous energy is the highest, but at the expense of large volume and high cost. To reduce the system size and cost, several approaches that employ only one SLM for full-color holographic display have been proposed in recent years, such as the depth-division method (DDM) [3, 4], the time-division method (TDM) [9, 10], and the space-division method (SDM) [11, 12]. DDM needs an iterative algorithm, so it is time-consuming, and it is suitable for 2D rather than 3D holographic display. TDM requires an SLM with a very high frame rate to produce an afterimage effect on human eyes. SDM can reconstruct a color image with one hologram that records the transversely distributed RGB components of the original color image, but it also spends time on iterative calculation. In addition, some other similar single-SLM methods have been proposed for color display [1, 13, 14]; however, the resolution of the reconstructed image is greatly reduced since the SLM is divided into several regions side by side.

In this paper, a multiplexing encoding method is proposed to generate one pure-phase hologram, which minimizes the size of the system. Three computer-generated holograms corresponding to red, green, and blue (RGB) are synthesized into one pure-phase hologram by analytic formulas. The colorful 3D image is reconstructed when the light waves at the three original wavelengths are incident simultaneously. Numerical simulations and optical experiments for 3D colorful image reconstruction are performed, and they are in good agreement. The system is simple, the synthesis method is fast, and high image quality of the reconstructed 3D display is achieved successfully.

## 2. The multiplexing encoding method

The color bird picture can be divided into RGB components, and each component can be considered as many self-illuminating point sources (the pixels of the picture). Each of the RGB components is then propagated toward the hologram directly and separately. When calculating the CGH, we assume that the red reference beam is a plane wave illuminating the hologram at a tilt angle $\theta $ with respect to the z axis. The red component of the bird picture is then interfered with the red reference beam, and the fringe pattern is recorded and encoded as the R-CGH. The blue reference beam is a plane wave with a tilt angle -$\theta $ with respect to the z axis, and the green reference beam is a plane wave parallel to the z axis. In the same way, the B-CGH and the G-CGH are encoded separately. It is noted that all the reference beams are perpendicular to the y axis. We then synthesize the RGB CGHs into one hologram, and a final phase-only hologram is obtained by Eq. (4). In the reconstruction process, the phase-only hologram is loaded on the SLM, and three reference beams of the RGB colors at different angles simultaneously illuminate the SLM as shown in Fig. 1(b); the color image is then reconstructed.

Since a full-color 3D object can be divided into multiple 2D slices for the RGB channels, the object wave distribution at the object plane is ${o}_{i}(\widehat{x},\widehat{y},{\widehat{z}}_{p})$, where *i* = 1, 2, or 3 corresponds to the red, green, or blue channel, and $p$ is the index of the 2D slice of the 3D object. We use the angular spectrum method [15] to describe the propagation of the RGB components. The frequency spectrum of each color component of the object is described by the Fourier transform

$${O}_{i}(u,v,{\widehat{z}}_{p})={\displaystyle \iint }{o}_{i}(\widehat{x},\widehat{y},{\widehat{z}}_{p})\mathrm{exp}[-j2\pi (u\widehat{x}+v\widehat{y})]\,d\widehat{x}\,d\widehat{y},$$

where *u* and *v* represent the spatial frequencies. The reference plane waves for the RGB channels on the hologram plane can be written as ${R}_{i}(x,y,{z}_{h})=\mathrm{exp}(j{k}_{i}x\mathrm{sin}{\theta}_{i})$, where ${k}_{i}=2\pi /{\lambda}_{i}$, ${\lambda}_{i}$ is the wavelength, and ${\theta}_{i}$ is the incident angle of the reference beam. On the hologram plane, the red component of the image interferes with the red reference beam, the green component with the green reference beam, and the blue component with the blue reference beam; we thus obtain three different CGHs. The CGH of each color component of the 3D object in the hologram plane can be described as [15]

$${A}_{i}(x,y)\mathrm{exp}[j{\phi}_{i}(x,y)]={\displaystyle \sum _{p}}{\mathcal{F}}^{-1}\left\{{O}_{i}(u,v,{\widehat{z}}_{p})\mathrm{exp}\left[j{k}_{i}({z}_{h}-{\widehat{z}}_{p})\sqrt{1-{({\lambda}_{i}u)}^{2}-{({\lambda}_{i}v)}^{2}}\right]\right\},$$

where (*x*, *y*) denote the coordinates of the space domain, the formula in the braces represents the complex amplitude of each color component of the 3D object after the spectrum propagation process, ${z}_{h}-{\widehat{z}}_{p}$ is the distance between the object plane and the hologram plane, ${A}_{i}(x,y)$ represents the amplitude distribution of the CGH, and ${\phi}_{i}(x,y)$ is its phase component. Then the complex amplitudes of all the color components are synthesized as

In the reconstruction process, the color image is obtained by simultaneously illuminating the phase-only hologram with three reconstruction beams of the RGB colors at different angles, as shown in Fig. 1(b). The reconstruction beams can be written as ${R}_{i}{}^{\text{'}}(x,y,{z}_{h})=\mathrm{exp}[j{k}_{i}x\mathrm{sin}(-{\theta}_{i})]$. Then the reconstructed optical field distribution near the hologram is given by

In the final reconstruction process, we can employ a 4f system with a band-pass filter in the frequency plane [15] to filter out the unwanted terms in Eq. (6). When the unnecessary diffractive terms are carefully filtered, the 3D object image is obtained with high quality. The proposed method encodes one pure-phase hologram for a colorful 3D object, and the calculation does not require optimization processing. Therefore, the proposed method works for multiple wavelengths and can clearly reconstruct a 3D color image with one phase-only hologram.
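The encoding pipeline of this section can be sketched numerically. The following is a minimal illustration, not the authors' code: it assumes a single object slice per channel, uses the angular spectrum propagator, multiplies each propagated field by its tilted plane-wave carrier, and keeps only the phase of the complex sum as the synthesized hologram. The function names and the exact way the carrier is folded into the sum are our reading of the description, not taken from the paper.

```python
import numpy as np

def angular_spectrum(field, wavelength, z, pitch):
    """Propagate a complex field over distance z (angular spectrum method)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)                 # spatial frequencies (1/m)
    fx, fy = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * fx) ** 2 - (wavelength * fy) ** 2
    h = np.exp(1j * 2 * np.pi / wavelength * z * np.sqrt(np.maximum(arg, 0)))
    h[arg < 0] = 0                                  # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * h)

def multiplex_encode(channels, wavelengths, tilts, z, pitch):
    """Synthesize one phase-only hologram from the RGB channels.

    Each channel is propagated to the hologram plane, multiplied by its
    tilted reference carrier, and the phase of the complex sum is kept.
    """
    n = channels[0].shape[0]
    x = (np.arange(n) - n / 2) * pitch
    xx = np.tile(x, (n, 1))                         # x coordinate of each pixel
    total = np.zeros((n, n), dtype=complex)
    for amp, lam, theta in zip(channels, wavelengths, tilts):
        field = angular_spectrum(amp.astype(complex), lam, z, pitch)
        carrier = np.exp(1j * 2 * np.pi / lam * xx * np.sin(theta))
        total += field * carrier
    return np.angle(total)                          # pure-phase hologram in (-pi, pi]

# toy example with the Section 3 parameters, on a small 256 x 256 grid
rng = np.random.default_rng(0)
rgb = [rng.random((256, 256)) for _ in range(3)]
lams = [671e-9, 532e-9, 473e-9]                     # red, green, blue
tilts = [np.deg2rad(1.57), 0.0, np.deg2rad(-1.57)]  # red tilted +, green on-axis, blue -
phase = multiplex_encode(rgb, lams, tilts, z=0.5, pitch=8e-6)
print(phase.shape)
```

The resulting array of phase values would then be quantized to the SLM's 256 phase levels before display.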

## 3. Numerical and optical experimental results

The full-color holographic display system is shown in Fig. 3. Three lasers with different wavelengths are collimated by collimators consisting of spatial filters and collimating lenses, and they illuminate a phase-only SLM (Holoeye Pluto VIS, fill factor: 87%, pixel pitch: 8.0 μm, resolution: 1920 × 1080, 256 phase modulation levels) at different angles simultaneously. The SLM is placed at the front focal plane of L_{1}. In the 4f system, the band-pass filter is placed at the back focal plane of L_{1}, which is also the front focal plane of L_{2}. In the reconstruction process, the desired first orders are picked up by the designed hole in the filter. The distance between the hole and the optical axis is determined by $d={f}_{{L}_{1}}\mathrm{sin}\psi ,$ where ${f}_{{L}_{1}}$ is the focal length of L_{1}. The reconstructed color image is obtained on the output plane after the back focal plane of L_{2}. A CCD (Lumenera INFINITY 4-11C) is used to record the experimental results. In addition, the scaling of the reconstructed image is determined by ${f}_{{L}_{2}}/{f}_{{L}_{1}}$, where ${f}_{{L}_{2}}$ is the focal length of L_{2}. In our experiment, the focal lengths of L_{1} and L_{2} are both 543 mm and their apertures are both 120 mm.
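With the quoted focal lengths, the filter geometry follows directly from $d={f}_{{L}_{1}}\mathrm{sin}\psi$ and the scaling ${f}_{{L}_{2}}/{f}_{{L}_{1}}$. A small sketch of the arithmetic (using the tilt angle $\psi$ = 1.57° given in the next paragraph; variable names are ours):

```python
import math

f1 = 0.543                 # focal length of L1 (m)
f2 = 0.543                 # focal length of L2 (m)
psi = math.radians(1.57)   # tilt angle of the reference carrier

d = f1 * math.sin(psi)     # off-axis distance of the filter hole (m)
mag = f2 / f1              # lateral scaling of the reconstructed image
print(f"hole offset d = {d * 1e3:.2f} mm, magnification = {mag:.2f}")
```

Since both lenses have the same focal length, the 4f system here relays the image at unit magnification; the hole sits roughly 15 mm off axis.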

The selected ideal 2D color images with 1024 × 1024 pixels are shown in Figs. 4(a)–4(c). The main parameters used in the numerical simulations and the optical experiments are as follows: the sampling interval of the hologram is 8.0 μm × 8.0 μm; the wavelengths of the RGB reference beams are 671 nm for red, 532 nm for green, and 473 nm for blue; the distance between the object plane and the hologram plane is 500 mm; the incident angles *θ* of the red and the blue reference beams are both 1.57°; and the tilt angle $\psi $ is 1.57°. Our program runs on a personal computer with a 2.6 GHz CPU under Matlab 2011b. The kinoform with a size of 1024 × 1024 pixels is numerically generated by the multiplexing encoding method. The time for calculating the hologram of a 2D color image with 1024 × 1024 pixels with our proposed analytical method is about 1.32 seconds, and no iteration is needed.

We first evaluate the quality of the reconstructed color image by the peak signal-to-noise ratio (PSNR), defined as $PSNR(dB)=10{\mathrm{log}}_{10}\left({255}^{2}\Big/\Big({\scriptscriptstyle \frac{1}{MN}}{\displaystyle \sum {}^{MN}{({I}_{o}-{I}_{r})}^{2}}\Big)\right),$ where M and N are the horizontal and vertical numbers of pixels of the original and reconstructed images, and ${I}_{o}$ and ${I}_{r}$ are the original image and the reconstructed image, respectively [11]. We calculate the PSNR for the R, G, and B components separately and then take their average as an overall evaluation of the RGB image. The numerically reconstructed color images are shown in Figs. 4(d)–4(f). The PSNRs of the numerically reconstructed images are 23.69 dB for the car image, 23.90 dB for the parrot image, and 21.18 dB for the cup-and-teapot image. It is clearly observed that the color images are reconstructed numerically with success. Note that the numerical results are a little darker than the original images because of light losses. The light loss is caused by the zero orders and the unwanted first orders shown in Fig. 7, which are described by Eq. (6). If the numerically reconstructed images are normalized, they have the same brightness as the original images. The brightness of the images can also be improved by using higher-power lasers in the optical experiment; we discuss this in detail in the discussion section.
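The PSNR evaluation above is straightforward to implement. A minimal sketch (function names are ours), including the per-channel averaging used for the RGB score:

```python
import numpy as np

def psnr(original, reconstructed):
    """PSNR in dB for 8-bit images, following the definition in the text."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

def psnr_rgb(original, reconstructed):
    """Average the per-channel PSNRs of an RGB image (shape M x N x 3)."""
    return np.mean([psnr(original[..., c], reconstructed[..., c])
                    for c in range(3)])

# sanity check: a uniform error of one gray level gives 10*log10(255^2) dB
a = np.zeros((64, 64, 3), dtype=np.uint8)
b = a + 1
print(round(psnr_rgb(a, b), 2))   # -> 48.13
```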

The phase-only hologram is then loaded on the phase-only SLM. In the optical experiment, the numerical zero-padding technique is utilized to eliminate the dispersion caused by the different wavelengths [6]. The sampling numbers and the corresponding wavelengths of the RGB channels should satisfy the relation ${M}_{R}:{M}_{G}:{M}_{B}={N}_{R}:{N}_{G}:{N}_{B}={\lambda}_{R}:{\lambda}_{G}:{\lambda}_{B},$ which indicates that a larger number of sampling points in the object plane is needed for a longer wavelength. We set the sampling number of the red channel to 1024 × 1024 pixels, so the sampling numbers of the green and the blue channels are 812 × 812 pixels and 722 × 722 pixels, respectively. The redundant marginal regions in the green and the blue channels are padded with zeros. The optical experimental results are shown in Figs. 4(g)–4(i). It is evident that the full color of the image is realized successfully, the image quality of the reconstructed scene is acceptable to human eyes, and the speckle noise is well suppressed. The optical experimental results are in good agreement with the computer simulations. In Fig. 4, it is noted that, on careful inspection, the experimental results exhibit some color shift that does not exist in the numerical results. The color shift is caused by our optical experimental system and could be corrected by finely tuning the output power of the RGB lasers [19]. Moreover, the SLM is a dispersive device and has a different phase modulation depth for each wavelength; this also causes color shift, which can be minimized by a pre-compensated lookup table.
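The wavelength-proportional sampling numbers and the zero-padding step can be checked with a few lines of code (the helper `pad_to` is ours, written only to illustrate the centered padding):

```python
import numpy as np

# Sampling numbers scale with wavelength: M_R : M_G : M_B = lam_R : lam_G : lam_B
lam = {"R": 671e-9, "G": 532e-9, "B": 473e-9}
m_red = 1024                                       # red channel sampling number
sizes = {c: round(m_red * lam[c] / lam["R"]) for c in lam}
print(sizes)                                       # -> {'R': 1024, 'G': 812, 'B': 722}

def pad_to(img, n):
    """Zero-pad a square image up to n x n, centered."""
    m = img.shape[0]
    before = (n - m) // 2
    after = n - m - before
    return np.pad(img, ((before, after), (before, after)))

green = np.ones((sizes["G"], sizes["G"]))
print(pad_to(green, m_red).shape)                  # -> (1024, 1024)
```

This reproduces the 812 × 812 and 722 × 722 sampling numbers quoted above for the green and blue channels.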

Now we consider a full-color 3D object, which can be divided into multiple 2D slices with RGB channels. Here, we take a full-color 3D object consisting of two slices as an example for simplicity. The CGH generating system is shown in Fig. 5. The distances between the object planes and the hologram plane are *z*_{1} = 650 mm and *z*_{2} = 500 mm. The hologram synthesized by the encoding method is then loaded into the SLM. The numerically and optically reconstructed full-color 3D images at different distances are displayed in Figs. 6(a)–6(d). When the CCD (INFINITY 4-11C) focuses at 650 mm, the ‘RMB’ image is clear and the ‘YGC’ image is blurred, as shown in Figs. 6(a) and 6(c), and vice versa. It is easily seen that the 3D colorful image is reconstructed properly and high image quality is achieved. Note that the blue images in Figs. 6(c) and 6(d) are not aligned with the rest of the images, whereas the numerical results are fine. Two reasons can cause the shift of the blue images in Figs. 6(c) and 6(d). The first is misalignment of the system or the reference beams, which can be corrected by adjusting the optical setup. The second is that the blue laser beam (shorter wavelength) may suffer more dispersion in our optical system, so the nominally plane blue wave could be distorted. This problem can be solved by employing achromatic optical elements in the optical system and shaping the blue laser beam into a standard plane wave.

It is noted that the model can easily be extended to generate the kinoform of a complex full-color 3D object composed of more than two slices [6]. When more slices are used to decompose the 3D object, cross-talk can be introduced and the quality of the reconstruction is slightly degraded. However, it can be improved if occlusion culling is considered [20].

## 4. Discussion

In our proposed method, the tilted phase factors synthesized into the holograms cause higher diffraction orders, as shown in Fig. 7(a). Since unwanted images exist, the efficiency of energy utilization should be considered. The energy efficiency is calculated numerically as ${\eta}_{e}=\left[\left({\displaystyle {\sum}_{}^{MN}{I}_{d}}\right)\Big/\left({\displaystyle {\sum}_{}^{XY}{I}_{t}}\right)\right]\times 100\%,$ where M and N are the horizontal and vertical numbers of pixels of the desired reconstructed image ${I}_{d}$, and X and Y are the horizontal and vertical numbers of pixels of the total reconstructed image ${I}_{t}$. The energy efficiency is 5.05% for the magic cube case. One can distinguish the details and colors of an object when the luminance exceeds 3 cd/m^{2} [21]. In our system, the diffraction efficiency of the SLM is 60%. The required total luminous flux is given by $F(lm)=L\cdot S\cdot \Omega /({\eta}_{e}\cdot {\eta}_{slm}),$ where L (cd/m^{2}) is the luminance, S (m^{2}) is the area of the reconstructed image, $\Omega $ (sr) is the solid angle, and ${\eta}_{slm}$ is the diffraction efficiency of the SLM. The solid angle is given by $\Omega (sr)\approx S/{r}^{2}$, where *r* is the distance of distinct vision (normally 0.25 m). Assuming that the luminance of the reconstructed image is 3 cd/m^{2}, the size of the reconstructed 2D image is 5 cm × 5 cm, and the energy efficiency is 5.05%, the required total luminous flux is 0.0099 lumen. According to the spectral luminous efficiency in photopic vision [21], one lumen corresponds to 45.75 milliwatts (mW) for the red light at 671 nm, 1.66 mW for the green light at 532 nm, and 97.48 mW for the blue light at 473 nm. So the minimal powers for the RGB lasers are 0.453 mW, 0.016 mW, and 0.965 mW, respectively. In this calculation, the energy loss of the laser beams in propagation is neglected.
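The luminous-flux budget above can be reproduced step by step from the quoted formulas and constants (variable names are ours):

```python
L = 3.0              # required luminance (cd/m^2)
S = 0.05 * 0.05      # area of the 5 cm x 5 cm reconstructed image (m^2)
r = 0.25             # distance of distinct vision (m)
omega = S / r ** 2   # solid angle Omega ~ S / r^2 (sr)
eta_e = 0.0505       # numerical energy efficiency (5.05%)
eta_slm = 0.60       # diffraction efficiency of the SLM

F = L * S * omega / (eta_e * eta_slm)   # total luminous flux (lm)

# lm -> mW conversion factors at each wavelength, from the photopic
# luminous-efficiency values quoted in the text
mw_per_lm = {"red 671 nm": 45.75, "green 532 nm": 1.66, "blue 473 nm": 97.48}
powers = {c: F * k for c, k in mw_per_lm.items()}

print(f"F = {F:.4f} lm")                 # about 0.0099 lm
for c, p in powers.items():
    print(f"minimal {c} power: {p:.3f} mW")
```

Running this recovers the 0.0099 lm flux and the 0.453 mW / 0.016 mW / 0.965 mW minimal RGB laser powers stated in the text.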

We can use higher-power lasers to improve the luminance of the image, especially for complex scenes. It can be seen from Figs. 7(a) and 7(b) that the color image can be reconstructed clearly since the unwanted images are separated from the desired image. The desired order is picked up and the unnecessary diffraction light is filtered out after the beam passes through the 4f filtering architecture shown in Fig. 3.

For real-time 3D holographic display, the calculation speed is important. Our proposed method is based on analytical formulas, so no iteration is needed, and generating a hologram that reconstructs the 3D color object with high image quality takes little time. A preview of an animated 2D projection is shown in Fig. 8 (Media 1). It is noted that dynamic 3D display can also be obtained easily.

## 5. Conclusion

The multiplexing encoding method, in which the CGH is generated analytically, has been proposed. The numerical and experimental results both indicate that colorful 2D and 3D images can be reconstructed clearly. The optical system can be quite compact because a single phase-only SLM is used. It is a simple and time-saving approach and a promising way to realize dynamic full-color 3D holographic display in the future. It could also be applied to other multi-wavelength complex-amplitude modulation tasks, such as color optical encryption and multi-wavelength diffractive optical elements.

## Acknowledgment

This work was supported by the National Basic Research Program of China (973 Program Grant nos. 2013CB328801 and 2013CB328806), the National Natural Science Foundation of China (61235002).

## References and links

**1. **M. Makowski, I. Ducin, K. Kakarenko, J. Suszek, M. Sypek, and A. Kolodziejczyk, “Simple holographic projection in color,” Opt. Express **20**(22), 25130–25136 (2012). [CrossRef] [PubMed]

**2. **M. Hacker, G. Stobrawa, and T. Feurer, “Iterative Fourier transform algorithm for phase-only pulse shaping,” Opt. Express **9**(4), 191–199 (2001). [CrossRef] [PubMed]

**3. **M. Makowski, M. Sypek, and A. Kolodziejczyk, “Colorful reconstructions from a thin multi-plane phase hologram,” Opt. Express **16**(15), 11618–11623 (2008). [PubMed]

**4. **M. Makowski, M. Sypek, I. Ducin, A. Fajst, A. Siemion, J. Suszek, and A. Kolodziejczyk, “Experimental evaluation of a full-color compact lensless holographic display,” Opt. Express **17**(23), 20840–20846 (2009). [CrossRef] [PubMed]

**5. **N. Yoshikawa and T. Yatagai, “Phase optimization of a kinoform by simulated annealing,” Appl. Opt. **33**(5), 863–868 (1994). [CrossRef] [PubMed]

**6. **H. Zheng, T. Tao, L. Dai, and Y. Yu, “Holographic imaging of full-color real-existing three-dimensional objects with computer-generated sequential kinoforms,” Chin. Opt. Lett. **9**(4), 040901 (2011). [CrossRef]

**7. **A. Shiraki, N. Takada, M. Niwa, Y. Ichihashi, T. Shimobaba, N. Masuda, and T. Ito, “Simplified electroholographic color reconstruction system using graphics processing unit and liquid crystal display projector,” Opt. Express **17**(18), 16038–16045 (2009). [CrossRef] [PubMed]

**8. **J. Jia, Y. Wang, J. Liu, X. Li, Y. Pan, Z. Sun, B. Zhang, Q. Zhao, and W. Jiang, “Reducing the memory usage for effective computer-generated hologram calculation using compressed look-up table in full-color holographic display,” Appl. Opt. **52**(7), 1404–1412 (2013). [CrossRef] [PubMed]

**9. **X. Li, Y. Wang, J. Liu, J. Jia, Y. Pan, and J. Xie, “Color holographic display using a phase-only spatial light modulator,” presented at the 10th International Symposium on Display Holography, Hawaii, USA, 25–29 April 2013. [CrossRef]

**10. **M. Oikawa, T. Shimobaba, T. Yoda, H. Nakayama, A. Shiraki, N. Masuda, and T. Ito, “Time-division color electroholography using one-chip RGB LED and synchronizing controller,” Opt. Express **19**(13), 12008–12013 (2011). [CrossRef] [PubMed]

**11. **T. Shimobaba, T. Takahashi, N. Masuda, and T. Ito, “Numerical study of color holographic projection using space-division method,” Opt. Express **19**(11), 10287–10292 (2011). [CrossRef] [PubMed]

**12. **T. Ito and K. Okano, “Color electroholography by three colored reference lights simultaneously incident upon one hologram panel,” Opt. Express **12**(18), 4320–4325 (2004). [CrossRef] [PubMed]

**13. **M. Makowski, I. Ducin, K. Kakarenko, J. Suszek, A. Kolodziejczyk, and M. Sypek, “Extremely simple holographic projection of color images,” Proc. SPIE **8280**, 1–6 (2012). [CrossRef]

**14. **M. Makowski, I. Ducin, M. Sypek, A. Siemion, A. Siemion, J. Suszek, and A. Kolodziejczyk, “Color image projection based on Fourier holograms,” Opt. Lett. **35**(8), 1227–1229 (2010). [CrossRef] [PubMed]

**15. **J. W. Goodman, *Introduction to Fourier Optics*, 2nd ed. (McGraw-Hill, 1996), chap. 2.2.

**16. **H. Zhang, J. Xie, J. Liu, and Y. Wang, “Elimination of a zero-order beam induced by a pixelated spatial light modulator for holographic projection,” Appl. Opt. **48**(30), 5834–5841 (2009). [CrossRef] [PubMed]

**17. **I. Moreno, J. Campos, C. Gorecki, and M. J. Yzuel, “Effects of amplitude and phase mismatching errors in the generation of a kinoform for pattern recognition,” Jpn. J. Appl. Phys. **34**, 6423–6432 (1995). [CrossRef]

**18. **J. A. Davis, D. M. Cottrell, J. Campos, M. J. Yzuel, and I. Moreno, “Encoding Amplitude Information onto Phase-Only Filters,” Appl. Opt. **38**(23), 5004–5013 (1999). [CrossRef] [PubMed]

**19. **R. Shi, J. Liu, H. Zhao, Z. Wu, Y. Liu, Y. Hu, Y. Chen, J. Xie, and Y. Wang, “Chromatic dispersion correction in planar waveguide using one-layer volume holograms based on three-step exposure,” Appl. Opt. **51**(20), 4703–4708 (2012). [CrossRef] [PubMed]

**20. **K. Wakunami, H. Yamashita, and M. Yamaguchi, “Occlusion culling for computer generated hologram based on ray-wavefront conversion,” Opt. Express **21**(19), 21811–21822 (2013). [CrossRef] [PubMed]

**21. **N. Ohta and A. R. Robertson, *Colorimetry: Fundamentals and Applications* (John Wiley & Sons, 2005).