## Abstract

We propose an algorithm based on a fully computed holographic stereogram for calculating full-parallax computer-generated holograms (CGHs) with accurate depth cues. The proposed method integrates the point source algorithm and the holographic stereogram based algorithm to reconstruct three-dimensional (3D) scenes. A precise accommodation cue and occlusion effect can be created, and computer graphics rendering techniques can be employed in the CGH generation to enhance image fidelity. Optical experiments have been performed using a spatial light modulator (SLM) and a fabricated high-resolution hologram; the results show that the proposed algorithm can produce quality reconstructions of 3D scenes with arbitrary depth information.

© 2015 Optical Society of America

## 1. Introduction

Holographic display can reconstruct the whole optical wave field of a three-dimensional (3D) scene and has the potential to provide all the depth cues that human eyes can perceive [1]. Hence holographic display can be considered a promising candidate for future 3D display technologies. With the development of computer technology, we can use computer-generated holograms (CGHs) to reconstruct 3D images of both existing and synthetic objects, as long as mathematical descriptions of the 3D scenes are provided [2, 3].

Although CGH frees the display system from the complicated interference recording procedure, it still suffers from device and computational limitations. The limited space-bandwidth product of available holographic devices restricts the viewing parameters, while the computation algorithm determines whether the potential of the holographic device can be fully exploited. The image quality of the reconstructed 3D scene is directly related to the CGH algorithm.

Most current algorithms for synthesizing CGHs are physically based methods, in which the optical transmission process is simulated from the 3D scene to the hologram plane [4–8]. The 3D objects can be divided into multiple point sources or planar segments, which give precise descriptions of the 3D scenes; hence continuous motion parallax and accurate depth information can be optically reconstructed. However, since the optical transmission processes of different primitives are independent, it is hard to express view-dependent properties of the 3D scene, such as shading and occlusion. Meanwhile, the physically based methods focus on wave propagation simulations and are difficult to integrate with computer graphics rendering techniques, so the fidelity of the reconstructed 3D images is affected.

To deal with the occlusion problem, several modified point source algorithms have been developed. A hidden surface removal method was designed to compensate for the absence of hidden-surface data when making computer-generated rainbow holograms [9]. However, because the viewpoints were set apart from the hologram plane, distortions were produced when viewing from an arbitrary position. An inverse orthographic projection technique was proposed to avoid these geometric distortions [10], but the 3D objects had to be placed beyond a certain distance so that all the objects could be captured during projection, which limited the depth range of the reconstructed scenes.

Unlike the point source algorithm, a polygon-based method was developed that uses a silhouette mask to produce the occlusion effect [11]. This technique was specially designed for 3D scenes with clearly separated objects, so it could not deal with self-occlusion. To solve this problem, the silhouette method with the switch-back technique was further developed, which could perform quality reconstructions of self-occluded objects [12]. This method still lacks the flexibility to integrate with computer graphics rendering techniques. Recently, ray casting and ray tracing techniques were introduced into the physically based calculations to provide occlusion, refraction, and reflection effects [13–15]. Because of the computational load of the ray-object intersection processes, these techniques are only suitable for 3D scenes with simple objects, such as tetrahedrons and spheres.

The holographic stereogram based algorithm is another class of methods for computing CGHs [16–18]. Holographic stereograms angularly multiplex two-dimensional (2D) parallax views that are generated digitally or captured optically, similar to the principle of integral photography [19–21]. Computer graphics rendering techniques can be applied in the stereogram computation to add multiple shading effects and make the 3D scene more realistic. Furthermore, the hologram is spatially segmented into multiple holographic elements (hogels), and the occlusion problem is automatically solved during the rendering processes of the different hogels. But holographic stereograms are not fully computed: during reconstruction, each hogel projects a set of plane waves in different directions to form a 2D parallax view, which causes a lack of depth information in the 3D scene. Hence it is difficult for a holographic stereogram to reconstruct a deep 3D scene with an accurate accommodation cue.

To compensate for the depth performance of holographic stereogram based algorithms, several works have avoided the image degradation by adding phase factors or introducing intermediate planes in the CGHs, including phase-added stereograms [22, 23], the diffraction-specific coherent panoramagram [24], and ray-sampling plane techniques [25, 26]. These methods all make their own approximations to the optical wave field, and it remains hard to perform a quality reconstruction of a deep 3D scene with continuous depth change and complex occlusion effects.

In this paper, we propose an efficient algorithm to overcome the occlusion problem of the point source algorithm as well as the depth limitation of the holographic stereogram. The proposed algorithm is based on a fully computed holographic stereogram calculation, which takes advantage of both the physically based algorithm and the holographic stereogram based algorithm. During calculation, the object points are fetched according to the spatial spectral information of each hogel, which faithfully matches the geometric information and depth cues of the 3D scene. Hence the algorithm can provide accurate depth information as well as correct view-dependent properties. It can also utilize computer graphics rendering techniques to make the 3D images photorealistic. The algorithm is robust for holograms with different parameters and modulation types. Reconstruction results of stereogram based algorithms and our proposed method are compared by evaluating their depth performances numerically, and optical experimental demonstrations are presented with different types of holograms. With the proposed algorithm, realistic 3D scenes with arbitrary depth information can be reconstructed accurately.

## 2. Phase profiles of CGHs

In a holographic display system, the hologram works like a window from which all the reconstructed beams are generated. It can be seen as a specialized diffraction grating that diffracts light along multiple angular directions and finally forms the 3D shape of the original scene. The optical wavefront distribution on the hologram plane is complex-valued, but the phase distribution is crucial to the reconstruction of the 3D scene, since the phase profile determines the directions of the reconstructed beams. Hence we can analyze the reconstructions of the CGHs through their phase profiles.

To simplify the analysis, one point source is used as the object point, as shown in Fig. 1. In the point source algorithm, since the calculation simulates the optical transmission process from the point source to the hologram plane, the phase distribution accurately represents the real-world situation, as shown in Fig. 1(a). In the holographic stereogram calculation, the hologram is spatially segmented into multiple hogels. Each hogel corresponds to one viewpoint on the hologram plane and one rendered 2D parallax image, and each image is only visible in its own field of view, which is determined by the spatial position and resolution of the corresponding hogel. A full-parallax CGH with a smooth motion parallax effect can be generated by addressing a fine grid of hogels on the hologram plane. In the case of Fig. 1(b), where only one point source is in the 3D scene, the CGH uses a set of plane waves with discrete directions to approximate the spherical wavefront. The phase distribution in each hogel has only one spatial frequency, which relates to the plane wave transmitted in the given direction.

In the numerical simulation, the point source was located at z = −300 mm from the hologram. The holograms contained 1000 × 1000 pixels with a pixel pitch of 8 μm. The holographic stereogram contained 8 × 8 hogels, each containing 125 × 125 pixels. The wavelength used for the simulation was 532 nm. The angular spectrum method was used to simulate the optical wave propagations [27].
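
This simulation can be reproduced with a band-limited angular spectrum kernel. Below is a minimal Python sketch under the stated parameters; the function name and array layout are our choices, not the authors' code:

```python
import numpy as np

def angular_spectrum_propagate(u0, wavelength, pitch, z):
    """Propagate a complex field u0 by distance z with the angular spectrum method."""
    ny, nx = u0.shape
    fx = np.fft.fftfreq(nx, d=pitch)   # spatial frequencies along x (1/m)
    fy = np.fft.fftfreq(ny, d=pitch)   # spatial frequencies along y (1/m)
    FX, FY = np.meshgrid(fx, fy)
    k = 2 * np.pi / wavelength
    # Transfer function; evanescent components are suppressed
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.where(arg > 0, np.exp(1j * k * z * np.sqrt(np.abs(arg))), 0)
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# Spherical wavefront of a point source at z = -300 mm sampled on a
# 1000 x 1000 hologram with 8 um pixels (the simulation parameters above)
N, pitch, wl, z_obj = 1000, 8e-6, 532e-9, -0.3
x = (np.arange(N) - N / 2) * pitch
X, Y = np.meshgrid(x, x)
r = np.sqrt(X**2 + Y**2 + z_obj**2)
hologram_field = np.exp(1j * 2 * np.pi / wl * r) / r  # point-source wavefront
# Back-propagate to the object plane: the field should refocus to a sharp spot
recon = angular_spectrum_propagate(hologram_field, wl, pitch, z_obj)
```

Back-propagating the sampled spherical wave should concentrate the energy into a small spot at the center of the object plane, consistent with Fig. 2(a).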

The reconstructed image on the object plane using the point source algorithm is shown in Fig. 2(a). We can see a sharp spot located at its center, which fits the original object well; hence an accurate accommodation cue can be perceived during optical reconstruction. In the reconstruction of the holographic stereogram, multiple plane waves transmitted from different hogels form a quasi-square-shaped pattern, as shown in Fig. 2(b). The size of the square is directly proportional to the size of the hogel. Thus the spatial resolution of the reconstructed image decreases during optical reconstruction of the holographic stereogram.

The spectra of the point source hologram and the holographic stereogram are shown in Figs. 3(a) and 3(b), respectively. There is a continuous change in the spatial frequency of the point source hologram. The propagation directions of the reconstructed beams are determined by the spatial frequencies of the CGH, so the surface normal of the reconstructed wavefront changes continuously. Compared with the discrete spatial frequencies of the holographic stereogram, a continuous frequency change provides a better approximation to an arbitrary wavefront, which leads to more accurate depth information during optical reconstruction.

The wavefront reconstructed from a holographic stereogram can be seen as a discrete sampling of the objective wavefront, as shown in Fig. 4. The spatial sampling distance $s$ equals the size of a hogel:

$$s = nd,$$

where $n$ is the number of sampling points of a hogel in one direction, and $d$ denotes the pixel pitch. When viewing a holographic stereogram, the accommodation cue is mimicked through the plane waves received by the pupil. A smaller hogel size leads to a smoother spatial frequency change between adjacent hogels, and hence a better approximation to the spherical wavefront. When a viewer focuses at infinity, each plane wave converges on the retina, forming one or more spots, depending on how many plane-wave bundles with different frequencies enter the pupil. This differs from the point source hologram, through which a dispersion pattern would be seen when the object point is out of focus.

Another parameter related to the reconstruction quality is the angular sampling distance $\Delta \theta $, as shown in Fig. 4. For an object point at depth ${z}_{o}$, it can be deduced from the geometry as:

$$\Delta \theta = \arctan \frac{s}{\left|{z}_{o}\right|}.$$
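
The two sampling parameters are easy to evaluate numerically. The sketch below uses the Section 2 parameters; the estimate of the angular sampling distance as arctan(s/|z|) for a point at depth z is our reading of the Fig. 4 geometry, not a formula quoted from the text:

```python
import numpy as np

# Spatial sampling distance: s = n * d (hogel size)
n, d = 125, 8e-6           # pixels per hogel and pixel pitch (m), Section 2 values
s = n * d                  # -> 1 mm hogel

# Angular sampling distance for a point source at |z| = 300 mm:
# adjacent hogels sample the spherical wavefront at directions that
# differ by roughly arctan(s / |z|)  (our geometric estimate)
z = 0.3
delta_theta = np.arctan(s / z)
print(f"s = {s * 1e3:.1f} mm, delta_theta = {np.degrees(delta_theta):.3f} deg")
```

For these parameters the hogel size is 1 mm and the angular sampling distance is about 0.19°, illustrating why a finer hogel grid approximates the spherical wavefront more smoothly.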

## 3. Fully computed holographic stereogram based algorithm

The above analysis shows that a holographic stereogram based CGH cannot produce accurate depth information, as it only reconstructs plane waves with discrete spatial frequencies. Hence the reconstruction wavefront must be modulated in a flexible way to make the optical wave field consistent with the original 3D scene. This can be achieved by integrating the point source algorithm with the holographic stereogram based algorithm: by calculating a point source hologram for each hogel, the holographic stereogram becomes fully computed. Hence we can not only take advantage of computer graphics rendering techniques and easily provide the occlusion effect, but also reconstruct a wavefront with accurate depth information and produce a precise accommodation cue.

The diagram and flow chart for calculating the fully computed holographic stereogram are shown in Figs. 5(a) and 5(b), respectively. The hologram is spatially partitioned into a grid of hogels, as in a holographic stereogram. For each hogel, a viewing frustum is employed to perform a perspective projection. Two rendered images then provide the information of all the object points that contribute to the corresponding hogel. The shading image provides the amplitude of each object point, and computer graphics techniques can be applied in the rendering process to produce photorealistic 3D scenes with lighting, texture, and shadowing. The depth image is used to extract the coordinates of the corresponding object points through the geometric transformation described below. With these two images, the complex amplitude distribution of each hogel can be calculated using the point source algorithm, so the phase profile of each hogel faithfully represents the actual wavefront and depth information of the 3D scene.

To deal with occlusion, the contribution area on the hologram plane for each object point needs to be evaluated. Each hogel performs its own occlusion culling test through the rendering process, and only the contributing object points are considered. Perspective projection is performed from each hogel towards the 3D scene to achieve precise occlusion culling. During rendering, each hogel corresponds to one viewpoint, so the occlusion problem is solved through this multi-viewpoint rendering process.

To evaluate the contributing object points for each hogel, a perspective projection is performed from the viewpoint. Figure 6(a) shows the diagram of the viewing frustum, and Fig. 6(b) shows its top view. Normally the viewpoint is set at the center of the hogel. The field of view is determined by the maximum diffraction angle of the hologram, which can be deduced from the grating equation:

$$\sin {\theta}_{\mathrm{max}} = \lambda {f}_{\mathrm{max}},$$

where $\lambda$ is the wavelength and ${f}_{\mathrm{max}}$ is the maximum spatial frequency of the hologram. The field of view of the perspective projection of each hogel is set to $2{\theta}_{\mathrm{max}}$ to prevent crosstalk. Each pixel index in the rendered image corresponds to a specific projecting direction, which can be denoted as $\left({\theta}_{x},{\theta}_{y}\right)$. In order to calculate the fully computed hologram, we need to transform this angular information into the actual geometric coordinates of the object points. The coordinates can be deduced from the geometric relationships as:

$${x}_{o}={x}_{h}+{z}_{p}\tan {\theta}_{x},\quad {y}_{o}={y}_{h}+{z}_{p}\tan {\theta}_{y},\quad {z}_{o}={z}_{p},$$

where $\left({x}_{o},{y}_{o},{z}_{o}\right)$ is the actual position of the object point, $\left({x}_{h},{y}_{h}\right)$ is the position of the viewpoint on the hologram plane, and ${z}_{p}$ is the depth coordinate of the object point in the perspective rendered image. After obtaining the geometric information of the object points, the complex amplitude distribution of each hogel can be calculated by superposing the optical wavefronts of all the point sources:

$$h\left(x,y\right)=\sum_{j}\frac{{A}_{j}}{{r}_{j}}\exp \left(ik{r}_{j}\right),\quad {r}_{j}=\sqrt{{\left(x-{x}_{o,j}\right)}^{2}+{\left(y-{y}_{o,j}\right)}^{2}+{z}_{o,j}^{2}},$$

where ${A}_{j}$ is the amplitude of the $j$-th object point taken from the shading image and $k$ is the wave number. The complex amplitude distribution of the final hologram is acquired after the calculation of all the hogels. As each hogel reconstructs a wavefront that accurately matches the corresponding scene in its viewing frustum, the wavefront reconstructed by the entire hologram is the accumulation of the wavefronts from all the hogels, which faithfully represents the entire 3D scene. Depending on the hologram type used for optical reconstruction, either the phase distribution [7, 28] or the bipolar intensity [4] can be employed to encode the holographic data into a phase or amplitude hologram.
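
The per-hogel computation described above can be sketched in Python. The function names, the coordinate-transform helper, and the explicit loop over object points are our illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def angles_to_coords(theta_x, theta_y, z_p, hx, hy):
    """Map a rendered-image direction (theta_x, theta_y) at depth z_p back to
    object coordinates, for a viewpoint at (hx, hy) on the hologram plane.
    (Our reading of the geometric transform; z_p < 0 behind the hologram.)"""
    return hx + z_p * np.tan(theta_x), hy + z_p * np.tan(theta_y), z_p

def hogel_field(points, amps, hx, hy, n, pitch, wl):
    """Superpose spherical point-source wavelets over one n x n hogel.

    points : (M, 3) array of object coordinates (x_o, y_o, z_o), z_o < 0
    amps   : (M,) amplitudes taken from the shading image
    (hx, hy) : center of the hogel on the hologram plane
    """
    k = 2 * np.pi / wl
    xs = hx + (np.arange(n) - n / 2) * pitch
    ys = hy + (np.arange(n) - n / 2) * pitch
    X, Y = np.meshgrid(xs, ys)
    field = np.zeros((n, n), dtype=complex)
    for (xo, yo, zo), a in zip(points, amps):
        r = np.sqrt((X - xo) ** 2 + (Y - yo) ** 2 + zo ** 2)
        field += a / r * np.exp(1j * k * r)  # spherical wavelet of one point
    return field
```

Assembling the final hologram then amounts to tiling the per-hogel fields on the grid; in practice the inner loop would be vectorized or moved to a GPU, as Section 4.4 suggests.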

## 4. Experimental results

To demonstrate the performance of the proposed method, numerical simulations and optical experiments were performed. We first compared the reconstruction results of the stereogram based algorithm and our proposed method by evaluating their depth performances numerically. The optical experiments were then divided into two parts. In the first part, a phase-only spatial light modulator (SLM) was used to show the depth information as well as the accommodation cue of the reconstructed 3D scenes. In the second part, a high-resolution hologram with a larger field of view was fabricated to perform the optical reconstruction, so that both occlusion and motion parallax, the view-dependent properties, could be perceived.

#### 4.1 Numerical reconstructions

Table 1 shows the parameters for calculating the CGHs. The holograms contained 2000 × 2000 pixels with a pixel pitch of 8 μm, and the wavelength was set to 532 nm; hence the field of view of the holograms was 3.8°. Two segmentation strategies were employed in the holographic stereogram calculations. Stereogram 1 was partitioned into 16 × 16 hogels with a hogel size of 1 mm, each containing 125 × 125 hologram pixels, while a finer segmentation was used for Stereogram 2, which was partitioned into 40 × 40 hogels with a 0.4 mm hogel size. The partition used in our proposed algorithm was the same as that of Stereogram 1.

The 3D scene used for generating the CGHs contained two objects: a bunny (12 mm × 12 mm × 9 mm) located at z = −100 mm and a background wall (16 mm × 16 mm × 0 mm) located at z = −130 mm behind the hologram plane. The numerical reconstructions were performed using the angular spectrum algorithm [27]. Figures 7(a)–7(c) are the reconstructed images focused on the bunny, and Figs. 7(d)–7(f) are the reconstructed images focused on the wall. The reconstruction results of Stereogram 1 are shown in Figs. 7(a) and 7(d); both have low spatial resolution because of the larger hogel size. Since a finer segmentation strategy was used in Stereogram 2, its reconstruction results in Figs. 7(b) and 7(e) have moderate spatial resolution, consistent with the analysis in Section 2. The accommodation cue of Stereogram 2 can also be mimicked through the finely partitioned plane waves generated by the small hogels. The reconstruction results of our proposed method in Figs. 7(c) and 7(f) have a higher spatial resolution than those of the holographic stereograms, and the coarse partition strategy used in the fully computed holographic stereogram does not affect the depth performance, because the wavefronts generated by the hogels are consistent with the geometric information of the 3D scene. The accurate depth information generated through the fully computed holographic stereogram leads to a precise accommodation cue during reconstruction.

#### 4.2 SLM based hologram

The PLUTO phase-only SLM from HOLOEYE Photonics was used for the optical demonstration of the depth information. It is a reflective liquid crystal on silicon (LCOS) device with 1920 × 1080 pixels [29]. The pixel pitch was 8 μm and the SLM was addressed with 8-bit gray-scale levels. The wavelength of the laser used in our experiment was 532 nm. The hologram for the optical experiment was cropped from the fully computed holographic stereogram demonstrated in Section 4.1.

The optically reconstructed images were recorded by a Canon 500D camera. Figures 8(a) and 8(b) show the reconstruction results when the camera was manually focused on the bunny and the wall, respectively (focal length: 100 mm, f-number: 2.8). The bunny in Fig. 8(a) is reconstructed clearly, whereas the wall is blurred. When the camera is focused on the wall, as shown in Fig. 8(b), the blurred bunny partially blocks the clear background wall. These optical reconstructions agree with the previous simulation results, which shows that the proposed algorithm can produce the accommodation cue for 3D objects at discrete depths.

To further demonstrate the applicability of the proposed algorithm to reconstructing depth information, we set up a deep 3D scene with continuous depth change: a helix with 6 turns extending from z = −100 mm to z = −200 mm from the hologram plane. We manually focused the camera on different turns of the helix (focal length: 100 mm, f-number: 2.8), as shown in Fig. 9. Because of the short depth of focus, the in-focus part of the helix clearly shifts from front to back as the camera is focused at different depths. This shift can also be perceived by a viewer focusing on the different turns. Hence the CGH can produce a precise accommodation cue for a deep 3D scene with continuous depth change.

#### 4.3 High-resolution hologram

It is difficult to demonstrate view-dependent properties such as occlusion and motion parallax with an SLM-based reconstruction because of its narrow field of view. These view-dependent effects are important depth cues for 3D perception and become more obvious when the hologram has a larger field of view. Hence we expanded the field of view of the hologram to demonstrate these effects.

A high-resolution hologram (pixel pitch: 1 μm) was fabricated using a direct-write lithography system. The CGH was printed as a binary pattern on a fused silica substrate coated with a chromium film. According to the modulation type of the fabricated hologram, the complex amplitude distribution on the hologram plane $h\left(x,y\right)$ was first encoded into the intensity distribution:

$$I\left(x,y\right)=\Delta +h\left(x,y\right){r}^{\ast}\left(x,y\right)+{h}^{\ast}\left(x,y\right)r\left(x,y\right),$$

where $r\left(x,y\right)$ is the reference wave field on the hologram plane; here a plane wave with an incident angle of 7.5° was used. $\Delta $ is a constant offset that keeps the intensity distribution nonnegative. $I\left(x,y\right)$ was then quantized into a binary amplitude by thresholding at its midpoint to match the fabrication requirement.

The fabricated CGH without illumination is shown in Fig. 10, and the parameters for calculating this hologram are listed in Table 2. The hologram size was 20 mm × 20 mm, containing 20000 × 20000 pixels with a 1 μm pixel pitch. The hologram was partitioned into 20 × 20 hogels, corresponding to 20 × 20 viewpoints in the rendering process. During calculation, each hogel contained 1000 × 1000 hologram pixels, and the corresponding rendered images had 300 × 300 pixels. During reconstruction, the hologram was illuminated by a slanted collimated beam with an incident angle of 7.5°. The wavelength of the laser used in the experiment was 532 nm; hence the field of view was 30.9°, which was suitable for demonstrating the view-dependent properties.
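
The encoding step can be sketched as follows, assuming a bipolar-intensity interference term with a tilted plane-wave reference and a midpoint threshold; the function name and its defaults are our illustrative choices:

```python
import numpy as np

def binarize_hologram(h, pitch, wl, angle_deg=7.5, delta=None):
    """Encode a complex field h into a binary amplitude pattern.

    A tilted plane-wave reference r is formed, the offset intensity
    I = delta + h r* + h* r is computed, and I is thresholded at the
    midpoint of its range (a sketch of the Section 4.3 procedure).
    """
    ny, nx = h.shape
    x = np.arange(nx) * pitch
    kx = 2 * np.pi / wl * np.sin(np.radians(angle_deg))
    r = np.exp(1j * kx * x)[np.newaxis, :]       # plane wave tilted in x
    interference = 2 * np.real(h * np.conj(r))   # h r* + h* r (real-valued)
    if delta is None:
        delta = -interference.min()              # keep I nonnegative
    I = delta + interference
    threshold = (I.max() + I.min()) / 2          # midway threshold
    return (I >= threshold).astype(np.uint8)     # binary amplitude pattern
```

The resulting 0/1 array corresponds to the chromium-on-silica pattern written by the lithography system; illuminating it with the same 7.5° reference beam reconstructs the encoded field in the first diffraction order.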

The bunny and the wall in the 3D scene were the same as in the previous experiment, except for their locations: the bunny was located at z = −60 mm and the wall at z = −110 mm behind the hologram. Figures 11(a) and 11(b) show the reconstruction results when the camera was manually focused on the bunny and the wall, respectively (focal length: 100 mm, f-number: 2.8), similar to the reconstruction results of the SLM-based hologram shown in Fig. 8. The images reconstructed from the fabricated hologram are of better quality because of its higher space-bandwidth product. Again, the results show that the precise accommodation cue can be clearly perceived when focusing at different depths.

To demonstrate the occlusion as well as the motion parallax of the reconstructed 3D scene, the camera was placed at three different viewing positions (focal length: 100 mm, f-number: 8). The reconstruction results are shown in Fig. 12; these three images illustrate that motion parallax with the occlusion effect can be perceived in the optical reconstruction. A long depth of focus was needed when recording the images in Fig. 12, since the two objects were located at different depths. This was achieved by increasing the f-number, though at the cost of some quality in the captured images.

The reconstructed image can also be viewed directly, and motion parallax with the occlusion effect is easily perceived by moving one's eyes around. A short animated movie was captured to illustrate the view-dependent properties of the reconstructed 3D scene; one frame is shown in Fig. 13.

#### 4.4 Calculation time

All the CGHs in the experiments were calculated on a PC with an Intel Xeon E5-2620 CPU (2.10 GHz) and 16 GB of DELL DDR3 ECC RDIMM memory. Table 3 shows the calculation times of the CGHs used in the optical experiments. The total calculation time can be divided into a rendering part and an algorithm processing part. Both the complexity of the 3D scene and the number of CGH pixels affect the computation time. The computational load of the fully computed holographic stereogram is equivalent to that of the point source algorithm, and look-up tables and GPU parallel processing can likewise be used to optimize the computational performance.

## 5. Conclusion

A fully computed holographic stereogram based algorithm has been developed to provide all the depth cues of a 3D scene. By integrating the point source algorithm and the holographic stereogram based algorithm, the proposed method provides the view-dependent effects and accurate depth information of the 3D scene. A precise accommodation cue is reconstructed from a deep 3D scene with continuous depth variation, and smooth motion parallax with the occlusion effect is produced through the multi-viewpoint rendering process during calculation. The computer graphics rendering procedure can take advantage of the graphics pipeline to provide photorealistic 3D scenes efficiently. Experiments were performed using holograms with different parameters and modulation types, which optically demonstrate that accurate depth cues can be reconstructed using the proposed method. As future work, large-size holograms with high space-bandwidth products are needed to optimize the viewing parameters.

## Acknowledgments

This work was supported by the National Basic Research Program of China (No. 2013CB328801) and the National Natural Science Foundation of China (No. 61205013).

## References and links

**1. **S. A. Benton and V. M. Bove, Holographic Imaging (Wiley, 2007).

**2. **M. Lucente, “Optimization of hologram computation for real-time display,” Proc. SPIE **1667**, 32–43 (1992). [CrossRef]

**3. **F. Yaras, H. Kang, and L. Onural, “State of the art in holographic display: A survey,” J. Display Technol. **6**(10), 443–454 (2010). [CrossRef]

**4. **M. Lucente, “Interactive computation of holograms using a look-up table,” J. Electron. Imaging **2**(1), 28–34 (1993). [CrossRef]

**5. **J.-N. Gillet and Y. Sheng, “Multiplexed computer-generated holograms with polygonal-aperture layouts optimized by genetic algorithm,” Appl. Opt. **42**(20), 4156–4165 (2003). [CrossRef] [PubMed]

**6. **K. Matsushima, “Computer-generated holograms for three-dimensional surface objects with shade and texture,” Appl. Opt. **44**(22), 4607–4614 (2005). [CrossRef] [PubMed]

**7. **H. Kim, J. Hahn, and B. Lee, “Mathematical modeling of triangle-mesh-modeled three-dimensional surface objects for digital holography,” Appl. Opt. **47**(19), D117–D127 (2008). [CrossRef] [PubMed]

**8. **Y. Ichihashi, H. Nakayama, T. Ito, N. Masuda, T. Shimobaba, A. Shiraki, and T. Sugie, “HORN-6 special-purpose clustered computing system for electroholography,” Opt. Express **17**(16), 13895–13903 (2009). [CrossRef] [PubMed]

**9. **T. Fujii and H. Yoshikawa, “Improvement of hidden-surface removal for computer-generated holograms from CG,” in Digital Holography and Three-Dimensional Imaging (Optical Society of America, 2007), paper DBW3.

**10. **J. Jia, J. Liu, G. Jin, and Y. Wang, “Fast and effective occlusion culling for 3D holographic displays by inverse orthographic projection with low angular sampling,” Appl. Opt. **53**(27), 6287–6293 (2014). [CrossRef] [PubMed]

**11. **K. Matsushima and S. Nakahara, “Extremely high-definition full-parallax computer-generated hologram created by the polygon-based method,” Appl. Opt. **48**(34), H54–H63 (2009). [CrossRef] [PubMed]

**12. **K. Matsushima, M. Nakamura, and S. Nakahara, “Silhouette method for hidden surface removal in computer holography and its acceleration using the switch-back technique,” Opt. Express **22**(20), 24450–24465 (2014). [CrossRef] [PubMed]

**13. **H. Zhang, N. Collings, J. Chen, B. Crossland, D. Chu, and J. Xie, “Full parallax three-dimensional display with occlusion effect using computer generated hologram,” Opt. Eng. **50**(7), 074003 (2011). [CrossRef]

**14. **T. Ichikawa, K. Yamaguchi, and Y. Sakamoto, “Realistic expression for full-parallax computer-generated holograms with the ray-tracing method,” Appl. Opt. **52**(1), A201–A209 (2013). [CrossRef] [PubMed]

**15. **T. Ichikawa, T. Yoneyama, and Y. Sakamoto, “CGH calculation with the ray tracing method for the Fourier transform optical system,” Opt. Express **21**(26), 32019–32031 (2013). [CrossRef] [PubMed]

**16. **J. T. McCrickerd and N. George, “Holographic stereogram from sequential component photographs,” Appl. Phys. Lett. **12**(1), 10–12 (1968). [CrossRef]

**17. **T. Yatagai, “Stereoscopic approach to 3-D display using computer-generated holograms,” Appl. Opt. **15**(11), 2722–2729 (1976). [CrossRef] [PubMed]

**18. **P. S. Hilaire, “Modulation transfer function and optimum sampling of holographic stereograms,” Appl. Opt. **33**(5), 768–774 (1994). [CrossRef] [PubMed]

**19. **B. Lee, S.-W. Min, and B. Javidi, “Theoretical analysis for three-dimensional integral imaging systems with double devices,” Appl. Opt. **41**(23), 4856–4865 (2002). [PubMed]

**20. **K. Hong, J. Yeom, C. Jang, J. Hong, and B. Lee, “Full-color lens-array holographic optical element for three-dimensional optical see-through augmented reality,” Opt. Lett. **39**(1), 127–130 (2014). [CrossRef] [PubMed]

**21. **J. S. Jang, F. Jin, and B. Javidi, “Three-dimensional integral imaging with large depth of focus by use of real and virtual image fields,” Opt. Lett. **28**(16), 1421–1423 (2003). [CrossRef] [PubMed]

**22. **M. Yamaguchi, H. Hoshino, T. Honda, and N. Ohyama, “Phase-added stereogram: calculation of hologram using computer graphics technique,” Proc. SPIE **1914**, 25–31 (1993). [CrossRef]

**23. **H. Kang, T. Yamaguchi, and H. Yoshikawa, “Accurate phase-added stereogram to improve the coherent stereogram,” Appl. Opt. **47**(19), D44–D54 (2008). [CrossRef] [PubMed]

**24. **Q. Y. J. Smithwick, J. Barabas, D. Smalley, and V. M. Bove, Jr., “Interactive holographic stereograms with accommodation cues,” Proc. SPIE **7619**, 761903 (2010).

**25. **K. Wakunami and M. Yamaguchi, “Calculation for computer generated hologram using ray-sampling plane,” Opt. Express **19**(10), 9086–9101 (2011). [CrossRef] [PubMed]

**26. **K. Wakunami, H. Yamashita, and M. Yamaguchi, “Occlusion culling for computer generated hologram based on ray-wavefront conversion,” Opt. Express **21**(19), 21811–21822 (2013). [CrossRef] [PubMed]

**27. **J. W. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, 1996).

**28. **O. Matoba, T. J. Naughton, Y. Frauel, N. Bertaux, and B. Javidi, “Real-time three-dimensional object reconstruction by use of a phase-encoded digital hologram,” Appl. Opt. **41**(29), 6187–6192 (2002). [CrossRef] [PubMed]

**29. **Z. Zhang, Z. You, and D. Chu, “Fundamentals of phase-only liquid crystal on silicon (LCOS) devices,” Light Sci. Appl. **3**, e135 (2014). [CrossRef]