We propose a fast method for full-parallax computer-generated holograms with occlusion processing, suitable for volumetric data such as point clouds. A novel light-wave propagation strategy relying on the sequential use of the wavefront recording plane method is proposed, which employs look-up tables to reduce the computational complexity of the field calculations. Also, a novel technique for occlusion culling with little additional computation cost is introduced. Additionally, the method attaches a Gaussian distribution to the individual points in order to improve visual quality. Performance tests show that for a full-parallax high-definition CGH a speed-up factor of more than 2,500 compared to the ray-tracing method can be achieved without hardware acceleration.
© 2015 Optical Society of America
Holography is a technique for recording and reconstructing the complex-valued wave field of light (both amplitude and phase) by means of diffraction and interference phenomena. The physical process of light propagation can also be modelled and simulated on a computer, allowing for the generation of computer-generated holograms (CGHs). CGHs have the advantage that they eliminate the need for a physical holographic recording set-up; therefore, they are not subject to noise or optical aberrations, and moreover they allow the representation of synthetic objects. However, generating holograms for 3D objects requires significant amounts of computation time, especially when occlusions are taken into consideration. Therefore, many solutions have been proposed over the years for accelerating this process. These algorithms can essentially be divided into two categories: polygonal methods, which represent the virtual objects in the scene using meshes consisting of triangles [1, 2], and point cloud methods, describing objects as a collection of self-luminous points. We will focus on the latter. Various algorithms can be found in the literature for accelerating the generation of CGHs, for example: holographic stereograms [4], wavefront recording plane (WRP) methods [5–9], look-up tables with pre-calculated distributions of the complex amplitude [6, 10, 11] and methods using band-limited double-step Fresnel diffraction [9, 12, 13].
Moreover, occlusions are a challenging issue to handle in digital holography. There are two main types of occlusions: self-occlusions, in which some areas of the object are occluded by other areas of the same object, and mutual occlusions, in which one object is occluded – partially or completely – by another object. Different occlusion handling methods in holography have been suggested, using inverse orthographic projection with low angular sampling [14], ray-casting, ray-wavefront conversion or accurate depth cues [4]. However, point-cloud occlusion has not been extensively researched, except for a method with interpolation kernels for points, which applies culling to points that lie within the radius of each interpolation kernel.
In this paper, we propose a method for computer-generated holograms based on multiple WRPs that introduces four innovations, namely:
- a novel intra-WRP light wave propagation strategy utilizing pre-computed propagation kernels stored in look-up tables that provide support for forward and backward light propagation,
- a novel inter-WRP light propagation strategy to utilize pre-computed propagation kernels stored in look-up tables,
- the application of a Gaussian interpolation kernel in a multiple WRP setting that blurs the discrete points and hence creates the effect of the propagation of a smooth surface between the WRPs and the holographic plane,
- a novel occlusion processing technique exploiting this multiple-WRP propagation structure, utilizing inverse Gaussian filters to mask occluded points.
This paper is structured as follows. In Sec. 2, we briefly explain the wavefront recording plane method. Subsequently, in Sec. 3 the proposed CGH method with occlusion masking is presented and we report on the experiments in Sec. 4. Finally, conclusions of our work are presented in Sec. 5.
2. The wavefront recording plane method
The wavefront recording plane (WRP) method was introduced in [5]; the proof of concept is shown in Fig. 1(a). This method generates the hologram by first positioning one virtual holographic plane – the WRP – in front of the object and calculating this WRP point-by-point in a computationally efficient way. Subsequently, the hologram is calculated by propagating this WRP to the hologram plane using, e.g., a Fresnel transform.
During the first step, the light field at each pixel of the WRP is calculated by summing the contribution of each point of the object according to the ray-tracing equation:
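In standard WRP notation (a reconstruction; the symbol names are our assumption), the field at WRP pixel $(x_w, y_w)$ is a superposition of spherical waves from the $N$ object points:

```latex
u_{\mathrm{wrp}}(x_w, y_w) = \sum_{j=1}^{N} \frac{a_j}{r_{wj}} \exp(i k r_{wj}),
\qquad
r_{wj} = \sqrt{(x_w - x_j)^2 + (y_w - y_j)^2 + (z_w - z_j)^2},
```

where $a_j$ is the amplitude of point $j$ and $k = 2\pi/\lambda$. Because the WRP lies close to the object, each point contributes only within a small support window of half-width $W_j$, commonly taken as $W_j = |z_w - z_j| \tan\theta$ with $\sin\theta = \lambda/(2p)$ for pixel pitch $p$.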
The second step of the WRP method involves the calculation of the complex amplitude uHP(ξ, η) of the WRP propagated to the hologram plane, i.e. to find the diffracted field of the CGH itself. This can be done by the convolution-based Fresnel diffraction:
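In reconstructed form (standard convolution-based Fresnel notation; the symbols are our assumption), with $d$ the WRP-to-hologram distance:

```latex
u_{\mathrm{HP}}(\xi, \eta) = u_{\mathrm{wrp}}(x, y) \otimes h_d(x, y),
\qquad
h_d(x, y) = \frac{\exp(i k d)}{i \lambda d} \exp\!\left( \frac{i k (x^2 + y^2)}{2 d} \right),
```

which is evaluated efficiently as $u_{\mathrm{HP}} = \mathcal{F}^{-1}\{\mathcal{F}\{u_{\mathrm{wrp}}\} \cdot \mathcal{F}\{h_d\}\}$ using FFTs.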
Using this two-step method, the calculated complex amplitude is an excellent approximation of the wave field obtained by directly computing the Fresnel propagation of the object points to the hologram plane.
3. The proposed multiple-WRP method with occlusion processing
As stated in the introduction, we introduce four innovations, which are subsequently treated in the following subsections. The concept of the proposed method is illustrated in Fig. 2(a).
3.1. Multiple WRPs with optimized intra- and inter-WRP propagation
The WRP method can be extended by considering multiple WRPs. The method proposed in [8], depicted in Fig. 1(b), already reduces the computational complexity by using double WRPs; the technique is applied to long-depth objects formed by images segmented at different depths. In that case, the WRPs are placed in front of the object – as in the original method. In the method of [9], illustrated in Fig. 1(c), multiple WRPs have been used for generating objects from RGB and depth data, taking advantage of discretized depth data and making use of forward and backward propagation.
In our proposed method, depicted in Fig. 2(a), we employ multiple WRPs that cross through the object (i.e. the point cloud). This reduces the distance of each point to the closest WRP plane and, hence, the size of the support windows 2Wj, leading to a decrease of the computational load. Additionally, by uniformly quantizing the depth of the object, forward and backward propagation can be applied, allowing for the generation of a reusable look-up table (LUT) for all depth levels [Fig. 2(b)]. The LUT consists of the field contributions of a point at every quantized distance. These field contributions have different sizes, derived from 2Wj plus a corrective factor for each depth level. This corrective factor is determined heuristically and has a value in the range of 2 to 8 pixels. In other approaches where the WRP (nearly) passes through the object points, one needs to apply depth margins, such that points that are very close to or pass through the WRP do not have zero window support. In our case, however, we use volume points (see section 3.2), which actually need a larger support window so that the smooth decay of their Gaussian distribution is not cut off, thus requiring a corrective factor. Hence, this corrective factor not only improves the rendering result, but also gracefully solves the support-area problem of a point close to the WRP. The propagation of the points’ fields within a WRP zone is termed intra-WRP propagation and results in a further reduced computational complexity.
The number of WRPs used in each CGH can be arbitrarily chosen, and therefore optimized. The optimal number of WRPs regarding performance is a function of the depth of the object and the density of the point cloud, as can be seen from the performance experiments in section 4.4. Moreover, the WRPs are placed symmetrically throughout the object, resulting in smaller and symmetric propagation kernels.
Furthermore, in the literature [5, 8, 9], it is suggested that when all the points belonging to one WRP have been calculated, the WRP is propagated directly to the hologram plane. Previous multiple-WRP techniques make use of the fast Fourier transform (FFT) and band-limited double-step Fresnel diffraction (BL-DSF), which was first introduced and applied to digital holography in [12]. In the proposed method, however, progressive propagation is used [see Fig. 2(a)], which has also been applied to a polygon-based CGH method.
In our approach, the angular spectrum method (ASM) [17] is used for the intra- and inter-WRP propagations. Angular spectrum propagation is a convolution-based diffraction method and is given by the following formula:
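The formula is given here in its standard form (a reconstruction): for a field $u(x, y; 0)$ propagated over a distance $z$ (negative $z$ for backward propagation),

```latex
u(x, y; z) = \mathcal{F}^{-1}\!\left\{ \mathcal{F}\{u(x, y; 0)\}(f_x, f_y) \,
\exp\!\left( 2 \pi i z \sqrt{\lambda^{-2} - f_x^2 - f_y^2} \right) \right\},
```

where $f_x, f_y$ are the spatial frequencies; components with $f_x^2 + f_y^2 > \lambda^{-2}$ correspond to evanescent waves and are discarded.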
More specifically, each WRP ℓ is propagated by angular spectrum propagation to the next WRP ℓ+1, i.e. inter-WRP propagation, and only the last WRP L is propagated to the hologram plane. This approach provides the advantage of using a precalculated kernel for angular spectrum propagation between the equidistant, successive WRPs, and also enables the occlusion processing method proposed in the next section, because of the sequential propagation.
3.2. Volume points
To generate high-quality object reconstructions, we propose to use volume points: each point is attributed a Gaussian distribution, so that the objects have a smoother surface. The 2D Gaussian distribution in a particular depth plane – we assume a planar distribution – is given by the formula:
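In standard form (a reconstruction; the centre coordinates are our assumption), for a point centred at $(x_c, y_c)$:

```latex
g(x, y) = \exp\!\left( - \frac{(x - x_c)^2 + (y - y_c)^2}{2 \sigma^2} \right),
```

where $\sigma$ controls the perceived size of the volume point.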
In contrast to the original WRP method, where ray tracing is used to calculate the contribution of each point, we use angular spectrum propagation, which also enables the use of volume points. In practice, a point is modelled as a discrete 2D map by sampling the corresponding 2D Gaussian profile expressed by Eq. (5), positioned parallel to the WRP planes. Thereafter, the 2D map modelling the point is propagated to the WRP using the angular spectrum method of Eq. (4) for all possible depth planes, and the result for each depth plane is stored as a separate entry in the LUT. In this way, the LUT consists of all the possible optical fields that need to be calculated in a WRP zone, as shown in Fig. 2(b).
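As an illustration, the LUT construction can be sketched as follows. This is a minimal sketch under assumed interfaces (the function and parameter names are ours, not the authors’); it propagates a sampled Gaussian point to every quantized depth level of a WRP zone with the angular spectrum method:

```python
import numpy as np

def angular_spectrum_propagate(field, z, wavelength, pitch):
    """Propagate a complex field over a distance z (negative z gives
    backward propagation) using the angular spectrum method of Eq. (4)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Clamp the evanescent region (fx^2 + fy^2 > 1/lambda^2) to a zero
    # argument -- a simplification sufficient for this sketch.
    arg = np.maximum(0.0, 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2)
    H = np.exp(2j * np.pi * z * np.sqrt(arg))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def build_point_lut(depth_levels, sigma, wavelength, pitch, window):
    """Pre-compute the optical field of one Gaussian 'volume point' at
    every quantized depth level inside a WRP zone (one LUT entry each)."""
    x = (np.arange(window) - window // 2) * pitch
    X, Y = np.meshgrid(x, x)
    gaussian = np.exp(-(X ** 2 + Y ** 2) / (2 * sigma ** 2)).astype(complex)
    return {z: angular_spectrum_propagate(gaussian, z, wavelength, pitch)
            for z in depth_levels}
```

Since the transfer function has unit modulus everywhere, the propagation is unitary and each LUT entry conserves the point’s energy.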
3.3. Occlusion processing
Occluded object points have been an unresolved problem in digital holography, and no general solutions for point clouds exist. The proposed method includes occlusion processing by calculating the contributions of the points ordered from back to front. This enables masking the field contributions of previously calculated points that lie behind the currently processed point. Before adding the contribution of a point to a specific area of the WRP or the hologram plane, the optical field recorded in this area is suppressed by multiplying the field locally with the complement of a Gaussian distribution. As a result, the light emanating from points situated directly behind this point is occluded and therefore does not contribute to the optical field recorded at the WRP and, by extension, at the hologram plane.
The size of this occlusion mask is calculated based on the lateral density of the point cloud, i.e. the mean distance μp between two points when projected onto the hologram plane. The aim is that two points at the same depth level do not overlap and occlude each other. The density of the point cloud is considered uniform, and the point density per pixel ρp is obtained by relating the point density on the projected surface to the pixel surface:
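Written out (a reconstruction; the symbols are our assumption), with $N_p$ points projected onto an area $S$ of the hologram plane and pixel pitch $p$:

```latex
\rho_p = \frac{N_p}{S} \, p^2,
\qquad
\mu_p = \frac{1}{\sqrt{\rho_p}} \;\; \text{[pixels]},
```

so the occlusion mask half-width is chosen below $\mu_p / 2$, ensuring that neighbouring points at the same depth level do not suppress each other.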
3.4. The proposed algorithm
In the subsequent paragraphs, we describe the algorithm. The general algorithm is illustrated in Fig. 2(a). Moreover, an example of the sequential calculation of the contributions of three subsequently processed points is given in Fig. 3. In the following explanation, steps 1 to 3 are preparation steps, step 4 handles the intra- and inter-WRP propagation and is executed for every WRP, and step 5 calculates the diffraction field at the hologram plane by propagating WRPL.
- 1. The optimal number of WRPs L and the size of the occlusion mask Wocc (optional) are calculated based on the depth of the object and the lateral density of the point light sources of the object, respectively.
- 2. The depth of the point cloud is subdivided uniformly in L WRP zones and each WRP zone is further quantized in depth levels, half of them in front of each WRP and half of them behind it. The points are grouped per WRP zone.
- 3. The optical field of a point for each depth level in a WRP zone is precalculated by backward or forward propagation, creating a table with the fields for each depth level. Each level has a different support window 2Wj, as shown in Eq. (2). This LUT is the same for all WRP zones and its size depends on the depth resolution and the size of the WRP zone.
- 4. The WRP zones are processed sequentially starting at WRP1 and ending at WRPL, where for every WRP zone, the processing order of the points is by decreasing z-coordinates to facilitate the occlusion processing.
Per point we:
- determine the depth level;
- apply the occlusion mask in the area around the point on the current WRP to suppress the diffraction pattern of the occluded points (except for the first point);
- add the optical field contribution from the LUT for the given depth level to the current WRP.
When all the points within one WRP zone are calculated, the current WRP diffraction pattern is propagated by angular spectrum propagation to the next WRP. As explained in section 3.1, the kernel of the inter-WRP propagation is precalculated, since it is the same for all the inter-WRP propagations. This step is then repeated for the next WRP until WRPL is reached.
- 5. When the last WRP zone (L) is calculated, the diffraction pattern is propagated to the hologram plane (HP) by angular spectrum propagation.
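The steps above can be sketched as a sequential loop. This is a minimal illustration under assumed interfaces (all names are hypothetical, and boundary clipping is omitted); nearer points suppress the field of farther ones before adding their own LUT entry:

```python
import numpy as np

def generate_hologram(zones, lut, occ_mask, plane_shape, propagate):
    """Sequential multiple-WRP sketch.

    zones       list of per-zone point lists, ordered WRP_1 .. WRP_L; each
                point is (row, col, level) with `level` indexing the LUT.
    lut         dict: depth level -> pre-computed complex point field patch.
    occ_mask    real array in [0, 1] matching the patch size, or None to
                disable occlusion processing.
    propagate   callable advancing a plane to the next WRP (and finally to
                the hologram plane), e.g. ASM with a pre-computed kernel.
    """
    wrp = np.zeros(plane_shape, dtype=complex)
    for zone in zones:
        # Back to front: a larger depth level means farther from the HP.
        for row, col, level in sorted(zone, key=lambda p: -p[2]):
            patch = lut[level]
            h = patch.shape[0] // 2
            sl = (slice(row - h, row - h + patch.shape[0]),
                  slice(col - h, col - h + patch.shape[1]))
            if occ_mask is not None:
                # Suppress light from points behind the current one
                # (a no-op for the very first point on an empty plane).
                wrp[sl] *= (1.0 - occ_mask)
            wrp[sl] += patch  # add the LUT contribution for this level
        wrp = propagate(wrp)  # inter-WRP step; the last call reaches the HP
    return wrp
```

With a fully opaque mask (all ones), a nearer point completely replaces the field of a point directly behind it; the inverse-Gaussian mask of section 3.3 does this smoothly.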
4. Experimental results
To demonstrate the validity of the proposed CGH approach, we have generated a set of digital holograms according to the specifications of currently commercially available spatial light modulators (SLMs), i.e. 1,920 × 1,080 resolution at 8 μm pixel pitch (Table 1). We have used different 3D object point data sets, a.k.a. point clouds (see Table 2). The simulations ran on a computer with a 2.10 GHz Intel Core i7-4600U CPU and Windows 8.1 (x64) as operating system.
4.1. CGH without occlusion processing
We validate our method by comparing the generated CGH to that of the original ray-tracing method of Eq. (1). In both cases, we do not apply any occlusion technique, to demonstrate the equivalence of the methods for non-occluded surface renderings. For this experiment, we used only the front part of the Venus point data set (34,195 points). Note that when applying the proposed method we used a point size of σ = 0, corresponding to the perceived size in the ray-tracing method, for a fairer comparison, since the latter assumes dimensionless discrete points.
The results are depicted in Fig. 4. The visual comparison even suggests the superiority of the proposed method over the ray-tracing method: ray tracing appears to give rise to a noisier reconstruction and a minor loss of accuracy in the depth reconstruction. These observations can probably be attributed to the window support incorporated in the proposed method, by which the wave propagation is confined to the diffraction angle.
4.2. CGH with occlusion processing
The importance of occlusion processing in CGH can be understood intuitively. Without an occlusion culling technique, the object’s surface appears transparent and it is impossible to distinguish between the front and back parts of the surface, since all of the points are shown. With occlusion processing, only the visible part of the object is shown, accomplishing a more realistic effect. The methodology is suitable for both sparse and dense models and can provide texture effects for dense point clouds. Figure 5(a) shows another modeled object, while the reconstructions of the CGHs generated after rotating the object are illustrated in Figs. 5(b) and 5(c). In Fig. 5(b), the orientation is such that the front holes match the back holes, since the model is symmetrical and angular spectrum propagation is an orthographic projection, while in Fig. 5(c) the back part of the ball is visible through the front holes, but is out of focus, as expected.
Additionally, Fig. 6 shows the numerically reconstructed images from the CGHs generated by the proposed method. For all CGHs, the number of depth levels per WRP is 63 (LUT levels), while the other parameters are calculated automatically from the properties of the data set, e.g. the density of the point cloud and the depth of the object. For the occlusion processing, given the specifications used, a value of 12 to 20 μm was selected for σ.
We used angular spectrum propagation as the reconstruction method, and the reconstruction distance in all cases is the distance zwL of the last WRP to the hologram plane. The dimensions and the viewing angle of all images are 15.36 × 8.64 mm and 4.5°, respectively.
4.3. Full parallax
To demonstrate the occlusion effect and the full-parallax properties of the CGHs, we generated a CGH with a resolution of 8,192 × 8,192 pixels and a pixel pitch of 2 μm, achieving a wider viewing angle of 18.2°. The object is positioned at a distance of 15 cm from the hologram plane. To show the reconstruction from different viewing angles, sub-holograms with a window aperture of 2,720 pixels are extracted from the hologram. The results are shown in Fig. 7. Additionally, we visualize the parallax by providing a video demonstrating numerical reconstructions from different viewing angles (Visualization 2).
4.4. Performance tests of the proposed multiple-WRP method
First, we compare the proposed method (without occlusion processing) to other CGH generation methods (Table 3). The proposed method is 2,500 times faster than the ray-tracing method and 5 times faster than the original WRP method. Moreover, we observe that placing the WRP in the middle of the object slightly reduces the computational complexity. This is due to the smaller distances to the WRP, which lead to smaller support windows 2Wj and therefore fewer computations and memory readings. The speed improvement of the OpenCL implementation suggests that with a GPU implementation our proposed method will be even faster and potentially suitable for real-time CGH display systems.
Figure 8 shows the time performance of our method for 1 up to 100 WRPs. The computation times are averaged over 5 runs per number of WRPs for each model. As expected, the time cost depends on the number of 3D object points and on the total number of WRPs used, due to the propagation cost from each WRP to the next. Increasing the number of WRPs decreases the computation time by a factor of up to 19 for the investigated data sets, compared to a single WRP. The optimal number of WRPs depends on the number of points the object is composed of; beyond this number, the computational gain slowly diminishes.
Comparing the same model with smaller depth and fewer points to the full model (e.g. Venus#1 vs. Venus#2, or Bimba#1 vs. Bimba#2), without occlusion processing, shows that the number of points matters only for a small number of WRPs and becomes insignificant for larger numbers of WRPs. The average computation time for the occlusion processing was measured to be 0.68 seconds for Venus#2 (occlusion mask size of 9×9 pixels) and 4.14 seconds for Bimba#2 (occlusion mask size of 7×7 pixels); as expected, it is constant for any number of WRPs.
The time performance behavior can be understood in terms of algorithmic complexity, which can be described as:
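A reconstruction consistent with the steps described above (our notation): for $N$ object points with mean support half-width $\bar{W}$ (in pixels), $L$ WRPs and $M$ pixels per plane,

```latex
C = \mathcal{O}\!\left( N \, (2\bar{W})^2 \right)
  + \mathcal{O}\!\left( L \, M \log M \right),
```

where the first term accounts for the per-point LUT accumulation (shrinking as $L$ grows, since $\bar{W}$ decreases) and the second for the $L$ FFT-based inter-WRP propagations (growing linearly in $L$), which explains the existence of an optimal number of WRPs observed in Fig. 8.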
5. Conclusion
We have proposed a novel full-parallax CGH technique that improves the WRP method by using multiple WRPs with intra-WRP light-field propagation based on angular spectrum propagation, thereby reducing the computational complexity by a factor of 5. By exploiting the proposed intra- and inter-WRP propagation technique – utilizing look-up tables and Gaussian point density profiles – an efficient occlusion culling solution based on inverse Gaussian filtering of the WRP is proposed that represents minimal computational overhead compared to classical WRP solutions. Compared to the classical ray-tracing solution, a speed-up factor of 2,500 could be achieved. In future work, we plan to implement the proposed method on a GPU to further reduce the computational burden through parallel computing.
The models of Bimba, Venus (Head sculpture), Perforated Ball and Cat are courtesy of INRIA via the AIM@SHAPE-VISIONAIR repository, Direct Dimensions Inc. (www.dirdim.com), Manuel Piñeiro from GrabCad.com, and the Tosca hi-resolution 3D database v1.0, respectively. The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007–2013) / ERC Grant Agreement n. 617779 (INTERFERE).
References and links
1. K. Matsushima, “Exact hidden-surface removal in digitally synthetic full-parallax holograms,” Proc. SPIE 5742, 25 (2005). [CrossRef]
4. H. Zhang, Y. Zhao, L. Cao, and G. Jin, “Fully computed holographic stereogram based algorithm for computer-generated holograms with accurate depth cues,” Opt. Express 23, 3901–3913 (2015). [CrossRef] [PubMed]
6. T. Shimobaba, H. Nakayama, N. Masuda, and T. Ito, “Rapid calculation algorithm of Fresnel computer-generated-hologram using look-up table and wavefront-recording plane methods for three-dimensional display,” Opt. Express 18, 19504–19509 (2010). [CrossRef] [PubMed]
7. J. Weng, T. Shimobaba, N. Okada, H. Nakayama, M. Oikawa, N. Masuda, and T. Ito, “Generation of real-time large computer generated hologram using wavefront recording method,” Opt. Express 20, 4018–4023 (2012). [CrossRef] [PubMed]
8. A.-H. Phan, M. A. Alam, S.-H. Jeon, J.-H. Lee, and N. Kim, “Fast hologram generation of long-depth object using multiple wavefront recording planes,” Proc. SPIE 9006, 900612 (2014). [CrossRef]
9. N. Okada, T. Shimobaba, Y. Ichihashi, R. Oi, K. Yamamoto, T. Kakue, and T. Ito, “Fast calculation of a computer-generated hologram for RGB and depth images using a wavefront recording plane method,” Photon. Lett. Poland 6, 90–92 (2014).
10. M. E. Lucente, “Interactive computation of holograms using a look-up table,” J. Elec. Imag. 2, 28–34 (1993). [CrossRef]
11. Y. Pan, X. Xu, S. Solanki, X. Liang, R. B. A. Tanjung, C. Tan, and T.-C. Chong, “Fast CGH computation using S-LUT on GPU,” Opt. Express 17, 18543–18555 (2009). [CrossRef]
12. N. Okada, T. Shimobaba, Y. Ichihashi, R. Oi, K. Yamamoto, M. Oikawa, T. Kakue, N. Masuda, and T. Ito, “Band-limited double-step Fresnel diffraction and its application to computer-generated holograms,” Opt. Express 21, 9192–9197 (2013). [CrossRef] [PubMed]
13. D. Hiyama, T. Shimobaba, T. Kakue, and T. Ito, “Acceleration of color computer-generated hologram from RGB-D images using color space conversion,” Opt. Commun. 340, 121–125 (2015). [CrossRef]
14. J. Jia, J. Liu, G. Jin, and Y. Wang, “Fast and effective occlusion culling for 3D holographic displays by inverse orthographic projection with low angular sampling,” Appl. Opt. 53, 6287–6293 (2014). [CrossRef] [PubMed]
15. H. Zhang, N. Collings, J. Chen, B. Crossland, D. Chu, and J. Xie, “Full parallax three-dimensional display with occlusion effect using computer generated hologram,” Opt. Eng. 50(7), 074003 (2011). [CrossRef]
17. J. W. Goodman, Introduction to Fourier Optics (Roberts and Company Publishers, 2004).
18. A. Bronstein, M. Bronstein, and R. Kimmel, Numerical Geometry of Non-Rigid Shapes (Springer Publishing Company, Incorporated, 2008), 1st ed.