
Colour computer-generated holography for point clouds utilizing the Phong illumination model

Open Access

Abstract

A technique integrating the bidirectional reflectance distribution function (BRDF) is proposed to generate realistic, high-quality colour computer-generated holograms (CGHs). We build on prior work, namely a fast computer-generated holography method for point clouds that handles occlusions. We extend the method by integrating the Phong illumination model so that the properties of the objects’ surfaces are taken into account, achieving natural light phenomena such as reflections and shadows. Our experiments show that rendering holograms with the proposed algorithm produces realistic-looking objects without any noteworthy increase in computational cost.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Holograms intrinsically offer the most complete representation of the plenoptic function. An important advantage is that holographic displays do not suffer from the accommodation-convergence conflict, since all depth cues are provided. Hence, no glasses are needed and visual fatigue or nausea effects are suppressed [1]. In recent years, major progress has been made in holographic display technology [2,3], and the need for high-quality photo-realistic holograms has therefore rapidly arisen. Examples include a display with white-light illumination and a single phase-only SLM [4] and attempts at holographic 3D video displays [5].

Holograms can be acquired optically or digitally. Unfortunately, optical holographic capturing setups are subject to several limitations: the resolution of the camera sensor, the physical size of the pictured object, speckle noise, aberrations, etc. These constraints make digital optical holography impractical for macroscopic objects. With computer-generated holography (CGH), the aforementioned drawbacks are overcome, and moreover the generation of holograms from synthetic scene content is enabled. However, CGH techniques exhibit high computational complexity [6]. Several approaches have been proposed to reduce the complexity and accelerate the process, including stereograms [7], polygonal methods [8,9], the wavefront recording plane (WRP) method [10–14] and look-up tables (LUTs) [15–17], all of which can employ GPU parallelization [18,19]. Here, we will focus on point-based CGH algorithms [12,18], where the 3D scene is represented by a point cloud and the hologram is generated by superposing the light emanating from the different points.

When rendering 3D scenes, the models should not only have the proper geometrical shape, but also a natural visual appearance. To correctly depict the illumination and depth cues of the 3D scene, the natural properties of light, such as colour, shading and reflection – based on the direction of the illumination, the properties of the surface and the viewer’s position – have to be considered. Since the 1970s, shading, which is the process of computing the brightness of an object in the 3D scene, has been applied to improve realism in computer graphics. To achieve this, the material and surface properties of the object and its orientation, i.e., its normal vectors, are considered, since they determine how the incident light is reflected or transmitted towards the viewer when light interacts with the surface. The bidirectional reflectance distribution function (BRDF) [20] defines how light is reflected by an opaque surface, and it is determined by the direction of the incoming light L⃗ and the direction of the viewer V⃗. A BRDF can be written as:

$$\mathrm{BRDF}_{\lambda}(\theta_i, \phi_i, \theta_o, \phi_o, u, v), \tag{1}$$
where λ denotes that the BRDF depends on the wavelength, θi and ϕi represent the incoming light direction in spherical coordinates, θo and ϕo represent the outgoing reflected direction in spherical coordinates, and u and v represent the surface position parameterized in orthographic coordinates.

Several physical, theoretical and empirical BRDF models have been proposed [21]. However, empirical models are favoured for real-time rendering due to their simplicity. Such models include the Phong shading model, the first specular model in computer graphics, introduced by Phong in 1975 [22]. Figure 1 shows the vector directions for the Phong model and how the diffuse and specular reflections are modelled when light illuminates a surface. Typically, the Phong model is written as a function consisting of three terms, namely the ambient, diffuse and specular reflection terms. The ambient component determines the minimum brightness: even when no light hits a surface directly, this term lights up the surface so that it does not appear as pure black. The ambient brightness is constant. The diffuse component arises from the assumption that light illuminating the surface from any direction is reflected uniformly in all directions. The specular component makes a surface shiny (when applicable) by generating specular highlights. The illumination of each surface point per image pixel is expressed mathematically as:

$$I = I_a k_a + \sum_{l=1}^{L} \left[ I_d k_d \left( \hat{L}_l \cdot \hat{N} \right) + I_s k_s \left( \hat{R}_l \cdot \hat{V} \right)^{m} \right], \tag{2}$$
where $I_a$, $I_d$ and $I_s$ are the intensities of the ambient, diffuse and specular components of the light sources, respectively, $\hat{N}$ is the normal vector of the surface, $\hat{L}_l$ is the illumination vector of light source $l$, $\hat{V}$ is the direction of the viewer, $\hat{R}_l = 2(\hat{L}_l \cdot \hat{N})\hat{N} - \hat{L}_l$ is the reflection vector of light source $l$, and $k_a$, $k_d$ and $k_s$ are the material reflectance coefficients for the respective effects. $m \in [0, \infty)$ is the glossiness, which characterizes the shape of the specular highlight, from 0 for dull up to progressively glossier surfaces. The summation limit $L$ denotes the number of light sources, while the hat on the vectors indicates that they are unit vectors.
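To make the model concrete, the following minimal NumPy sketch evaluates Eq. (2) for a single surface point (the paper's implementation is in MATLAB; the function name, the clamping of negative dot products to zero, and the example values are illustrative choices, not part of the formulation above):

```python
import numpy as np

def phong_intensity(N, V, lights, k_a, k_d, k_s, m, I_a=1.0):
    """Scalar Phong intensity at a surface point, following Eq. (2).
    N: unit surface normal; V: unit vector towards the viewer;
    lights: iterable of (L, I_d, I_s), L being the unit vector towards the source."""
    I = k_a * I_a                                  # ambient floor, independent of geometry
    for L, I_d, I_s in lights:
        cos_i = max(float(np.dot(L, N)), 0.0)      # back-facing light contributes nothing
        R = 2.0 * np.dot(L, N) * N - L             # mirror reflection of L about N
        cos_r = max(float(np.dot(R, V)), 0.0)
        I += I_d * k_d * cos_i + I_s * k_s * cos_r ** m
    return I

# e.g. a head-on viewer, light from the upper left, a moderately glossy surface
N = np.array([0.0, 0.0, 1.0]); V = np.array([0.0, 0.0, 1.0])
L = np.array([-1.0, 0.0, 1.0]); L = L / np.linalg.norm(L)
print(phong_intensity(N, V, [(L, 1.0, 1.0)], k_a=0.1, k_d=0.6, k_s=0.3, m=10))
```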

Fig. 1 The Phong reflection model for light reflection on an object surface.

These shading models from computer graphics can be integrated with CGH methods to produce realistic light rendering in 3D scenes, and several algorithms have already been proposed for simulating different lighting effects. Important work by Ichikawa et al. [23,24] deploys the exhaustive ray-tracing method to compute realistic scenes on a GPU, by casting rays from elementary holograms to the objects and determining their travelling paths. CGH approaches proposed in the literature also vary in the input modality they use to represent the scene. For example, in polygon-based algorithms, specular reflections can be computed by shifting the frequency spectrum of each polygon to fit its reflection direction [25,26], or by convolving the angular spectrum of the mesh with a kernel exhibiting the desired reflectance distribution [27]. For multi-view (+depth) input modalities, the Phong model has been incorporated in a CGH solution to render specular reflections by band-limiting the frequency domain of the light emitted from each point [28]. For point cloud-based CGH methods, suggested solutions include amplitude modulation of the 2D zone plates for amplitude-only holograms [29] and computation of the wavefield by modulating the amplitude per point for phase-only holograms [30]. Both methods utilize the Phong model for modulating the zone plates; however, the computation of the individual contribution of each point can be computationally very expensive.

In this work, we enhance our previously proposed point-based CGH method [12] to acquire high-quality colour holograms from point clouds with realistic illumination rendering. In [12], multiple parallel wavefront recording planes (WRPs) and pre-computed look-up tables are used to achieve fast CGH computation. This technique extends the principle of the single wavefront recording plane method introduced by Shimobaba et al. [10], which places a local hologram close to the object to decrease the distance of the points to the hologram plane; hence, only a subset of the WRP pixels needs to be updated. Our technique employs multiple equidistant parallel WRPs that section the object – i.e., the point cloud. The WRPs are placed across the object to minimize the distance to the points, thereby further reducing the computation time. Additionally, in [31] we proposed a technique to generate diffuse holograms.

Here, the proposed extension exploits the Phong illumination model in the pre-computation of diffuse and specular look-up tables, so that each point can contribute to the WRP according to the properties of its associated surface. This is enabled by considering the normals of the points of the point cloud, either taken directly from the point cloud dataset or computed. However, the Phong model is a local illumination model, so it does not account for surface visibility or bouncing reflections from other objects in the scene. Therefore, additional techniques have to be applied to block the surfaces that are not visible and to generate shadows on objects that are visible to the viewer but not directly reached by light: occlusion masking is used to conceal invisible (occluded) surfaces, while shadowing refers to surfaces that are shielded from light originating from a particular source. Techniques to compensate for these phenomena are integrated in the proposed CGH method. Additionally, it is enhanced with a z-buffering technique to assess the visibility of the points from the perspective of the light source. Colour holograms are achieved by considering three colour channels (RGB) for the wavefields, corresponding to the three wavelengths that typically represent the visible part of the spectrum.

This paper is structured as follows. In Sec. 2, we describe the proposed CGH algorithm with realistic light rendering and ambient occlusion, and the experimental results are reported in Sec. 3. Finally, Sec. 4 provides the conclusions of our work.

2. Full-parallax colour CGH method with Phong shading

In our prior work [12], a technique for computer-generated holograms exploiting multiple wavefront recording planes (WRPs) was proposed. Here, we extend that method to enable colour holograms of realistic-looking 3D scenes. To generate colour holograms, the final hologram – and the WRPs – consist of three RGB channels, which are computed as three individual monochromatic holograms, i.e., with red, green and blue wavelengths, respectively. Hence, to maintain the colour information of the 3D model, the points of the point cloud are attributed texture information by specifying the intensity values for the red, green and blue channels, respectively.

The proposed full-parallax colour CGH method with Phong shading is summarized in Algorithm 1. It consists of three main steps, which are described in detail in the following sections, while the concept of the method is illustrated in Fig. 2.

Algorithm 1. Proposed CGH method

Fig. 2 The proposed colour CGH technique with multiple wavefront recording planes (WRPs) for point cloud datasets.

2.1. Pre-processing

To initialize the CGH generation, the number of WRPs is computed based on the depth of the object and the number of points of the point cloud. Then, the depth of the object is uniformly quantized to acquire the boundaries of the WRP zones and to form the depth levels of the LUTs. Additionally, since the normals of the points are required by the algorithm, either point cloud datasets with normals associated with the points should be deployed, or the normals must be computed at this step from the geometry of the point cloud.
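As a minimal illustration of the zone computation, the sketch below uniformly quantizes the depth extent of the cloud; the names `points` (an (N, 3) array) and `n_wrp` are hypothetical, and choosing the zone count from the object depth and point count is left outside the sketch:

```python
import numpy as np

def assign_wrp_zones(points, n_wrp):
    """Uniformly quantize the depth extent of an (N, 3) point cloud into n_wrp
    WRP zones; returns the zone index of every point and the zone boundaries."""
    z = points[:, 2]
    edges = np.linspace(z.min(), z.max(), n_wrp + 1)      # uniform zone boundaries
    zone = np.clip(np.digitize(z, edges) - 1, 0, n_wrp - 1)
    return zone, edges
```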

2.1.1. Shadow processing

Targeting computational efficiency, mutual occlusion and backface culling must be processed as early as possible in the algorithm. Hence, the points that are illuminated by each light source have to be determined. For a surface to be illuminated by a light source, the angle of incidence has to be between 0 and 90 degrees. Using the normal associated with each point, one can filter out the points that are not visible to the light source; no specular or diffuse reflection should be considered for those points. Additionally, shadows can be simulated based on the same principle.

To process shadows efficiently, we propose a technique inspired by z-buffering, introduced by Catmull in 1974 for computer graphics [32]. We consider a plane perpendicular to the normal of the light source, compute the orthographic projection of the points onto the plane, and store, per projected position, the index of the point nearest to the light source. The points whose index is in the resulting list are in the line of sight of the light source, hence light reaches them and causes specular or diffuse reflection. The remaining points are only attributed the ambient term; as a result, a shadowing effect is perceived, since those points appear darker. Evidently, this process is applied per light source, hence a different set of points is considered occluded in each case.
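A minimal NumPy sketch of this visibility test follows; the grid pitch `cell`, the directional-light model and the dictionary-based buffer are assumptions of this illustration, not details prescribed by the text:

```python
import numpy as np

def lit_point_indices(points, light_dir, cell):
    """Sketch of the shadow test: project the points orthographically onto a
    plane perpendicular to the (directional) light, bin the projections on a
    grid of pitch `cell`, and keep per cell only the point nearest to the
    source; every other point in that cell is shadowed."""
    d = np.asarray(light_dir, float)
    d /= np.linalg.norm(d)                      # direction the light travels
    a = np.array([1.0, 0.0, 0.0]) if abs(d[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(d, a); u /= np.linalg.norm(u)  # orthonormal basis of the
    v = np.cross(d, u)                          # projection plane
    cu = np.round(points @ u / cell).astype(int)
    cv = np.round(points @ v / cell).astype(int)
    depth = points @ d                          # smaller = closer to the source
    nearest = {}
    for i, key in enumerate(zip(cu, cv)):
        if key not in nearest or depth[i] < depth[nearest[key]]:
            nearest[key] = i
    return sorted(nearest.values())
```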

2.1.2. Generation of LUTs

Since the distance between successive WRPs is constant, look-up tables (LUTs) can be pre-computed and reused by all WRPs. For this reason, the depth of a WRP zone is further quantized in D quantization levels, to form the depth levels for the LUT. That way the LUT consists of the wavefield contributions of a point at every quantized distance for D depths symmetrically around the WRP.

The generation of the LUTs starts by considering volume points: each point is attributed a Gaussian distribution so that the object surface appears smoother and continuous. The 2D Gaussian distribution at any particular depth is given by the formula:

$$g(x,y) = \exp\left( -\frac{x^2 + y^2}{2\sigma^2} \right), \tag{3}$$
where σ² is the variance of the distribution (σ is the standard deviation). The smoothness of the surface of the points can be adapted via the value of σ. When σ → 0, a Dirac delta function is obtained, and the points appear as regular dimensionless points.

In the LUTs, the size of the wavefield varies per depth level and it is determined by the maximum angle of diffraction θ for the on-axis reconstruction of the 3D object light from the CGH. The angle θ is given by the grating equation [33]:

$$\sin\theta = \lambda f = \frac{\lambda}{2p}, \tag{4}$$
where f is the spatial frequency, λ is the wavelength and p is the pixel pitch. The span Wj of each field j is computed by:
$$W_j = |z_j| \tan\left( \sin^{-1}\frac{\lambda}{2p} \right) = |z_j| \frac{\lambda}{\sqrt{4p^2 - \lambda^2}}, \tag{5}$$
where zj is the distance between a level j and the WRP. Therefore, the size of each depth level, depending on its distance to the WRP, is equal to 2Wj/p + G pixels, where G is a corrective factor determined by the σ of the point profile; its value ranges from 2 to 8 pixels.
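Assuming the quantities defined above, a direct transcription of this sizing rule might look as follows (G = 4 is just a mid-range value of the stated 2–8 pixel margin):

```python
import numpy as np

def lut_entry_size(z_j, wavelength, p, G=4):
    """Pixel size of the LUT entry at signed depth offset z_j from the WRP,
    following Eq. (5); G is the corrective margin for the point profile."""
    W = abs(z_j) * wavelength / np.sqrt(4.0 * p**2 - wavelength**2)   # Eq. (5)
    return int(np.ceil(2.0 * W / p)) + G

# e.g. a depth level 50 um away from the WRP, at 532 nm and 8 um pitch
print(lut_entry_size(50e-6, 532e-9, 8e-6))
```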

The point profile is modulated accordingly to support the diffuse and specular reflections required for the Phong model for all depth levels of the LUT, as described in the next sections.

Adaptation for ambient and diffuse reflection

Diffuse reflections are formed when light is reflected by a surface in a diffuse, wide-angle manner, resulting in a near-uniform spectral response. To model this, the Gaussian profile of the points (see Eq. (3)) is modulated by a random phase factor exp(ir(x, y)), where r(x, y) represents a uniform random phase distribution between 0 and 2π, discretized by sampling. This concept was introduced in [31], where several random phase patterns are computed and then propagated to the depth levels d, generating n entries for each of the D depth levels. This is motivated by the fact that reusing the same propagation kernels from the LUTs can cause interference phenomena at the hologram plane – due to coherency between the summed identical kernels – resulting in speckle noise artifacts. In this work, we also use different random phase patterns per quantization level to further reduce speckle noise artifacts. We consider an even number of quantization levels D for the LUTs. Figure 3 shows the structure and the dimensionality of the LUTs. It can be seen that LUT^D consists of an entry for each wavelength, each of which has n entries per depth level d.

Fig. 3 Structure and dimensionality of the proposed LUTs for diffuse and specular reflections. For diffuse light, n entries are computed for each depth level d per wavelength, while for the specular reflection 1 entry per depth level is created for each angle of reflection per wavelength.

The 2D profile, with a different random phase pattern for each depth level, is propagated from its corresponding depth quantization level of the LUT to the WRP by the angular spectrum propagation (ASP) method [34]; for the mathematical formulation of ASP, see Sec. 2.3. This propagation method allows for reusing the same $\mathrm{LUT}^{D,\lambda}_{q}(x,y)$ for all WRPs, requiring them to be computed only once.
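The following sketch summarizes the construction of the diffuse LUT for one wavelength under the definitions above; it reuses `lut_entry_size` from the previous sketch and the `asp_propagate` helper sketched in Sec. 2.3, and the seeding and looping style are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

def diffuse_lut(D, n, wavelength, p, dz, sigma):
    """Illustrative generation of the diffuse LUT for one wavelength: for each
    of the D depth levels (symmetric around the WRP, spaced dz), n Gaussian
    point profiles with independent uniform random phases are propagated to
    the WRP plane."""
    lut = []
    for q in range(D):
        z_q = (q - (D - 1) / 2.0) * dz            # signed offset from the WRP
        size = lut_entry_size(z_q, wavelength, p)
        ax = (np.arange(size) - size // 2) * p
        X, Y = np.meshgrid(ax, ax)
        g = np.exp(-(X**2 + Y**2) / (2.0 * sigma**2))              # Eq. (3)
        lut.append([asp_propagate(g * np.exp(2j * np.pi * rng.random(g.shape)),
                                  z_q, wavelength, p)              # random phase, then ASP
                    for _ in range(n)])
    return lut
```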

Adaptation for specular reflection

Specular reflections are determined by the polar angle θ and the azimuthal angle ϕ between the normal of the surface (point) and the light source in spherical coordinates as shown in Fig. 1. Based on the field of view (FoV), which is equal to twice the diffraction angle, we can determine a cone of interest, i.e. the solid angle in which specular reflections can reach the hologram plane without causing aliasing. Then, the solid angle is uniformly sampled to create the LUT entries, thereby quantizing θ and ϕ in an A × A grid, acquiring A2 pairs of angles. The number of quantized angles A can be arbitrarily chosen, based on the targeted angular resolution.

In practice, for each combination of θ and ϕ, the Gaussian profile with random phase described above is modulated in the frequency domain to account for the glossiness m of the surface and all possible angles of reflection. Since every frequency component corresponds to a plane wave propagating in a specific direction according to the angular spectrum model, we can associate a vector Ê with every 2D frequency component (fx, fy):

$$\hat{E}(f_x, f_y) = \left( \lambda f_x,\; \lambda f_y,\; \sqrt{1 - \lambda^2 \left(f_x^2 + f_y^2\right)} \right). \tag{6}$$

The attenuation factor a of a frequency component is then given by the Phong model for specular reflections:

$$a(f_x, f_y) = \left( \hat{E}(f_x, f_y) \cdot \hat{N} \right)^{m}, \tag{7}$$

with glossiness m and surface normal $\hat{N}$.

Subsequently, the modulated profile is propagated to all the depth quantization levels by ASP, to acquire the final $\mathrm{LUT}^{S,\lambda}_{\theta(l),\phi(l),q}(x,y)$ entries. In Fig. 3, we illustrate that LUT^S consists of A² entries per wavelength, each of which has D entries for the depth levels. Since each entry of LUT^S is reused much less often than the diffuse tables, there is no need for multiple random phase patterns as in the case of the diffuse LUT.
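A possible NumPy realization of this frequency-domain modulation for a single normal direction is sketched below; zeroing the evanescent components and clamping negative dot products are implementation choices of the sketch, not details prescribed by the text:

```python
import numpy as np

def specular_profile(g, N_hat, m, wavelength, p):
    """Sketch of the frequency-domain specular modulation (Eqs. 6-7): every
    plane-wave component of the point profile g is attenuated by the Phong
    lobe around the surface normal N_hat."""
    nn = g.shape[0]
    f = np.fft.fftfreq(nn, d=p)                        # spatial frequencies (1/m)
    FX, FY = np.meshgrid(f, f)
    arg = 1.0 - wavelength**2 * (FX**2 + FY**2)
    prop = arg > 0                                     # propagating components only
    EZ = np.sqrt(np.where(prop, arg, 0.0))
    cos_t = (wavelength * FX * N_hat[0] + wavelength * FY * N_hat[1]
             + EZ * N_hat[2])                          # E-hat . N-hat, Eq. (6)
    a = np.where(prop, np.clip(cos_t, 0.0, None) ** m, 0.0)   # Eq. (7)
    return np.fft.ifft2(np.fft.fft2(g) * a)
```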

2.2. Sequential WRP processing

The actual computation of the hologram implies processing the WRP zones sequentially, from the WRP furthest from the hologram plane to the closest one, applying per-point occlusion processing and intra-WRP propagation. Each point belonging to the WRP zone is treated sequentially based on its distance to the hologram plane. First, occlusion processing is applied by locally blocking the light with an inverted Gaussian mask, and then the contribution is added to the WRP, i.e., intra-WRP propagation. Both processes are detailed in the following sections.

2.2.1. Occlusion processing

Despite the fact that occlusions are a challenging issue to tackle in computer-generated holography – especially for point clouds – we have proposed an occlusion processing technique in [12]. The technique is enabled by applying the intra-WRP propagation sequentially on the points based on their distance from the hologram plane, i.e., ordered in depth from the furthest to the closest. This allows for masking the interference pattern generated by all previously processed points, which are located behind the currently processed point.

In practice, the optical field that has already been recorded is suppressed by multiplying it locally, around the projection area of the point, with an occlusion mask. As a result, the light emitted from points situated directly behind this point is suppressed and therefore does not contribute to the optical field recorded at the WRP, and by extension at the hologram plane. The occlusion mask is an inverted Gaussian distribution and its size depends on the lateral density of the point cloud, i.e., the mean distance between two points when projected onto the hologram plane. For more details, we refer to [12]. Naturally, the occlusion processing has to be applied to all colour channels.
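A minimal sketch of this masking step follows; the window size and σ would in practice be derived from the lateral point density as described above, and the in-place update and bounds assumption are illustrative choices:

```python
import numpy as np

def occlude(wrp, cx, cy, radius_px, sigma_px):
    """Sketch of the occlusion step: multiply the field already recorded on
    the WRP by an inverted Gaussian around the point's projection (cx, cy),
    suppressing light from points lying behind it. Assumes the window fits
    inside the WRP array."""
    ax = np.arange(-radius_px, radius_px + 1)
    X, Y = np.meshgrid(ax, ax)
    mask = 1.0 - np.exp(-(X**2 + Y**2) / (2.0 * sigma_px**2))   # ~0 at the centre
    wrp[cy - radius_px:cy + radius_px + 1,
        cx - radius_px:cx + radius_px + 1] *= mask
```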

2.2.2. Intra-WRP propagation

The wavefield contribution of each point i follows Eq. (2), and, in practice, it is given as:

$$W^{\lambda}_{i}(x,y) = I_i \left( D(x,y) + S(x,y) \right) \exp\left( \frac{2\pi j \Delta z_i}{\lambda} \right), \tag{8}$$
where Ii is the intensity of the point i, D denotes the ambient and diffuse wavefield contribution of the point, S is the specular wavefield contribution of the point and Δzi is the distance of the point to the WRP it belongs to. The exponential term provides a phase delay that improves the differentiation between the points that belong to the same quantization level, and therefore use the same LUT entry, but have slightly different depths zi.

The combined ambient and diffuse term for each wavelength λ – denoted with a superscript – is described as:

$$D^{\lambda}_{i}(x,y) = \left( k^{\lambda}_{a_i} I^{\lambda}_{a} + \frac{k^{\lambda}_{d_i}}{L} \sum_{l=1}^{L} \rho_{d_l} I^{\lambda}_{d_l} \right) \mathrm{LUT}^{D,\lambda}_{q}(x,y), \tag{9}$$
where $I^{\lambda}_{a}$ and $I^{\lambda}_{d_l}$ are the ambient and diffuse colour parameters of the light sources, respectively, i.e., the intensity of the light source for a specific wavelength λ; $k^{\lambda}_{a_i}$ and $k^{\lambda}_{d_i}$ are the ambient and diffuse material coefficients of a point i for wavelength λ, respectively; $\rho_{d_l} = (\hat{N} \cdot \hat{L}_l)$ is the diffuse coefficient; l is the light source index, running from 1 to L, the total number of light sources; and $\mathrm{LUT}^{D,\lambda}_{q}(x,y)$ is the entry from the diffuse LUT for the depth level q to which the point belongs.

Furthermore, the specular term is calculated as follows:

$$S^{\lambda}_{i}(x,y) = \frac{k^{\lambda}_{s_i}}{L} \sum_{l=1}^{L} \rho_{s_l} I^{\lambda}_{s_l}\, \mathrm{LUT}^{S,\lambda}_{\theta(l),\phi(l),q}(x,y), \tag{10}$$
where $k^{\lambda}_{s_i}$ is the specular material coefficient of a point i for wavelength λ, $\rho_{s_l} = (\hat{R}_l \cdot \hat{V})^m$ is the specular coefficient with glossiness m, $I^{\lambda}_{s_l}$ is the specular colour parameter of light source l, and $\mathrm{LUT}^{S,\lambda}_{\theta(l),\phi(l),q}(x,y)$ is the entry from the specular LUT for the angles θ and ϕ between the normal of the point and the light source, and for the quantization level q to which the point belongs.
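Putting Eqs. (8)–(10) together, the per-point contribution can be sketched as follows; the argument layout and the array representation of the per-light terms are assumptions of this illustration:

```python
import numpy as np

def point_contribution(I_i, dz, wavelength, lut_D, lut_S,
                       k_a, k_d, k_s, I_a, I_d, I_s, rho_d, rho_s):
    """Sketch of Eqs. (8)-(10) for one point and one wavelength. lut_D and
    lut_S are the pre-computed LUT entries for the point's depth level (and,
    for lut_S, its reflection angles); I_d, I_s, rho_d, rho_s are arrays over
    the L light sources."""
    L = len(rho_d)
    D = (k_a * I_a + k_d / L * np.sum(rho_d * I_d)) * lut_D      # Eq. (9)
    S = k_s / L * np.sum(rho_s * I_s) * lut_S                    # Eq. (10)
    return I_i * (D + S) * np.exp(2j * np.pi * dz / wavelength)  # Eq. (8)
```

The returned patch would be added to the WRP around the point's projection after the occlusion mask of Sec. 2.2.1 has been applied.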

2.3. Inter-WRP propagation

Inter-WRP propagation refers to the propagation of the wavefield of one WRP to the next one, once the intra-WRP propagation for all the points within the WRP zone has been completed. When the contributions of all the points in a WRP zone have been added to the WRP, the wavefront is propagated to the next WRP by ASP. Additionally, because the WRPs are equidistant, we can reuse the same precalculated ASP kernel for every inter-WRP propagation.

The motivation for deploying the ASP method for the propagation between two parallel planes is that, contrary to many other methods, it maintains the pixel pitch on both planes. It is a convolution-based diffraction method [34] defined by:

$$u_2(x_2, y_2, z) = \mathcal{F}^{-1}\left[ \mathcal{F}\left[ u_1(x_1, y_1, 0) \right] \exp\left( 2\pi i z \sqrt{1/\lambda^2 - \omega_x^2 - \omega_y^2} \right) \right], \tag{11}$$
where $\mathcal{F}$ and $\mathcal{F}^{-1}$ are the Fourier and inverse Fourier operators, u1(x1, y1, 0) and u2(x2, y2, z) are the source and destination planes with their corresponding coordinates x1, y1 and x2, y2, respectively, z is the distance between them, and ωx, ωy are the spatial frequencies of x1, y1 in the Fourier domain. However, when the pixel pitch is small, the light field becomes strongly diffused, i.e., it spreads beyond the borders of the computation window in the output plane. Moreover, the convolution with the transfer function using the FFT is a circular convolution, which only behaves properly for periodic functions. To linearize the otherwise circular convolution, zero-padding is applied, i.e., the calculation window in the source plane is expanded with additional zeros. The size of the zero-padding region is determined by the maximum diffraction angle θ – see Eq. (5). In this case, the propagation distance zj is the distance between two subsequent WRPs. After the calculation, the output window is cropped to the original window size at the destination plane.
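A compact sketch of zero-padded ASP between two parallel planes is given below; the 2x padding factor and the hard band-limit on evanescent components are common choices assumed here, not values prescribed by the text:

```python
import numpy as np

def asp_propagate(u1, z, wavelength, p):
    """Angular spectrum propagation of field u1 over distance z (Eq. 11),
    with zero-padding to linearize the circular convolution (sketch)."""
    n = u1.shape[0]
    u = np.pad(u1, n // 2)                       # expand the window with zeros
    f = np.fft.fftfreq(u.shape[0], d=p)
    WX, WY = np.meshgrid(f, f)
    arg = 1.0 / wavelength**2 - WX**2 - WY**2
    kz = np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(2j * np.pi * z * kz) * (arg > 0)  # suppress evanescent waves
    u2 = np.fft.ifft2(np.fft.fft2(u) * H)
    return u2[n // 2:n // 2 + n, n // 2:n // 2 + n]   # crop to the original size
```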

When the last WRP is complete, the final step of the algorithm is applied, namely the final propagation. During that step, the already computed wavefield of the last WRP is propagated to the hologram plane by ASP to acquire the final hologram.

2.3.1. Numerical reconstruction technique for colour CGH

The intensities of the three colour channels can vary, and extremely high values can appear at some pixels due to constructive interference phenomena that are inherent to diffraction. Therefore, to create a balanced RGB image, we have proposed a framework in [35] for the numerical reconstruction of wide-FOV holograms. The framework consists of four preprocessing steps: (1) aperture extraction from the hologram, (2) apodization of the aperture with a Hanning window, (3) back-propagation of the sub-hologram and (4) colour balancing of the three colour channels. In practice, during the fourth step, percentile clipping is first applied to suppress the high-intensity outliers: the maximum intensity Imax per colour channel is given by the maximum intensity value after clipping the values belonging to the top 5%. Subsequently, the intensity range is modified by histogram stretching, based on the following principle:

$$I'(x,y) = \frac{I(x,y) - I_{\min}}{I_{\max} - I_{\min}}, \tag{12}$$
where $I_{\min}$ and $I_{\max}$ are the minimum and maximum values of the intensity.
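A per-channel sketch of this clipping-and-stretching step (the function name and the explicit percentile argument are assumptions of the illustration):

```python
import numpy as np

def balance_channel(I, clip=95):
    """Sketch of the colour-balancing step: clip the top (100 - clip)% of
    intensity outliers in a channel, then stretch the histogram (Eq. 12)."""
    I_max = np.percentile(I, clip)     # e.g. the 95th percentile
    I = np.minimum(I, I_max)
    I_min = I.min()
    return (I - I_min) / (I_max - I_min)
```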

3. Experimental results

To demonstrate the validity and the capabilities of the proposed CGH approach, several CGHs were generated. In all cases below, the scenes were designed to comply with the Shannon criterion of the holographic signal; therefore, the objects were set in the aliasing-free zone given by the grating equation, Eq. (4). Hence, the minimum distance between the object and the hologram plane was determined by:

$$z_{\min} = \frac{N p^2}{2\lambda}, \tag{13}$$
where N is the pixel resolution, considering square dimensions of the hologram plane. However, this distance depends on the wavelength, as shown above; hence, for colour CGH we considered as minimum distance the one corresponding to the smallest wavelength – in our case blue light.
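As a worked example of Eq. (13), a one-line sketch:

```python
def z_min(N, p, wavelength):
    """Minimum aliasing-free object distance of Eq. (13)."""
    return N * p**2 / (2.0 * wavelength)

# e.g. the 16K setup of Sec. 3.4 at the smallest (blue) wavelength:
print(z_min(16384, 1e-6, 460e-9))   # ~0.0178 m, below the 30 mm distance used there
```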

The holograms generated for this paper are available for download as the CGH dataset “INTERFERE-III” [36]. All the simulations were run in MATLAB on a computer with a 3.3 GHz Intel Core i7-5820 CPU, 64 GB RAM and Windows 8.1 x64 as operating system. ASP is used for back-propagation, i.e., the numerical reconstruction of the rendered CGHs.

3.1. Phong model reflection terms in monochromatic CGH

To illustrate the importance of realistic light rendering, we show the components of the Phong model in the context of CGH for a monochromatic setup only; that way, we can focus on the details of the three reflection terms. The 3D object used for this experiment is a sphere composed of 171,833 points. The sphere has a diameter of 5 mm and is located at z = 15 cm from the hologram plane. The directional light source is positioned at the left side of the object. The size of each of the four holograms shown is 1024-by-1024 pixels with a pixel pitch of 8 μm, providing a field of view of 4.5°, according to Eq. (4). We rendered the sphere four times to acquire CGHs with the following settings: (a) only the ambient term, (b) only the diffuse term, (c) only the specular term and (d) the combination of the ambient, diffuse and specular terms. Figure 4 shows the numerical reconstructions of the four CGHs. The focal depth for the back-propagation corresponds to the front of the sphere. It can be clearly observed that the sphere in Fig. 4(d) looks more realistic than the other renderings, indeed giving the impression of a light source from the left.

Fig. 4 Demonstration of the effect of the terms of the Phong shading model. Numerical reconstructions of 4 CGHs computed with: (a) ambient (b) diffuse (c) specular component and (d) the combination of the three components.

It can be noticed that in Fig. 4(c) the specular highlight is quite small in diameter; in general, it can be adapted by changing the shininess parameter of the surface. However, the number of reflected rays that reach the hologram is limited, due to the small aperture and the large relative distance to the object.

3.2. Light sources and materials

3.2.1. “Sphere1”: Comparison of light sources

What we perceive as colour is a combination of the object’s surface colour (its ambient and diffuse reflection characteristics), the specular reflection, and the wavelength composition of the light source. Since light is a mixture of wavelengths, white objects appear white because they reflect all the visible wavelengths of the light that reaches them and absorb none. To validate our method regarding this property of light, we generated CGHs of the same scene, according to the parameters shown in Table 1. More specifically, a ruby-red sphere consisting of 329,617 points was rendered four times, with the only differences being the number, the colour and the direction of the light sources, as shown in Table 2. The RGB values describe the colour of the light source, while Nx, Ny and Nz are the coordinates of the vector N⃗ that denotes the direction from which the light source illuminates the scene. The surface reflectance parameters are shown in Table 3. The LUTs were reused for the 4 CGHs, and the computation times for the CGHs are 127, 128, 137 and 150 seconds, respectively.

Table 1. Optical settings for the simulation parameters of “Sphere1” and “Biplane16K”

Table 2. Colour and direction of the light sources for the CGHs “Sphere1”

Table 3. Material reflectance parameters of the spheres for CGHs “Sphere1” and “Biplane16K”

Figure 5 shows the numerical reconstructions of the 4 CGHs “Sphere1” when illuminated with different light sources. In Fig. 5(a), white illumination is considered. Coloured objects reflect only certain wavelengths and absorb the rest: when white light reaches a red ball, the ball reflects the red wavelengths and we perceive it as red. Likewise, as shown in Fig. 5(b), when green light illuminates a red ball, it looks dark, because the ball does not reflect green light. In Fig. 5(c), the rendering shows the change of the colour of the surface and of the specular highlight due to the yellow colour of the light source, as well as the change of the position of the highlight spot and of the shading due to the change of the direction from which the light source illuminates the sphere, compared to Fig. 5(a). Figure 5(d) shows the capability of the method to render scenes with multiple light sources. However, in all cases we notice that – as expected – the colour of the specular highlight depends only on the colour of the light source.

Fig. 5 Numerical reconstruction of the CGHs “Sphere1” when illuminated with different light sources (a) white light from south (b) green light from southwest (c) yellow light from southeast (d) white light from south and blue light from northwest.

3.2.2. “Sphere3”: Comparison of various materials

To evaluate the performance of the algorithm for rendering different materials, three spheres are positioned next to each other to generate the CGH “Sphere3”. The scene consists of 543,090 points and each sphere has a diameter of 1 cm. The three spheres are made of polished gold (left), pearl (middle) and silver (right), respectively. The hologram resolution is 1,920 × 1,080 pixels, while the rest of the optical parameters are the same as for “Sphere1” – see Table 1. The material reflectance parameters of the spheres are shown in Table 3; they model the aforementioned materials as commonly used in computer graphics [37,38]. For the computation of this CGH, 31 WRPs, 12 depth quantization levels and 31 angular quantization levels (A) are used, while the computation time is 196 seconds.

Figure 6 shows the numerical reconstruction of the CGH “Sphere3” with the focal depth for the back-propagation at the front of the spheres. The three spheres show similar yet distinct light reflections, due to the differences in the reflectance parameters k, which determine the colour, diffusiveness, glossiness, etc., as listed in Table 3.

Fig. 6 Numerical reconstruction of the CGH “Sphere3” depicting three spheres made of different materials: polished gold, pearl and silver, respectively.

3.3. “Venus”: Comparison of rendering and computation cost

Comparing CGH techniques in general is not straightforward, because they differ substantially in their accuracy and capabilities (e.g. many do not have any shading or occlusion processing). We compared the proposed method with other similar CGH algorithms – both visually and computationally, by calculating a hologram of the same object with the following CGH methods:

  1. Ray-tracing without occlusion handling or shading
  2. Multiple-WRP with uniform phase (UP-MWRP) [12]
  3. Multiple-WRP with amplitude with random phase (RP-MWRP) [31]
  4. Proposed multiple-WRP with Phong model (PH-MWRP)

The optical settings are the same as for “Sphere1” (Table 1). The 3D scene consists of the point cloud dataset “Venus” (267,751 points) and a background image. The normal vector of the white light source is [−1, 0, 1]. We computed the holograms with the following parameters:

  • the number of WRPs: both L = 20 and L = 40,
  • the number of depth levels in the LUT: D = 20,
  • the number of diffuse LUTs: n = 5,
  • the levels for the angular quantization: both A = 9 and A = 19 (only used in PH-MWRP).

The computation times for the CGHs and their LUTs for each evaluated method and parameter set are shown in Table 4. Additionally, Fig. 7 presents the scene renderings with the aforementioned methods, along with the ground-truth image – rendered with CloudCompare – for visual comparison.

Table 4. Computation times for the LUT generation and CGH calculation for “Venus” with different CGH methods and parameters (in seconds), and implementation language

Fig. 7 The 3D scene rendered with (a) 3D graphics, and the numerical reconstructions of the holograms computed with: (b) amplitude with uniform phase (UP-MWRP), (c) amplitude with random phase (RP-MWRP), and (d) the Phong model (PH-MWRP with A=9) for 20 WRPs.

We note that a larger number of WRPs does not significantly alter the computation time. Additionally, the corresponding reconstructions of each method are visually identical. Furthermore, the number of angles for the angular quantization of the specular LUTs significantly affects their calculation time. Nevertheless, due to the small diffraction angle of this hologram, the improved angular resolution (larger A) does not noticeably improve the visual quality of the numerical reconstructions.

However, these conclusions are parameter-dependent. For higher resolution holograms, the FFTs will dominate the total CGH calculation time, so more WRPs will lead to significantly longer computation times. Additionally, for a finer pixel pitch the angular resolution A should be increased, to support the higher FoV.

For the 2K hologram resolution, the LUT generation time of PH-MWRP is significant compared to the remainder of the CGH computation time. Fortunately, as can be seen in Sec. 3.4, with increasing resolutions the LUT generation time becomes several orders of magnitude lower than the total CGH calculation time, making it far less significant. Furthermore, the LUT can be precomputed and reused for similar scenes.

Additionally, the aim of this case study is to compare how realistic a rendering can be with these methods. In Fig. 7(b), the background is identical to Fig. 7(a); however, the statue shows no realistic light shading, because the light source set at the left side of the statue is not taken into account. Similarly, in Fig. 7(c), speckle noise artifacts appear due to the added random phase, yet the light source is still not considered. In Fig. 7(d), the speckle noise is still visible; however, the statue is shaded in the correct areas: the right side of the statue appears darker due to shadow, in the same areas as in Fig. 7(a).

Based on the visual quality comparison of the reconstructions, the proposed method is much more realistic than the other tested ones. Not only does it support parallax for the supported FoV – as RP-MWRP does – but it also shows the additional visual advantages that shading provides. For example, the soft specular highlights at the neck make the surface look cylindrical, as it is, while in Figs. 7(b) and 7(c) the surface is not depicted as accurately.

3.4. “Biplane16K”: high-resolution colour CGH

Finally, to generate high-resolution full-parallax holograms with a wide field of view, sampling with a small pixel pitch is required, according to Eq. (4). We therefore choose a holographic setup with a pixel pitch of 1 μm and a high resolution of 16,384-by-16,384 pixels, with reference beam wavelengths of 633, 532 and 460 nm, respectively. The resulting FOV is 36.5°, 30.8° and 26.5° for the three colour channels, and the distance between the object and the hologram plane is 30 mm. The optical parameters of this CGH are shown in Table 1.

The object selected is the point cloud dataset “Biplane” (10,000,000 points). The texture and the depth map of the scene are shown in Fig. 8. The light source is considered to be white illumination coming from the west (normal vector [−0.3, 0, −1]). The computation time for the LUTs (for A = 31) was 350 seconds and for the CGH 4 hours, without applying any optimization strategies. The corresponding times for the RP-MWRP method are 4 seconds and 3.3 hours, respectively. Figure 9 shows the numerical reconstruction of the CGH “Biplane16K”, generated with our proposed method, from 9 different viewing angles. A smaller aperture leads to a larger depth of field; hence, for the reconstruction the aperture is 4096-by-4096 pixels. Additionally, we visualize the parallax by providing a video demonstrating the rendered hologram from different viewing angles, with the aperture moving in a snake-like manner and the depth of focus moving from the front to the back (see Visualization 1).

Fig. 8 Scene texture and depth map for CGH “Biplane”.

Fig. 9 Numerical reconstruction of the CGH “Biplane” from 9 different views: central view and from ±18 degrees, showing occlusion effect and full parallax. The white square shows the position of the aperture of the camera (Visualization 1).

It is important to note that, because of the large aperture, colour dispersion will be visible in the reconstructions for points lying further from the focus plane. Solutions for colour aberrations in hologram reconstructions have been proposed in the literature, for both numerical and optical correction, for scenes consisting of flat or shallow objects [39,40]. However, no solution has been proposed for high-resolution holograms of 3D scenes with wide FOV.

4. Conclusion

We have proposed a computer-generated holography technique for acquiring high quality holograms with full parallax support for point cloud datasets. The proposed technique improves the multiple WRP method by incorporating diffuse and specular look-up tables to account for the ambient, diffuse and specular reflection terms of the Phong illumination model.

The method uses multiple parallel wavefront recording planes and includes an occlusion and a shadow processing technique. The hologram is computed based on an intra- and inter-WRP propagation scheme with pre-computed look-up tables. The method takes the BRDF into account during the computation of the LUTs. This is achieved by generating LUTs that include the reflection terms of the Phong model, namely the ambient, diffuse and specular terms. These novel LUTs, along with the novel shadowing scheme proposed, enable realistic rendering of illuminated surfaces. The experiments demonstrated that high-quality CGHs with realistic illumination can be calculated without significant additional computational cost compared to the original algorithm. We report that the surfaces are correctly rendered, with shadows appearing in the correct areas, and that the visible area of the 3D scene changes according to the viewer’s position; hence, the motion parallax and the occlusion effect are correctly demonstrated. In future work, parallel computing techniques on GPUs will be deployed to further decrease the computation cost of the hologram generation. Further improvements could be achieved by integrating a physical BRDF – instead of the empirical Phong illumination model – to achieve even more realistic 3D scenes.

Funding

European Research Council under the European Union’s Seventh Framework Programme (FP7/2007-2013)/ERC Grant Agreement n.617779 (INTERFERE).

Acknowledgments

The models of Venus (Head sculpture) and Biplane are courtesy of Direct Dimensions Inc. (http://www.dirdim.com) and ScanLAB Projects, respectively.

References and links

1. B. Javidi and F. Okano, Three-Dimensional Television, Video, and Display Technologies (Springer Berlin Heidelberg, 2002).

2. S. A. Benton and V. M. Bove, Holographic Imaging (Wiley, 2008).

3. Y. Pan, J. Liu, X. Li, and Y. Wang, “A review of dynamic holographic three-dimensional display: Algorithms, devices, and systems,” IEEE Trans. Ind. Informat. 12, 1599–1610 (2016).

4. T. Kozacki and M. Chlipala, “Color holographic display with white light LED source and single phase only SLM,” Opt. Express 24, 2189–2199 (2016).

5. L. Onural, F. Yaras, and H. Kang, “Digital holographic three-dimensional video displays,” Proc. IEEE 99, 576–589 (2011).

6. P. W. M. Tsang and T. C. Poon, “Review on the state-of-the-art technologies for acquisition and display of digital holograms,” IEEE Trans. Ind. Informat. 12, 886–901 (2016).

7. H. Zhang, Y. Zhao, L. Cao, and G. Jin, “Fully computed holographic stereogram based algorithm for computer-generated holograms with accurate depth cues,” Opt. Express 23, 3901–3913 (2015).

8. K. Matsushima, “Computer-generated holograms for three-dimensional surface objects with shade and texture,” Appl. Opt. 44, 4607–4614 (2005).

9. K. Matsushima, Y. Arima, and S. Nakahara, “Digitized holography: modern holography for 3D imaging of virtual and real objects,” Appl. Opt. 50, H278–H284 (2011).

10. T. Shimobaba, N. Masuda, and T. Ito, “Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane,” Opt. Lett. 34, 3133–3135 (2009).

11. H. Zhang, Q. Tan, and G. Jin, “Full parallax three-dimensional computer generated hologram with occlusion effect using ray casting technique,” J. Phys. Conf. Ser. 415, 012048 (2013).

12. A. Symeonidou, D. Blinder, A. Munteanu, and P. Schelkens, “Computer-generated holograms by multiple wavefront recording plane method with occlusion culling,” Opt. Express 23, 22149–22161 (2015).

13. D. Arai, T. Shimobaba, T. Nishitsuji, T. Kakue, N. Masuda, and T. Ito, “An accelerated hologram calculation using the wavefront recording plane method and wavelet transform,” Opt. Commun. 393, 107–112 (2017).

14. C. Chang, Y. Qi, J. Wu, C. Yuan, S. Nie, and J. Xia, “Numerical study for the calculation of computer-generated hologram in color holographic 3D projection enabled by modified wavefront recording plane method,” Opt. Commun. 387, 267–274 (2017).

15. M. E. Lucente, “Interactive computation of holograms using a look-up table,” J. Electron. Imaging 2, 28–34 (1993).

16. Y. Pan, X. Xu, S. Solanki, X. Liang, R. B. A. Tanjung, C. Tan, and T.-C. Chong, “Fast CGH computation using S-LUT on GPU,” Opt. Express 17, 18543–18555 (2009).

17. J. Jia, Y. Wang, J. Liu, X. Li, Y. Pan, Z. Sun, B. Zhang, Q. Zhao, and W. Jiang, “Reducing the memory usage for effective computer-generated hologram calculation using compressed look-up table in full-color holographic display,” Appl. Opt. 52, 1404–1412 (2013).

18. R. H.-Y. Chen and T. D. Wilkinson, “Computer generated hologram from point cloud using graphics processor,” Appl. Opt. 48, 6841–6850 (2009).

19. N. Takada, T. Shimobaba, H. Nakayama, A. Shiraki, N. Okada, M. Oikawa, N. Masuda, and T. Ito, “Fast high-resolution computer-generated hologram computation using multiple graphics processing unit cluster system,” Appl. Opt. 51, 7303–7307 (2012).

20. F. E. Nicodemus, “Directional reflectance and emissivity of an opaque surface,” Appl. Opt. 4, 767–775 (1965).

21. T. Akenine-Möller, E. Haines, and N. Hoffman, Real-Time Rendering, 3rd ed. (A. K. Peters, Ltd., 2008).

22. B. T. Phong, “Illumination for computer generated pictures,” Commun. ACM 18, 311–317 (1975).

23. T. Ichikawa, K. Yamaguchi, and Y. Sakamoto, “Realistic expression for full-parallax computer-generated holograms with the ray-tracing method,” Appl. Opt. 52, A201–A209 (2013).

24. T. Ichikawa and Y. Sakamoto, “A rendering method of background reflections on a specular surface for CGH,” J. Phys. Conf. Ser. 415, 012044 (2013).

25. H. Nishi, K. Matsushima, and S. Nakahara, “Rendering of specular surfaces in polygon-based computer-generated holograms,” Appl. Opt. 50, H245–H252 (2011).

26. H. Nishi and K. Matsushima, “Rendering of specular curved objects in polygon-based computer holography,” Appl. Opt. 56, F37–F44 (2017).

27. H.-J. Yeom and J.-H. Park, “Calculation of reflectance distribution using angular spectrum convolution in mesh-based computer generated hologram,” Opt. Express 24, 19801–19813 (2016).

28. A. Gilles, P. Gioia, R. Cozot, and L. Morin, “Computer generated hologram from multiview-plus-depth data considering specular reflections,” in 2016 IEEE International Conference on Multimedia & Expo Workshops (ICMEW) (2016), pp. 1–6.

29. T. Kurihara and Y. Takaki, “Shading of a computer-generated hologram by zone plate modulation,” Opt. Express 20, 3529–3540 (2012).

30. H. Zhang, L. Cao, and G. Jin, “Lighting effects rendering in three-dimensional computer-generated holographic display,” Opt. Commun. 370, 192–197 (2016).

31. A. Symeonidou, D. Blinder, A. Ahar, C. Schretter, A. Munteanu, and P. Schelkens, “Speckle noise reduction for computer generated holograms of objects with diffuse surfaces,” Proc. SPIE 9896, 98960F (2016).

32. E. E. Catmull, “A subdivision algorithm for computer display of curved surfaces,” Ph.D. dissertation, Department of Computer Science, University of Utah (1974).

33. H. Sasaki, K. Yamamoto, Y. Ichihashi, and T. Senoh, “Image size scalable full-parallax coloured three-dimensional video by electronic holography,” Sci. Rep. 4, 4000 (2014).

34. J. W. Goodman, Introduction to Fourier Optics, 3rd ed. (Roberts and Company Publishers, 2004).

35. A. Symeonidou, D. Blinder, B. Ceulemans, A. Munteanu, and P. Schelkens, “Three-dimensional rendering of computer-generated holograms acquired from point-clouds on light field displays,” Proc. SPIE 9971, 99710S (2016).

36. “CGH dataset: INTERFERE-III,” http://www.erc-interfere.eu/downloads.html.

37. D. DiLaura and the Illuminating Engineering Society of North America, The Lighting Handbook: Reference and Application (Illuminating Engineering Society of North America, 2011).

38. T. McReynolds and D. Blythe, Advanced Graphics Programming Using OpenGL, The Morgan Kaufmann Series in Computer Graphics (Morgan Kaufmann Publishers Inc., 2005).

39. J. Zhao, H. Jiang, and J. Di, “Recording and reconstruction of a color holographic image by using digital lensless Fourier transform holography,” Opt. Express 16, 2514–2519 (2008).

40. J. Wang, H. D. Zheng, and Y. J. Yu, “Achromatization in optical reconstruction of computer generated color holograms,” J. Disp. Technol. 12, 390–396 (2016).

Supplementary Material (1)

Visualization 1: Numerical reconstructions of the CGH “Biplane16K” with an aperture moving in a snake-like manner and the depth of focus changing from the front to the back of the object.
