## Abstract

We propose a speckle noise reduction method for the generation of coherent holographic stereograms. The method employs a densely sampled light field (DSLF) of the scene together with depth information acquired for each ray in the captured DSLF. Speckle reduction is achieved via the ray separation technique, where the scene is first described as a superposition of sparse sets of point sources corresponding to separated sets of rays, and the holographic reconstructions corresponding to these sparse sets of point sources are then added incoherently (intensity-wise) to obtain the final reconstruction. The proposed method handles the light propagation between the sparse scene points and the hologram elements accurately by utilizing ray resampling based on the notion of DSLF. As a result, as demonstrated via numerical simulations, significant speckle suppression is achieved without introducing sampling-related reconstruction artifacts.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction

As a three-dimensional (3D) display method, holography [1] is often considered the ultimate way to visually replicate a 3D scene, i.e. one including all of the visual cues necessary for proper 3D perception, such as continuous motion parallax, correct spatial relations between objects, and accommodation. Due to the coherent imaging techniques utilized, however, holographic reconstructions suffer from speckle noise. Speckle noise patterns are random in nature with high contrast and high spatial frequency, and thus heavily degrade the visual quality of the reconstructed images. In traditional optical holography, coherent light illumination creates random phase distributions on the (rough) scene surface. This results in random interference of scene points on the hologram plane, which is observed as speckle noise in the reconstructed images [2].

On the other hand, computer-generated holography provides a way to obtain the hologram numerically by simulating the physical wave propagation phenomenon during recording. A hologram obtained in this manner is commonly referred to as a computer-generated hologram (CGH). Stereograms constitute an important and widely used category of CGHs, especially due to their ease of application to real-life scenes. In particular, the necessary content can be captured by conventional cameras as a set of multiview images. Stereograms can be categorized into incoherent and coherent types, depending on the data they utilize. Incoherent stereograms are purely image-based, that is, they are generated from a set of multiperspective images. Coherent stereograms, on the other hand, require information about object location. This can be either in the form of a 3D model (e.g. a point cloud) or multiperspective images coupled with depth information (i.e. depth maps). The availability of additional 3D information in coherent stereograms brings critical improvements in several aspects of reconstruction quality compared to incoherent ones, such as delivering correct accommodation cues [3]. Although 3D object models are often very precise and therefore beneficial in CGH applications, utilizing multiperspective images and depth information has some important advantages. For example, when recording a real scene, explicit object information is not available. Furthermore, such a scene representation allows advanced rendering techniques from computer graphics to be exploited, reducing the computational burden [4]. Occlusions are also intrinsically handled by image- and depth-based holograms, as the perspective views record them correctly on the hologram.

It is common practice in CGHs to employ random phase distributions (e.g. assigned to a set of point sources) so as to simulate diffused diffraction of light from the object. This creates speckle noise patterns on the reconstructed images similar to those in optical holograms. One widely used method for speckle suppression in electro-holography is the so-called averaging method, where several CGH frames with statistically independent speckle patterns are averaged (over intensity) through multiple recordings by time-multiplexed reconstruction [5–8]. Although this speckle averaging approach comes at the expense of computational cost or more complicated optics, unlike methods that suppress speckle by reducing the temporal [9] or spatial [10] coherence of the light, it does not suffer from loss of resolution. Nevertheless, the speckle reduction performance of the speckle averaging method is limited in its efficiency. That is, for *N* holograms utilized in the averaging, the speckle contrast is reduced by a factor of $1/\sqrt{N}$ [5]. Alternative solutions proposed for incoherent stereograms include spatially separating the fringe patterns in each single hologram of multiple recordings, which are again averaged by time-multiplexed reconstruction [11], and phase distribution manipulation [12]. Such methods do not suffer from the abovementioned theoretical limitation inherent to speckle averaging methods, and are thus able to reduce the speckle noise more effectively.

When examining speckle suppression for coherent stereograms, on the other hand, it is important to consider the object information utilized in the hologram generation. Speckle suppression in model-based coherent stereograms has been achieved, for instance, through superposition of CGH frames obtained from sparse sets of point sources [11, 13]. A similar solution employing sparse light rays for coherent stereograms utilizing multiperspective images and depth information has also been proposed [14]. These methods likewise reduce the speckle noise more effectively than the speckle averaging methods [14].

The speckle noise reduction method that we propose in this paper for coherent stereograms also relies on multiperspective images and depth data. The proposed method is mainly based on the ray separation solution previously presented in [14]. That is, the speckle patterns are suppressed by generating several CGH frames for different sets of sparse (quantized) scene points, corresponding to sparse sets of rays, and combining them in a time-multiplexed manner. Although this approach suppresses the speckle noise effectively, it does not accurately solve the light propagation problem between the quantized scene points and the hologram elements (hogels), which degrades the accuracy of the subsequent reconstructions. The method that we propose in this paper approaches the problem through signal processing means by utilizing the notion of densely sampled light field (DSLF). In particular, the light propagation between the quantized scene points and the hogels is defined via accurate ray resampling from the DSLF.

In section 2 below, we begin by describing the generation of a coherent stereogram from the captured light field and depth maps. In section 3 we discuss the basic principle behind the ray separation method, and in section 4 we present the proposed approach. Finally, in section 5, numerical simulations are presented in which the speckle suppression performance of the proposed method is compared with that of existing techniques.

## 2. Discrete LF capture and coherent stereograms

#### 2.1. Coherent stereogram generation from discrete LF and depth information

Considering geometrical optics and rays as the fundamental light carrier, a defined space can be interpreted as a collection of light rays. LF describes the intensities of light rays traveling from different points in space to different directions. Assuming monochromatic illumination and static scenes with a transparent medium for the light to travel in, the LF can be represented as a 4D radiance function [15]. This function can be parametrized in different ways, including point and direction, points on two (parallel) planes or point pairs on a 3D surface. To simplify the analysis and visualizations, let us consider a 2D cross-section of the 3D space. Thus, utilizing the two-plane parametrization, a continuous LF function *L*_{1}(*x*, *s*) is defined between planes denoted by *x* and *s*.

One can calculate the CGH of a scene based on the captured LF together with the depth information of each ray. Let us define the hologram on the *x* plane, as denoted in Fig. 1. In practice, the continuous LF is rarely available, and capturing the LF usually requires discretization on the two parametrization planes (e.g. when capturing as multiperspective images and depth). That is, a discrete set of samples is taken from the continuous LF by sampling the planes *x* and *s* at intervals of Δ_{x} and Δ_{s}, respectively, resulting in the discrete LF *L*_{1}(*m*Δ_{x}, *i*Δ_{s}) = *L*_{1}[*m*, *i*], where *m* = 1, 2, …, *M* and *i* = 1, 2, …, *N*. Utilizing such information for the hologram generation results in segmentation on the hologram plane due to the discretization on the *x* plane. Thus, the discretization on the *x* plane determines the perceived spatial resolution when viewing the hologram. The sampling on the *s* plane, on the other hand, corresponds to the view-dependent quality aspects of the hologram, such as parallax and occlusion.

There are several segmented coherent holographic representations, such as the phase-added stereogram (PAS) [16] and the diffraction specific coherent panoramagram (DSCP) [3]. Similarly to their incoherent alternative, holographic stereograms (HSs), these CGHs also segment the hologram into elements containing several hologram pixels. In HS and PAS these segments are called holographic elements and they produce a piecewise planar approximation of the wavefield. The DSCP segments, on the other hand, are divided into so-called wavefront elements (wafels), which have controllable curvatures to produce sharp points even for deep scenes. The DSCP, therefore, provides a more accurate representation than other coherent segmented CGH representations, such as PAS and accurate PAS [17]. Thus, in this paper we consider the DSCP as an accurate coherent stereogram representation. The wavefield segments of the DSCP approximate the field as segments of spherical waves of varying radius, the radius depending on the distance between the hologram and the emission location of each light ray. Based on the discrete LF *L*_{1}[*m*, *i*], together with depth information giving a corresponding point source location (*x*_{mi}, *z*_{mi}) for each ray, the object wavefield for the DSCP is defined as [3]

$$H(x)=\sum_{i}\sqrt{L_{1}[m,i]}\,\exp\!\left(j\frac{2\pi}{\lambda}\,r_{mi}\right),\quad x\in\text{segment } m, \tag{1}$$

where *λ* is the wavelength of the monochromatic light; *x*_{mi} is the *x*-coordinate and *z*_{mi} is the *z*-coordinate of the point source, and $r_{mi}=\sqrt{(x-x_{mi})^{2}+z_{mi}^{2}}$ is the Euclidean distance between the point source (corresponding to ray [*m*, *i*]) and the hologram segment indexed by *m*.

The hologram generation imposes strict restrictions on the discrete LF. The properties of the human visual system (HVS), e.g. its resolution limitations, usually play a critical role in determining the sampling parameters Δ_{x} and Δ_{s} [18]. Within the scope of this paper, however, the LF sampling requirement is derived to ensure accurate ray resampling. As will be discussed in section 3, in the ray separation method the point sources corresponding to the captured rays are quantized onto grids different from the original (capture) grid. Hence, in order to accurately calculate the intensity distribution for the new set of quantized rays, accurate resampling from the captured LF *L*_{1}[*m*, *i*] is required. The following section discusses LF capture and sampling fulfilling such requirements.

#### 2.2. DSLF capture as multiperspective images

The discrete LF required for hologram generation can be captured with a multiperspective camera setup, placing cameras at the parametrization plane *s* (i.e. the camera plane), separated from each other by the LF sampling distance Δ_{s}. A third plane containing the camera sensors is introduced behind the camera apertures (see Fig. 2), thus capturing the discrete LF between the camera and sensor planes according to the two-plane parametrization. The camera and sensor planes are denoted by *s* and *u*, respectively, and the distance between the two planes by *l*. The continuous LF between the two planes is denoted by *L*_{2}(*s*, *u*). This LF can be obtained from the LF *L*_{1}(*x*, *s*) defined between the hologram and camera (or viewer) plane, as the radiance along each light ray remains constant (due to the transparent medium assumption).

When capturing multiperspective images, two different camera models are often considered: regular and recentering. Depending on the chosen camera model, the capture process has different requirements. In the regular camera model, high-resolution images with a wide field of view are required to keep the entire scene within the image boundaries across the entire range of perspective views. The recentering camera model, on the other hand, shifts the sensor behind the camera center of projection, as seen in Fig. 2, thus wasting less sensor area. Additionally, the correct LF rays are directly captured, assuming correct setup parameters. Although its practical implementations can be problematic, here we utilize the recentering camera model due to its lower resolution requirement and direct LF correspondence.

Capturing the discrete LF with a multiperspective imaging setup results in the discrete LF *L*_{2}[*i*, *k*] being stored in the images. However, as the hologram generation process requires the LF in relation to the hologram plane, i.e. the LF *L*_{1}[*m*, *i*], a method for obtaining this LF accurately from the captured one is needed. The notion of DSLF provides a structured framework for this purpose. Considering the recentering camera model for LF capture, the DSLF defines the LF sampling such that the continuous function can be retrieved from the discrete LF (with sufficient accuracy) using bilinear interpolation, under the condition that the disparity between adjacent camera images is kept within [−1, 1] pixels during capture [19].

Let us define the scene and capture parameters in accordance with Fig. 2. A set of cameras, sampled at distance Δ_{s}, captures multiperspective views according to the recentering camera model. The recentering plane is placed at distance *z*_{0} from the camera plane, and the camera parameters are chosen such that the recentering plane is sampled at distance Δ_{x}. The captured scene is limited from the back and front by *z*_{b} and *z*_{f}, respectively, with respect to the camera plane. Let us assume that *z*_{f} ≤ *z*_{0} ≤ *z*_{b} holds, i.e. that the scene boundaries *z*_{b} and *z*_{f} are behind and in front of the recentering plane, respectively. The maximum camera spacing achieving dense LF sampling is then defined as the minimum of the camera spacing values corresponding to −1 and 1 pixel disparity:

$$\Delta_{s}\le\min\!\left(\frac{\Delta_{x}\,z_{b}}{z_{b}-z_{0}},\;\frac{\Delta_{x}\,z_{f}}{z_{0}-z_{f}}\right). \tag{2}$$

Alternatively, the camera spacing Δ_{s} can be first chosen and then the scene can be limited to meet the DSLF requirement as

$$z_{b}\le\frac{\Delta_{s}\,z_{0}}{\Delta_{s}-\Delta_{x}},\qquad z_{f}\ge\frac{\Delta_{s}\,z_{0}}{\Delta_{s}+\Delta_{x}}.$$

By placing the recentering plane at the hologram plane during DSLF capture, such that each pixel corresponds to a hologram segment, the DSLF capture can be connected to the discrete LF required for CGH generation.
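For illustration, the camera-spacing criterion above can be sketched in code as follows. This is a minimal sketch under the geometry of Fig. 2 (recentering plane at *z*_{0}, scene bounded by *z*_{f} and *z*_{b} measured from the camera plane); the function name and the example numbers are ours, not from the original implementation.

```python
def max_camera_spacing(dx, z0, z_f, z_b):
    """Largest camera spacing (same units as dx) keeping the disparity
    between adjacent recentered views within [-1, 1] pixels, where dx is
    the sampling step on the recentering plane."""
    assert z_f <= z0 <= z_b
    # Spacing bound from the back boundary (+1 px) and front boundary (-1 px);
    # a boundary coinciding with the recentering plane imposes no bound.
    ds_back = dx * z_b / (z_b - z0) if z_b > z0 else float("inf")
    ds_front = dx * z_f / (z0 - z_f) if z_f < z0 else float("inf")
    return min(ds_back, ds_front)

# Example: 64 um hogels, recentering plane 300 mm from the cameras,
# scene spanning 294..306 mm -> maximum spacing of about 3.1 mm.
ds = max_camera_spacing(64e-6, 0.300, 0.294, 0.306)
```

Note that the tighter of the two bounds always comes from the scene boundary farther (in disparity terms) from the recentering plane, so shrinking the scene depth range directly relaxes the camera-spacing requirement.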

## 3. Speckle suppression by ray separation

The image perceived by the HVS can be modeled as a sum of point spread functions (PSFs) corresponding to point sources on the object. Point sources having overlapping PSFs on the retina interfere with each other under coherent illumination. The random phase distribution on the object points results in a random interference pattern, which is observed as speckle noise. A significant part of the speckle occurs due to interaction between the main lobes of the PSFs. For a diffused surface, the average size of the speckle on the retina can be estimated for laterally separated point sources based on the lateral PSF main lobe width as [2, 14]

$$\sigma_{sp}\approx\frac{2.44\,\lambda\,l}{T},$$

where *T* is the human eye lens diameter and *l* is the distance between the lens and the retina.

The approach of describing the object as a superposition of sets of sparse object points, and then calculating and reconstructing the corresponding holograms for these sets separately (i.e. superposing them incoherently), suppresses the speckle noise [11, 14]. This is due to there being less overlap between the PSFs of the object points in each sparse set. In order for the speckle suppression to be effective, the separation between adjacent points in each multiplexed set needs to be larger than the speckle size. Furthermore, the PSF side lobes also contribute to the interference causing speckles, so a further increase in separation is beneficial.

Since our method is mainly based on the ray separation (correspondingly, sparse object points) technique introduced in [14], let us first briefly discuss this technique in order to better address the problems of the existing approach. Although there are no explicit point sources in image-based scene representations, one can associate point sources (ray emission points) with the light rays by utilizing the available depth information, as demonstrated in Fig. 3(a). The emission point (*x*_{mi}, *z*_{mi}) for the light ray corresponding to pixel *m* of the captured image at view *i* is obtained as

$$\left(x_{mi},z_{mi}\right)=\left(i\Delta_{s}+\left(m\Delta_{x}-i\Delta_{s}\right)\frac{D[m,i]}{d},\;D[m,i]-d\right),$$

where *D*[*m*, *i*] is the corresponding depth value (measured from the camera plane) and *d* is the distance between the camera and hologram planes. In order to achieve efficient speckle suppression, the ray emission points need to be quantized on a predefined scene grid. If the grid is defined such that the quantization step is less than the Rayleigh resolution limit, no spatial resolution is lost in terms of HVS capabilities. For similar reasons, it is also good and common practice to choose the hologram segment size according to the Rayleigh resolution limit. Consequently, the lateral quantization step can be chosen as [14]

$$\Delta_{\tilde{x}}=\frac{1.22\,\lambda\,z_{eye}}{T}, \tag{9}$$

where *z*_{eye} is the viewing distance. The quantization grid can, thus, be placed such that the horizontal spacing is equal to the hologram segment size and aligned with the segment center points, i.e. ${\mathrm{\Delta}}_{\tilde{x}}={\mathrm{\Delta}}_{x}$. The axial quantization step, on the other hand, can be determined based on the depth acuity of the HVS. The depth acuity of the HVS depends on several factors and has therefore been studied mostly experimentally. However, the stereoacuity can be used to obtain a rough estimate in most scenarios: at depth *z*, the depth difference *δ*_{z}(*z*) just detectable by the HVS can be estimated as [20]

$$\delta_{z}(z)=\frac{z^{2}\,\delta_{\gamma}}{c\,B}, \tag{10}$$

where *δ*_{γ} is the angular measure for the stereoacuity of the HVS, which is typically around 0.5 arcmin, *B* is the baseline between the two eyes of the human, which is typically around 6.5 cm, and *c* = 3437.75 is a constant defining the conversion from radians to arcmin. The quantization step in depth can, thus, be chosen as ${\mathrm{\Delta}}_{\tilde{z}}={\delta}_{z}\left(d\right)$.

Having formed the quantization grid, the quantized point coordinates on the grid can be obtained by finding the nearest grid points to the emission coordinates as [14]

$$\left(\tilde{x}_{mi},\tilde{z}_{mi}\right)=\underset{(\tilde{x},\tilde{z})\in S_{q}}{\arg\min}\;\left\Vert \left(x_{mi},z_{mi}\right)-\left(\tilde{x},\tilde{z}\right)\right\Vert,$$

where *S*_{q} is the entire set of points on the quantization grid. As shown in Fig. 3(b), this divides the scene space into equal-size quantization volumes (voxels) surrounding each quantization point.
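The quantization-grid construction described above can be sketched numerically as follows. This is an illustrative sketch using the paper's example numbers (0.5 arcmin stereoacuity, 6.5 cm baseline, 300 mm viewing distance); the function and variable names are ours.

```python
import numpy as np

def axial_step(z, delta_gamma_arcmin=0.5, baseline=0.065, c=3437.75):
    """Just-noticeable depth difference (in meters) at viewing distance z,
    from the stereoacuity estimate delta_z(z) = z^2 * delta_gamma / (c * B)."""
    return z ** 2 * delta_gamma_arcmin / (c * baseline)

def quantize_points(x, z, dx, dz):
    """Snap emission points (x, z) to the nearest node of a uniform grid
    with lateral step dx and axial step dz (nearest-grid-point quantization)."""
    return np.round(x / dx) * dx, np.round(z / dz) * dz

dz = axial_step(0.300)   # about 0.20 mm at a 300 mm viewing distance
xq, zq = quantize_points(np.array([31e-6]), np.array([0.30031]), 64e-6, dz)
```

With a 64 μm lateral step (the hogel size) and the stereoacuity-based axial step, an emission point at 31 μm lateral offset snaps to the grid node at 0, i.e. to the voxel centered on the nearest hogel.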

After obtaining the set of voxels representing the quantized scene points, it is critical to accurately define the light propagation between these voxels and the hologram segments. Let us denote the LF between such voxels and the hologram segment *m* as ${\tilde{L}}_{1}\left(m{\mathrm{\Delta}}_{x},{s}_{mk}\right)$, where the corresponding rays intersect the camera plane at *s*_{mk}, *k* = 1, 2, …, *K*; *K* ≤ *N*, where *N* represents the total number of angular samples per hogel. Please note that *k* is used for voxel indexing. As proposed in [14], one can define these new LF samples using the captured discrete LF *L*_{1}[*m*, *i*] as

$$\tilde{L}_{1}\left(m\Delta_{x},s_{mk}\right)=\sum_{i\in V^{mk}}L_{1}[m,i],$$

where *V*^{mk} denotes the set of indices *i* for which the emission points corresponding to the captured rays for hogel *m* are inside the voxel *k*. These sets of indices can be found as

$$V^{mk}=\left\{ i:\left(x_{mi},z_{mi}\right)\in\left(X^{mk},Z^{mk}\right)\right\},$$

where (*X*^{mk}, *Z*^{mk}) is the set of original ray emission coordinates that are quantized onto the point $\left({\tilde{x}}_{mk},{\tilde{z}}_{mk}\right)$ representing the voxel *k* for hogel *m*. As demonstrated in Figs. 3(c) and 3(d), this summation-based mapping procedure can create originally nonexistent intensity variations along the angular dimension, i.e. on the *s* plane. For example, the value of ${\tilde{L}}_{1}\left(m{\mathrm{\Delta}}_{x},{s}_{m5}\right)$ is the sum of the captured light rays corresponding to *i* = 6 and *i* = 7, whereas ${\tilde{L}}_{1}\left(m{\mathrm{\Delta}}_{x},{s}_{m1}\right)$ is obtained from the single ray corresponding to *i* = 1. Thus, the new LF values vary in relation to the number of rays within the voxel. This creates undesired intensity variations along the quantized scene recorded on the hologram, which are observed as dark and light regions along the scene surfaces in the perceived images, as will be demonstrated in section 5.
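The artifact produced by this summation-based mapping can be illustrated with a toy example for one hogel. The ray-to-voxel assignment and the intensities below are made up for illustration; the point is only that the mapped sample value grows with the voxel's ray count even when the captured radiance is uniform.

```python
from collections import defaultdict

# view index i -> voxel index k (a hypothetical assignment for one hogel)
rays = {1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
L1 = {i: 1.0 for i in rays}        # uniform captured intensities

# Summation-based mapping: accumulate all rays falling into each voxel.
L_tilde = defaultdict(float)
for i, k in rays.items():
    L_tilde[k] += L1[i]

# Voxel 0 collects 2 rays, voxel 1 collects 3, so the mapped samples
# differ (2.0 vs 3.0) although the underlying radiance was constant.
```

This ray-count dependence is exactly the source of the dark and light bands along scene surfaces that the proposed resampling approach removes.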

## 4. Proposed speckle suppression method

The speckle reduction method we propose addresses the issues related to the light propagation between the quantized voxels and the hologram segments highlighted in the previous section. The problem is how to accurately obtain the desired unknown samples ${\tilde{L}}_{1}\left(m{\mathrm{\Delta}}_{x},{s}_{mk}\right)$ from the known data samples *L*_{1}[*m*, *i*]. The accurate signal processing oriented solution to this problem is, thus, to reconstruct the continuous function (i.e. the continuous LF *L*_{1}(*m*Δ_{x}, *s*) between *m*Δ_{x} and the camera plane *s*) and resample this function at the sample positions *s* = *s*_{mk}. Such a solution is made available by using DSLF capture and then obtaining ${\tilde{L}}_{1}\left(m{\mathrm{\Delta}}_{x},{s}_{mk}\right)$, *k* = 1, 2, …, *K*, from *L*_{1}[*m*, *i*] via linear interpolation.

The resampling procedure is performed for each hologram segment as follows. For the light ray emitted from the quantized point $\left({\tilde{x}}_{mk},{\tilde{z}}_{mk}\right)$ corresponding to voxel *k* for hogel *m*, first the corresponding intersection point *s*_{mk} on the camera plane is obtained as

$$s_{mk}=m\Delta_{x}+\left(m\Delta_{x}-\tilde{x}_{mk}\right)\frac{d}{\tilde{z}_{mk}}.$$

The nearest view indices *i*_{1} and *i*_{2} are then acquired from the two nearest camera plane coordinates of the captured set of rays *i* as

$$i_{1}=\left\lfloor \frac{s_{mk}}{\Delta_{s}}\right\rfloor,\qquad i_{2}=i_{1}+1.$$

Using *L*_{1}[*m*, *i*_{1}] and *L*_{1}[*m*, *i*_{2}], the intensity value of ${\tilde{L}}_{1}\left(m{\mathrm{\Delta}}_{x},{s}_{mk}\right)$ is obtained through linear interpolation, i.e.

$$\tilde{L}_{1}\left(m\Delta_{x},s_{mk}\right)=L_{1}[m,i_{1}]+\frac{s_{mk}-i_{1}\Delta_{s}}{\Delta_{s}}\left(L_{1}[m,i_{2}]-L_{1}[m,i_{1}]\right).$$
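The resampling procedure for a single ray can be sketched as follows. This is an illustrative sketch under our reading of the geometry (depth measured from the hologram plane toward the scene, camera plane at distance *d* in front of it); the function name and the example values are assumptions.

```python
import numpy as np

def resample_ray(L1_row, ds, d, x_hogel, xq, zq):
    """Linearly interpolated L~_1 value for the ray from the quantized
    point (xq, zq) through the hogel center x_hogel.

    L1_row : captured LF samples L_1[m, i] for this hogel (spacing ds)
    d      : distance between camera and hologram planes
    zq     : quantized point depth behind the hologram plane
    """
    # Intersect the line (quantized point -> hogel) with the camera plane.
    s = x_hogel + d * (x_hogel - xq) / zq
    i1 = int(np.floor(s / ds))          # nearest captured view below s
    i2 = i1 + 1                          # nearest captured view above s
    w = (s - i1 * ds) / ds               # fractional position in [0, 1)
    return (1 - w) * L1_row[i1] + w * L1_row[i2]

# Smoothly varying captured row: the interpolated value lands between
# the two neighboring samples, unlike the summation-based mapping.
L1_row = np.linspace(1.0, 2.5, 16)
val = resample_ray(L1_row, ds=1e-3, d=0.3, x_hogel=0.0, xq=-32e-6, zq=6e-3)
```

Here the point 6 mm behind the hologram and 32 μm off the hogel axis projects 1.6 mm along the camera plane, so the result is the 40/60 blend of views 1 and 2.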

The DSCP object field, similarly to Eq. (1), is obtained from the resampled LF samples as

$$\tilde{H}(x)=\sum_{k}\sqrt{\tilde{L}_{1}\left(m\Delta_{x},s_{mk}\right)}\,\exp\!\left(j\frac{2\pi}{\lambda}\,\tilde{r}_{mk}\right),\quad x\in\text{segment } m,$$

where ${\tilde{r}}_{mk}$ is the Euclidean distance between the quantized point $\left({\tilde{x}}_{mk},{\tilde{z}}_{mk}\right)$ corresponding to voxel *k* and hogel *m*. This process generates a hologram of the quantized version (in terms of point source locations) of the recorded scene. In order to achieve effective speckle noise reduction, the distance between adjacent scene points recorded on the hologram should be increased, as explained in section 3; i.e., from the complete set of quantized points (and the corresponding light rays), only sparse sets are included in separate frames. The separation is done in the horizontal and vertical directions by including every *N*th row and column of the quantization grid. Each hologram frame generated from such a set of light rays is displayed (or propagated) separately, in sequence. The end result is a combination of the speckle-suppressed frames, each containing the part of the scene corresponding to its sparse set of light rays. Combining these frames results in a speckle-reduced reconstruction of the entire scene as a collection of light rays emitted from the quantized points.
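The sparse-set multiplexing described above can be sketched as follows. This is a minimal sketch in which `reconstruct`-style hologram generation and propagation are replaced by a placeholder multiplication; only the subset selection and the intensity-wise combination are illustrated.

```python
import numpy as np

def sparse_masks(shape, n=4):
    """Boolean masks selecting every n-th row and column of a grid,
    one mask per (row, column) offset pair -> n*n disjoint subsets."""
    rows = np.arange(shape[0])[:, None]
    cols = np.arange(shape[1])[None, :]
    return [(rows % n == r) & (cols % n == c)
            for r in range(n) for c in range(n)]

grid = np.ones((8, 8))                 # stand-in for the quantized point grid
masks = sparse_masks(grid.shape, n=4)  # 16 sparse subsets (4 x 4 multiplexing)
frames = [grid * m for m in masks]     # placeholder per-frame "reconstructions"
combined = sum(frames)                 # intensity-wise superposition
```

Because the subsets are disjoint and exhaustive, every quantized point appears in exactly one frame, and the intensity-wise sum covers the entire scene.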

## 5. Experiments

The validity of the proposed method is evaluated by computational simulations, including comparisons to the random averaging and ray separation [14] methods. For the proposed method, the DSCP object wavefields are generated using rays sampled away from the hologram plane (at *z*_{0}) in the form of multiperspective images recentered with respect to the hologram plane. These images satisfy the DSLF criterion given by Eq. (2) for the given scene, hogel size Δ_{x} and camera sampling distance Δ_{s}. For the random averaging and ray separation methods, the rays are sampled by conventional pinhole cameras placed at the hogel centers. The angular sampling rate is chosen in accordance with the DSLF capture setup to be 2 tan^{−1}(Δ_{s}/(2*z*_{0})). The images and depth maps are acquired using the 3D-modeling software Blender [22]. Only the green color channel is utilized in the hologram calculations, with the corresponding wavelength *λ* = 534 nm.

In order to evaluate the different speckle suppression methods more reliably, only the complex object wave is utilized, i.e. the reconstruction noise that would otherwise be introduced by the conjugate object wave is avoided. The HVS viewing process is simulated by obtaining the image *I*(*u*, *v*) perceived by the viewer via the Fresnel diffraction model as [21]

$$I(u,v)=\left|\mathcal{F}_{l}\left\{ T(s,t)\,\mathcal{F}_{z_{eye}}\left\{ H(x,y)\right\} \right\} \right|^{2},$$

where *H*(*x*, *y*) is the hologram wavefield, *T*(*s*, *t*) is the lens transfer function of the human eye and ${\mathcal{F}}_{z}\{\cdot \}$ is the Fresnel propagation operation by distance *z*. The eye is considered as a camera with a circular aperture and a thin lens placed at a distance *z*_{eye} from the hologram plane. The distance *l* between the lens and sensor (i.e. pupil and retina) is fixed at 25 mm. The eye is focused at distance *d*_{f} by choosing the focal length *f* as

$$\frac{1}{f}=\frac{1}{d_{f}}+\frac{1}{l}.$$

The speckle suppression capabilities of the three methods (random averaging, ray separation [14], and the proposed accurate ray separation) are first evaluated by comparing speckle contrasts. The speckle contrast *C* is defined as [2]

$$C=\frac{\sigma}{\tilde{I}},$$

where *σ* is the standard deviation and $\tilde{I}$ is the mean intensity of the reconstructed image; that is, a better speckle suppression method results in a lower speckle contrast. Three scenes are utilized for this purpose, each consisting of a planar monochromatic object placed 6 mm, 9 mm or 12 mm behind the hologram plane. The hologram parameters as well as the LF capture parameters for the proposed method are given in Table 1. The rays used by the random averaging and ray separation methods are sampled on the hologram plane with an angular sampling step of 0.29°. The multiplexing factor for the ray separation and proposed methods is chosen to be 4 × 4, and the random averaging method is applied with 16 frames, i.e. in all cases 16 different reconstructions are superposed intensity-wise to obtain the final reconstructed image. The lateral quantization step is chosen to be the hogel size, i.e. ${\mathrm{\Delta}}_{\tilde{x}}={\mathrm{\Delta}}_{x}=64\phantom{\rule{0.2em}{0ex}}\mu \mathrm{m}$. The simulated human eye is set to be 300 mm away from the hologram plane and the pupil size is set to 3 mm. The eye is focused on the object surface in each case.
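The speckle contrast metric used in this comparison can be computed as in the following minimal sketch. As a sanity check, a fully developed speckle pattern (exponentially distributed intensity) gives a contrast close to 1, while a uniform image gives 0; the synthetic inputs here are illustrative only.

```python
import numpy as np

def speckle_contrast(intensity):
    """C = sigma / mean of the reconstructed intensity image;
    lower C means better speckle suppression."""
    intensity = np.asarray(intensity, dtype=float)
    return intensity.std() / intensity.mean()

rng = np.random.default_rng(0)
# Fully developed speckle: exponentially distributed intensity -> C near 1.
c_speckled = speckle_contrast(rng.exponential(1.0, (512, 512)))
# Uniform intensity -> C = 0.
c_uniform = speckle_contrast(np.full((512, 512), 0.5))
```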

The simulated reconstructed images in Figs. 4(a)–(d), together with the speckle contrasts presented in Table 2, demonstrate the speckle suppression performance of each method. The ray separation method and its proposed accurate version suppress speckles more effectively than the random averaging method. However, the ray separation method produces varying results depending on the depth of the planar object (as seen in the top and middle rows of Fig. 4(c)), which is an undesirable feature in the case of more realistic scenes with 3D objects. The proposed method, on the other hand, successfully suppresses speckle at all simulated depths without introducing periodic intensity variations on the object surface. The speckle contrast values agree with the visual analysis.

In order to properly evaluate the speckle suppression capabilities in a more realistic scenario, another experiment utilizing a scene containing a 3D object is performed. The scene and LF capture setup for the second experiment is shown in Fig. 5. The LF capture setup for the proposed method as well as the ray sampling parameters for the random averaging and ray separation methods are the same as in the previous experiment. From the captured data, four different variants of the DSCP object field are generated, i.e. one without speckle suppression, one utilizing random averaging, one with the ray separation method [14] and one with the proposed method. For the ray separation as well as the proposed method, the scene is quantized with quantization steps ${\mathrm{\Delta}}_{\tilde{x}}={\mathrm{\Delta}}_{x}=64\phantom{\rule{0.2em}{0ex}}\mu \mathrm{m}$ and ${\mathrm{\Delta}}_{\tilde{z}}=0.20\phantom{\rule{0.2em}{0ex}}\text{mm}$ in accordance with Eq. (9) and Eq. (10), respectively. The final hologram reconstructions are again obtained by intensity-wise summation of 16 reconstructions for random averaging and 4 × 4 sparse object reconstructions for the ray separation and proposed methods. The HVS viewing process is simulated for each hologram from three different view positions: (−15, −15) mm, (0, 0) mm and (15, 15) mm. The aperture diameter *T* of the eye is chosen again as 3 mm and it is focused at the hologram plane, i.e. *d _{f}* = 300 mm.

The simulation results, along with the reference views, are shown in Fig. 6. The reference view *I*_{ref}(*u*, *v*) simulates the aperture effects of the human eye and is generated as a superposition of elementary apertures, i.e. as a sum of several pinhole images within the extent of the lens. Each of the simulated views is then compared against the corresponding reference view, and the peak signal-to-noise ratio (PSNR) is evaluated as the visual image quality criterion. The results are presented in Table 3. Please note that, as the dynamic ranges of the images reconstructed by the different methods differ, the PSNRs are calculated after mean normalization with respect to the reference image.

The simulation results show that both the ray separation and its proposed accurate version suppress the speckle noise effectively. However, the basic ray separation method images suffer from undesirable intensity patterns on the object surfaces due to its simplistic voxel mapping solution. Though the speckle patterns are adequately suppressed locally, the depth-dependent patterns on the reconstructed views of the scene degrade the overall visual quality. This can be clearly seen in the zoomed-in images shown in Fig. 6. On the other hand, the proposed solution alleviates these issues due to its more accurate and robust mapping approach. The analysis of the perceived visual quality is supported by the corresponding PSNR values shown in Table 3 as the proposed method achieves the best results in all three views.

Furthermore, as the proposed method preserves the spectral content of the underlying incoherent image that is to be perceived by the viewer (as in the case of random averaging), no resolution is lost during the multiplexing procedure [14]. This is further demonstrated in Fig. 7 comparing detailed regions of the reference image with the reconstructed image obtained by using the proposed method.

The quantization of emission coordinates to the voxel grid can cause occlusion-related issues in certain areas. The problem is mostly present in areas where the emission points cross a quantization step in depth. In such cases, several emission points can be quantized to voxels that have the same lateral position but are at different depths. As some of these voxels are actually occluded, including all of them in the hologram calculations causes errors in the reconstructed images, and this therefore needs to be taken into account. As utilized in our method, the simplistic approach of including only the front-most of such voxels provides a reasonable solution to this problem. A small amount of error nonetheless remains in the reconstructed images, in the form of dark stripes at locations corresponding to quantized depth transitions. This can be seen in the right-most zoomed-in image shown in Fig. 6.

## 6. Conclusion

We have presented an improved speckle suppression method for coherent stereograms that is mainly based on the light ray separation technique previously proposed in [14]. The scene points corresponding to the captured rays are first quantized onto a 3D uniform grid, resulting in a voxel-based representation. The scene is then described as a superposition of sparse sets of voxels obtained by undersampling the grid in the lateral directions. The holographic reconstruction is performed by incoherent (intensity-wise) superposition of several reconstructions corresponding to such sparse sets of voxels.

Numerical simulations have demonstrated that speckle suppression is successfully achieved for 3D scenes. The speckle suppression capability has been shown to be significantly better than that of the random averaging approach, which also relies on time-multiplexed reconstruction. The ray resampling enabled by the notion of DSLF provides an accurate tool for calculating the light propagation between the quantized sparse scene points, i.e. voxels, and the hogels. In this way, speckle suppression is achieved without introducing sampling-related artifacts in the reconstructed images, which is the main improvement over the ray separation method proposed in [14].

The current implementation handles occluded voxels in a simplistic manner: if multiple voxels share the same lateral position but lie at different depths, all but the front-most one are ignored. This approach has been shown to provide a reasonable solution. However, artifacts still remain in the reconstructed images in the form of dark stripes at locations corresponding to quantized depth transitions of the object. A more sophisticated treatment of this occlusion issue could thus further improve the reconstruction quality.

## References and links

**1. **D. Gabor, “A new microscopic principle,” Nature **161**(4098), 777–778 (1948). [CrossRef] [PubMed]

**2. **J. W. Goodman, *Speckle Phenomena in Optics: Theory and Applications* (Roberts and Company Publishers, 2007).

**3. **Q. Y. J. Smithwick, J. Barabas, D. E. Smalley, and V. M. Bove Jr., “Interactive holographic stereograms with accommodation cues,” Proc. SPIE **7619**, 761903 (2010). [CrossRef]

**4. **P. W. McOwan, W. J. Hossack, and R. E. Burge, “Three-dimensional stereoscopic display using ray traced computer generated holograms,” Opt. Commun. **82**(1–2), 6–11 (1991). [CrossRef]

**5. **J. Amako, H. Miura, and T. Sonehara, “Speckle-noise reduction on kinoform reconstruction using a phase-only spatial light modulator,” Appl. Opt. **34**(17), 3165–3171 (1995). [CrossRef] [PubMed]

**6. **P. Memmolo, V. Bianco, M. Paturzo, B. Javidi, P. Netti, and P. Ferraro, “Encoding multiple holograms for speckle-noise reduction in optical display,” Opt. Express **22**(21), 25768–25775 (2014). [CrossRef] [PubMed]

**7. **V. Bianco, P. Memmolo, M. Paturzo, A. Finizio, B. Javidi, and P. Ferraro, “Quasi noise-free digital holography,” Light Sci. Appl. **5**(9), e16142 (2016). [CrossRef]

**8. **L. Rong, W. Xiao, F. Pan, S. Liu, and R. Li, “Speckle noise reduction in digital holography by use of multiple polarization holograms,” Chin. Opt. Lett. **8**(7), 653–655 (2010). [CrossRef]

**9. **F. Yaraş, H. Kang, and L. Onural, “Real-time phase-only color holographic video display system using LED illumination,” Appl. Opt. **48**(34), H48–H53 (2009). [CrossRef]

**10. **M. Yamaguchi, H. Endoh, T. Honda, and N. Ohyama, “High-quality recording of a full-parallax holographic stereogram with a digital diffuser,” Opt. Lett. **19**(2), 135–137 (1994). [CrossRef] [PubMed]

**11. **Y. Takaki and M. Yokouchi, “Speckle-free and grayscale hologram reconstruction using time-multiplexing technique,” Opt. Express **19**(8), 7567–7579 (2011). [CrossRef] [PubMed]

**12. **Y. Takaki and K. Taira, “Speckle regularization and miniaturization of computer-generated holographic stereograms,” Opt. Express **24**(6), 6328–6340 (2016). [CrossRef] [PubMed]

**13. **T. Kurihara and Y. Takaki, “Speckle-free, shaded 3D images produced by computer-generated holography,” Opt. Express **21**(4), 4044–4054 (2013). [CrossRef] [PubMed]

**14. **T. Utsugi and M. Yamaguchi, “Speckle-suppression in hologram calculation using ray-sampling plane,” Opt. Express **22**(14), 17193–17206 (2014). [CrossRef] [PubMed]

**15. **M. Levoy and P. Hanrahan, “Light field rendering,” in Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, (ACM, 1996), pp. 31–42.

**16. **M. Yamaguchi, H. Hoshino, T. Honda, and N. Ohyama, “Phase-added stereogram: calculation of hologram using computer graphics technique,” Proc. SPIE **1914**, 25–31 (1993). [CrossRef]

**17. **H. Kang, T. Yamaguchi, and H. Yoshikawa, “Accurate phase-added stereogram to improve the coherent stereogram,” Appl. Opt. **47**(19), D44–D54 (2008). [CrossRef] [PubMed]

**18. **M. Lucente, “Diffraction-specific fringe computation for electro-holography,” Ph.D. dissertation (Massachusetts Institute of Technology, 1994).

**19. **Z. Lin and H.-Y. Shum, “A geometric analysis of light field rendering,” Int. J. Comput. Vis. **58**(2), 121–138 (2004). [CrossRef]

**20. **C. H. J. Howard, “A test for the judgment of distance,” Am. J. Ophthalmol. **2**, 656–675 (1919). [CrossRef]

**21. **J. W. Goodman, *Introduction to Fourier Optics*, 2nd ed. (McGraw-Hill, 1996).

**22. ** Blender Foundation, http://www.blender.org.