Abstract

We propose a speckle noise reduction method for the generation of coherent holographic stereograms. The method employs a densely sampled light field (DSLF) of the scene together with depth information acquired for each ray in the captured DSLF. Speckle reduction is achieved via the ray separation technique, where the scene is first described as a superposition of sparse sets of point sources corresponding to separated sets of rays, and the holographic reconstructions corresponding to these sparse sets of point sources are then added incoherently (intensity-wise) to obtain the final reconstruction. The proposed method handles the light propagation between the sparse scene points and hologram elements accurately by utilizing ray resampling based on the notion of DSLF. As a result, as demonstrated via numerical simulations, significant speckle suppression is achieved without introducing sampling-related reconstruction artifacts.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

As a three-dimensional (3D) display method, holography [1] is often considered the ultimate way to visually replicate a 3D scene, i.e. to deliver all of the visual cues necessary for proper 3D perception, such as continuous motion parallax, correct spatial relations between objects, and accommodation. Due to the coherent imaging techniques utilized, however, holographic reconstructions suffer from speckle noise. Speckle patterns are random in nature with high contrast and high spatial frequency, and thus heavily degrade the visual quality of the reconstructed images. In traditional optical holography, coherent illumination creates random phase distributions on the (rough) scene surface. This results in random interference of scene points on the hologram plane, which is observed as speckle noise in the reconstructed images [2].

On the other hand, computer-generated holography provides a way to obtain the hologram numerically by simulating the physical wave propagation during recording. A hologram obtained in this manner is commonly referred to as a computer-generated hologram (CGH). Stereograms constitute an important and widely used category of CGHs, especially due to their ease of application to real-life scenes. In particular, the necessary content can be captured by conventional cameras as a set of multiview images. Stereograms can be categorized into incoherent and coherent types, depending on the data they utilize. Incoherent stereograms are purely image-based, that is, they are generated from a set of multiperspective images. Coherent stereograms, on the other hand, require information about object location, either in the form of a 3D model (e.g. a point cloud) or multiperspective images coupled with depth information (i.e. depth maps). The availability of additional 3D information in coherent stereograms brings critical improvements in several aspects of reconstruction quality compared to incoherent ones, such as delivering correct accommodation cues [3]. Although 3D object models are often very precise and therefore beneficial in CGH applications, utilizing multiperspective images and depth information has some important advantages. For example, when recording a real scene, explicit object information is not available. Furthermore, such a scene representation benefits from advanced rendering techniques in computer graphics, enabling a reduction of the computational burden [4]. Occlusions are also intrinsically handled by image- and depth-based holograms, as the perspective views record them correctly on the hologram.

It is a common practice in CGHs to employ random phase distributions (e.g. assigned to a set of point sources) so as to simulate diffuse diffraction of light from the object. This creates speckle noise patterns on the reconstructed images similar to those in optical holograms. One widely used method for speckle suppression in electro-holography is the so-called averaging method, where several CGH frames with statistically independent speckle patterns are averaged (intensity-wise) through multiple recordings by time-multiplexed reconstruction [5–8]. Although this speckle averaging approach comes at the expense of computational cost or more complicated optics, unlike the methods that suppress speckle by reducing the temporal [9] or spatial [10] coherence of the light, it does not suffer from loss of resolution. Nevertheless, the speckle reduction performance of the speckle averaging method is limited in efficiency: for N holograms utilized in the averaging, the speckle contrast is reduced by a factor of 1/√N [5]. Alternative solutions proposed for incoherent stereograms include spatially separating the fringe patterns in each single hologram of multiple recordings, which are again averaged by time-multiplexed reconstruction [11], and phase distribution manipulation [12]. Such methods do not suffer from the abovementioned theoretical limitation inherent to speckle averaging, and are thus able to reduce the speckle noise more effectively.

When examining speckle suppression for coherent stereograms, on the other hand, it is important to consider the object information utilized in the hologram generation. Speckle suppression in model-based coherent stereograms has been achieved, for instance, through superposition of CGH frames obtained from sparse sets of point sources [11, 13]. A similar solution based on sparse sets of light rays has also been proposed for coherent stereograms utilizing multiperspective images and depth information [14]. These methods also reduce the speckle noise more effectively than the speckle averaging methods [14].

The speckle noise reduction method that we propose in this paper for coherent stereograms also relies on multiperspective images and depth data. The proposed method is mainly based on the ray separation solution previously presented in [14]. That is, the speckle patterns are suppressed by generating several CGH frames for different sets of sparse (quantized) scene points, corresponding to sparse sets of rays, and combining them in a time-multiplexed manner. Although this approach suppresses the speckle noise effectively, it does not accurately solve the light propagation problem between the quantized scene points and the hologram elements (hogels), which degrades the accuracy of the subsequent reconstructions. The method that we propose in this paper approaches the problem through signal processing means by utilizing the notion of the densely sampled light field (DSLF). In particular, the light propagation between the quantized scene points and the hogels is defined via accurate ray resampling from the DSLF.

Below in section 2, we start by describing the generation of a coherent stereogram from the captured light field and depth maps. In section 3, we discuss the basic principle behind the ray separation method, and in section 4 we present the proposed approach. Finally, in section 5, numerical simulations are presented in which the speckle suppression performance of the proposed method is compared with the existing techniques.

2. Discrete LF capture and coherent stereograms

2.1. Coherent stereogram generation from discrete LF and depth information

Considering geometrical optics and rays as the fundamental light carriers, a given space can be interpreted as a collection of light rays. The light field (LF) describes the intensities of light rays traveling from different points in space in different directions. Assuming monochromatic illumination and static scenes with a transparent medium for the light to travel in, the LF can be represented as a 4D radiance function [15]. This function can be parametrized in different ways, including a point and a direction, points on two (parallel) planes, or point pairs on a 3D surface. To simplify the analysis and visualizations, let us consider a 2D cross-section of the 3D space. Thus, utilizing the two-plane parametrization, a continuous LF function L1(x, s) is defined between planes denoted by x and s.

One can calculate the CGH of a scene based on the captured LF together with the depth information of each ray. Let us define the hologram on the x plane as denoted in Fig. 1. In practice, the continuous LF is rarely available, and capturing the LF usually requires discretization on the two parametrization planes (e.g. when capturing multiperspective images and depth). That is, a discrete set of samples is taken from the continuous LF by sampling the planes x and s at intervals of Δx and Δs, respectively, resulting in the discrete LF L1(mΔx, iΔs) = L1[m, i], where m = 1, 2, …, M and i = 1, 2, …, N. Utilizing such information for the hologram generation results in segmentation on the hologram plane due to the discretization on the x plane. Thus, the discretization on the x plane determines the perceived spatial resolution when viewing the hologram. The sampling on the s plane, on the other hand, corresponds to view-dependent quality aspects of the hologram such as parallax and occlusion.

Fig. 1 Discrete light field and hologram definitions according to two-plane parametrization.

There are several segmented coherent holographic representations, such as the phase-added stereogram (PAS) [16] and the diffraction-specific coherent panoramagram (DSCP) [3]. Similarly to their incoherent alternative, holographic stereograms (HSs), these CGHs segment the hologram into elements containing several hologram pixels. In HS and PAS these segments are called holographic elements, and they produce a piecewise planar approximation of the wavefield. The DSCP, on the other hand, is divided into so-called wavefront elements (wafels), which have controllable curvatures to produce sharp points even for deep scenes. The DSCP therefore provides a more accurate representation than other coherent segmented CGH representations, such as the PAS and the accurate PAS [17]. Thus, in this paper we consider the DSCP as an accurate coherent stereogram representation. The wavefield segments of the DSCP approximate the field as segments of spherical waves of varying radius, the radius depending on the distance between the hologram and the emission location of each light ray. Based on the discrete LF L1[m, i] together with depth information giving a corresponding point source location (xmi, zmi) for each ray, the object wavefield of the DSCP is defined as [3]

$$O_{DSCP}(x)=\sum_{m}\operatorname{rect}\left(\frac{x-m\Delta_x}{\Delta_x}\right)\sum_{i}\frac{\sqrt{L_1[m,i]}}{r_{mi}}\exp\left[j\frac{2\pi}{\lambda}\left(\sqrt{(x-x_{mi})^2+z_{mi}^2}-z_{mi}\right)\right],\tag{1}$$
where λ is the wavelength of the monochromatic light; xmi and zmi are the x- and z-coordinates of the point source corresponding to ray [m, i]; and rmi is the Euclidean distance between that point source and the hologram segment indexed by m.
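To make Eq. (1) concrete, the following minimal NumPy sketch evaluates the object field of a single hogel; the function name and array layout are our own, and the square root converting the ray radiance L1[m, i] into a field amplitude reflects our reading of the equation:

```python
import numpy as np

def dscp_hogel_field(x, m, dx, wavelength, L1_m, src_x, src_z):
    """Sketch of the DSCP object field of Eq. (1), restricted to hogel m.

    x            : sample positions on the hologram plane (1D array, meters)
    L1_m         : captured radiances L1[m, i] for this hogel (1D array)
    src_x, src_z : per-ray point source coordinates (x_mi, z_mi)
    """
    field = np.zeros_like(x, dtype=complex)
    inside = np.abs(x - m * dx) <= dx / 2          # rect(): samples of segment m
    k = 2 * np.pi / wavelength
    for L, xs, zs in zip(L1_m, src_x, src_z):
        r_mi = np.hypot(m * dx - xs, zs)           # source-to-segment distance
        # spherical wavelet with the piston term z_mi removed, as in Eq. (1)
        phase = k * (np.sqrt((x[inside] - xs) ** 2 + zs ** 2) - zs)
        field[inside] += np.sqrt(L) / r_mi * np.exp(1j * phase)
    return field
```

Summing such segment fields over all hogels m yields the complete object wavefield.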

The hologram generation imposes strict restrictions on the discrete LF. The properties of the human visual system (HVS), e.g. its resolution limitations, usually play a critical role in determining the sampling parameters Δx and Δs [18]. Within the scope of this paper, however, the LF sampling requirement is derived to ensure accurate ray resampling. This is because, as will be discussed in section 3, the ray separation method quantizes the point sources corresponding to captured rays onto grids different from the original (capture) grid. Hence, in order to accurately calculate the intensity distribution for the new set of quantized rays, accurate resampling from the captured LF L1[m, i] is required. The following section discusses LF capture and sampling fulfilling such requirements.

2.2. DSLF capture as multiperspective images

The discrete LF required for hologram generation can be captured with a multiperspective camera setup, placing cameras on the parametrization plane s (i.e. the camera plane) separated by the LF sampling distance Δs. A third plane containing the camera sensors is introduced behind the camera apertures (see Fig. 2), thus capturing the discrete LF between the camera and sensor planes according to the two-plane parametrization. The camera and sensor planes are denoted by s and u, respectively, and the distance between the two planes by l. The continuous LF between the two planes is denoted by L2(s, u). This LF can be obtained from the LF L1(x, s) defined between the hologram and camera (or viewer) planes, as the radiance along each light ray remains constant (due to the transparent medium assumption).

Fig. 2 Capture setup and parameters for DSLF. A set of cameras captures the scene at intervals of Δs, sampling the (recentered) image plane at Δx. The area between zb and zf fulfills the DSLF requirement.

When capturing multiperspective images, two different camera models are often considered: regular and recentering. Depending on the chosen camera model, the capture process has varying requirements. In the regular camera model, high-resolution images with a wide field of view are required to keep the entire scene within the image boundaries across the entire range of perspective views. The recentering camera model, on the other hand, shifts the sensor behind the camera's center of projection, as seen in Fig. 2, thus wasting less image area. Additionally, the correct LF rays are directly captured, assuming correct setup parameters. Although its practical implementation can be problematic, here we utilize the recentering camera model due to its lower resolution requirement and direct LF correspondence.

Capturing the discrete LF with a multiperspective imaging setup results in the discrete LF L2[i, k] being stored in the images. However, as the hologram generation process requires the LF in relation to the hologram plane, i.e. the LF L1[m, i], a solution for obtaining this LF accurately from the captured one is needed. The notion of DSLF provides a structured framework for this purpose. Considering the recentering camera model for LF capture, the DSLF defines the LF sampling such that the continuous function can be retrieved from the discrete LF (with sufficient accuracy) using bilinear interpolation, under the condition that the disparity between adjacent camera images is kept within [−1, 1] pixels during capture [19].

Let us define the scene and capture parameters in accordance with Fig. 2. A set of cameras spaced at distance Δs captures multiperspective views according to the recentering camera model. The recentering plane is placed at distance z0 from the camera plane, and the camera parameters are chosen such that the recentering plane is sampled at distance Δx. The captured scene is limited from the back and front by zb and zf, respectively, with respect to the camera plane. Let us assume that zf ≤ z0 ≤ zb holds, i.e. that the scene boundaries zb and zf are behind and in front of the recentering plane, respectively. The maximum camera spacing achieving dense LF sampling is then defined as the minimum of the camera spacing values corresponding to −1 and 1 pixel disparity:

$$\Delta_s=\min\left\{\frac{\Delta_x z_b}{z_b-z_0},\;\frac{\Delta_x z_f}{z_0-z_f}\right\}.\tag{2}$$
If either of the scene limiting inequalities does not hold, the camera spacing for the corresponding depth limit results in a negative value and can thus be omitted. The camera spacing is maximized when both front and back disparity values are equal [19], i.e.
$$z_{opt}=\frac{2 z_b z_f}{z_b+z_f}.\tag{3}$$
Alternatively, the camera sampling distance Δs can be first chosen and then the scene can be limited to meet the DSLF requirement as
$$z_b=\frac{\Delta_s z_0}{\Delta_s-\Delta_x},\tag{4}$$
$$z_f=\frac{\Delta_s z_0}{\Delta_s+\Delta_x}.\tag{5}$$
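The sampling rules of Eqs. (2)–(5) map directly onto a few lines of code. The sketch below (hypothetical helper names; all distances in meters) computes the maximum camera spacing and the optimal recentering distance, dropping any term whose scene-limit inequality fails:

```python
def max_camera_spacing(dx, z0, zb, zf):
    """Maximum camera spacing for dense LF sampling, Eq. (2).

    A term comes out negative exactly when its scene-limit inequality
    (zb >= z0 or zf <= z0) fails, in which case it is omitted.
    """
    candidates = [dx * zb / (zb - z0), dx * zf / (z0 - zf)]
    return min(c for c in candidates if c > 0)

def optimal_recentering_distance(zb, zf):
    """Recentering distance equalizing front and back disparities, Eq. (3)."""
    return 2 * zb * zf / (zb + zf)
```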

By placing the recentering plane at the hologram plane during DSLF capture such that each pixel corresponds to a hologram segment, the DSLF capture can be connected to the discrete LF required for CGH generation.

3. Speckle suppression by ray separation

The image perceived by the HVS can be modeled as a sum of point spread functions (PSFs) corresponding to point sources on the object. Point sources having overlapping PSFs on the retina interfere with each other under coherent illumination. The random phase distribution on the object points results in a random interference pattern, which is observed as speckle noise. A significant part of the speckle occurs due to interaction between the main lobes of the PSFs. For a diffused surface, the average size of the speckle on the retina can be estimated for laterally separated point sources based on the lateral PSF main lobe width as [2, 14]

$$L_x\approx\frac{2.44\,\lambda l}{T},\tag{6}$$
where T is the human eye lens diameter and l is the distance between the lens and the retina.

The approach of describing the object as a superposition of sets of sparse object points and then calculating and reconstructing the corresponding holograms for these sets separately (i.e. superposing them incoherently) suppresses the speckle noise [11, 14]. This is due to there being less overlap between the PSFs of the object points in each sparse set. In order for the speckle suppression to be effective, the separation between adjacent points in each multiplexed set needs to be larger than the speckle size. Furthermore, the PSF side lobes also contribute to the interference causing speckle, so a further increase in separation is beneficial.

Since our method is mainly based on the ray separation (correspondingly, sparse object points) technique introduced in [14], let us first briefly discuss this technique so as to better address the problems of the existing approach. Although there are no explicit point sources in image-based scene representations, one can associate point sources (ray emission points) with the light rays utilizing the available depth information, as demonstrated in Fig. 3(a). The emission point (xmi, zmi) for the light ray corresponding to pixel m of the captured image at view i is obtained as

$$x_{mi}=m\Delta_x+\frac{z_{mi}\,(i\Delta_s-m\Delta_x)}{d},\tag{7}$$
$$z_{mi}=D[m,i],\tag{8}$$
where D[m, i] is the corresponding depth value and d is the distance between the camera and hologram planes. In order to achieve efficient speckle suppression, the ray emission points need to be quantized onto a predefined scene grid. If the grid is defined such that the quantization step is less than the Rayleigh resolution limit, no spatial resolution is lost in terms of HVS capabilities. For similar reasons, it is also a good and common practice to choose the hologram segment size according to the Rayleigh resolution limit. Consequently, the lateral quantization step can be chosen as [14]
$$\Delta_{\tilde{x}}\approx\frac{1.22\,\lambda d}{T}.\tag{9}$$
The quantization grid can thus be placed such that the horizontal spacing is equal to the hologram segment size and aligned with the segment center points, i.e. Δx̃ = Δx. The axial quantization step, on the other hand, can be determined based on the depth acuity of the HVS. The depth acuity of the HVS depends on several factors and has therefore mostly been studied experimentally. However, the stereoacuity can be used to obtain a rough estimate in most scenarios: at depth z, the depth difference δz(z) just detectable by the HVS can be estimated as [20]
$$\delta_z(z)=\frac{z^2\,\delta\gamma}{cB},\tag{10}$$
where δγ is the angular measure of the stereoacuity of the HVS, which is typically around 0.5 arcmin; B is the baseline between the two eyes, which is typically around 6.5 cm; and c = 3437.75 is a constant converting radians to arcmin. The quantization step in depth can thus be chosen as Δz̃ = δz(d).
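The sketch below (our own helper names; 0-based indexing instead of the 1-based indexing of the text) assembles Eqs. (7)–(10): it assigns emission coordinates to all rays of a depth map stack and derives the two quantization steps. With λ = 534 nm, d ≈ 300 mm, and T = 3 mm, as inferred from the experiment parameters in section 5, the axial step evaluates to roughly the 0.20 mm used there:

```python
import numpy as np

def emission_points(D, dx, ds, d):
    """Per-ray emission coordinates from the depth maps, Eqs. (7)-(8).

    D[m, i] is the depth of the ray through hogel m seen from view i.
    """
    m = np.arange(D.shape[0])[:, None] * dx   # hogel positions m*dx
    i = np.arange(D.shape[1])[None, :] * ds   # camera positions i*ds
    z = D                                     # z_mi = D[m, i], Eq. (8)
    x = m + z * (i - m) / d                   # x_mi, Eq. (7)
    return x, z

def quantization_steps(wavelength, d, T, stereoacuity_arcmin=0.5, B=0.065):
    """Lateral and axial quantization steps, Eqs. (9)-(10)."""
    dxq = 1.22 * wavelength * d / T                       # Rayleigh limit
    dzq = d ** 2 * (stereoacuity_arcmin / 3437.75) / B    # delta_z(d)
    return dxq, dzq
```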

Fig. 3 General structure of the ray separation based speckle suppression method. (a) Assigning emission coordinates to the captured light rays, (b) quantizing each emission point to a voxel (highlighted in red) center point, (c) detailed look at the original and quantized ray distributions, (d) assigning quantized ray intensities based on summation of original ray intensities within the voxels.

Having formed the quantization grid, the quantized point coordinates on the grid can be obtained by finding the nearest grid points to the emission coordinates as [14]

$$(\tilde{x}_{mi},\tilde{z}_{mi})=\underset{(\tilde{x},\tilde{z})\in S_q}{\operatorname{arg\,min}}\left\{(\tilde{x}-x_{mi})^2+(\tilde{z}-z_{mi})^2\right\},\tag{11}$$
where Sq is the entire set of points on the quantization grid. As shown in Fig. 3(b), this divides the scene space into equal-size quantization volumes (voxels) surrounding each quantization point.
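On a uniform grid, the joint argmin of Eq. (11) separates into independent per-axis rounding, so the quantization reduces to the following sketch (assuming a grid anchored at the origin; hypothetical function name):

```python
import numpy as np

def quantize_to_grid(x, z, dxq, dzq):
    """Nearest-grid-point quantization of Eq. (11) on a uniform voxel grid."""
    return np.round(x / dxq) * dxq, np.round(z / dzq) * dzq
```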

After obtaining the set of voxels representing the quantized scene points, it is critical to accurately define the light propagation between these voxels and the hologram segments. Let us denote the LF between such voxels and the hologram segment m as L̃1(mΔx, smk), where the corresponding rays intersect the camera plane at smk, k = 1, 2, …, K, with K ≤ N, where N is the total number of angular samples per hogel. Please note that k is used for voxel indexing. As proposed in [14], one can define these new LF samples using the captured discrete LF L1[m, i] as

$$\tilde{L}_1(m\Delta_x,s_{mk})=\sum_{i\in V_{mk}}L_1[m,i],\tag{12}$$
where Vmk denotes the set of indices i for which the emission points corresponding to captured rays for hogel m are inside the voxel k. These sets of indices can be found as
$$V_{mk}=\frac{1}{\Delta_s}\left[m\Delta_x-\frac{d}{Z_{mk}}(X_{mk}-m\Delta_x)\right],\tag{13}$$
where (Xmk, Zmk) is the set of original ray emission coordinates that are quantized onto the point (x̃mk, z̃mk) representing voxel k for hogel m. As demonstrated in Figs. 3(c) and 3(d), this summation-based mapping procedure can create originally nonexistent intensity variations along the angular dimension, i.e. on the s plane. For example, the value of L̃1(mΔx, sm5) is the sum of the captured light rays corresponding to i = 6 and i = 7, whereas L̃1(mΔx, sm1) is obtained from the single ray corresponding to i = 1. Thus, the new LF values vary with the number of rays within each voxel. This creates undesired intensity variations along the quantized scene recorded on the hologram, which are observed as dark and light regions on the scene surfaces in the perceived images, as will be demonstrated in section 5.
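For concreteness, the summation-based mapping of Eq. (12) can be sketched as follows (hypothetical names; `voxel_of_ray` is assumed to already hold the voxel index k of each captured ray of hogel m). Since each output value sums a varying number of rays, voxels collecting more rays come out brighter, which is exactly the artifact just described:

```python
import numpy as np

def voxel_mapping_sum(L1_m, voxel_of_ray, num_voxels):
    """Summation-based mapping of Eq. (12) for one hogel."""
    L_new = np.zeros(num_voxels)
    np.add.at(L_new, voxel_of_ray, L1_m)   # accumulate ray intensities per voxel
    return L_new
```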

4. Proposed speckle suppression method

The speckle reduction method we propose addresses the issues related to the light propagation between the quantized voxels and hologram segments highlighted in the previous section. The problem is how to accurately obtain the desired unknown samples L̃1(mΔx, smk) from the known data samples L1[m, i]. The accurate, signal-processing-oriented solution to the problem is thus to reconstruct the continuous function (i.e. the continuous LF between mΔx and the camera plane s, L1(mΔx, s)) and resample it at the positions s = smk. Such a solution is made available by using DSLF capture and then obtaining L̃1(mΔx, smk), k = 1, 2, …, K, from L1[m, i] via linear interpolation.

The resampling procedure is performed for each hologram segment as follows. For the light ray emitted from the quantized point (x̃mk, z̃mk) corresponding to voxel k for hogel m, first the corresponding intersection point smk on the camera plane is obtained as

$$s_{mk}=m\Delta_x-\frac{d\,(\tilde{x}_{mk}-m\Delta_x)}{\tilde{z}_{mk}}.\tag{14}$$
The surrounding ray indices i1 and i2 are then acquired from the two nearest camera plane coordinates of the captured set of rays i as
$$i_1=\underset{i\le s_{mk}/\Delta_s}{\operatorname{arg\,min}}\left|i\Delta_s-s_{mk}\right|,\tag{15}$$
$$i_2=\underset{i> s_{mk}/\Delta_s}{\operatorname{arg\,min}}\left|i\Delta_s-s_{mk}\right|.\tag{16}$$
Finally, utilizing the corresponding LF samples L1[m, i1] and L1[m, i2], the intensity value of L̃1(mΔx, smk) is obtained through linear interpolation, i.e.
$$\tilde{L}_1(m\Delta_x,s_{mk})=\frac{L_1[m,i_1]\,(i_2\Delta_s-s_{mk})+L_1[m,i_2]\,(s_{mk}-i_1\Delta_s)}{\Delta_s}.\tag{17}$$
When compared to the voxel mapping in Eq. (12), the major difference is that our solution provides a structured signal processing framework enabling accurate calculation of light propagation between the quantized voxels and hogels.
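A minimal sketch of the proposed resampling for a single quantized ray might look as follows (our own function name and 0-based indexing; clipping at the ends of the captured view range is our assumption, as the treatment of boundary rays is not specified):

```python
import numpy as np

def resample_ray(L1_m, dx, ds, d, m, xq, zq):
    """DSLF resampling of one quantized ray via Eqs. (14)-(17)."""
    s = m * dx - d * (xq - m * dx) / zq       # camera-plane intersection, Eq. (14)
    i1 = int(np.clip(np.floor(s / ds), 0, len(L1_m) - 1))   # index from below
    i2 = int(np.clip(i1 + 1, 0, len(L1_m) - 1))             # index from above
    w = (s - i1 * ds) / ds                    # fractional position between views
    return (1 - w) * L1_m[i1] + w * L1_m[i2]  # linear interpolation, Eq. (17)
```

The DSLF capture condition of Eq. (2) is what makes this simple linear interpolation sufficiently accurate.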

The DSCP object field, similarly to Eq. (1), is obtained from the resampled LF samples as

$$O(x)=\sum_{m}\operatorname{rect}\left(\frac{x-m\Delta_x}{\Delta_x}\right)\sum_{k}\frac{\sqrt{\tilde{L}_1(m\Delta_x,s_{mk})}}{\tilde{r}_{mk}}\exp\left[j\frac{2\pi}{\lambda}\left(\sqrt{(x-\tilde{x}_{mk})^2+\tilde{z}_{mk}^2}-\tilde{z}_{mk}\right)\right],\tag{18}$$
where r̃mk is the distance between the centers of voxel k and hogel m. This process generates a hologram of the quantized version (in terms of point source locations) of the recorded scene. In order to achieve effective speckle noise reduction, the distance between adjacent scene points recorded on the hologram should be increased as explained in section 3, i.e. from the complete set of quantized points (and corresponding light rays), only sparse sets are included in separate frames. The separation is done in the horizontal and vertical directions by including every Nth row and column of the quantization grid, as sketched below. Each hologram frame generated from such a set of light rays is displayed (or propagated) separately, in sequence. Combining these speckle-suppressed frames, each containing the part of the scene corresponding to one sparse set of light rays, results in a speckle-reduced reconstruction of the entire scene as a collection of the light rays emitted by the quantized points.
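The separation into sparse sets can be sketched as below; each quantized voxel is assumed to carry integer row/column indices on the quantization grid, and each yielded mask selects the voxels of one of the N × N multiplexed frames:

```python
import numpy as np

def sparse_voxel_sets(rows, cols, N):
    """Split the voxels into N*N sparse sets: every Nth row and column,
    with a different (row, column) offset per multiplexed frame."""
    for dr in range(N):
        for dc in range(N):
            yield (rows % N == dr) & (cols % N == dc)
```

With N = 4 this yields the 16 frames used in the experiments of section 5.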

5. Experiments

The validity of the proposed method is evaluated by computational simulations, including comparisons to the random averaging and ray separation [14] methods. For the proposed method, the DSCP object wavefields are generated using rays sampled away from the hologram plane (at z0) in the form of multiperspective images recentered with respect to the hologram plane. These images satisfy the DSLF criterion given by Eq. (2) for the given scene, hogel size Δx, and camera sampling distance Δs. For the random averaging and ray separation methods, the rays are sampled by conventional pinhole cameras placed at the hogel centers. The angular sampling rate is chosen in accordance with the DSLF capture setup to be 2 tan⁻¹(Δs/(2z0)). The images and depth maps are acquired with the 3D modeling software Blender [22]. Only the green color channel is utilized in the hologram calculations, with the corresponding wavelength λ = 534 nm.

In order to evaluate the different speckle suppression methods more reliably, only the complex object wave is utilized, i.e. the reconstruction noise that would otherwise be introduced by the conjugate object wave is avoided. The HVS viewing process is simulated by obtaining the image I(u, v) perceived by the viewer via the Fresnel diffraction model as [21]

$$I(u,v)=\left|\mathcal{F}_{l}\left\{T(s,t)\,\mathcal{F}_{z_{eye}}\left\{O_{DSCP}(x,y)\right\}\right\}\right|^2,\tag{19}$$
where T(s, t) is the lens transfer function of the human eye and $\mathcal{F}_z\{\cdot\}$ is the Fresnel propagation operator over distance z. The eye is modeled as a camera with a circular aperture and a thin lens placed at a distance zeye from the hologram plane. The distance l between the lens and the sensor (i.e. pupil and retina) is fixed at 25 mm. The eye is focused at distance df by choosing the focal length f as
$$f=\left(\frac{1}{d_f}+\frac{1}{l}\right)^{-1}.\tag{20}$$
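A possible numerical realization of the viewing model in Eqs. (19)–(20) is sketched below, using the transfer-function (convolution) form of Fresnel propagation; the uniform sampling step dx, square fields, and helper names are our assumptions, and the sampling and magnification subtleties of a full eye model are ignored:

```python
import numpy as np

def fresnel_propagate(field, dx, wavelength, z):
    """Fresnel propagation by distance z (transfer-function method)."""
    n = field.shape[0]                          # square field assumed
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    H = np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def perceived_image(O, dx, wavelength, z_eye, pupil_d, f, l):
    """Eye-model viewing of Eq. (19): hologram -> pupil -> retina -> intensity."""
    n = O.shape[0]
    x = (np.arange(n) - n // 2) * dx
    X, Y = np.meshgrid(x, x, indexing="ij")
    pupil = (X ** 2 + Y ** 2) <= (pupil_d / 2) ** 2          # circular aperture
    lens = pupil * np.exp(-1j * np.pi / (wavelength * f) * (X ** 2 + Y ** 2))
    at_pupil = fresnel_propagate(O, dx, wavelength, z_eye)   # F_z_eye{.}
    on_retina = fresnel_propagate(at_pupil * lens, dx, wavelength, l)  # F_l{.}
    return np.abs(on_retina) ** 2                            # |.|^2, Eq. (19)
```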
The speckle suppression capabilities of the three different methods (random averaging, ray separation [14], and the proposed accurate ray separation) are first evaluated by comparing speckle contrasts. The speckle contrast C is defined as [2]
$$C=\frac{\sigma}{\tilde{I}},\tag{21}$$
where σ is the standard deviation and Ĩ is the mean intensity of the reconstructed image. That is, a better speckle suppression method results in a lower speckle contrast. Three scenes are utilized for this purpose, each consisting of a planar monochromatic object placed 6 mm, 9 mm, or 12 mm behind the hologram plane. The hologram parameters as well as the LF capture parameters for the proposed method are given in Table 1. The rays used by the random averaging and ray separation methods are sampled on the hologram plane with an angular sampling step of 0.29°. The multiplexing factor for the ray separation and proposed methods is chosen to be 4 × 4, and the random averaging method is applied with 16 frames, i.e. in all cases 16 different reconstructions are superposed intensity-wise to obtain the final reconstructed image. The lateral quantization step is chosen to be the hogel size, i.e. Δx̃ = Δx = 64 μm. The simulated human eye is set 300 mm away from the hologram plane and the pupil size is set to 3 mm. The eye is focused on the object surface in each case.
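For reference, Eq. (21) amounts to a one-line computation on the simulated retinal image:

```python
import numpy as np

def speckle_contrast(I):
    """Speckle contrast of Eq. (21): standard deviation over mean intensity."""
    return np.std(I) / np.mean(I)
```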

Table 1. Parameters of the CGH and LF capture for the car scene.

The simulated reconstructed images in Figs. 4(a)–(d), together with the speckle contrasts presented in Table 2, demonstrate the speckle suppression performance of each method. The ray separation method and its proposed accurate version suppress speckle more effectively than the random averaging method. However, the ray separation method produces varying results depending on the depth of the planar object (as seen in the top and middle rows of Fig. 4(c)), which is an undesirable feature in the case of more realistic scenes with 3D objects. The proposed method, on the other hand, successfully suppresses speckle in all simulated depth cases without introducing periodic intensity variations on the object surface. The speckle contrast values agree with the visual analysis.

Fig. 4 The reconstructed images via the viewing simulation for different speckle suppression methods and object distances. The plane distance from the hologram plane is 6 mm (top row), 9 mm (middle row) or 12 mm (bottom row). The speckle suppression methods used: (a) without speckle reduction, (b) random averaging, (c) ray separation [14] and (d) proposed method.

Table 2. Speckle contrasts of each different scene and speckle suppression method.

In order to evaluate the speckle suppression capabilities in a more realistic scenario, another experiment utilizing a scene containing a 3D object is performed. The scene and LF capture setup for the second experiment are shown in Fig. 5. The LF capture setup for the proposed method, as well as the ray sampling parameters for the random averaging and ray separation methods, are the same as in the previous experiment. From the captured data, four different variants of the DSCP object field are generated: one without speckle suppression, one utilizing random averaging, one with the ray separation method [14], and one with the proposed method. For the ray separation as well as the proposed method, the scene is quantized with quantization steps Δx̃ = Δx = 64 μm and Δz̃ = 0.20 mm, in accordance with Eq. (9) and Eq. (10), respectively. The final hologram reconstructions are again obtained by intensity-wise summation of 16 reconstructions for random averaging, and of 4 × 4 sparse object reconstructions for the ray separation and proposed methods. The HVS viewing process is simulated for each hologram from three different view positions: (−15, −15) mm, (0, 0) mm and (15, 15) mm. The aperture diameter T of the eye is again chosen as 3 mm and it is focused at the hologram plane, i.e. df = 300 mm.

Fig. 5 Scene and LF capture setup for the second experiment utilizing the Pony car model. (The 3D model Pony Cartoon by Slava Zhuravlev is licensed under CC BY 4.0.)

The simulation results, along with the reference views, are shown in Fig. 6. The reference view Iref(u, v) simulates the aperture effects of the human eye and is generated as a superposition of elementary apertures, i.e. as a sum of several pinhole images within the extent of the lens. Each reconstructed view is then compared against the corresponding reference view, and the peak signal-to-noise ratio (PSNR) is evaluated as the visual image quality criterion. The results are presented in Table 3. Please note that, as the dynamic ranges of the images reconstructed by the different methods differ, the PSNRs are calculated after mean normalization with respect to the reference image.
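A sketch of this PSNR evaluation (our reading of the mean normalization, with the peak taken from the reference image) is:

```python
import numpy as np

def psnr_mean_normalized(I, I_ref):
    """PSNR after scaling the test image to the mean of the reference."""
    I = I * (np.mean(I_ref) / np.mean(I))      # mean normalization
    mse = np.mean((I - I_ref) ** 2)
    return 10 * np.log10(I_ref.max() ** 2 / mse)
```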

Fig. 6 Reconstructed images obtained via the viewing simulation. The viewer position is (−15, −15) mm in the top row, (0, 0) mm in the middle row and (15, 15) mm in the bottom row. From left to right: reference image, no speckle suppression, random averaging, ray separation [14], and proposed accurate ray separation.

Table 3. PSNRs (dB) for the view images corresponding to different methods.

The simulation results show that both the ray separation method and its proposed accurate version suppress the speckle noise effectively. However, the images produced by the basic ray separation method suffer from undesirable intensity patterns on the object surfaces due to its simplistic voxel mapping solution. Though the speckle patterns are adequately suppressed locally, the depth-dependent patterns on the reconstructed views degrade the overall visual quality. This can be clearly seen in the zoomed-in images shown in Fig. 6. The proposed solution, on the other hand, alleviates these issues due to its more accurate and robust mapping approach. The analysis of the perceived visual quality is supported by the corresponding PSNR values shown in Table 3, as the proposed method achieves the best results in all three views.

Furthermore, as the proposed method preserves the spectral content of the underlying incoherent image that is to be perceived by the viewer (as in the case of random averaging), no resolution is lost during the multiplexing procedure [14]. This is further demonstrated in Fig. 7, comparing detailed regions of the reference image with the reconstructed image obtained using the proposed method.

Fig. 7 Resolution comparison of reference image (left) and the reconstructed image for the proposed method (right), magnifying the detail on the front of the car.

The quantization of emission coordinates to the voxel grid can cause occlusion-related issues in certain areas. The problem is mostly present in areas where the emission points cross a quantization step in depth. In such cases, several emission points can be quantized to voxels that have the same lateral position but are at different depths. As some of these voxels are actually occluded, including all of them in the hologram calculations causes errors in the reconstructed images, and this therefore needs to be taken into account. The simplistic approach utilized in our method, including only the front-most of such voxels, provides a reasonable solution to this problem. A small amount of error nonetheless remains in the reconstructed images, in the form of dark stripes at locations corresponding to quantized depth transitions. This can be seen in the right-most zoomed-in image in Fig. 6.

6. Conclusion

We have presented an improved speckle suppression method for coherent stereograms that is mainly based on the light ray separation technique previously proposed in [14]. The scene points corresponding to captured rays are first quantized onto a uniform 3D grid, resulting in a voxel-based representation. The scene is then described as a superposition of sparse sets of voxels obtained by undersampling the grid in the lateral directions. The holographic reconstruction is performed by incoherent (intensity-wise) superposition of several reconstructions corresponding to such sparse sets of voxels.

It has been demonstrated via numerical simulations that speckle suppression is successfully achieved for 3D scenes. The speckle suppression capability has been shown to be significantly better than that of the random averaging approach, which also uses time-multiplexed reconstruction. The accurate ray resampling enabled by the notion of DSLF provides an accurate tool for calculating the light propagation between the quantized sparse scene points, i.e. voxels, and the hogels. In this way, speckle suppression is achieved without introducing sampling-related artifacts in the reconstructed images. This is the main improvement over the ray separation method proposed in [14].

The current implementation utilizes a simplistic approach in dealing with the occluded voxels. In particular, if there are multiple voxels with the same lateral position but different depths, all voxels but the front-most one are ignored. This approach has been shown to provide a reasonable solution. However, artifacts still remain in the reconstructed images in the form of dark stripes at locations corresponding to quantized depth transitions of the object. A more sophisticated treatment of this occlusion issue could thus further improve the reconstruction quality.

References and links

1. D. Gabor, "A new microscopic principle," Nature 161(4098), 777–778 (1948).

2. J. W. Goodman, Speckle Phenomena in Optics: Theory and Applications (Roberts and Company Publishers, 2007).

3. Q. Y. J. Smithwick, J. Barabas, D. E. Smalley, and V. M. Bove Jr., "Interactive holographic stereograms with accommodation cues," Proc. SPIE 7619, 761903 (2010).

4. P. W. McOwan, W. J. Hossack, and R. E. Burge, "Three-dimensional stereoscopic display using ray traced computer generated holograms," Opt. Commun. 82(1–2), 6–11 (1991).

5. J. Amako, H. Miura, and T. Sonehara, "Speckle-noise reduction on kinoform reconstruction using a phase-only spatial light modulator," Appl. Opt. 34(17), 3165–3171 (1995).

6. P. Memmolo, V. Bianco, M. Paturzo, B. Javidi, P. Netti, and P. Ferraro, "Encoding multiple holograms for speckle-noise reduction in optical display," Opt. Express 22(21), 25768–25775 (2014).

7. V. Bianco, P. Memmolo, M. Paturzo, A. Finizio, B. Javidi, and P. Ferraro, "Quasi noise-free digital holography," Light: Science & Applications 5(9), e16142 (2016).

8. L. Rong, W. Xiao, F. Pan, S. Liu, and R. Li, "Speckle noise reduction in digital holography by use of multiple polarization holograms," Chin. Opt. Lett. 8(7), 653–655 (2010).

9. F. Yaraş, H. Kang, and L. Onural, "Real-time phase-only color holographic video display system using LED illumination," Appl. Opt. 48(34), H48–H53 (2009).

10. M. Yamaguchi, H. Endoh, T. Honda, and N. Ohyama, "High-quality recording of a full-parallax holographic stereogram with a digital diffuser," Opt. Lett. 19(2), 135–137 (1994).

11. Y. Takaki and M. Yokouchi, "Speckle-free and grayscale hologram reconstruction using time-multiplexing technique," Opt. Express 19(8), 7567–7579 (2011).

12. Y. Takaki and K. Taira, "Speckle regularization and miniaturization of computer-generated holographic stereograms," Opt. Express 24(6), 6328–6340 (2016).

13. T. Kurihara and Y. Takaki, "Speckle-free, shaded 3D images produced by computer-generated holography," Opt. Express 21(4), 4044–4054 (2013).

14. T. Utsugi and M. Yamaguchi, "Speckle-suppression in hologram calculation using ray-sampling plane," Opt. Express 22(14), 17193–17206 (2014).

15. M. Levoy and P. Hanrahan, "Light field rendering," in Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques (ACM, 1996), pp. 31–42.

16. M. Yamaguchi, H. Hoshino, T. Honda, and N. Ohyama, "Phase-added stereogram: calculation of hologram using computer graphics technique," Proc. SPIE 1914, 25–31 (1993).

17. H. Kang, T. Yamaguchi, and H. Yoshikawa, "Accurate phase-added stereogram to improve the coherent stereogram," Appl. Opt. 47(19), D44–D54 (2008).

18. M. Lucente, "Diffraction-specific fringe computation for electro-holography," Ph.D. dissertation (Massachusetts Institute of Technology, 1994).

19. Z. Lin and H.-Y. Shum, "A geometric analysis of light field rendering," International Journal of Computer Vision 58(2), 121–138 (2004).

20. C. H. J. Howard, "A test for the judgment of distance," American Journal of Ophthalmology 2, 656–675 (1919).

21. J. W. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, 1996).

22. Blender Foundation, http://www.blender.org.
