## Abstract

Speckle noise is an important issue in electro-holographic displays. We propose a new method for suppressing speckle noise in a computer-generated hologram (CGH) for 3D display. In our previous research, we proposed a CGH calculation method using a ray-sampling plane (RS-plane), which enables the application of advanced ray-based rendering techniques to the calculation of a hologram that reconstructs a deep 3D scene at high resolution. A conventional technique for effective speckle suppression, which utilizes the time-multiplexing of sparse object points, can suppress speckle noise while retaining high resolution, but it cannot be applied to CGH calculation using the RS-plane because such a CGH does not utilize point sources on an object surface. We therefore propose a method that defines point sources from the light-ray information and applies the speckle suppression technique using sparse point sources to CGH calculation with the RS-plane. The validity of the proposed method was verified by numerical simulations.

© 2014 Optical Society of America

## 1. Introduction

A three-dimensional (3D) display based on holography reconstructs the wavefront emitted from a 3D scene. Wavefront reconstruction enables the display of a 3D scene with all the depth cues of human vision. In addition, it can reproduce a high-resolution image of a deep 3D scene, in contrast to ray-reconstruction approaches such as multi-view or integral imaging, in which images located far from the display are blurred. However, since coherent light is used in the hologram recording and reconstruction process, speckle noise degrades the quality of the reconstructed image. In an electro-holographic display, the hologram is calculated by a computer-generated hologram (CGH) technique, so the speckle noise can be reduced effectively by optimizing the CGH calculation method. In this paper, we propose a new CGH calculation method that enables effective speckle reduction and photorealistic reproduction.

Several approaches to speckle reduction have been proposed for electro-holographic displays. One approach reduces the temporal coherence of the reconstruction light [1,2]. These approaches add blur to the reconstructed image, so the advantage of holography, the reproduction of a deep 3D scene at high resolution, could be lost. Another approach employs time averaging of several speckle patterns [3,4], which we call the “random-averaging method”. Although this approach does not add blur, it is inefficient because the speckle-contrast reduction is proportional only to the square root of the number of multiplexed patterns. Recently, more effective methods have been proposed [5–7]. In particular, Takaki and Yokouchi have proposed a promising speckle reduction technique based on the time-multiplexing of sparse point sources on the object surface generated by non-overlapped zone plates [6].

Several CGH calculation methods have also been investigated for electro-holographic displays. CGH calculation based on light-ray information, i.e., on the principle of the holographic stereogram, has been proposed [8–10]. Using light-ray information reduces the calculation cost [8] and enables photorealistic reproduction of the reconstructed image by utilizing the advanced rendering techniques of conventional computer graphics (CG) [9]. This approach is called the “ray-based method”. Recently, our research group proposed a CGH calculation method using a ray-sampling plane (RS-plane). This method realizes a high-resolution image of a deep 3D scene in addition to the advantages of the ray-based method [10]; however, speckle reduction optimized for this method has not yet been considered.

In this paper, we present a method to apply the concept of sparse point sources to the CGH calculated using the RS-plane. A simple model of speckle generation and the theory of speckle suppression in CGH are described in section 2. The motivation and the proposed method are described in sections 3 and 4, respectively. Computer simulation results are presented in section 5. Finally, discussion and a summary are given in sections 6 and 7.

## 2. Speckle generation and suppression in CGH

#### 2.1 Speckle generation

We adopt a simple model of speckle generation in CGH reconstruction for estimating the speckle size in this paper. Figure 1 shows the setup of the CGH observation. The wavefront reconstructed by the CGH propagates to the observer's eye pupil. The lens of the eye modulates the wavefront and forms a real image of the object on the retina.

Assuming a coherent imaging system, the intensity distribution on the retina is written based on the Fresnel approximation as:

$$I(x,y) = \left| u(x,y) \right|^2, \qquad u(x,y) = \frac{1}{M}\, g\!\left(\frac{x}{M},\frac{y}{M}\right) * PSF(x,y), \qquad (1)$$

$$PSF(x,y) = \Im\!\left[circ\!\left(\frac{2\sqrt{\xi^2+\eta^2}}{D}\right)\right] \propto jinc\!\left(\frac{\pi D \sqrt{x^2+y^2}}{\lambda d_{ret}}\right), \qquad (2)$$

where the diameter of the PSF main lobe on the retina is

$$w = 2.44\,\frac{\lambda d_{ret}}{D}. \qquad (3)$$

Here $g(x,y)$ and $u(x,y)$ are the wavefronts on the focal plane and on the retina, respectively; $d_{ret}$ and $d_{obj}$ are the distances shown in Fig. 1; $\lambda$ is the wavelength; $M = d_{ret}/d_{obj}$ is the magnification; $i$ is the imaginary unit; $k = 2\pi/\lambda$ is the wave number; $D$ is the diameter of the eye pupil; $*$ is the convolution operator; $PSF(\cdot)$ is the point spread function (PSF) of the eye's optical system; $\Im[\cdot]$ is the Fourier transform; $circ(\cdot)$ is the circle function defined in [11]; $J_1(\cdot)$ is the Bessel function of the first kind of order 1; $\propto$ denotes proportionality; and $jinc(\cdot) = J_1(\cdot)/\cdot$ is the so-called jinc function [11]. Here we assume that the eye pupil is described by a circle function, that the optical system is aberration-free, and that the CGH reconstructs the complete wavefront of the object without distortion.

Speckle generation originates from the interference of adjacent object points. The image on the retina is simply expressed as the convolution of the wavefront on the focal plane with the PSF, which arises from the low-pass filtering of the pupil, as shown in Eq. (1). Here, we consider an object as an aggregation of point sources separated by less than the Rayleigh resolution limit determined by the optical system of the eye. When the diffused object is imaged on the retina, each point interferes randomly with its neighbors because the points, which have random phases, are blurred by the PSF convolution. In this way, speckle noise is generated on the reconstructed image of the CGH.
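This interference mechanism can be reproduced in a few lines of numerical simulation. The following is a minimal 1-D sketch, not the paper's actual simulation: the array sizes and the sharp spectral cutoff (standing in for convolution with the jinc-shaped PSF) are our own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A diffuse object modeled as a row of unit-amplitude point sources with
# random phases, spaced below the eye's resolution limit.
n_points = 4096
field = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n_points))

# The pupil acts as a low-pass filter: keep only the central band of the
# angular spectrum (a crude stand-in for convolution with the jinc PSF).
spectrum = np.fft.fft(field)
cutoff = n_points // 8
spectrum[cutoff:-cutoff] = 0.0
retina = np.abs(np.fft.ifft(spectrum)) ** 2  # intensity image on the retina

# Fully developed speckle: contrast (std / mean) is close to 1.
print(f"speckle contrast = {retina.std() / retina.mean():.2f}")
```

The random phases of the blurred, overlapping points produce intensity fluctuations with contrast near unity, the hallmark of fully developed speckle.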

In addition, there are other sources of noise, such as sampling, quantization, and the recording and reconstruction systems. However, the most influential cause of speckle can be considered the interference between object points on the observer's retina described above. In this paper, our purpose is to reduce this kind of speckle.

Here, we estimate the speckle size. If the object in Fig. 1 has a completely diffused surface, the average size of the speckle on the retina is estimated on the basis of the correlation width [12]. Denoting the average speckle diameter in the transverse direction by $L_{x,y}$ and the average speckle length in the axial direction by $L_z$, they are written as

$$L_{x,y} = 2.44\,\frac{\lambda d_{ret}}{D}, \qquad L_z = 16\lambda\left(\frac{d_{ret}}{D}\right)^2. \qquad (4)$$

When $D$ = 3.5 mm, $d_{ret}$ = 20 mm, and $\lambda$ = 532 nm, the speckle sizes on the retina are estimated as $L_{x,y}$ = 7.4 μm and $L_z$ = 278 μm. Also, when $d_{obj}$ = 300 mm, the effective speckle sizes on the object surface, $L_{x,y}'$ and $L_z'$, are estimated as $L_{x,y}'$ = 111 μm and $L_z'$ = 63 mm. These estimates are valuable for considering the speckle noise of CGH.
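These estimates can be checked numerically. The sketch below uses the correlation-width expressions that reproduce the values quoted above, $L_{x,y} = 2.44\lambda d/D$ and $L_z = 16\lambda(d/D)^2$ (our reading of [12]):

```python
D = 3.5e-3      # pupil diameter [m]
d_ret = 20e-3   # pupil-to-retina distance [m]
d_obj = 300e-3  # eye-to-object distance [m]
lam = 532e-9    # wavelength [m]

def transverse(d):
    """Average transverse speckle diameter, 2.44 * lambda * d / D."""
    return 2.44 * lam * d / D

def axial(d):
    """Average axial speckle length, 16 * lambda * (d / D)**2."""
    return 16 * lam * (d / D) ** 2

print(f"L_xy  = {transverse(d_ret) * 1e6:.1f} um")  # ~7.4 um on the retina
print(f"L_z   = {axial(d_ret) * 1e6:.0f} um")       # ~278 um
print(f"L_xy' = {transverse(d_obj) * 1e6:.0f} um")  # ~111 um on the object
print(f"L_z'  = {axial(d_obj) * 1e3:.0f} mm")       # ~63 mm
```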

In this paper, speckle noise is evaluated by the speckle contrast $C$ [12], which is written as:

$$C = \frac{\sigma_I}{\langle I \rangle}, \qquad (5)$$

where $\sigma_I$ and $\langle I \rangle$ are the standard deviation and the mean of the intensity, respectively. $C$ should be equal to or smaller than 0.05 for sufficient image quality in laser projection display applications [13].
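In numerical evaluations, $C$ is simply the ratio of the standard deviation to the mean of the intensity. A small helper in this spirit (the function name is ours):

```python
import numpy as np

def speckle_contrast(intensity):
    """Speckle contrast C = sigma_I / <I> (standard deviation over mean)."""
    intensity = np.asarray(intensity, dtype=float)
    return intensity.std() / intensity.mean()

# A perfectly uniform image has C = 0, far below the 0.05 target;
# fully developed speckle (exponential intensity statistics) has C ~ 1.
print(speckle_contrast(np.ones(100)))  # 0.0
```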

#### 2.2 Speckle suppression by sparse-points time-multiplexing

Let us now refine the approach for speckle suppression by “sparse-points time-multiplexing”, considering the observation model shown in Fig. 1. In this method, the object surface consists of sparse point sources, which are grouped into subsets as shown in Fig. 2(a). The image of each point on the retina is blurred by the PSF given in Eq. (2), but the points are separated from their neighbors by a distance longer than the PSF main lobe to avoid interference, as shown in Fig. 2(b). Then, to fill the gaps between the images of the sparse point sources incoherently, several CGH frames are displayed at a speed higher than the temporal resolution of the human eye using a high-speed spatial light modulator (SLM). When the distance between adjacent point images after multiplexing is as small as the Rayleigh resolution limit [Fig. 2(c)], the observer cannot perceive the point configuration of the object. In this way, effective speckle reduction is realized without sacrificing spatial resolution. The main lobe width is given by Eq. (3), and on the object surface it becomes $2.44\lambda d_{obj}/D$. Thus the minimum requirement is that the separation between points in a sparse subset must be several times larger than the main lobe width. As shown in Fig. 2(c), if the number of multiplexed point images is large enough, this method does not lower the spatial resolution of the reconstructed image.

This method works more efficiently than the random-averaging method. In the random-averaging method, the superposition of N uncorrelated speckle patterns reduces the speckle contrast to $1/\sqrt{N}$. To decrease the number of superposed speckle patterns, negatively correlated speckle patterns can be used [14]; periodic dot patterns shifted relative to each other by less than the dot period are negatively correlated. Therefore, if the separation of the sparse point sources is several times the PSF main-lobe size, the superposition of the frames can reduce the speckle contrast to less than $1/\sqrt{N}$ because of the negative correlation.
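The $1/\sqrt{N}$ behavior of the random-averaging method is easy to verify numerically. A sketch (the frame generator and its sizes are illustrative, not the paper's simulation):

```python
import numpy as np

rng = np.random.default_rng(1)

def speckle_frame(n=4096, keep=512):
    """One uncorrelated speckle intensity pattern: random phases,
    low-pass filtered to mimic the pupil, then squared magnitude."""
    spectrum = np.fft.fft(np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n)))
    spectrum[keep:-keep] = 0.0
    return np.abs(np.fft.ifft(spectrum)) ** 2

def contrast(img):
    return img.std() / img.mean()

# Averaging N independent frames drives the contrast toward 1/sqrt(N).
for N in (1, 4, 16):
    averaged = sum(speckle_frame() for _ in range(N)) / N
    print(f"N = {N:2d}: C = {contrast(averaged):.2f}"
          f" (theory {1 / np.sqrt(N):.2f})")
```

Negatively correlated frames, as used in the sparse-points method, fall below this $1/\sqrt{N}$ curve.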

## 3. Motivation

We now explain the motivation for our proposal. As described above, speckle suppression by sparse-points time-multiplexing does not lower the spatial resolution of the reconstructed image and works more efficiently than the random-averaging method. On the other hand, we previously proposed the CGH calculation method using the RS-plane (RS-plane method) [10]. One of the features of this method is a high-resolution image of a deep 3D scene; however, speckle noise degrades the image. Thus, in this paper, we adapt the sparse-points time-multiplexing method to the RS-plane method in order to suppress the speckle noise while keeping high spatial resolution. However, the method is not directly applicable to ray-based CGH calculation techniques such as the RS-plane method, because it is a “point-based method”, in which objects consist not of light-rays but of point sources. Therefore, we propose a novel speckle suppression method, in which the sparse points are realized by means of light-ray information processing on the RS-plane.

Here, we compare the proposed method with the conventional methods [5–7]. The speckle suppression method using shift-averaging [5] is an effective approach, but it was proposed for holographic projection images and has not been applied to 3D images. The CGH calculation using zone-plate modulation and sparse point sources [6,7] realizes both shading reproduction and speckle suppression. However, it is not optimized for the PSF of the human visual system, which is the main cause of speckle noise. Since the sparse points are generated by half zone plates arranged so as not to overlap each other, there are strong limitations as follows: (1) the size of the zone plate becomes significantly large if the reconstructed object is far from the hologram plane. It is then almost impossible to arrange the zone plates without overlap, so the method cannot be applied to an object located far from the hologram plane. (2) The number of frames that must be multiplexed is large. In principle, the minimum requirement for the separation of the sparse points is the PSF size determined by the human visual system, as described above. With non-overlapped zone plates, however, the separation becomes larger than this minimum requirement. These limitations are discussed again in section 6.

## 4. Speckle suppression in CGH calculation using RS-plane

#### 4.1 CGH calculation using RS-plane

In this paper, speckle suppression in the RS-plane method is discussed. The RS-plane method is a ray-based method modified to avoid the reduction of spatial resolution for images far from the hologram plane by introducing a ray-wavefront conversion [10]. The method is briefly reviewed in this section.

The concept of the method is shown in Fig. 3. In this method, the RS-plane is first defined near the object location, and the light-rays emitted from the object are sampled at the ray-sampling points on the RS-plane. In this step, the light-ray information is sampled as projection images, so we can use conventional CG rendering techniques such as ray tracing or image-based rendering from multi-view images of a real or virtual object. Each projection image is then converted into a wavefront segment by a Fourier transform, as in a holographic stereogram. Tiling these wavefront segments at the corresponding ray-sampling points, the wavefront on the RS-plane is obtained. Next, the wavefront propagation from the RS-plane to the CGH plane and the interference of the object and reference waves are simulated by using the discrete Fresnel transform.

In this way, we obtain a CGH that can reproduce a high-resolution image of a deep 3D scene. Using various CG techniques, it is also easy to calculate hidden-surface removal and occlusion processing. This method is easily extended to plural objects at different distances from the CGH plane [10]. Its application to real scenes with integral imaging and an effective occlusion process have also been studied [15,16].

Speckle noise appears on the reconstructed image in this method. The speckle noise originates from the random phase added to the ray information to equalize the energy of the wavefront segments. To date, a speckle reduction technique optimized for this method has not been analyzed.

#### 4.2 Outline of the proposed speckle suppression method

By introducing the concept of sparse point sources into the RS-plane method, we can realize effective speckle suppression for a CGH that reproduces a photorealistic, high-resolution, deep 3D scene. In the RS-plane method, the wavefront is generated from the light-ray information without defining object points. Therefore, the sparse-points time-multiplexing method cannot simply be applied to the RS-plane method. The way to define the point sources on the object surface (object points) from the light-rays is described below.

In the proposed method, when the light-rays are sampled on the RS-plane, the depth information of each light-ray must also be obtained. Using the depth information, we calculate the spatial coordinates from which each light-ray is emitted; the coordinates obtained here correspond to the object points. We divide the object points into several subsets, and the group of light-rays corresponding to each subset is obtained by culling the light-rays corresponding to the other subsets. Speckle suppression by sparse point sources is thus realized by a light-ray culling process. After this, optimal phases are added to the light-rays before they are converted to a wavefront. The phase assigned to each light-ray is calculated to satisfy the condition that the phases of all light-rays from a certain point on the object surface are matched. This phase-adding technique originates from the phase-added stereogram (PAS) [17]; recently, Kang et al. reported a further study on the accurately compensated PAS (ACPAS) [18]. As shown in Fig. 4, the sparse object points are defined by culling the light-rays to which the phases are added. The CGH is separated into multiple frames that reconstruct different sets of sparse points. Finally, the wavefront conversion, the propagation, and the CGH coding are calculated for each frame as in the RS-plane method. The CGH frames are superposed in the reconstruction using time-multiplexing.

Since a single object point is defined by a set of light-rays in the proposed method, the object point can have an arbitrary amplitude distribution in its angular spectrum, similar to the modulated zone plate in [7]. This is obviously different from a pure point source, which has a uniform amplitude distribution in the angular directions. We call such an object point a “quasi-point source”. In contrast to a pure point source, which is expressed as a Dirac delta function, the quasi-point source has a complex amplitude distribution of finite width. Therefore, the proposed method makes it possible not only to suppress speckle noise effectively, but also to realize various reflection properties of the object surface and to easily process hidden-surface removal and occlusion effects. However, if the width of the quasi-point source is large, the speckle suppression effect based on sparse-points multiplexing is degraded. Since the width of the quasi-point source is typically small, we assume it is small enough compared with the interval of the sampling points.

In addition, if the depth information can be obtained, the proposed method can be applied not only to a virtual scene but also to a real scene captured by a multi-view camera or an integral imaging method.

#### 4.3 Calculation procedure

In this subsection, the detailed calculation procedure is described. Here we consider the CGH calculation for a 3D scene with objects located in a limited depth range. Figure 5 shows the setup of the CGH calculation. Using the RS-plane method, the distance between the objects and the CGH can be large.

### Step 1: Obtain the light-ray and depth information

At first, the RS-plane is defined near the objects. A shorter distance between the objects and the RS-plane is better for avoiding resolution reduction [10]. It is also possible to place the RS-plane in the middle of the object. Then the light-rays emitted from the objects are sampled as projection images at the sampling points on the RS-plane, as shown in Fig. 5. The pitches of the sampling points are $\Delta S_x$ and $\Delta S_y$, and the pitches of the angular sampling are $\Delta\theta_x$ and $\Delta\theta_y$. In this step, the depth information, which is the distance from the RS-plane to the object surface, is also obtained for each light-ray. The intensity distribution of the light-rays is expressed as $I_{i,j}[m,n]$ and the depth distribution as $d_{i,j}[m,n]$, where $i$ and $j$ are the indices of the ray-sampling point ($i = 0, \pm 1, \pm 2, \ldots$ and $j = 0, \pm 1, \pm 2, \ldots$) and $m$ and $n$ are the pixel indices of the projection image ($m = 0, \pm 1, \pm 2, \ldots$ and $n = 0, \pm 1, \pm 2, \ldots$). ${a}_{i,j}[m,n]=\sqrt{{I}_{i,j}[m,n]}$ is the amplitude distribution.

### Step 2: Define voxels in object space

A set of voxels is defined on a Cartesian grid in the 3D space so as to cover the objects. We then assume that the light-rays are emitted from the centers of the voxels, as explained in step 3: an object point is defined at the center of a voxel. Undesirable artifacts are generated on the reconstructed image if the number of light-rays emitted from the object points is unbalanced. These artifacts can be suppressed by matching the voxel pitch to the pitch of the ray-sampling points $\Delta S_x$ and $\Delta S_y$, and placing the centers of the voxels (the candidates of the object points) on the lines that are perpendicular to the RS-plane and pass through the ray-sampling points, as shown in Fig. 6. The voxel pitch in the axial direction is $\Delta S_z$. The set of voxels is denoted as $S_v$ hereafter.

### Step 3: Assign light-rays to the voxels

Next, every light-ray is assigned to the corresponding voxel from which the light-ray is emitted as follows. First, using the depth information for each light-ray derived in step 1, the coordinates from which each light-ray is emitted, $({x}_{em},{y}_{em},{z}_{em})$, are calculated as shown in Fig. 7. They are expressed as

$$x_{em} = \xi_i + d_{i,j}[m,n]\tan(m\Delta\theta_x), \quad y_{em} = \eta_j + d_{i,j}[m,n]\tan(n\Delta\theta_y), \quad z_{em} = d_{i,j}[m,n], \qquad (6)$$

where $(\xi_i, \eta_j, 0)$ is the position of the ray-sampling point $(i, j)$. In this step, a back-projection scheme can also be applied, in which we find the intersection points of the light-rays with the object surface and then evaluate the amplitude and the coordinate $({x}_{em},{y}_{em},{z}_{em})$.
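Under this reading of Fig. 7, back-projecting a sampled ray to its emission point is one line per coordinate. The helper below is a hypothetical sketch; the angle convention (ray angles measured from the RS-plane normal as $m\Delta\theta_x$, $n\Delta\theta_y$) is our assumption.

```python
import math

def emission_point(xi, eta, m, n, depth, d_theta_x, d_theta_y):
    """Emission point of the ray sampled at RS-plane point (xi, eta) [m]
    with angular indices (m, n) (angles m*d_theta_x, n*d_theta_y [rad])
    and depth d_{i,j}[m,n] [m] measured from the RS-plane along z."""
    x_em = xi + depth * math.tan(m * d_theta_x)
    y_em = eta + depth * math.tan(n * d_theta_y)
    z_em = depth
    return x_em, y_em, z_em

# A ray leaving the on-axis sampling point straight ahead (m = n = 0)
# is emitted on the optical axis at the object depth.
print(emission_point(0.0, 0.0, 0, 0, 9.6e-3, 1e-3, 1e-3))
```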

### Step 4: Cull the light-rays

After defining the object points for all light-rays, the object points are culled to suppress the speckle generation, as shown in Fig. 8. At first, the object points are divided into several groups, where the object points in each group are separated by a distance longer than the PSF main lobe, which depends on the optical system of human vision and the observer's distance from the object. In the time-multiplexing reconstruction, each group of object points is associated with a single hologram and is related to one frame. Therefore, when calculating the CGH corresponding to one frame, the object-point groups corresponding to the other frames are omitted. This process is similar to the method proposed in [6], but in the proposed method the division into object-point groups is calculated as light-ray culling: we substitute zero for the amplitude of every light-ray that is emitted from a culled object point. Since the voxel (object point) pitches $\Delta S_x$ and $\Delta S_y$ are identical to the Rayleigh resolution limit of the PSF, at least every other object point should be assigned to a different group in order to avoid the interference of the PSFs. Since better speckle suppression is realized by also considering the interference of the first side lobe, we set the separation distance of the object points to four times the object-point pitch in the simulation of section 5.
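The culling itself reduces to zeroing amplitudes on an interleaved grid. A minimal sketch for a 4× interleave in each direction, i.e. 16 frames (the array layout and function name are illustrative):

```python
import numpy as np

def cull_amplitudes(amplitude, frame, step=4):
    """Keep the object-point amplitudes of one sparse subset, zero the rest.

    amplitude: 2-D array of amplitudes on the voxel grid
    frame:     subset index in 0 .. step**2 - 1
    step:      kept-point separation in units of the object-point pitch
    """
    culled = np.zeros_like(amplitude)
    row, col = divmod(frame, step)
    culled[row::step, col::step] = amplitude[row::step, col::step]
    return culled

amplitude = np.ones((8, 8))
frames = [cull_amplitudes(amplitude, f) for f in range(16)]
# The 16 sparse subsets are disjoint and together cover every object point.
print(np.array_equal(sum(frames), amplitude))  # True
```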

### Step 5: Add the optimal phase to the light-ray

Optimal phases calculated from the depth information obtained in step 1 are added to the light-rays. In the CGH calculation using the RS-plane, the light-rays are regarded as segmented plane waves, where the segment size equals the pitch of the ray-sampling points. We add phases to the light-rays such that the phases of the light-rays (plane waves) emitted from a single object point are matched at the corresponding voxel center (object point). This calculation process belongs to the line of research on the phase-added stereogram (PAS) [17] and the coherent stereogram [18–20], which were studied to improve the resolution of ray-based CGH [17] or the calculation speed of point-based CGH [18–20].

Let us now derive the phase that should be added to the light-rays using the concept of the compensated phase-added stereogram (CPAS) [19]. The wavefront of a segmented RS-plane emitted from the object point $p$, $U_{CPAS\_p}(\xi,\eta)$, is described as:

$$U_{CPAS\_p}(\xi,\eta) = a_p(\theta_{p\xi cFFT}, \theta_{p\eta cFFT}) \exp\!\left\{i\left[k\sin\theta_{p\xi cFFT}(\xi-\xi_c) + k\sin\theta_{p\eta cFFT}(\eta-\eta_c) + kr_p + \phi_p + C_{\xi\eta}\right]\right\}, \qquad (7)$$

where $a_p(\theta_{p\xi cFFT}, \theta_{p\eta cFFT})$ is the amplitude of the object point $p$, which has an angular distribution, and $r_p$ is the distance between the object point $p$, whose coordinate is $\left({x}_{ob\_p},{y}_{ob\_p},{z}_{ob\_p}\right)$, and the ray-sampling point $\left({\xi}_{c},{\eta}_{c},0\right)$. ${\theta}_{p\xi cFFT}$ and ${\theta}_{p\eta cFFT}$ are the angles of the plane wave as sampled by the fast Fourier transform (FFT), while ${\theta}_{p\xi c}$ and ${\theta}_{p\eta c}$ are the true angles. ${\phi}_{p}$ is the initial phase of the object point $p$, and $C_{\xi\eta}$ is the compensation term. We used the term expressed as:

$$C_{\xi\eta} = k\left[(\sin\theta_{p\xi c} - \sin\theta_{p\xi cFFT})\,\xi_c + (\sin\theta_{p\eta c} - \sin\theta_{p\eta cFFT})\,\eta_c\right]. \qquad (8)$$

Namely, we add the phase exp[$ikr_p + i\phi_p + iC_{\xi\eta}$] to the light-ray expressed as ${a}_{p}\left({\theta}_{p\xi cFFT},{\theta}_{p\eta cFFT}\right)$. This means that a phase dependent on $i$, $j$, $m$, and $n$ is added to the amplitude $a_{i,j}[m,n]$.
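Ignoring the FFT angle-quantization compensation $C_{\xi\eta}$, the core of the phase assignment is the propagation phase $kr_p$ plus the initial phase $\phi_p$. A hypothetical sketch (names and defaults are ours, not the paper's implementation):

```python
import numpy as np

WAVELENGTH = 532e-9
K = 2.0 * np.pi / WAVELENGTH  # wave number k = 2*pi/lambda

def added_phase(sampling_point, object_point, initial_phase=0.0,
                compensation=0.0):
    """Phase k*r_p + phi_p + C_xi_eta added to the light-ray connecting an
    object point and a ray-sampling point (compensation defaults to 0)."""
    r_p = np.linalg.norm(np.asarray(object_point, dtype=float)
                         - np.asarray(sampling_point, dtype=float))
    return K * r_p + initial_phase + compensation

# Rays from one object point to two sampling points receive phases whose
# difference cancels the geometric path-length difference k*(r_b - r_a),
# so the plane-wave segments agree in phase at the object point.
p = (0.0, 0.0, 9.6e-3)
phi_a = added_phase((0.0, 0.0, 0.0), p)
phi_b = added_phase((1.0e-3, 0.0, 0.0), p)
print(f"phase difference = {phi_b - phi_a:.1f} rad")
```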

### Step 6: Obtain the wavefront on the RS-plane

The amplitude of the light-ray information is given by the square root of the projection image, as the pixel values of the image are considered to be intensities. After adding the phase according to the CPAS theory, the set of light-rays that pass through a single ray-sampling point is converted into a small wavefront segment around the ray-sampling point by a Fourier transform. Tiling the wavefront segments, the total wavefront on the RS-plane is obtained. The details of the ray-wavefront conversion are described in [10]. In this step, the quantization error of light-ray sampling is reduced by using the accurate phase-added stereogram (APAS) technique. In APAS, the wavefront segment is cut out from the complex amplitude obtained by transforming the oversampled light-ray information with the FFT; the details of APAS are described in [20]. Consequently, we can use the so-called ACPAS [18], which is a combination of APAS and CPAS.
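The ray-to-wavefront conversion at one sampling point is a single FFT of the angular ray distribution. A sketch under assumed sizes (the APAS oversampling and segment cut-out of [20] are omitted, and the random phases stand in for the output of step 5):

```python
import numpy as np

rng = np.random.default_rng(2)

# Rays through one sampling point: a small "projection image" in angle
# space. Amplitudes are square roots of the pixel intensities, with the
# per-ray phases already attached (random here, for illustration only).
intensity = rng.uniform(0.0, 1.0, (32, 32))
rays = np.sqrt(intensity) * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi,
                                                    (32, 32)))

# Ray -> wavefront conversion: the wavefront segment around the sampling
# point is the Fourier transform of the angular distribution of the rays.
segment = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(rays)))

# Tiling such segments over all sampling points yields the RS-plane field.
# Parseval's relation: the FFT preserves the ray energy up to a factor N.
print(np.isclose(np.sum(np.abs(segment) ** 2),
                 rays.size * np.sum(np.abs(rays) ** 2)))  # True
```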

### Step 7: Simulate wavefront propagation from RS-plane to CGH-plane and encode CGH

The wavefront propagation from the RS-plane to the CGH plane is calculated by the discrete Fresnel transform. Several methods have been proposed for computing the discrete Fresnel transform in digital holography [21,22]. Finally, we obtain the CGH by simulating the interference between the wavefront on the CGH plane and the reference wave.
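One common single-FFT form of the discrete Fresnel transform looks as follows. This is our own sketch of one of the schemes surveyed in [21,22], applicable when the destination pitch $\lambda z/(N\cdot\text{pitch})$ suits the CGH plane; parameter names are ours.

```python
import numpy as np

def fresnel_propagate(field, pitch, wavelength, z):
    """Single-FFT discrete Fresnel transform.

    field:      complex amplitude on the source plane (N x N grid)
    pitch:      sampling pitch on the source plane [m]
    wavelength: [m];  z: propagation distance [m] (z != 0)
    Note: the destination pitch becomes wavelength*z / (N*pitch).
    """
    n = field.shape[0]
    k = 2.0 * np.pi / wavelength
    x = (np.arange(n) - n // 2) * pitch
    X, Y = np.meshgrid(x, x)
    pre = np.exp(1j * k * (X**2 + Y**2) / (2.0 * z))       # source chirp
    out = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field * pre)))
    pitch_out = wavelength * z / (n * pitch)
    x2 = (np.arange(n) - n // 2) * pitch_out
    X2, Y2 = np.meshgrid(x2, x2)
    post = (np.exp(1j * k * z) / (1j * wavelength * z)
            * np.exp(1j * k * (X2**2 + Y2**2) / (2.0 * z)))  # dest chirp
    return post * out * pitch**2

u0 = np.zeros((128, 128), dtype=complex)
u0[64, 64] = 1.0  # point source on the RS-plane
u1 = fresnel_propagate(u0, 10e-6, 532e-9, 0.1)
# A point source spreads into a spherical wave of uniform magnitude.
print(np.allclose(np.abs(u1), np.abs(u1[0, 0])))  # True
```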

#### 4.4 Time-multiplexing reconstruction

In the reconstruction step, multiple CGH frames are presented such that the reproduced images are incoherently superposed. In this paper, we consider time-multiplexing as in [6]. In time-multiplexing reconstruction, the CGH pattern corresponding to each frame is transferred to a high-speed SLM. By displaying the CGH frames at a rate higher than the temporal resolution of the human visual system, the reconstructed images are multiplexed incoherently on the observer's retina. Since the ray-culling process suppresses the speckle noise of each frame, the speckle noise of the multiplexed image on the retina is also suppressed. In this method, the spatial resolution of the CGH is not reduced.

## 5. Computer simulations

#### 5.1 Example 1: planar diffused object

A computer simulation was carried out to confirm the effect of speckle reduction by the proposed method. At first, we simulated the case of a planar diffused object. Figure 9 shows the setup of the simulation and Table 1 shows the simulation parameters. An RS-plane was defined 9.6 mm in front of the planar diffused object, a 6.4 mm × 6.4 mm square. In the simulation of reconstruction, the observer's eye was placed 300 mm from the reconstructed image, the eye pupil was 3.5 mm in diameter, and the distance between the eye pupil and the retina was 20 mm. We assumed that the CGH reconstructs the recorded wavefront completely, and the model of the human visual system in this simulation is the same as the one described in subsection 2.1.

We used the Open Graphics Library (OpenGL) [23] for modeling the object and obtaining the light-ray and depth information. The interval of the object points along the slanted 45-degree direction was fitted to the Rayleigh resolution limit of the human visual system, which is $L_{x,y}'/2$ = 55.5 μm, so that the periodic dot pattern of the object points is not perceived. Accordingly, the horizontal and vertical object-point pitch is 55.5/$\sqrt{2}$ μm, and the interval of the sparse object points was set to four times this pitch, i.e., $55.5/\sqrt{2}\times 4 = 157$ μm. Then, we divided the CGH into 16 frames for time-multiplexing reconstruction.
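The parameter arithmetic above, with 111 μm taken from the estimate in subsection 2.1, can be tabulated as:

```python
import math

L_xy_obj = 111e-6                # transverse speckle size on the object [m]
rayleigh = L_xy_obj / 2          # 55.5 um spacing along the 45-degree line
pitch = rayleigh / math.sqrt(2)  # horizontal/vertical object-point pitch
sparse = 4 * pitch               # separation inside one sparse subset
n_frames = 4 * 4                 # 4x interleave in each direction

print(f"point pitch        = {pitch * 1e6:.1f} um")   # ~39.2 um
print(f"sparse interval    = {sparse * 1e6:.0f} um")  # ~157 um
print(f"frames multiplexed = {n_frames}")             # 16
```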

Figure 10 shows the simulation results of the reconstructed image on the retina. Figures 10(a) and 10(b) are the reconstructed images without speckle suppression and with the random-averaging method of 16 frames, respectively. In contrast, Figs. 10(d) and 10(e) are the single-frame and the time-multiplexed reconstructed images of the CGH calculated with the proposed method, where 16 frames are multiplexed. We verified the speckle suppression effect by estimating the speckle contrast [Eq. (5)] of the simulation results shown in Fig. 10.

In the case of the random-averaging method, the speckle contrast of the reconstructed image with 16-frame multiplexing, shown in Fig. 10(b), is 0.24, which almost agrees with the theoretical value $1/\sqrt{16}=0.25$. On the other hand, the speckle contrast with the proposed method is 0.15. This result is also illustrated by the horizontal cross sections through the centers of Figs. 10(b) and 10(e), shown in Figs. 10(c) and 10(f), respectively. Therefore, the proposed method suppresses the speckle contrast effectively. The remaining speckle noise is generated by the interference of the overlapping side lobes, even though the images of the point sources are separated, as seen in Fig. 10(d). To realize a speckle contrast smaller than 0.05, it is necessary to increase the number of multiplexed frames while enlarging the interval of the sparse object points, to apply other speckle suppression techniques such as the superposition of different polarizations, or to reduce the side lobes of the reconstructed point images.

#### 5.2 Example 2: shaded 3D object

In the second example, a teapot modeled with OpenGL was used as a 3D object. The surface of the teapot has an angular-dependent reflectance, which is easily reproduced by a ray-based rendering technique. At first, we obtained the light-ray and depth information of the teapot using OpenGL. Then, the fringe pattern of the CGH was calculated with the proposed method. The parameters of the CGH are the same as in Table 1, and the setup of the simulation is shown in Fig. 11.

The simulation results of the reconstructed image are shown in Fig. 12. Figure 12(a) is the reconstructed image of the hologram calculated without speckle suppression, and Fig. 12(b) is the one calculated by the proposed method. The results show that speckle noise is effectively suppressed by the proposed method. Undesirable line-like artifacts are seen in Fig. 12(b) because the voxel pitch does not match the pitch of the ray-sampling points, but they are avoidable (see step 2 of subsection 4.3). An experimental proof of this reconstruction method requires a high-speed spatial light modulator and remains as future work.

## 6. Discussions

Now we can indicate the novelty of the proposed method in comparison with the conventional methods using non-overlapped zone plates [6,7]. If the visible angle is the same as in this simulation (±3.8 deg, as shown in Table 1) and the distance between the hologram and the object is 150 mm, the zone-plate size becomes 20 mm, which is larger than the hologram size. It is therefore impossible to display an object far from the hologram plane with the same visible angle using non-overlapped zone plates. Moreover, in this simulation we determined the interval of the sparse object points by considering the human visual system (157 μm, as shown in Table 1). In [6,7], on the other hand, the interval depends on the zone-plate size, which means that an object far from the hologram plane requires more time-multiplexing frames, independently of the human visual system.

## 7. Summary

We proposed a new CGH calculation method to suppress speckle noise. In this method, the CGH is calculated from the ray information emitted from the object, and speckle suppression is realized by culling the ray information. We believe that our approach contributes to realizing high-quality images on holographic 3D displays. A similar speckle suppression approach has also been proposed for the application of lens-less holographic projection [13].

The merits of the proposed method are: (1) more effective speckle-noise reduction than the random-averaging method; (2) easy reproduction of various light-reflectance properties and occlusion processing by general CG techniques, which is difficult for point-source-based methods; and (3) applicability not only to virtual objects but also to real scenes captured by a multi-view camera or integral photography.

Future works are as follows: (1) evaluation of the speckle noise of the proposed method using a CGH with a larger size and a larger viewing angle; (2) consideration of the effect of the spread of the quasi-point source on speckle suppression, as described in subsection 4.2; and (3) investigation of the observer's conditions, including viewing position, focus position, and pupil size, which affect the PSF.

## Acknowledgment

We are deeply grateful to Dr. Hoonjong Kang (Realistic Media Platform Research Center, Korea Electronics Technology Institute) for his continuing support regarding the PAS technique.

## References and links

**1. **F. Yaraş, H. Kang, and L. Onural, “Real-time phase-only color holographic video display system using LED illumination,” Appl. Opt. **48**(34), H48–H53 (2009).

**2. **H. Yoshikawa, “Computer-generated holograms for white light reconstruction,” in *Digital Holography and Three-Dimensional Display*, T.-C. Poon, ed. (Springer, 2006), pp. 235–255.

**3. **J. Amako, H. Miura, and T. Sonehara, “Speckle-noise reduction on kinoform reconstruction using a phase-only spatial light modulator,” Appl. Opt. **34**(17), 3165–3171 (1995).

**4. **T. Kozacki, M. Kujawińska, G. Finke, B. Hennelly, and N. Pandey, “Extended viewing angle holographic display system with tilted SLMs in a circular configuration,” Appl. Opt. **51**(11), 1771–1780 (2012).

**5. **L. Golan and S. Shoham, “Speckle elimination using shift-averaging in high-rate holographic projection,” Opt. Express **17**(3), 1330–1339 (2009).

**6. **Y. Takaki and M. Yokouchi, “Speckle-free and grayscale hologram reconstruction using time-multiplexing technique,” Opt. Express **19**(8), 7567–7579 (2011).

**7. **T. Kurihara and Y. Takaki, “Speckle-free, shaded 3D images produced by computer-generated holography,” Opt. Express **21**(4), 4044–4054 (2013).

**8. **T. Yatagai, “Stereoscopic approach to 3-D display using computer-generated holograms,” Appl. Opt. **15**(11), 2722–2729 (1976).

**9. **P. McOwan, W. Hossack, and R. Burge, “Three-dimensional stereoscopic display using ray traced computer generated holograms,” Opt. Commun. **82**(1–2), 6–11 (1991).

**10. **K. Wakunami and M. Yamaguchi, “Calculation for computer generated hologram using ray-sampling plane,” Opt. Express **19**(10), 9086–9101 (2011).

**11. **J. W. Goodman, *Introduction to Fourier Optics* (McGraw-Hill, 1996).

**12. **J. W. Goodman, *Speckle Phenomena in Optics: Theory and Applications* (Roberts and Company, 2007).

**13. **M. Makowski, “Minimized speckle noise in lens-less holographic projection by pixel separation,” Opt. Express **21**(24), 29205–29216 (2013).

**14. **W. F. Hsu and C. F. Yeh, “Speckle suppression in holographic projection displays using temporal integration of speckle images from diffractive optical elements,” Appl. Opt. **50**(34), H50–H55 (2011).

**15. **K. Wakunami, M. Yamaguchi, and B. Javidi, “High-resolution three-dimensional holographic display using dense ray sampling from integral imaging,” Opt. Lett. **37**(24), 5103–5105 (2012).

**16. **K. Wakunami, H. Yamashita, and M. Yamaguchi, “Occlusion culling for computer generated hologram based on ray-wavefront conversion,” Opt. Express **21**(19), 21811–21822 (2013).

**17. **M. Yamaguchi, H. Hoshino, T. Honda, and N. Ohyama, “Phase-added stereogram: calculation of hologram using computer graphics technique,” Proc. SPIE **1914**, 25–31 (1993).

**18. **H. Kang, F. Yaraş, and L. Onural, “Graphics processing unit accelerated computation of digital holograms,” Appl. Opt. **48**(34), H137–H143 (2009).

**19. **H. Kang, T. Fujii, T. Yamaguchi, and H. Yoshikawa, “Compensated phase-added stereogram for real-time holographic display,” Opt. Eng. **46**(9), 095802 (2007).

**20. **H. Kang, T. Yamaguchi, and H. Yoshikawa, “Accurate phase-added stereogram to improve the coherent stereogram,” Appl. Opt. **47**(19), D44–D54 (2008).

**21. **K. Matsushima, “Shifted angular spectrum method for off-axis numerical propagation,” Opt. Express **18**(17), 18453–18463 (2010).

**22. **J. P. Liu, “Controlling the aliasing by zero-padding in the digital calculation of the scalar diffraction,” J. Opt. Soc. Am. A **29**(9), 1956–1964 (2012).