
Three-dimensional computer-generated hologram with Fourier domain segmentation

Open Access

Abstract

We propose an efficient algorithm for calculating photorealistic three-dimensional (3D) computer-generated holograms with Fourier domain segmentation. The spatial frequency segmentation processes the depth information from multiple parallel projections and recombines the wave fields of different viewing directions in the Fourier domain. A segmented angular spectrum with layer based processing is introduced to calculate the partitioned elements, which effectively extends the limited propagation region of the conventional angular spectrum method. The algorithm provides accurate depth cues and is compatible with computer graphics rendering techniques, yielding high quality view-dependent properties. Experiments demonstrate that the proposed method can reconstruct photorealistic 3D images with accurate depth information.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Holographic display is a promising technique for reconstructing three-dimensional (3D) images since it can provide all the depth cues that human eyes can perceive [1,2]. With the development of computing technology, computer-generated holograms (CGHs) can be used to reconstruct 3D images without a complicated interference recording system. The CGH algorithm directly determines the optical performance and the computational efficiency.

Physically based algorithms are commonly used in CGH calculation; they simulate the wave propagation process from the 3D scene to the hologram. The 3D scene is often divided into multiple points or polygons, and depending on how the scene is divided, point based or polygon based algorithms can be implemented to calculate the CGHs [3–10]. Both types of algorithm can provide accurate depth information and continuous motion parallax because of their precise representation of the geometrical information. However, since the primitives are handled independently, physically based algorithms have difficulty providing the occlusion effect. Moreover, the computational load grows with the number of primitives.

Layer based algorithms were proposed to improve the computational efficiency [11,12]. The 3D scene is sliced into multiple parallel layers according to the depth information, and fast Fourier transform (FFT) based Fresnel diffraction or angular spectrum propagation can be used to calculate the wave propagation between parallel planes. Since the rendering procedure of layer based algorithms relies on an orthographic projection from a single viewpoint, hidden primitives are hard to process, leading to the absence of the occlusion effect. Slab based orthographic projection can be implemented in the layer based angular spectrum method with silhouette mask culling to process the hidden primitives of each layer [13]. However, artifacts appear at large viewing angles due to the orthographic-projection-based slicing strategy.

Stereogram based algorithms spatially multiplex two-dimensional (2D) parallax views of the 3D scene, taking advantage of computer graphics rendering techniques in the parallax view rendering process [14–16]. During calculation, the CGH is spatially segmented into multiple holographic elements (hogels), and the parallax views are captured by perspective projections from the corresponding viewpoints. Hence view-dependent properties can be provided with the help of the multi-viewpoint rendering process. Since each hogel corresponds to only one 2D parallax view, the depth performance is degraded during optical reconstruction, especially the accommodation cue. Recently, the fully computed holographic stereogram was developed to improve the depth performance; it integrates the stereogram based and physically based algorithms to provide accurate depth information [17]. Inverse Fresnel diffraction with layer based processing was then implemented in the calculation of the fully computed holographic stereogram to improve the computational efficiency [18]. However, the paraxial approximation of the Fresnel diffraction limits the parameters of the 3D scene for accurate calculation. Also, for the above stereogram based algorithms, the spatial segmentation of the CGH causes artifacts during optical reconstruction.

In this study, we propose an efficient algorithm for calculating photorealistic 3D CGHs with Fourier domain segmentation. Different from the stereogram based algorithms, the proposed method segments the CGH in its Fourier domain. A segmented angular spectrum with layer based processing is introduced to calculate the partitioned elements in the Fourier domain without the paraxial approximation. The segmentation reduces the aliasing error by limiting the bandwidth of the transfer function, and effectively extends the limited propagation region of the conventional angular spectrum method. The algorithm is also compatible with computer graphics rendering techniques, which allows it to take advantage of realistic rendering and to provide smooth motion parallax with the occlusion effect. Numerical simulations and optical experiments demonstrate that the proposed method can reconstruct photorealistic 3D images with accurate depth information.

2. 3D CGH with Fourier domain segmentation

According to the angular spectrum theory, the field distribution in the hologram plane can be represented as the inverse Fourier transform of its spatial frequency distribution:

$$h(x,y)=\mathcal{F}^{-1}\left[O(f_x,f_y)\exp\left(j2\pi z\sqrt{\lambda^{-2}-f_x^{2}-f_y^{2}}\right)\right]=\mathcal{F}^{-1}\left[H(f_x,f_y)\right],\tag{1}$$
where λ is the wavelength, fx and fy are spatial frequency coordinates, O is the Fourier transform of the field distribution in the object plane, z is the propagation distance between the object plane and hologram plane, and H is the Fourier transform of h, representing its spatial frequency distribution.
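As a concrete illustration of Eq. (1), the following is a minimal Python sketch (not the authors' implementation) of angular spectrum propagation for a monochromatic scalar field sampled on a uniform grid; the function name and arguments are illustrative assumptions.

```python
import numpy as np

def angular_spectrum_propagate(u0, wavelength, pitch, z):
    """Propagate the sampled field u0 over distance z using Eq. (1)."""
    ny, nx = u0.shape
    fx = np.fft.fftfreq(nx, d=pitch)              # spatial frequency coordinates fx
    fy = np.fft.fftfreq(ny, d=pitch)              # spatial frequency coordinates fy
    FX, FY = np.meshgrid(fx, fy)
    prop = 1.0 / wavelength**2 - FX**2 - FY**2    # lambda^-2 - fx^2 - fy^2
    # Keep only propagating components; evanescent components are discarded here.
    transfer = np.where(prop > 0,
                        np.exp(1j * 2 * np.pi * z * np.sqrt(np.maximum(prop, 0.0))),
                        0.0)
    O = np.fft.fft2(u0)                           # O(fx, fy)
    H = O * transfer                              # H(fx, fy)
    return np.fft.ifft2(H)                        # h(x, y)
```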

Figure 1 illustrates the coordinates of the field distributions of the hologram in the spatial and Fourier domains. The coordinates in the Fourier domain represent the spatial frequencies of the wave field, which correspond to the direction cosines of the propagation directions of plane waves. Segmenting the Fourier domain therefore amounts to a plane wave decomposition of the wave field, where the central position of each element in the Fourier domain corresponds to a plane wave with a specific angular spectrum. Each element in the Fourier domain only records the wave field propagating along the corresponding direction, which can be obtained with the help of a parallel projection along that specific direction.

Fig. 1 Coordinates of field distributions of the hologram in the spatial and Fourier domains.

When computing the CGH from a 3D scene, parallel projections along the corresponding directions are used to obtain the shading and depth information of the 3D scene, as shown in Fig. 2. The shading image represents the amplitude distribution of the 3D scene along the specific projection direction. Computer graphics rendering techniques can be implemented to process multiple optical effects, which are encoded in the amplitude distribution of the shading image. The depth image provides accurate geometry information along the z direction. The 3D data of the corresponding projection direction are sliced into multiple parallel layers according to the depth image, and layer based processing is used in the diffraction calculation from the parallel layers to the hologram plane. Each element in the Fourier domain can be calculated as:

$$H_e(f_x,f_y)=\sum_{i=1}^{m}O_i(f_x,f_y)\exp\left(j2\pi z_i\sqrt{\lambda^{-2}-f_x^{2}-f_y^{2}}\right),\tag{2}$$
where m is the number of sliced layers, O_i is the Fourier transform of the field distribution in the ith layer, and z_i is the distance between the ith layer and the hologram plane. Each element in the Fourier domain collects the wave field within the corresponding angular spectrum range, which is the sum of the angular spectra of all the parallel layers along the corresponding direction. The whole Fourier spectrum of the 3D scene is generated after calculating all the segmented elements in the Fourier domain. Hence the field distribution in the hologram plane is recomposed from the segmented elements in the Fourier domain, which correspond to the parallel projections along different directions.
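A hedged sketch of Eq. (2) is given below: one segmented element is accumulated from m depth layers of a single parallel projection. The names `shading` and `depth` are illustrative assumptions for the rendered amplitude and depth maps of that projection; `pitch` is taken to be the effective sampling pitch of the projection image (N times the hologram pixel pitch in the segmented scheme), and pixels are assigned to their nearest layer as a simple slicing rule.

```python
import numpy as np

def segmented_element(shading, depth, layer_depths, wavelength, pitch):
    """Angular spectrum of one segmented element, following Eq. (2)."""
    layer_depths = np.asarray(layer_depths, dtype=float)
    ny, nx = shading.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    prop = np.maximum(1.0 / wavelength**2 - FX**2 - FY**2, 0.0)
    # Assign every pixel to its nearest depth layer (nearest-layer slicing).
    idx = np.argmin(np.abs(depth[..., None] - layer_depths), axis=-1)
    He = np.zeros((ny, nx), dtype=complex)
    for i, zi in enumerate(layer_depths):
        Oi = np.fft.fft2(shading * (idx == i))                   # O_i(fx, fy)
        He += Oi * np.exp(1j * 2 * np.pi * zi * np.sqrt(prop))   # propagate layer i
    return He
```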

Fig. 2 Parallel projection along αx and αy.

During calculation, each parallel projection corresponds to a specific element in the Fourier plane, as shown in Fig. 3. The position of the element in the Fourier domain is determined by the corresponding direction cosine of the parallel projection. The central spatial frequency of the element that corresponds to the projection direction is given by:

$$f_x=\frac{\cos\alpha_x}{\lambda},\tag{3}$$
where αx is the angle between the x coordinate and the parallel projection direction. The sampling range in the Fourier domain is determined by the sampling pitch of the hologram:
$$f_x\in\left[-\frac{1}{2p},\frac{1}{2p}\right],\tag{4}$$
where p is the sampling pitch of the hologram. Hence the bandwidth of each segmented element in the Fourier domain can be calculated as:
$$S_{f_x}=\frac{1}{Np},\tag{5}$$
where N is the segmentation number along the fx coordinate, which corresponds to the number of parallel projections applied to the 3D scene.
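The following short sketch (assumed variable names, using the parameters quoted later in Section 3) illustrates Eqs. (3)–(5): the element bandwidth, the centre frequencies of the N segments within the hologram's sampling range, and the projection angles they imply.

```python
import numpy as np

wavelength = 633e-9   # m
p = 1e-6              # hologram sampling pitch, m
N = 40                # segmentation number along fx

S_fx = 1.0 / (N * p)                              # element bandwidth, Eq. (5)
centres = (np.arange(N) - (N - 1) / 2) * S_fx     # centre frequencies in [-1/2p, 1/2p], Eq. (4)
alpha_x = np.degrees(np.arccos(wavelength * centres))   # projection angles from Eq. (3)
```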

Fig. 3 Fourier domain segmentation for corresponding parallel projection.

Figure 4 illustrates the whole calculation process from the parallel projections to the field distribution of the hologram. First, parallel projections are used to obtain the shading and depth information of the 3D scene from different directions according to the segmentation parameters of the hologram. Then each segmented element in the Fourier domain is calculated by summing the angular spectra of all the parallel layers along the corresponding direction according to Eq. (2). Next, the calculated segments are placed at the corresponding positions according to their central spatial frequencies. After all the elements in the Fourier domain have been calculated, an inverse Fourier transform yields the final field distribution in the hologram plane.
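A schematic sketch of this assembly step, under the same assumptions as the earlier snippets, is shown below. `render_projection` is a hypothetical helper that returns the shading and depth maps for one projection direction, and `segmented_element` is the function sketched after Eq. (2); the tiling of element spectra into the full Fourier plane is a simplified illustration, not the authors' code.

```python
import numpy as np

def assemble_hologram(scene, M, N, wavelength, pitch, layer_depths):
    """M: hologram samples per side, N: segments per side, pitch: hologram pixel pitch."""
    m = M // N                                      # samples per segmented element
    H = np.zeros((M, M), dtype=complex)             # full Fourier spectrum (centred order)
    for iy in range(N):
        for ix in range(N):
            shading, depth = render_projection(scene, ix, iy, N)   # parallel projection
            He = segmented_element(shading, depth, layer_depths,
                                   wavelength, pitch * N)          # element spectrum
            # Place the element at its position given by its central spatial frequency.
            H[iy * m:(iy + 1) * m, ix * m:(ix + 1) * m] = np.fft.fftshift(He)
    return np.fft.ifft2(np.fft.ifftshift(H))        # field distribution h(x, y)
```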

Fig. 4 Field distribution of the hologram calculated from multiple parallel projections.

By segmenting the hologram in its Fourier domain, the depth information can be extracted angularly from the 3D scene with the help of parallel projections, which solves the occlusion problem properly. Layer based processing provides accurate depth information of the 3D scene by calculating the angular spectra along the corresponding projection directions. The number of sliced layers and segmented elements affects the accuracy of the accommodation cue and the smoothness of the motion parallax. Specifically, the number of segmented elements determines the number of angular samplings of the 3D scene; a larger number of segmented elements increases the smoothness of the motion parallax when the hologram is viewed from different angles. The distance between adjacent layers and the number of sliced layers determine the depth sampling parameters of the 3D scene, which affect the accuracy of the accommodation cue when the eyes focus on different depths of the 3D scene. These parameters can be adjusted according to the viewing parameters required by the optical system.

3. Experimental results

To demonstrate the optical performance of the proposed algorithm, numerical simulations and optical experiments are performed. Table 1 shows the parameters of the calculated CGH. The sampling number of the hologram is 20,000 × 20,000 with a pixel pitch of 1 μm. The wavelength used in the calculation is 633 nm. Hence the theoretical field of view of the CGH is 36.9°, which is deduced from the grating equation. During calculation, the hologram is segmented into 40 × 40 elements in the Fourier domain, corresponding to 40 × 40 parallel projections in the rendering process. A high resolution hologram is fabricated with the same parameters using a direct-write lithography system. The calculated CGH is printed as a binary pattern on a fused silica substrate coated with a chromium film. According to the modulation property of the fabrication method, the field distribution in the hologram plane h(x, y) is first coded into the intensity distribution:

$$I(x,y)=2\,\mathrm{Re}\left[h(x,y)\,r^{*}(x,y)\right]+C,\tag{6}$$
where r(x, y) is the reference wave field in the hologram plane; here a plane wave with an incident angle of 10° is used. C is a constant offset that keeps the intensity distribution nonnegative. The intensity distribution I(x, y) is then quantized into a binary amplitude modulation by thresholding at the midpoint to match the fabrication requirement.
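A minimal sketch of this coding step is given below, assuming h is the complex field on the hologram plane and the reference is a plane wave tilted by 10° from the normal in the y–z plane (the tilt convention and function name are assumptions, not the authors' code).

```python
import numpy as np

def encode_binary_amplitude(h, wavelength, pitch, ref_angle_deg=10.0):
    """Interference coding per Eq. (6) followed by midpoint binarization."""
    ny, nx = h.shape
    y = (np.arange(ny) - ny / 2) * pitch
    carrier = np.sin(np.radians(ref_angle_deg)) / wavelength       # assumed tilt convention
    ref = np.exp(1j * 2 * np.pi * carrier * y)
    r = np.tile(ref[:, None], (1, nx))                             # reference wave r(x, y)
    I = 2 * np.real(h * np.conj(r))                                # interference term of Eq. (6)
    I = I - I.min()                                                # offset C keeps I nonnegative
    threshold = 0.5 * (I.min() + I.max())                          # midpoint threshold
    return (I >= threshold).astype(np.uint8)                       # binary amplitude pattern
```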

Table 1. Parameters of the CGH

Figure 5 illustrates the setup of the optical experiment. A collimated beam is used to illuminate the fabricated hologram. The incident angle is the same as that of the reference beam used to calculate the CGH, which reconstructs the original object wave for observation. The 3D scene used to calculate the CGH is a chessboard with three pieces, located 35 mm – 55 mm from the hologram plane. Figure 6 shows the reconstruction results at different depths. Figures 6(a)-6(c) are the numerical reconstruction results when the reconstruction plane is placed at the king (37 mm), rook (45 mm), and pawn (48 mm), respectively. Figures 6(d)-6(f) are the optical reconstruction results when the camera is manually focused on the corresponding pieces. The results clearly demonstrate the effective accommodation cue provided by the proposed algorithm.

Fig. 5 Optical setup of the optical reconstruction experiment.

Fig. 6 Numerical and optical reconstruction results when focusing on (a, d) king, (b, e) rook, and (c, f) pawn.

To demonstrate the motion parallax and occlusion effect of the proposed algorithm, the camera is placed at different viewpoints. Figures 7(a)-7(c) show the numerical reconstruction results when viewing from the left (−10°), center (0°), and right (10°) viewpoints, respectively. The optical reconstruction results shown in Figs. 7(d)-7(f) are in agreement with the numerical simulations. Although the theoretical field of view is determined by the maximum sampling frequency of the hologram, the actual viewing angle over which the whole scene can be reconstructed is affected by the size and the depth range of the 3D scene. Hence the viewing angles demonstrated in the numerical and optical reconstructions are smaller than the theoretical field of view. To illustrate the continuous change of the reconstruction results across viewpoints, numerical and optical reconstruction videos are provided in Visualization 1 and Visualization 2, respectively. The viewpoint is moved continuously in the videos to demonstrate the view-dependent properties of the proposed algorithm. The optical reconstruction video is captured with an iPhone camera for convenience in moving the viewpoint; the limited relative aperture and sensor size of the cellphone camera decrease the image quality of the recorded video compared to Figs. 6 and 7, which are captured with a single-lens reflex camera. The reconstructed 3D scene can also be viewed with the naked eye: the accommodation cue is easily perceived by focusing on different depths, and the smooth motion parallax with occlusion effect is perceived by moving the eyes around.

Fig. 7 Numerical and optical reconstruction results when viewing from (a, d) left, (b, e) center, and (c, f) right viewpoints (see Visualization 1 and Visualization 2).

4. Discussion on the aliasing consideration

The transfer function of the angular spectrum in Eq. (1) can be expressed as:

$$T=\exp\left(j2\pi z\sqrt{\lambda^{-2}-f_x^{2}-f_y^{2}}\right).\tag{7}$$
The local frequency of the exponential term in one-dimensional format can be calculated as [19]:
$$\left|f_{lx}\right|=\frac{1}{2\pi}\left|\frac{\partial}{\partial f_x}\left(2\pi z\sqrt{\lambda^{-2}-f_x^{2}}\right)\right|=\left|\frac{zf_x}{\sqrt{\lambda^{-2}-f_x^{2}}}\right|.\tag{8}$$
For a given propagation distance z, the local frequency increases with the absolute value of fx. According to the Nyquist–Shannon sampling theorem, the following relation needs to be satisfied to avoid aliasing error:
$$\frac{1}{2\Delta f_x}\geq\left|f_{lx}\right|,\tag{9}$$
where Δfx is (2L)−1 instead of (L)−1 to avoid circular convolution [20], and L is the size of the hologram. From Eqs. (8) and (9), the propagation distance z should satisfy the following condition to avoid aliasing:
$$z\leq L\sqrt{\lambda^{-2}f_x^{-2}-1}.\tag{10}$$
In the conventional angular spectrum method, the maximum fx is (2p)−1, hence the maximum propagation distance without aliasing error is:
$$z\leq L\sqrt{4p^{2}\lambda^{-2}-1}.\tag{11}$$
Whereas for the calculation of the diffraction field using Fourier domain segmentation, the maximum fx of each segmented element is (2Np)−1 according to Eq. (5); hence the aliasing-free range of propagation distances is extended by more than a factor of N:
$$z\leq L\sqrt{4(Np)^{2}\lambda^{-2}-1}.\tag{12}$$
According to Eqs. (11) and (12), when L = 20 mm, λ = 633 nm, N = 40, and p = 1 μm, z should be smaller than 59.94 mm for the conventional angular spectrum algorithm and smaller than 2527.57 mm for the proposed algorithm. Hence, by implementing segmentation in the Fourier domain during the calculation of the diffraction field, the CGH can be calculated without aliasing error over a wider range of propagation distances.
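The two limits quoted above can be checked directly from Eqs. (11) and (12) with the paper's parameters, as in the short sketch below (variable names are illustrative).

```python
import numpy as np

L = 20e-3            # hologram size, m
wavelength = 633e-9  # m
p = 1e-6             # pixel pitch, m
N = 40               # segmentation number

z_conventional = L * np.sqrt(4 * p**2 / wavelength**2 - 1)       # Eq. (11): ~0.05994 m
z_segmented = L * np.sqrt(4 * (N * p)**2 / wavelength**2 - 1)    # Eq. (12): ~2.52757 m
print(z_conventional, z_segmented)
```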

The above analysis of the aliasing conditions can be used to evaluate the accuracy of the algorithm for generating the field distribution in the hologram plane. If an amplitude modulation hologram is used for the reconstruction, aliasing induced by the interference between the object wave and the reference wave also needs to be considered in the coding process. Let αy be the angle between the y coordinate and the k vector of the reference wave. The local frequency of the interference pattern of Eq. (6) along the y direction can be expressed as

$$\left|f_I\right|=\frac{1}{2\pi}\left|\frac{\partial}{\partial y}\left(2\pi f_y y-2\pi\frac{\cos\alpha_y}{\lambda}y\right)\right|=\left|f_y-\frac{\cos\alpha_y}{\lambda}\right|,\tag{13}$$
where fy is the spatial frequency of the object wave field distribution in the Fourier domain. Since the sampling pitch of the hologram plane is p, according to the Nyquist–Shannon sampling theorem, the following relation needs to be satisfied to avoid aliasing error:
$$\frac{1}{2p}\geq\left|f_y-\frac{\cos\alpha_y}{\lambda}\right|.\tag{14}$$
Hence the vertical spectral range of the object wave field distribution in the Fourier domain needs to be pre-filtered according to Eq. (14) before the reference wave is added.
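A hedged sketch of the pre-filter implied by Eq. (14) is given below: frequencies fy whose offset from the reference carrier cos(αy)/λ exceeds the Nyquist limit 1/(2p) are zeroed before the interference coding step (function name and filtering convention are assumptions).

```python
import numpy as np

def prefilter_object_spectrum(h, wavelength, pitch, alpha_y_deg):
    """Zero the fy components of h that would violate Eq. (14) after coding."""
    ny, nx = h.shape
    fy = np.fft.fftfreq(ny, d=pitch)
    carrier = np.cos(np.radians(alpha_y_deg)) / wavelength
    keep = np.abs(fy - carrier) <= 1.0 / (2 * pitch)   # condition of Eq. (14)
    H = np.fft.fft2(h)
    H[~keep, :] = 0.0                                  # filter along fy (rows)
    return np.fft.ifft2(H)
```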

5. Conclusion

In summary, Fourier domain segmentation is introduced into the calculation of 3D CGHs. The algorithm provides an efficient way to generate high quality 3D CGHs with multiple depth cues, including accommodation, smooth motion parallax, and occlusion. The hologram is segmented in the Fourier domain to process the depth information from different viewing directions. A segmented angular spectrum with layer based processing is used to calculate the segmented elements without the paraxial approximation, which effectively reduces the aliasing error and extends the limited propagation region of the conventional angular spectrum method. The proposed algorithm is applicable to generating high resolution CGHs with a large viewing angle, and is compatible with computer graphics rendering techniques for reconstructing photorealistic 3D images. Numerical simulations and optical experiments demonstrate that the proposed algorithm is effective in reconstructing high quality 3D CGHs with accurate depth information.

Funding

National Natural Science Foundation of China (NSFC) (61875105).

References

1. S. A. Benton and V. M. Bove, Holographic imaging (Wiley, 2008).

2. H. Zhang, Y. Zhao, L. Cao, and G. Jin, “Three-dimensional display technologies in wave and ray optics: a review (Invited Paper),” Chin. Opt. Lett. 12(6), 060002 (2014). [CrossRef]  

3. M. E. Lucente, “Interactive computation of holograms using a look-up table,” J. Electron. Imaging 2(1), 28–34 (1993). [CrossRef]  

4. H. Zhang, Q. Tan, and G. Jin, “Holographic display system of a three-dimensional image with distortion-free magnification and zero-order elimination,” Opt. Eng. 51(7), 075801 (2012). [CrossRef]

5. K. Matsushima, “Computer-generated holograms for three-dimensional surface objects with shade and texture,” Appl. Opt. 44(22), 4607–4614 (2005). [CrossRef]   [PubMed]  

6. K. Matsushima and S. Nakahara, “Extremely high-definition full-parallax computer-generated hologram created by the polygon-based method,” Appl. Opt. 48(34), H54–H63 (2009). [CrossRef]   [PubMed]  

7. L. Ahrenberg, P. Benzie, M. Magnor, and J. Watson, “Computer generated holograms from three dimensional meshes using an analytic light transport model,” Appl. Opt. 47(10), 1567–1574 (2008). [CrossRef]   [PubMed]  

8. H. Kim, J. Hahn, and B. Lee, “Mathematical modeling of triangle-mesh-modeled three-dimensional surface objects for digital holography,” Appl. Opt. 47(19), D117–D127 (2008). [CrossRef]   [PubMed]  

9. J.-H. Park, S.-B. Kim, H.-J. Yeom, H.-J. Kim, H. Zhang, B. Li, Y.-M. Ji, S.-H. Kim, and S.-B. Ko, “Continuous shading and its fast update in fully analytic triangular-mesh-based computer generated hologram,” Opt. Express 23(26), 33893–33901 (2015). [CrossRef]   [PubMed]  

10. Y.-P. Zhang, F. Wang, T.-C. Poon, S. Fan, and W. Xu, “Fast generation of full analytical polygon-based computer-generated holograms,” Opt. Express 26(15), 19206–19224 (2018). [CrossRef]   [PubMed]  

11. N. Okada, T. Shimobaba, Y. Ichihashi, R. Oi, K. Yamamoto, M. Oikawa, T. Kakue, N. Masuda, and T. Ito, “Band-limited double-step Fresnel diffraction and its application to computer-generated holograms,” Opt. Express 21(7), 9192–9197 (2013). [CrossRef]   [PubMed]  

12. Y. Zhao, L. Cao, H. Zhang, D. Kong, and G. Jin, “Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method,” Opt. Express 23(20), 25440–25449 (2015). [CrossRef]   [PubMed]  

13. H. Zhang, L. Cao, and G. Jin, “Computer-generated hologram with occlusion effect using layer-based processing,” Appl. Opt. 56(13), F138–F143 (2017). [CrossRef]   [PubMed]  

14. T. Yatagai, “Stereoscopic approach to 3-D display using computer-generated holograms,” Appl. Opt. 15(11), 2722–2729 (1976). [CrossRef]   [PubMed]  

15. K. Wakunami and M. Yamaguchi, “Calculation for computer generated hologram using ray-sampling plane,” Opt. Express 19(10), 9086–9101 (2011). [CrossRef]   [PubMed]  

16. K. Wakunami, H. Yamashita, and M. Yamaguchi, “Occlusion culling for computer generated hologram based on ray-wavefront conversion,” Opt. Express 21(19), 21811–21822 (2013). [CrossRef]   [PubMed]  

17. H. Zhang, Y. Zhao, L. Cao, and G. Jin, “Fully computed holographic stereogram based algorithm for computer-generated holograms with accurate depth cues,” Opt. Express 23(4), 3901–3913 (2015). [CrossRef]   [PubMed]  

18. H. Zhang, Y. Zhao, L. Cao, and G. Jin, “Layered holographic stereogram based on inverse Fresnel diffraction,” Appl. Opt. 55(3), A154–A159 (2016). [CrossRef]   [PubMed]  

19. J. W. Goodman, Introduction to Fourier optics, 2nd ed. (McGraw-Hill, 1996).

20. K. Matsushima and T. Shimobaba, “Band-limited angular spectrum method for numerical simulation of free-space propagation in far and near fields,” Opt. Express 17(22), 19662–19673 (2009). [CrossRef]   [PubMed]  

Supplementary Material (2)

Visualization 1: Numerical reconstruction demonstrating the motion parallax with occlusion effect of the computer-generated hologram.
Visualization 2: Optical reconstruction demonstrating the motion parallax with occlusion effect of the computer-generated hologram.
