Optica Publishing Group

CGH calculation with the ray tracing method for the Fourier transform optical system

Open Access

Abstract

Computer-generated holograms (CGHs) are usually displayed on electronic devices. However, the resolution of current output devices is not high enough for CGHs, so the visual field is very narrow. A method using a Fourier transform optical system has been proposed to enlarge the size of reconstructed images. This paper describes a CGH calculation method for the Fourier transform optical system that enlarges the visual field and reconstructs realistic images by using the ray tracing method. The method reconstructs images at arbitrary depths and also eliminates unnecessary light, including zeroth-order light.

© 2013 Optical Society of America

1. Introduction

Numerous 3-D displays have already been launched, and 3-D display technologies can roughly be divided into two kinds: those based on multiple parallax and those based on holography. Various multiple-parallax 3-D displays have been proposed, including methods with glasses [1,2] and methods that display 3-D images by restricting parallax to the horizontal direction [3,4]. These methods have prevailed for movies and public displays because they are comparatively easy to implement. On the other hand, methods that make it possible to observe full-parallax 3-D images with the naked eye have also been proposed, such as integral photography [5] and super multi-view displays [6]. Nevertheless, these systems cause fatigue during long viewing periods because of the mismatch between convergence and accommodation.

In contrast, holography has been publicized as an ideal 3-D display technology because it conforms to the way the human visual system perceives 3-D images. Holography can record and reconstruct 3-D images by using the diffraction and interference of light [7]. Interference patterns on holograms can be obtained by computer simulation; such holograms are called computer-generated holograms (CGHs). CGHs still have several problems, such as a lack of rendering techniques to express realistic images, prolonged computation times, and a narrow visual field. Several methods have been proposed to solve the rendering problem, such as hidden surface removal [8,9], shadowing and shading [10,11], and multiple reflections [12]. However, each method is only suitable for specific expressions, and it is difficult to combine them. Therefore, we have proposed rendering techniques for CGHs with the ray tracing method [13]. Calculation methods based on ray tracing can apply multiple rendering techniques simultaneously and express very realistic scenes. Moreover, the calculations can be done at very high speed by using graphics processing units (GPUs). Thanks to this acceleration, holographic movies have been displayed by electronic holography with a spatial light modulator (SLM). However, the advantages of these rendering methods cannot be adequately demonstrated because the visual field of electronic holography is very narrow. Since holography uses diffracted light for reconstruction, the viewing angle is determined by the diffraction angle, which becomes larger as the pixel pitch of the output device becomes smaller. The pixel pitch of a hologram on an optical film is about 0.1 [μm], so the viewing angle exceeds 40 [degrees] and sufficient stereoscopic vision is possible. However, an electronic display is necessary for displaying a holographic movie. Although the pixel pitch of currently marketed devices is about 10 [μm], the corresponding diffraction angle is only about 3.6 [degrees]. Therefore, it is necessary to expand the visual field of CGHs without depending on the performance of electronic devices.

Various approaches have been proposed to expand the visual field of electronic holography, such as a method using eye tracking [14], cylindrical holograms with a curved array of SLMs [15], and a time-division method using ultra-high-definition LCDs [16]. However, these systems entail complicated and massive optical systems for reconstruction. In contrast, a Fourier transform optical system has been proposed to enlarge the visual field with a simple composition. In a previous study [17], we attempted to enlarge the visual field of CGHs that implement the rendering techniques of the ray tracing method [13] by using the Fourier transform optical system. However, 3-D images reconstructed by the Fourier transform optical system do not appear at the correct positions: because the magnifying power of the system varies with depth, the reconstructed images are deformed. Therefore, a compensation method is necessary to correct this deformation.

This paper describes a CGH calculation method with the ray tracing method for the Fourier transform optical system that corrects the distortion of reconstructed images. The method can reconstruct realistic images with hidden surface removal over a wide visual field. We confirmed by optical reconstructions that realistic 3-D images could be reconstructed without distortion by the proposed method.

2. CGH calculations with ray tracing method

We have proposed a method that calculates CGHs expressing realistic 3-D images with hidden surface removal by the ray tracing method. The ray tracing method is used in the field of computer graphics to obtain the intersections between objects and lines of sight, which makes hidden surface removal possible. In CGH, however, ray tracing from a single viewpoint cannot express hidden surface removal that responds correctly to viewpoint movements, because motion parallax is necessary. This section briefly explains the calculation algorithm for generating full-parallax CGHs.

First, a hologram plane is divided into elementary holograms as shown in Fig. 1. Rays are cast from the center of each elementary hologram to find the intersections with virtual objects. This method treats the intersections as a group of point light sources. Here, the interval Δθ between rays is 1/60 degrees, because the angular resolution of the human eye is assumed to be 1/60 degrees. Since the gaps between intersections are therefore invisible to the observer, the aggregate of points is perceived as a surface. Each elementary hologram thus has its own point light source group, consisting of the points visible from that elementary hologram.
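As a rough illustration of this angular sampling, the following sketch builds the grid of ray directions cast from one elementary hologram center. The square angular fan and the hologram facing the −z direction are illustrative assumptions, not details taken from the paper; only the 1/60-degree interval comes from the text.

```python
import math

def ray_directions(half_angle_deg, dtheta_deg=1.0 / 60.0):
    """Unit ray directions cast from the centre of one elementary hologram,
    sampled every dtheta_deg (the assumed 1/60-degree eye resolution)."""
    n = int(half_angle_deg / dtheta_deg)
    dirs = []
    for ix in range(-n, n + 1):
        for iy in range(-n, n + 1):
            # direction tilted by ix*dtheta / iy*dtheta from the hologram normal
            tx = math.tan(math.radians(ix * dtheta_deg))
            ty = math.tan(math.radians(iy * dtheta_deg))
            norm = math.sqrt(tx * tx + ty * ty + 1.0)
            dirs.append((tx / norm, ty / norm, -1.0 / norm))
    return dirs
```

Each direction would then be handed to the intersection test against the virtual scene; the hit points become the point light source group of that elementary hologram.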

Fig. 1 Calculation algorithm for CGH with the ray tracing method.

The brightness of the intersections is determined simultaneously in the ray tracing process. Various reflectance distributions can be expressed in CGHs by changing this brightness. When the diffuse reflection is defined as a Lambert reflection, the intensity of the reflected light is calculated as

$$I_r = I_a \rho_a + (\mathbf{N} \cdot \mathbf{L})\, k_d \rho_d + k_s \rho_s, \tag{1}$$

where $I_a$ is the intensity of the ambient light, $k_d$ and $k_s$ are the ratios of Lambert and specular light ($k_d + k_s = 1$), and $\rho_a$, $\rho_d$, and $\rho_s$ are the reflectances for ambient, Lambert, and specular light, respectively. Vectors $\mathbf{N}$ and $\mathbf{L}$ are the unit surface normal and the unit vector toward the light source. Material qualities are mostly expressed by changing the specular reflectance $\rho_s$. For instance, the Phong reflection model is used to express a plastic-like surface. The Phong reflection model is calculated from the ray directions and the position of the light source illuminating the objects, as shown in Fig. 2.
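As a minimal sketch, Eq. (1) can be evaluated per intersection as follows. Clamping N·L to zero for back-facing surfaces is our addition for robustness, not something stated in the paper.

```python
def reflected_intensity(i_a, normal, to_light, k_d, k_s, rho_a, rho_d, rho_s):
    """Eq. (1): I_r = I_a*rho_a + (N.L)*k_d*rho_d + k_s*rho_s.
    normal and to_light must be unit vectors, and k_d + k_s should be 1."""
    n_dot_l = sum(n * l for n, l in zip(normal, to_light))
    n_dot_l = max(0.0, n_dot_l)  # clamp back-facing surfaces (our addition)
    return i_a * rho_a + n_dot_l * k_d * rho_d + k_s * rho_s
```

For a surface facing the light head-on with $I_a = 1$, $k_d = k_s = 0.5$, $\rho_a = 0.1$, $\rho_d = 0.8$, $\rho_s = 0.2$, this gives $I_r = 0.1 + 0.4 + 0.1 = 0.6$.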

Fig. 2 Phong reflection model.

After the intersections are determined, the light waves on each elementary hologram are calculated. The complex amplitude distribution $h_m(x, y)$ on an elementary hologram plane is obtained as

$$h_m(x,y) = \sum_{i=1}^{N_m} \frac{A_i}{r_i(x,y)} \exp\!\left(j\frac{2\pi}{\lambda} r_i(x,y) + j\phi_i\right), \tag{2}$$
$$r_i = \sqrt{(x_i - x)^2 + (y_i - y)^2 + z_i^2}, \tag{3}$$

where $x$, $y$, and $z$ represent the horizontal, vertical, and depth components, and the indices $m$ and $i$ identify the elementary hologram and the point light source. Here, $\lambda$ is the wavelength and $\phi_i$ is the initial phase of each point light source. $N_m$ is the number of point light sources visible from the $m$-th elementary hologram. The amplitude $A_i$ of each point light source is the square root of the light intensity $I_r$ of Eq. (1) obtained in the shading process. Finally, the whole light wave on the hologram plane is obtained by superimposing all the elementary holograms. The method of expressing reflective and refractive objects in CGHs can be found in our recent paper [13].
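The superposition of Eqs. (2) and (3) can be sketched directly. This is a slow, pure-Python reference for a handful of samples, not the paper's GPU implementation.

```python
import cmath
import math

def elementary_hologram(points, xs, ys, wavelength):
    """Complex amplitude h_m(x, y) of Eqs. (2)-(3) on a grid of hologram
    sample positions. points holds tuples (xi, yi, zi, Ai, phi_i)."""
    k = 2.0 * math.pi / wavelength
    field = [[0j] * len(xs) for _ in ys]
    for xi, yi, zi, a_i, phi_i in points:
        for row, y in enumerate(ys):
            for col, x in enumerate(xs):
                r_i = math.sqrt((xi - x) ** 2 + (yi - y) ** 2 + zi ** 2)
                field[row][col] += (a_i / r_i) * cmath.exp(1j * (k * r_i + phi_i))
    return field
```

A single unit-amplitude point at depth $z_i = -0.1$ sampled at the origin gives a field of magnitude $A_i / r_i = 10$, as Eq. (2) predicts.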

3. Fourier transform optical system

We adopted a Fourier transform optical system to enlarge the visual field; it is composed of a lens, a reflective LCD, and a point light source placed at the focal point of the lens as the reference light. The Fourier transform optical system was originally used in the lensless Fourier transform method [18], shown in Fig. 3, to reconstruct conjugate images of 3-D objects in front of a hologram. In the lensless Fourier transform method, the virtual objects are arranged near the focal point of the lens, and a point light source serving as the reference light is located at the focal point. The Fourier transform optical system reconstructs real images and conjugate images simultaneously, and their reconstruction positions depend on the position of the virtual objects during calculation. Under certain conditions, the conjugate images are reconstructed over a wide region behind the hologram. This section describes a method of expanding the visual field for the Fourier transform optical system.

Fig. 3 Fourier transform optical system.

To simplify the analysis, reconstruction of 3-D images with the Fourier transform optical system is considered on the y–z plane. The coordinates of a virtual object are defined as O(yo, zo) in the CGH calculation, as shown in Fig. 4. A point light source serving as the reference light is located at R(0, −f), and the interference patterns are obtained. For observation, the point light source is located in front of the lens at R′(0, f). A conjugate image Q(y2, z2) and a real image P(y1, z1) are then reconstructed. Supposing that the gap between the hologram and the lens is negligibly small, the coordinates of the conjugate image Q(y2, z2) and the real image P(y1, z1) are given by the following equations from optical imaging theory.

$$y_1 = \frac{y_o}{z_o} z_1, \qquad z_1 = \frac{z_o f}{2 z_o + f}, \tag{4}$$
$$y_2 = y_o, \qquad z_2 = z_o. \tag{5}$$
According to Eq. (4), when the virtual object O is located nearer to the hologram than zo = −f/2, the real image P(y1, z1) is reconstructed behind the hologram. However, the real image and the conjugate image are observed simultaneously. Therefore, it is necessary to eliminate the conjugate image Q in order to observe only the real image behind the hologram.
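A small numerical check of Eqs. (4) and (5) can be written as follows; the focal length and object position are arbitrary illustrative values.

```python
def image_positions(y_o, z_o, f):
    """Eqs. (4)-(5): real image P(y1, z1) and conjugate image Q(y2, z2)
    reconstructed from a virtual object point O(yo, zo)."""
    z_1 = z_o * f / (2.0 * z_o + f)
    y_1 = y_o * z_1 / z_o
    return (y_1, z_1), (y_o, z_o)
```

For f = 1 and an object at O(0.03, −0.2), i.e. nearer than zo = −f/2, the real image lands behind the hologram at z1 = −1/3, consistent with the discussion above.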

Fig. 4 Position of reconstructed images.

Next, we discuss the relationship between regions of the hologram and the conjugate and real images. When a viewpoint is located above line PQ passing through the focal point in Fig. 5, the hologram is divided by line PQ into two regions, R1 and R2. As presumed from Fig. 5, region R1 generates the light waves of P that arrive at the viewpoint, and region R2 generates the light waves of Q. Therefore, only the real image is observed if only region R1 is calculated. The border $y_{th}$ between R1 and R2 on the hologram plane is given by

$$y_{th} = \frac{f y_o}{f + z_o}. \tag{6}$$

To summarize, 3-D images are reconstructed over a wide region behind the hologram by arranging the virtual objects nearer than zo = −f/2 and calculating the interference patterns only on the part of the hologram above $y_{th}$. In this case, the visual field of the Fourier transform optical system lies behind the hologram, as shown in Fig. 6. The reconstructed light is converged by the lens and passes through a viewing window with width $w$ given by

$$w = \frac{\lambda f}{p}, \tag{7}$$

where $p$ is the pixel pitch of the output device. When the viewpoint position $z_e$ is equal to $z_{e\_min}$, the maximum viewing angle $\phi_F$ is obtained:

$$z_{e\_min} = \frac{L f}{L + w}, \tag{8}$$
$$\phi_F = 2 \tan^{-1}\!\left(\frac{w + L}{2f}\right), \tag{9}$$

where $L$ is the size of the hologram. From Eqs. (7)–(9), we confirmed that the viewing angle $\phi_F$ can be expanded by changing the size of the hologram and the focal length of the lens.
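The geometry of Eqs. (6)–(9) can be checked numerically. The wavelength, pitch, focal length, and hologram size below are illustrative assumptions, not the values of Table 1.

```python
import math

def border_y_th(y_o, z_o, f):
    """Eq. (6): border between hologram regions R1 and R2."""
    return f * y_o / (f + z_o)

def viewing_geometry(wavelength, p, f, hologram_size):
    """Eqs. (7)-(9): window width w, nearest viewpoint ze_min, and the
    maximum viewing angle phi_F (in degrees)."""
    w = wavelength * f / p
    ze_min = hologram_size * f / (hologram_size + w)
    phi_f = 2.0 * math.atan((w + hologram_size) / (2.0 * f))
    return w, ze_min, math.degrees(phi_f)
```

With λ = 473 [nm], p = 10 [μm], f = 0.1 [m], and L = 15 [mm] (all assumed), the window width w is 4.73 [mm] and ϕF is about 11.3 [degrees].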

Fig. 5 Elimination of reconstructed image.

Fig. 6 Expanded visual field.

4. CGH calculations for Fourier transform optical system

4.1. Compensation calculation for Fourier transform optical system

The calculation method for the rendering techniques of CGHs described in Section 2 was designed for an ordinary hologram. In our previous study [17], the virtual objects were located nearer than zo = −f/2, and ray tracing was then conducted to obtain the point light source groups for the CGH calculations. However, the images reconstructed through the Fourier transform optical system were expanded and deformed, because the magnifying power increases with depth according to Eq. (4). A compensation method for the Fourier transform optical system is therefore necessary. The distortion introduced by the optical system can be computed, so it can be corrected by a back calculation of the distortion. The compensation method described here first obtains, by the ray tracing method, the positions at which we want to reconstruct the 3-D images. The coordinates of the point light sources that reconstruct at those positions are then obtained by back calculation of the distortion. Moreover, the compensation method takes into account the gap D between the hologram and the lens in Fig. 7.

Fig. 7 Compensation of reconstructed image.

When the lens is separated from the hologram by gap D as seen in Fig. 7, the hologram is magnified by the lens and forms an image at depth $z_h$. Hence, the position of the reconstructed image $(x_i, y_i, z_i)$ is restricted to $z_i < z_h$. When the coordinates of a virtual object point that should be reconstructed are $(x_i, y_i, z_i)$, the coordinates of the point light source $(x_o, y_o, z_o)$ used in the light wave calculations are obtained by

$$z_o = \frac{f A}{f - A}, \tag{10}$$
$$x_o = \frac{x_i z_o}{B}, \tag{11}$$
$$y_o = \frac{y_i z_o}{B}, \tag{12}$$

where $A$ and $B$ are expressed as

$$A = \frac{z_i (f - D) + D^2}{z_i - D - f}, \tag{13}$$
$$B = \frac{(A + D) f}{A - f}, \tag{14}$$

and $z_h$ is given by

$$z_h = \frac{f D}{f - D} - D. \tag{15}$$

Therefore, the light waves on the hologram plane for the Fourier transform optical system are calculated by replacing the distance $r_i$ in Eq. (2) with $r_o$, expressed as

$$r_o = \sqrt{(x_o - x)^2 + (y_o - y)^2 + z_o^2}. \tag{16}$$

Only the light waves arriving above $y_{th}$ are calculated so that only the real image behind the hologram is observed. In this way, the same image that was rendered by ray tracing can be observed at position $(x_i, y_i, z_i)$ without distortion.
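A direct transcription of the back calculation, Eqs. (10)–(15), can be written as follows. The numerical check below uses D = 0, where B reduces to $z_i$ and the formulas can be verified by hand; the target coordinates are arbitrary illustrative values.

```python
def compensate(x_i, y_i, z_i, f, D):
    """Eqs. (10)-(14): point light source (xo, yo, zo) whose reconstruction
    through the Fourier transform optics lands at the target (xi, yi, zi)."""
    A = (z_i * (f - D) + D ** 2) / (z_i - D - f)
    B = (A + D) * f / (A - f)
    z_o = f * A / (f - A)
    return x_i * z_o / B, y_i * z_o / B, z_o

def hologram_image_depth(f, D):
    """Eq. (15): depth z_h at which the lens images the hologram plane;
    reconstruction targets must satisfy z_i < z_h."""
    return f * D / (f - D) - D
```

This routine would be applied to every intersection found by ray tracing before the light wave propagation of Eq. (2) is computed.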

4.2. Calculation process

Our proposed method used a GPU in addition to a CPU to accelerate the calculations. This section describes the calculation flow for CGHs with hidden surface removal for the Fourier transform optical system on a GPU. To parallelize the CGH calculations on the GPU, we used NVIDIA's compute unified device architecture (CUDA) programming environment. To accelerate the ray tracing process on the GPU, we used the OptiX Application Acceleration Engine, which supports the determination of intersections in the ray tracing process. The calculation flow for CGHs on the GPU is shown in Fig. 8. Although the overall flow is close to that in our previous paper [13], a compensation calculation must be carried out along the way to apply it to the Fourier transform optical system. The compensation flow is explained in detail below.

Fig. 8 Block diagram of the calculation for CGHs.

First, an OptiX initialization function was invoked by the CPU to prepare for ray tracing. Virtual object data were stored in the global memory of the GPU, and contexts for ray tracing were created. During initialization, the CPU prepared point cloud buffers that stored information on the intersections between virtual objects and rays for each elementary hologram. Then, the ray tracing kernel was launched; it was invoked iteratively, once for each elementary hologram. Rays were cast in parallel in the ray tracing kernel, and the intersections were determined by OptiX. The ray tracing kernel returned information on the intersections, including the intensity and the length of the light path.

Finally, the light waves on the elementary holograms were calculated in the CGH kernel. With the former algorithm, light wave propagation from the intersections obtained by the ray tracing kernel was simply calculated; however, that method produces distortion in the reconstructed images when they are observed through the Fourier transform optical system. In this study, a compensation process was added before the light wave propagation calculation to correct the distortion. First, the coordinates of the point light source were calculated for every intersection by the compensation of Eqs. (10)–(12). Second, yth was calculated for every point light source to eliminate the conjugate images as described in Section 3. The light waves above yth on each elementary hologram from the point light source (xo, yo, zo) were then calculated. The entire light wave was obtained by summing up all the elementary holograms. The interference patterns on the hologram were obtained by adding the spherical reference wave from the point light source at the focal point of the lens.
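The whole flow of Fig. 8 — ray trace per elementary hologram, compensate each intersection, then accumulate light waves only above yth — can be condensed into a single-threaded sketch. This is our CPU stand-in for the CUDA/OptiX kernels, and `trace_rays` is a hypothetical placeholder for the ray tracing kernel, not an actual API of the paper's implementation.

```python
import cmath
import math

def calc_hologram(centers, trace_rays, samples, wavelength, f, D):
    """CPU sketch of the Fig. 8 pipeline. trace_rays(center) stands in for
    the OptiX kernel and yields point sources (xi, yi, zi, Ai, phi_i);
    samples is a list of hologram sample positions (x, y)."""
    k = 2.0 * math.pi / wavelength
    field = {pos: 0j for pos in samples}
    for center in centers:
        for x_i, y_i, z_i, a_i, phi_i in trace_rays(center):
            # compensation for the Fourier transform optics, Eqs. (10)-(14)
            A = (z_i * (f - D) + D ** 2) / (z_i - D - f)
            B = (A + D) * f / (A - f)
            z_o = f * A / (f - A)
            x_o, y_o = x_i * z_o / B, y_i * z_o / B
            y_th = f * y_o / (f + z_o)  # conjugate-image border, Eq. (6)
            for (x, y) in samples:
                if y <= y_th:  # region R2 would reconstruct the conjugate image
                    continue
                r_o = math.sqrt((x_o - x) ** 2 + (y_o - y) ** 2 + z_o ** 2)
                field[(x, y)] += (a_i / r_o) * cmath.exp(1j * (k * r_o + phi_i))
    return field
```

In the actual system these loops run as CUDA kernels, the reference spherical wave is added afterward to form the interference pattern, and the elementary holograms are summed on the GPU.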

5. Experiment

5.1. Experimental setup

We conducted optical reconstructions to confirm the effectiveness of the proposed method, using the display system shown in Fig. 9. This display system was developed by Yoneyama et al. [19]. Light from the white LED illuminates the SLM through the lens, and the reconstructed light is reflected by the half mirror. The reconstructed light converges around a viewing window, as shown in Fig. 9(b). The barrier in Fig. 9(b) eliminates the 0th-order light, which converges at a focal point of the lens. A viewer observes the reconstructed images through the viewing window. The display system includes a white LED, and the reconstructed images are colorized by the time-division method, which synchronizes the lighting timing of each color with the display timing of the hologram for that color. The reconstructed images for each color overlap at the same place because the holograms are displayed at a frequency as high as 180 [Hz]. The experimental parameters are listed in Table 1. The LED used in this experiment is covered with a sharpened optical fiber, which is mirror-coated except at the apex so that the emitted light does not leak. Because the apex of the fiber is quite small, the LED works as an ideal point light source. The details of this LED are described in [19].

Fig. 9 Eyepiece type full-color electronic holographic display. (a) The photo taken from the front. (b) The photo taken from the forward left.

Table 1. Setup parameters for experiments.

The viewing angle of the Fourier transform optical system in this experiment is 9.79 [degrees], calculated from Eq. (9) with the wavelength of the blue color. On the other hand, the maximum viewing angle $\phi_{max}$ of a Fresnel hologram is given by

$$\phi_{max} = 2 \sin^{-1}\!\left(\frac{\lambda}{2p}\right). \tag{17}$$

According to Eq. (17), the viewing angle $\phi_{max}$ of a Fresnel hologram on the SLM used in this experiment is 2.78 [degrees]. Therefore, the viewing angle $\phi_F$ of the Fourier transform optical system is expanded about 3.5 times compared with the Fresnel hologram. Moreover, the viewing angle can be extended further by using a lens with a shorter focal length.

5.2. Measurement of the size of reconstructed images

First, we conducted an experiment to confirm that virtual objects are reconstructed at the defined size. To reconstruct virtual objects at the positions obtained by the ray tracing method, the compensation calculation described in Section 4.1 is necessary. With the compensation of Eqs. (10)–(14), the point light source positions that reconstruct the objects at the target depth and size can be obtained.

A virtual ruler was made to measure the size of the reconstructed images. The virtual ruler was composed of lower lines of 10 [mm] and an upper line of 5 [mm]. The virtual image of the ruler was reconstructed at a position 50 [cm] behind the hologram by the proposed method, and an actual ruler with millimeter graduations was placed at exactly 50 [cm] behind the hologram.

Figures 10(a) and 10(b) are photographs of the reconstructed image and the actual ruler. The actual ruler was located at exactly 50 [cm] behind the hologram, and the camera in Fig. 10 was focused on it. As seen in Fig. 10(a), only the true image was displayed, which shows that the conjugate image and the 0th-order light were removed correctly. Next, since both the reconstructed image and the actual ruler are in focus in Fig. 10(a), the virtual object was evidently reconstructed at the correct depth by the compensation calculation for the Fourier transform optical system. Figure 10(b) shows a magnified view of part of Fig. 10(a); the graduations of the reconstructed image almost coincide with those of the actual ruler. These measurements confirm that virtual objects are reconstructed at the intended size and depth without distortion by the compensation calculation of the proposed method.

Fig. 10 Measurement of the size of reconstructed images at −50 [cm]. (a) The size of reconstructed image. (b) Magnified image.

Our previous research [17], which attempted to enlarge the visual field with the Fourier transform optical system, had several problems. One was a restriction on the arrangement of the virtual objects: they had to be placed nearer than zo = −f/2. In addition, the 3-D images were reconstructed at positions that differed from those of the arranged virtual objects because of the magnification effect of the Fourier transform optical system, so it was difficult to reconstruct the virtual objects at the intended positions. Moreover, since the magnifying power grows with depth, the reconstructed images were deformed and their depth could no longer be perceived. These problems are solved by the compensation calculation described in Section 4.1 of this paper. Furthermore, the arrangement restriction of the previous study no longer exists, and it becomes possible to reconstruct 3-D images at arbitrary positions behind the hologram.

5.3. Optical reconstructions

In this experiment, we generated holograms of complex scenes to confirm that the rendering techniques of the proposed method are adequately implemented. Owing to the advantages of the ray tracing method, the proposed method is capable of expressing rendering techniques such as reflection and refraction. We optically reconstructed a full-color CGH with the Fourier transform optical system.

First, a complex scene with great depth was defined as shown in Fig. 11(a). The checkerboard extends from −25 [cm] to −100 [cm] and is tilted slightly. Above the checkerboard there are two spheres, one with a specular surface and one with a diffuse surface. A CGH of the scene in Fig. 11(a) was generated and photographed. Figure 11(b) is a photograph of the reconstructed image focused on the front sphere. Since the rear sphere is defocused, depth is evidently expressed correctly. In Fig. 11(b), the front sphere with the specular surface shows a highlight produced by the shading effect, and reflected images of the surroundings appear on the specular surface, so the metal ball looks very realistic. In contrast, the rear sphere has a blurred highlight. The two spheres thus characterize surface materials such as metal and plastic. Since hidden surface removal was successfully implemented in the CGH, the reconstructed images of the multiple objects did not overlap and were not missing. This experiment confirmed that 3-D images are reconstructed naturally without distortion. In our previous study, the reconstructed images were expanded and deformed, so the sense of perspective could not be perceived correctly; with the compensation calculations of this study, no distortion due to the Fourier transform optical system occurred.

Fig. 11 Experimental geometry and reconstructed images. (a) Experimental geometry of complex scene. (b) Reconstructed image focused on a metal sphere ( Media 1).

We also conducted another optical reconstruction to confirm the expression of refractive objects in CGH. The arrangement of the virtual objects is shown in Fig. 12(a). A glass sphere located at z = −400 [mm] was defined as a transparent object with a refractive index of 1.3. Figure 12(b) shows the reconstructed image of the glass sphere and a checkerboard. The checker pattern seen through the glass sphere is magnified because it is refracted by the transparent object. This result indicates that refraction by a transparent object is also achieved by the proposed method. These optical reconstructions confirmed that the proposed method can express, without distortion, a complex scene that requires multiple rendering techniques simultaneously, which conventional methods cannot.

Fig. 12 Experimental geometry and reconstructed images. (a) Experimental geometry of a glass sphere. (b) Reconstructed image of a glass sphere.

The CGH calculations in this study were accelerated with a GPU. Generating a full-color CGH, including the three interference patterns for RGB, took about 10 seconds. Most of the calculation time was spent on light propagation rather than ray tracing, so a faster light propagation algorithm is needed to speed up the calculations.

6. Conclusion

This paper proposed a CGH calculation method with the ray tracing method for a Fourier transform optical system. The method includes an approach to eliminate unnecessary light such as the zeroth-order light and the conjugate images. As a result, we succeeded in enlarging the visual field and displaying full-color holographic images. Furthermore, we implemented various rendering techniques to express realistic 3-D scenes. Optical reconstructions demonstrated that 3-D images with hidden surface removal are reconstructed without distortion, and that shading, shadowing, refraction, and multiple reflections appear in the reconstructed images.

We expect that the reality of reconstructed images will be improved in the future by taking advantage of the ray tracing method, and we aim at generating full-color holographic movies in real time.

Acknowledgments

This work was supported by a Grant-in-Aid for Scientific Research (B), KAKENHI (23300032), and a Grant-in-Aid for JSPS Fellows (25·146).

References and links

1. H. Isono and M. Yasuda, “Flicker-free field sequential stereoscopic TV system and measurement of human depth perception,” SMPTE J. 99(2), 138–141 (1990).

2. T. Motoki, I. Yayuma, H. Isono, and S. Komiyama, “Research on 3-D television system at NHK,” ABU Tech. Rev. 150, 14–18 (1991).

3. D. J. Sandin, E. Sandor, W. T. Cunnally, M. Resch, and T. A. DeFanti, “Computer-generated barrier-strip autostereography,” Proc. SPIE 1083, 65–75 (1989).

4. S. Ichinose, “Full-color stereoscopic video pickup and display technique without special glasses,” Proc. SID 30-4, 319–323 (1989).

5. M. G. Lippmann, “Épreuves réversibles donnant la sensation du relief,” J. de Phys. 7(4), 821–825 (1908).

6. Y. Takaki, “Super multi-view display with 128 viewpoints and viewpoint formation,” Proc. SPIE 7237, 72371T (2009).

7. D. Gabor, “A new microscopic principle,” Nature 161, 777–778 (1948).

8. K. Matsushima and S. Nakahara, “Extremely high-definition full-parallax computer-generated hologram created by the polygon-based method,” Appl. Opt. 48, H54–H63 (2009).

9. R. H.-Y. Chen and T. D. Wilkinson, “Computer generated hologram with geometric occlusion using GPU-accelerated depth buffer rasterization for three-dimensional display,” Appl. Opt. 48, 4246–4255 (2009).

10. H. Kim, J. Hahn, and B. Lee, “Mathematical modeling of digital holography,” Appl. Opt. 47, D117–D127 (2008).

11. K. Matsushima, “Computer-generated holograms for three-dimensional surface objects with shade and texture,” Appl. Opt. 44, 4607–4614 (2005).

12. K. Yamaguchi and Y. Sakamoto, “Computer generated hologram with characteristics of reflection: reflectance distributions and reflected images,” Appl. Opt. 48, H203–H211 (2009).

13. T. Ichikawa, K. Yamaguchi, and Y. Sakamoto, “Realistic expression for full-parallax computer-generated holograms with the ray tracing method,” Appl. Opt. 52, A201–A209 (2013).

14. R. Haussler, S. Reichelt, N. Leister, E. Zschau, R. Missbach, and A. Schwerdtner, “Large real-time holographic displays: from prototypes to a consumer product,” Proc. SPIE 7237, 72370S (2009).

15. J. Hahn, H. Kim, Y. Lim, G. Park, and B. Lee, “Wide viewing angle dynamic holographic stereogram with a curved array of spatial light modulators,” Opt. Express 16, 12372–12386 (2008).

16. T. Senoh, T. Mishima, K. Yamamoto, R. Oi, and T. Kurita, “Viewing-zone-angle-expanded color electronic holography system using ultra-high-definition liquid crystal displays with undesirable light elimination,” J. Display Technol. 7(7), 382–390 (2011).

17. T. Ichikawa, K. Yamaguchi, and Y. Sakamoto, “Realistic 3D image reconstruction in CGH with Fourier transform optical system,” Proc. SPIE 8644, 86440D (2013).

18. G. W. Stroke, “Lensless Fourier-transform method for optical holography,” Appl. Phys. Lett. 6, 201–203 (1965).

19. T. Yoneyama, C. Yang, Y. Sakamoto, and F. Okuyama, “Eyepiece-type full-color electro-holographic binocular display with see-through vision,” in Digital Holography and Three-Dimensional Imaging, OSA Technical Digest (Optical Society of America, 2013), paper DW2A.11.

Supplementary Material (1)

Media 1: AVI (3555 KB)     


