Optica Publishing Group

Fast generation of 360-degree cylindrical photorealistic hologram using ray-optics based methods

Open Access

Abstract

Due to the large pixel pitch and limited size of spatial light modulators (SLMs), the field of view (FOV) of current holographic displays is greatly restricted. Cylindrical holography can effectively overcome this FOV constraint. However, existing algorithms for cylindrical holograms are all based on the wave-optics approach. In this paper, to the best of our knowledge, we adopt the ray-optics based approach for the generation of cylindrical computer-generated holograms (CCGHs) for the first time. Information from parallax images, captured from three-dimensional (3D) objects using a curved camera array, is recorded into a cylindrical hologram. Two specific recording algorithms are proposed: one based on the fast Fourier transform (FFT) method, and the other based on the pinhole-type integral imaging (PII) method. The simulation results confirm that the proposed methods enable fast generation of cylindrical photorealistic holograms.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Holographic 3D display is regarded as the ultimate 3D display technology due to its ability to offer all the depth cues that human eyes require by reproducing the entire light field [1,2]. It has shown great application potential in military, education, entertainment, medical [3], and other fields. Computer-generated holography (CGH) is a crucial technique for realizing holographic display, using digital computation to avoid the complex optical recording process [4,5]. Generally, a CGH can be calculated using a computer and then loaded on a spatial light modulator (SLM). However, one problem of commercial SLMs is the narrow viewing angle, which is limited by the size and the pixel pitch of the SLM [6]. To effectively overcome this FOV constraint, as early as the last century, some researchers discussed the possibility of cylindrical holograms, in which holographic film was adopted to make a cylindrical hologram [7,8]. Nowadays, holographic printing or direct laser lithography makes it easier to print CGHs on films [9,10]. Recently, a phase-reflecting diffractive optical element (DOE) has been manufactured using electron-beam technology to record a cylindrical computer-generated hologram [11]. On the other hand, flexible dynamic display devices have also achieved great progress. Some flexible screens have already been applied in the field of near-eye displays and 3D displays [12,13], and the appearance of flexible liquid crystal displays (LCDs) makes curved SLMs possible [14]. Therefore, rapid generation algorithms for cylindrical holograms have attracted the attention of many researchers.

Yamaguchi et al. proposed a calculation method for a cylindrical CGH that is viewable over 360° [15]. However, it took over 81 hours on a parallel computing machine for an object of 15×15×15 mm. Sando et al. generated cylindrical and spherical holograms in the spatial domain using a convolution method [16]. Three FFT calculations were used to simulate the propagation of the wavefront. The calculation time required by their method is 10,000 times shorter than that of the direct method; however, the object is limited to a cylinder. Jackin et al. introduced another high-speed calculation method based on the FFT, in which an angular spectrum diffraction formula for the cylindrical model was proposed and the transfer function was derived [17], but the shape of the object is likewise limited. Zhao et al. proposed a fast calculation based on a wavefront recording surface (WRS) [18], in which the shape of the object can be arbitrary. If the radius of the WRS and the size of the object are small, this method can greatly reduce the generation time; otherwise, the acceleration may not be significant. Recently, Kang et al. proposed using a wavefront plane with an approximate compensation to accelerate curved hologram generation [19]. However, it is not easy to apply this method to the generation of cylindrical holograms.

All of the above methods can be classified as the wave-optics based approach. In this approach, the depth information of the recorded object is represented by a set of many points, polygons, or layers [20–22]. The wave field from each point, polygon, or layer propagates independently to the hologram plane, and the fields are then added together to generate a hologram. This approach can provide accurate depth cues, but view-dependent properties, such as occlusion, gloss, shading, and specular reflection, are hard to present. Without these properties, the fidelity of the reconstructed 3D images is reduced. There is another type of approach that can solve these issues, named the ray-optics based approach [23], which is also known as the holographic stereogram (HS). The ray-optics based approach records the light-ray information of parallax images, each of which is also called an elemental image (EI). The parallax images are usually captured from 3D objects using a camera array or lens array. The hologram is partitioned into many small square segments named hogels, and each hogel is converted from a corresponding EI. The directional information of each light ray is encoded as a plane-wave phase with a specific spatial frequency. Since the parallax images are captured either by camera or by physical rendering, the view-dependent properties mentioned above can be easily encoded into the hologram, making the 3D image photorealistic. The FFT is commonly used to accelerate the computation [24,25]. In our previous work, we proposed a novel method based on pinhole-type integral imaging (PII), which is even more efficient than conventional FFT-based HS generation [26,27]. However, the ray-optics based approach has only been used to generate flat holograms; there have been no reports on the generation of cylindrical or curved holograms.

In this paper, we propose using the ray-optics based approach to generate the CCGH. To the best of our knowledge, this is the first time the ray-optics based approach has been applied to cylindrical holograms. Unlike the conventional HS, the cameras are curvedly arranged on a circle concentric with the cylindrical hologram to capture parallax images. Then, the captured parallax images are recorded into the cylindrical hologram. Similarly, the cylindrical hologram is divided into many curved hogels (elemental holograms). We propose two specific recording algorithms to record the captured parallax images into the cylindrical hologram. In the first algorithm, an FFT-based method with an approximate compensation is adopted to generate the hologram. The second algorithm is based on the PII method; it works by recording the diffraction patterns produced by the pinholes, is more efficient, and requires only one simple calculation step. Compared with the current algorithms based on the wave-optics approach, the proposed methods can readily present view-dependent properties such as occlusion, shading, and glossiness, and the recorded objects can be of any shape, not limited to a cylinder. The simulation results confirm that our proposed methods can realize fast CCGH generation. Since the recording process of each curved hogel is independent, parallel computing can further improve the computing speed.

Section 2 briefly introduces the conventional holographic stereogram. Section 3 presents the capture process of CCGH generation based on ray-optics approach. Section 4 and Section 5 describe the proposed two recording algorithms. Section 6 provides some discussions. Finally, Section 7 gives concluding remarks.

2. Conventional holographic stereogram

Conventional holographic stereogram (HS) is an effective method using multi-view images to present the view-dependent properties [24,25]. The generation of conventional HS typically involves two processes: the capture process and the recording process. As Fig. 1(a) shows, in the capture process of conventional HS, a lens array or camera array is usually arranged along a plane to capture parallax images. Then, in the recording process, each hogel of the holographic stereogram is converted from the corresponding parallax image using FFT [24]. During the reconstruction, each hogel emits a set of plane waves to reproduce the light field [25]. Since the parallax images are captured either by camera or by physical rendering, the view-dependent properties mentioned above can be easily encoded into the hologram, making the 3D image photorealistic. In this paper, we propose using ray-optics based approach to generate the CCGH. Compared with the conventional HS, the capture and recording processes are both different.

Fig. 1. (a) Principle of conventional holographic stereogram. (b) Wavefront reconstruction of conventional holographic stereogram.

3. Capture process of CCGH generation based on ray-optics methods

Figure 2 shows the capture process of the proposed method for generating a CCGH. Different from the conventional HS, the camera or lens array is curvedly arranged around a circle to capture the parallax images, and each camera faces the center of the circle. Assume that Nx×Ny EIs are captured using Nx×Ny cameras. Accordingly, the CCGH is also spatially segmented into Nx×Ny curved hogels (elemental holograms), and each hogel corresponds to a parallax image.
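As a concrete illustration of this capture geometry, the following Python sketch (variable names and the row-wise layout are our own, not code from the paper) generates the positions and center-facing view directions of the Nx cameras in one row:

```python
import numpy as np

def camera_poses(Nx, R):
    """Positions (x, z) and unit view directions of Nx cameras evenly
    spaced on a circle of radius R, each facing the circle center."""
    thetas = 2 * np.pi * np.arange(Nx) / Nx            # azimuth of each camera
    positions = np.stack([R * np.sin(thetas),
                          R * np.cos(thetas)], axis=1)
    directions = -positions / R                         # unit vectors toward the center
    return positions, directions

# Parameters of the first simulation: 320 cameras per row, radius 15.2 mm
pos, view = camera_poses(320, 15.2)
```

Each of the Ny rows reuses the same azimuths at a different height, so the full array holds Nx×Ny poses.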

Fig. 2. (a) The capturing process using the curved lens array. (b) Top view.

This kind of arrangement has two fundamental benefits. First, since the orientation of each camera in the same row is different, light information over 360° can be captured. Second, since all cameras face the center of the circle, the positional relationship between each EI and the corresponding curved hogel is the same. Thus, the recording processes of all curved hogels can be identical, which greatly simplifies the recording process. In the next two sections, two specific recording algorithms are described.

4. Recording process based on fast Fourier transform

Different from the conventional HS, since each hogel on a CCGH is no longer a plane, a single FFT cannot convert the EI into a curved hogel. Thus, the recording process is changed. Figure 3 shows the specific recording process of each curved hogel using the proposed FFT-based method. For each lens, the corresponding EI is placed at the front focal plane of the lens, and the back focal plane of the lens is set near the CCGH. The recording process contains two steps. In the first step, the complex amplitude distribution on the back focal plane of the lens is calculated. As is well known in Fourier optics, if an object is placed at the front focal plane of a lens, its exact Fourier transform is obtained at the back focal plane. Thus, for each elemental lens, the distribution on the back focal plane H(u, v) is the exact Fourier transform of the elemental image I(x, y) in front of it. We can obtain the following Fourier transform relation [24]:

$$H(u,v) = \frac{1}{{j\lambda f}}\int {\int_{ - \infty }^\infty {I(x,y)} \exp [ - j\frac{{2\pi }}{{\lambda f}}(xu + yv)]dxdy}$$
where λ is the wavelength and f is the focal length of the lens array.
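Numerically, Eq. (1) reduces to one discrete FFT per elemental image. The fragment below is a minimal sketch of this first step for a single 50×50-pixel EI; the 532 nm wavelength and the random test image are our own assumptions (the wavelength is not stated here), with f = 2.5 mm as in the later simulations:

```python
import numpy as np

wavelength = 532e-9      # assumed wavelength (not specified in the text)
f = 2.5e-3               # focal length of the elemental lens, as in Section 4
N0 = 50                  # pixels per elemental image

# A placeholder EI with a random phase attached, as in Ref. [28]
rng = np.random.default_rng(0)
ei = rng.random((N0, N0)) * np.exp(2j * np.pi * rng.random((N0, N0)))

# Eq. (1): the back-focal-plane field is the scaled Fourier transform of the EI
H = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(ei))) / (1j * wavelength * f)
```

In a full implementation this FFT is repeated once per hogel, which is exactly the 695 ms step timed later in this section.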

Fig. 3. Principle of the proposed FFT-based method.

Then, in the second step, the distribution at the back focal plane of the lens is converted to the curved hogel by adding a phase difference distribution. Since the CCGH is divided into many small hogels, the size and the central angle of each curved hogel are small enough. Thus, the phase difference distribution can be regarded as an approximate compensation generated by the geometric optical-path difference [19]. As Fig. 3 shows, point B on the back focal plane can be approximately regarded as the vertical projection of point A on the CCGH. For each point on the CCGH, there is a corresponding point on the back focal plane. Thus, the resolution of the curved hogel is taken to be equal to that of the back focal plane, and the curved hogel can be expressed as:

$${H_c}(u,v,{z_2}) = H(u,v,{z_1})\exp [jk({z_2} - {z_1})]$$
where (z2 − z1) is the perpendicular distance between the back focal plane and the curved hogel, $\exp [jk({z_2} - {z_1})]$ is the phase difference distribution, z1 is the coordinate of the back focal plane, and z2 is the coordinate on the cylindrical hologram surface. z1 is a constant and can be expressed as:
$${z_1} = R\cos (\frac{{360^\circ }}{{2{N_x}}})$$
where R is the radius and Nx is the number of hogels in the x direction. z2 varies with location and can be expressed as:
$${z_2} = R\cos \theta ,\quad - \frac{{360^\circ }}{{2{N_x}}} < \theta < \frac{{360^\circ }}{{2{N_x}}}$$

Since the positional relationship between each EI and the corresponding curved hogel is the same, the phase difference distribution of each curved hogel is also the same. This phase difference distribution can be precalculated and stored in a table. When calculating a CCGH, we can simply look up this phase difference distribution from the table and multiply it with the complex amplitude distribution on the back focal plane to obtain a final curved hogel.
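The per-hogel lookup table of Eqs. (2)–(4) can be sketched as follows (a Python fragment with an assumed 532 nm wavelength; `comp` is computed once and reused for every hogel):

```python
import numpy as np

R, Nx, N0 = 12.7e-3, 320, 50     # hologram radius, hogels per row, pixels per hogel
k = 2 * np.pi / 532e-9           # wavenumber for an assumed 532 nm wavelength

half = np.pi / Nx                # half the central angle of one hogel, in radians
z1 = R * np.cos(half)            # Eq. (3): constant back-focal-plane coordinate

theta = np.linspace(-half, half, N0)
z2 = R * np.cos(theta)           # Eq. (4): hologram coordinate across the hogel

# Eq. (2): precomputed phase difference, identical for every row of pixels
comp = np.exp(1j * k * (z2 - z1))[np.newaxis, :]

# For each hogel: hogel = H * comp, with H from the FFT of the first step
```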

Next, the computational amount of the proposed FFT-based method is discussed. Assuming that there are Nx×Ny captured multi-view images, each with N0×N0 pixels, the computational amount of the FFT in the first step is ${N_x}{N_y}{N_0}^2{\log _2}{N_0}$. The computational amount of the second step is only ${N_x}{N_y}{N_0}^2$; thus, the total computational amount is ${N_x}{N_y}{N_0}^2{\log _2}{N_0} + {N_x}{N_y}{N_0}^2$. Since each hogel is generated independently, parallel computing can be used to accelerate the generation of CCGHs.

Numerical simulations were performed to verify the proposed method. As Figs. 4(a) and 4(b) show, some letters were set at different positions in the 3ds Max modeling software. Among them, the two letters ‘3’ and ‘D’ were both in the vicinity of θ=0° but at different depths; the remaining letters were set in the vicinity of other angles. A virtual camera array was built to capture the elemental image array (EIA) of these letters. The camera array was evenly arranged along a cylinder with a radius of 15.2 mm, and the radius of the cylindrical hologram was set to 12.7 mm. The camera array consisted of 320×40 virtual cameras, and the pitch and focal length were set as p=0.25 mm and g=2.5 mm. Figure 4(d) shows the captured EIA, which contains 16000×2000 pixels with a 5 µm pixel pitch. Thus, each EI or hogel contains 50×50 pixels, and the size of each hogel is 0.25 mm. Figure 4(f) shows the enlarged detail of the captured EIA. Here, a random phase distribution was added to the elemental images to avoid the loss of low-frequency information [28].

Fig. 4. (a) The original objects. (b) The top view of the letters. (c) The phase difference distribution between the back focal plane and the curved hogel. (d) The captured EIA using 320×40 virtual cameras. (e) The generated CCGH. (f) The enlarged detail of captured EIA. (g) The enlarged detail of generated CCGH.

Using MATLAB R2016b on an Intel i7-8700 (3.2 GHz) with 32 GB of memory, the calculation time of the first step (performing an FFT for each EI) is 695 ms, and the calculation time of the second step (adding the phase difference distribution) is only 93 ms. Thus, the total generation time of the CCGH based on the FFT method is about 788 ms. Since we only used one thread of the CPU, the calculation can be further accelerated by using more CPU threads simultaneously. Figure 4(c) shows the phase difference distribution between the back focal plane and the curved hogel. Figure 4(e) shows the calculated CCGH, and Fig. 4(g) shows its enlarged detail. The CCGH also contains 16000×2000 pixels.

During the reconstruction, only the regions of interest of the object are expected to be reconstructed in focus, while other areas remain unfocused. The width and height of the reconstruction plane were set to 10 mm and 15 mm, respectively. Here, in order to verify our method, the direct integration method is used to reconstruct the hologram [29]. It took about 6667 seconds to reconstruct the plane 10 mm from the circle center, while the hologram generation took only 788 milliseconds. From this perspective, it also shows that the CCGH generation using the proposed method is very fast. Figure 5 shows the reconstructed results. Figures 5(a) and 5(b) show the numerical reconstruction results in the vicinity of θ=0° at different depths. It can be seen that at distances of 10 mm and 6 mm from the circle center, the letters ‘3’ and ‘D’ are reconstructed correctly, respectively. This proves that the calculated CCGH can correctly reconstruct images at different depths.

Fig. 5. (a) and (b) Reconstructed images in the vicinity of θ=0° at different depths. (c) and (d) Reconstructed images in the vicinity of θ=105° and θ=225°.

Then, to prove that the proposed method correctly records information from different perspectives, the hologram was also reconstructed from other particular directions. Figures 5(c) and 5(d) show the reconstructed letters ‘H’ and ‘U’, which are in the vicinity of θ=105° and θ=225°, respectively. Since ‘T’ and ‘F’ are not on the reconstruction plane, they are blurred. Hence, the hologram was able to reconstruct the stored information properly.

To verify that the proposed method is equally effective for photorealistic three-dimensional objects, another simulation was performed. As Fig. 6 shows, some animals were set at different positions. A curved virtual camera array was built to capture the EIA of the animals. The camera array was arranged along a cylinder with a radius of 27.9 mm, and the height of the camera array is 20 mm. The animals were built inside the camera array, each taking up around 1/6 of the space. Under ambient light, they show different glosses. The radius of the hologram was set to 25.4 mm. The camera array consisted of 640×80 virtual cameras, and the pitch and focal length were set as p=0.25 mm and g=2.5 mm. During the reconstruction, only the regions of interest of the object are expected to be reconstructed in focus, while other areas remain unfocused. Figure 7 shows the reconstructed results. It can be seen from Fig. 7(a) that glossy 3D objects can be reconstructed and the occlusion problem is solved well, which means the proposed method is effective.

Fig. 6. Positions of animals.

Fig. 7. Numerical reconstructions of the CCGH from different perspectives.

To further verify that the proposed method can reconstruct photorealistic three-dimensional objects at different depths, another simulation was performed. For simplicity, we only used 1/8 of the CCGH surface to record and reproduce the 3D objects; it can be seen as a curved hologram with a 45° central angle [19]. As Fig. 8(a) shows, two rabbits were built in the 3ds Max modeling software. A camera array consisting of 80×80 virtual cameras was built to capture the EIA of the rabbits. Figure 8(b) shows the captured EIA, which contains 4000×4000 pixels with a 5 µm pixel pitch. It can be seen that the occlusion and the surface glossiness are rendered in the EIs.

Fig. 8. (a) The position of the two rabbits. (b) Captured EIA. (c) and (d) Numerical reconstruction of the rabbits at different depths.

Figures 8(c) and 8(d) show the reconstructed images of the rabbits at different depths. It can be clearly seen that the different rabbits are correctly and clearly reconstructed at their respective depths. Since the reference light is not a plane wave and converges at the circle center, the two rabbits expand or shrink outside their own depths. However, since each rabbit can be clearly reconstructed with a correct aspect ratio at its own depth, the entire 3D scene in space can be reconstructed correctly. From these figures, the view-dependent properties, such as occlusion and glossiness, can also be easily confirmed. These view-dependent properties are hard to present with the wave-optics based algorithms [21].

There is also one disadvantage of the proposed algorithm. Since the CCGH is spatially divided into many hogels, there is a trade-off between the spatial sampling resolution and the angular sampling resolution [5]. The reconstructed image quality depends on both factors. One can increase the spatial sampling resolution by decreasing the pitch of the hogels, but the angular sampling resolution will decrease at the same time and may affect the image quality as well. In addition, a certain angular sampling resolution is essential for continuous motion parallax. The pitch of the hogels can be optimized to obtain a better reconstruction [5]. However, this trade-off is still an inherent problem limiting the reconstructed image quality. It also exists in many 3D displays, such as integral imaging (II) [30] and autostereoscopic displays [31]. Several methods have been proposed to improve the image quality, such as the moving array lenslet technique [25], phase-added stereograms [32], and fully computed HS [33].

5. Recording process based on pinhole-type integral imaging

Although the first method can quickly generate CCGHs, it spends most of its time performing an FFT for each hogel. In this section, we propose a more efficient algorithm to generate the CCGH. This method requires only one simple step. It is still a ray-optics based method, built on pinhole-type II. Similarly, curvedly arranged pinhole cameras are used to capture parallax images, and each pinhole camera also faces the center of the circle. Then, the captured images are encoded into a CCGH. The application of this method to the generation of flat holograms has already been demonstrated in our previous works [26,27].

Figure 9 shows the principle of the recording process of the CCGH based on PII. A 3D object is reconstructed by the PII system, and the CCGH records the object beam by simulating light propagation from the pinholes. Each pixel of the EI emits a beam of light that passes through the corresponding pinhole, so the EI is projected onto the CCGH and forms a diffraction pattern. Therefore, each pinhole can be seen as a point light source emitting pyramid-shaped light rays in different directions, and the intensity of each light ray is determined by the corresponding pixel of the EI.

Fig. 9. Principle of recording stage of the CCGH based on PII.

For each pinhole, the recorded complex amplitude on the CCGH can be seen as the multiplication of a spherical-wave phase and the projected EI. For example, for the center pinhole, the recorded complex amplitude U(x, y) on the curved hogel can be expressed as the multiplication of the spherical-wave phase u(x, y) and the projected EI:

$$\begin{array}{l} U(x,y) = u(x,y) \cdot proj[EI(x,y)],\\ u(x,y) = \frac{1}{{{r_{pc}}}}\exp (i\frac{{2\pi }}{\lambda }{r_{pc}}) \end{array}$$
where EI(x, y) is the original EI, proj[] is the projection operator on each EI, and rpc is the distance between the center pinhole (xp, yp, zp) and a point (xc, yc, zc) on the CCGH. The distance rpc is given by:
$${r_{pc}} = \sqrt {{{({{x_p} - {x_c}} )}^2} + {{({y_p} - {y_c})}^2} + {{({z_p} - {z_c})}^2}}$$

In the cylindrical coordinate system, the point (xc, yc, zc) can be given by:

$$\begin{array}{l} {x_c} = R\sin \theta \\ {y_c} = {y_c}\\ {z_c} = R\cos \theta \end{array}$$
where R is the curvature radius of the curved hologram. Since each EI faces the center of the CCGH, its relative position to the cylindrical hologram is the same. Therefore, the phase distribution of the spherical wave from each pinhole is also the same. This phase distribution can be precalculated and stored in a table. When generating a CCGH, the recorded complex amplitude from each pinhole can be simply obtained by looking up the spherical-wave phase u(x, y) from the table and multiplying it by the corresponding projected EI. Then, the total diffraction pattern on the CCGH, Utotal(x, y), can be obtained by adding the contributions together:
$${U_{total}}(x,y) = \sum\limits_{m ={-} {N_x}/2}^{{N_x}/2} {\sum\limits_{n ={-} {N_y}/2}^{{N_y}/2} {u(x - m{p_x},y - n{p_y}) \cdot proj[E{I^{m,n}}(x,y)]} }$$
where m and n are the sequence numbers of the EIs in the x and y directions, px and py are the pinhole pitches in the two dimensions, and Nx and Ny are the total numbers of EIs in the two dimensions.
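One term of Eq. (8) can be sketched in Python as follows: the spherical-wave phase of Eqs. (5)–(7) from the center pinhole is precomputed on its curved hogel and multiplied by the projected (180°-rotated, M = 1) EI. The 532 nm wavelength and the placeholder image data are our own assumptions:

```python
import numpy as np

wavelength = 532e-9              # assumed wavelength
R, g = 12.7e-3, 2.5e-3           # hologram radius and pinhole-hologram gap (Sec. 4)
Nx, N0, p = 320, 50, 0.25e-3     # hogels per row, pixels per hogel, hogel pitch

# Center pinhole, on the capture circle of radius R + g
xp, yp, zp = 0.0, 0.0, R + g

# Points of the central curved hogel in cylindrical coordinates, Eq. (7)
half = np.pi / Nx
TH, Y = np.meshgrid(np.linspace(-half, half, N0),
                    (np.arange(N0) - N0 / 2) * (p / N0))
xc, yc, zc = R * np.sin(TH), Y, R * np.cos(TH)

# Eqs. (5)-(6): spherical-wave phase, precomputed once and reused per pinhole
r_pc = np.sqrt((xp - xc)**2 + (yp - yc)**2 + (zp - zc)**2)
u = np.exp(2j * np.pi * r_pc / wavelength) / r_pc

ei = np.random.default_rng(1).random((N0, N0))   # one captured EI (placeholder)
hogel = u * np.rot90(ei, 2)                      # one term of Eq. (8), M = 1
```

The full CCGH is assembled by repeating this multiplication for every pinhole and summing the shifted contributions as in Eq. (8).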

Note that, in conventional flat HS generation based on PII [26,27], the projection relationship between each EI and the hologram is very simple: the projection of the EI on the flat hologram can be regarded as a uniform magnification of the EI, so it can be obtained by a simple interpolation. However, this is different in CCGH generation. As Fig. 10 shows, assume that the pitch of each EI is p, the distance in the z direction between the pinhole and the edge of the diffraction pattern is L, and M is defined as L/g. We first consider the projection in the x direction, where the magnification is the same for each row. The central angle $\phi$ corresponding to the projection area can be deduced as:

$$\phi = 2{\sin ^{ - 1}}(\frac{{Mp}}{{2R}})$$
The corresponding arc length can be deduced as:
$$A = R\ast \phi = 2R{\sin ^{ - 1}}(\frac{{Mp}}{{2R}})$$

Fig. 10. The geometric relations of the projection on CCGH.

Then, for the projection in the y direction, different columns have different magnifications. The largest magnification is at the edges of the arc, where the length of the projected columns can be expressed as:

$${B_{\max }} = Mp = \frac{L}{g}p$$

The smallest magnification is at the center of the arc, where the length of the projected column on the CCGH can be deduced as:

$${B_{\min }} = \frac{{L - R(1 - \cos \phi )}}{g}p$$

Generally, the projection of an EI onto the CCGH is very small, and the central angle $\phi$ is usually very small, so we can make the following approximations:

$$A \approx Mp = \frac{L}{g}p$$
$${B_{\min }} \approx Mp = \frac{L}{g}p$$

Therefore, the total magnification ratio of the projection on the curved CCGH can still be approximated as M. Strictly speaking, the projection sizes of the individual pixels of an EI on the CCGH are not exactly the same, and the size of each projection can be solved rigorously through the geometric relations. But for simplicity, when the central angle is small, we can approximate them as the same size, evenly distributed on the CCGH. Thus, the projection of the EI on the CCGH can be considered a uniform magnification of the EI. Accordingly, the projection of each EI can be simply obtained by rotating it by 180 degrees and magnifying the rotated image by either interpolation or simple replication.
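The quality of the approximations in Eqs. (13)–(14) is easy to check numerically. The sketch below uses the first simulation's parameters; the ratios simply quantify how close A and Bmin are to the uniform magnification Mp:

```python
import numpy as np

R, p, g = 12.7e-3, 0.25e-3, 2.5e-3    # hologram radius, EI pitch, gap
L = 2.5e-3                             # so that M = L / g = 1
M = L / g

phi = 2 * np.arcsin(M * p / (2 * R))          # Eq. (9): central angle
A = R * phi                                    # Eq. (10): arc length
B_max = M * p                                  # Eq. (11)
B_min = (L - R * (1 - np.cos(phi))) / g * p    # Eq. (12)

# Eqs. (13)-(14): both ratios deviate from unity by about 0.1% here
ratio_A, ratio_B = A / (M * p), B_min / (M * p)
```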

During the process of reproduction, the CCGH based on PII will reconstruct the recorded light rays, as well as the virtual pinhole array. When the observers see through the hologram, they are actually seeing the 3D image through the reproduced virtual pinhole array. It is a very similar situation to that of PII. That is, the observer’s eye samples one light ray from each virtual or real pinhole. Note that the PII-based method also has the trade-off between angular resolution and spatial resolution [25].

Assuming there are Nx×Ny multi-view images, each with N0×N0 pixels, the total computational amount is ${M^2}{N_x}{N_y}{N_0}^2$, where M = L/g is the magnification ratio. It can be seen from this formula that the calculation time is closely related to the number of EIs and the pixel number of each EI. Since only one calculation step is required, the calculation speed of this method is very fast. In the case M=1, the computational amount of this method is only NxNyN02, the same as that of the phase-compensation step alone in the FFT-based method.
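For the concrete parameters of the first simulation, the operation counts of the two methods compare as follows (a rough sketch counting only the dominant terms given in the text):

```python
from math import log2

Nx, Ny, N0 = 320, 40, 50     # EIs per row/column, pixels per EI

fft_ops = Nx * Ny * N0**2 * log2(N0) + Nx * Ny * N0**2   # FFT-based method
pii_ops = Nx * Ny * N0**2                                # PII-based, M = 1
ratio = fft_ops / pii_ops    # = log2(N0) + 1, about 6.6 for N0 = 50
```

This factor is on the same order as the measured speedup reported below (788 ms vs 96 ms).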

Numerical simulations were performed to verify the proposed method. As in the first simulation in Section 4, some letters were set at different positions, and a virtual pinhole camera array was built to capture the EIA of these letters. The parameters were also set the same as in Section 4. The magnification ratio was set to 1 to ensure a fast computing speed. Using MATLAB R2016b on an Intel i7-8700 (3.2 GHz) with 32 GB of memory, the generation time of the CCGH is only 96 ms, which is almost the same as the second step of the FFT-based method. Here, only one thread of the CPU was used in the simulation, and real-time calculation could easily be achieved through parallel computing. Figure 11(a) shows the generated CCGH, and Fig. 11(b) shows the enlarged detail of the hologram in the red box. Figure 11(c) shows the phase distribution of the spherical wavefront emitted by each pinhole.

Fig. 11. (a) The generated PII-based CCGH. (b) The enlarged detail. (c) The phase distribution of the spherical wavefront emitted by each pinhole.

Similarly, Figs. 12(a) and 12(b) show the numerical reconstruction results at different distances from the circle center of the CCGH. It can be seen that at distances of 10 mm and 6 mm from the circle center, the letters ‘3’ and ‘D’ are reconstructed clearly, respectively. Figures 12(c) and 12(d) show the reconstructed letters ‘H’ and ‘U’, which are in the vicinity of θ=105° and θ=225°, respectively. Hence, the simulations preliminarily prove the effectiveness of our method.

Fig. 12. The reconstructed images of cylindrical pinhole-type HS.

Consistent with Section 4, two more simulations were performed. Figure 13 shows the reconstructed images of the animals from different views. It can be seen that the three selected animals are clearly reconstructed using the proposed PII-based method. Behind them, the other animals are blurred, which proves that the proposed method can provide depth cues for the human eye. Figure 14 shows the reconstructed images of the rabbits at different depths. From these figures, the view-dependent properties, such as occlusion and glossiness, can be easily confirmed.

Fig. 13. Numerical reconstructions of the CCGH from different perspectives.

Fig. 14. Numerical reconstruction of the rabbits at different depths using pinhole-type method.

6. Discussions

6.1 Comparisons of the two proposed methods

In this section, we compare the two proposed methods. In terms of calculation speed, the FFT-based method took 788 ms to generate a cylindrical hologram of 16000×2000 pixels, while the PII-based method took only 96 ms. Thus, the PII-based method is superior in calculation speed. However, in other aspects, such as depth of field (DOF) and resolution, the two methods each have advantages and disadvantages. The FFT-based method, as Fig. 1 shows, uses multiple plane waves to approximate the spherical waves emitted from each object point [24]. The voxel size of the 3D object is equal to the hogel size p, and the resolution can be expressed as:

$${R_1} = \frac{1}{p}$$
where p is the pitch of the hogels. With this method, the reconstruction resolution is usually relatively low. However, it has a large depth of field, since the width of a plane wave does not change during propagation.

As for the PII-based method, as Fig. 15 shows, each EI emits a spherical wave rather than a plane wave toward the reconstructed plane, which finally focuses on the pinhole plane [26]. According to the geometric relationship, the resolution can be expressed as:

$${R_2} = \frac{{gN}}{{lp}}$$
where l is the distance between the reconstruction plane and the pinhole plane, p is the pitch of the pinholes, g is the gap between the EI and the pinhole, and N is the pixel number of the EI. When an object point is located at the pinhole plane, the image point is reconstructed ideally, just as in a Fresnel hologram. When object points move away from the pinhole plane, resolution degradation appears; thus, the DOF is limited.
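The two resolution expressions above give a simple estimate of where each method wins: R1 is depth-independent, while R2 falls off as 1/l, so the PII-based method out-resolves the FFT-based one only while R2 > R1, i.e. for l < gN. A short sketch with assumed, illustrative parameter values:

```python
import numpy as np

# Illustrative parameters (assumed, not from the paper).
p = 0.1e-3   # hogel / pinhole pitch [m]
g = 1.0e-3   # gap between EI and pinhole [m]
N = 50       # pixel number of the EI

R1 = 1.0 / p                            # FFT-based: constant with depth
l = np.linspace(0.1e-3, 100e-3, 1000)   # distance from the pinhole plane [m]
R2 = g * N / (l * p)                    # PII-based: degrades as 1/l

# Crossover: R2 = R1  =>  l = g * N, a rough estimate of the
# PII method's useful depth of field.
l_crossover = g * N
print(l_crossover)  # 0.05 m with these assumed numbers
```

With these numbers the PII-based method keeps the resolution advantage only within 50 mm of the pinhole plane, consistent with its limited DOF noted above.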


Fig. 15. Voxel size on the reconstructed object plane of PII-based method.


Thus, the essential difference between the FFT-based and PII-based methods is the kind of wave used to represent the real light field. This difference gives the two methods different properties: the FFT-based method is better in depth of field, while the PII-based method is better in resolution. This can be confirmed from Fig. 8 and Fig. 14: for the front rabbit, close to the camera array, the PII-based method obtains a better reconstruction quality; however, when the reconstructed plane moves away from the camera array to focus on the second rabbit, the reconstruction quality of the FFT-based method is almost unchanged, while that of the PII-based method decreases. This discussion is summarized in Table 1.


Table 1. Comparison of the two proposed methods.

6.2 Comparisons of the wave-optics based approach and the proposed approach

As mentioned in the introduction, in the wave-optics based approach, the recorded object is represented by a set of many self-luminous points, polygons, or layers. Each point, polygon, or layer emits an independent wavefront toward the hologram plane, and these wavefronts are summed to form the final hologram. For simplicity, we chose the layer-based method for comparison [19,21]. The 3D models are the same as those in Fig. 8(a) in the third simulation of the proposed methods.

Figure 16 shows the slices of the rabbits. Figure 17(a) shows the reconstructed result focused on the first rabbit. Without occlusion culling, all the layers overlap, making the image difficult to distinguish. A common method is then adopted to solve the occlusion problem [34]. The basic idea is that if a value on the front layer is not zero, it blocks the wavefront propagating from the back layers; otherwise, it lets the wavefront pass. However, this does not consider that the object itself may have some black parts, in which case some of the back light leaks through. Figure 17(b) shows the reconstructed result with occlusion culling; the leakage occurs because the front rabbit has some black parts. Figure 17(c) shows the enlarged detail of this region. Besides, in the wave-optics based method, the interaction of ambient light with the 3D objects is difficult to simulate, so a glossy surface of a 3D object is hard to reconstruct. In contrast, occlusion, shadow, and gloss can be easily confirmed from Fig. 8 and Fig. 14 using the proposed methods. For comparison, Fig. 17(d) shows the enlarged detail of the same region in Fig. 8(c) using the proposed FFT-based method.
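The binary-mask occlusion culling described above, and its failure on black object pixels, can be shown in a toy sketch (propagation between layers is omitted for brevity; the layers and values are hypothetical, not the paper's data):

```python
import numpy as np

# Two toy layers: 'front' partially covers 'back'.  A zero-valued pixel
# on the front layer may be a hole *or* a genuinely black surface point;
# the binary mask cannot tell the two apart.
front = np.array([[0.0, 0.8],
                  [0.0, 0.0]])      # front[1, 0] models a black part of the object
back = np.full((2, 2), 0.5)        # uniform back layer

# Common occlusion culling [34]: non-zero front pixels block the back
# layer; zero pixels let the back wavefront pass.
mask = (front == 0).astype(float)
field = front + mask * back

# Leakage: the black object pixel at [1, 0] should occlude the back
# layer, but the mask treats it as empty and lets the back light through.
print(field)
```

In a full implementation the masked back-layer field would be numerically propagated to the front layer before masking, but the ambiguity between "empty" and "black" pixels, and hence the leakage seen in Fig. 17(b), is the same.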


Fig. 16. Slices of the rabbits.



Fig. 17. (a) The reconstruction result of the wave-optics based method without occlusion culling. (b) The reconstruction result with occlusion culling. (c) The enlarged detail of Fig. 17(b). (d) The enlarged detail of Fig. 8(b).


7. Conclusion

In this paper, we proposed two fast algorithms based on the ray-optics approach to generate cylindrical photorealistic holograms. To the best of our knowledge, this is the first time the ray-optics based approach has been applied to cylindrical holograms. The cameras are cylindrically arranged to capture parallax images, which are then recorded into a cylindrical hologram. Two specific recording algorithms were proposed. The first adopts the FFT-based method with an approximate compensation to generate the hologram, in two steps: first, the complex amplitude distribution on the back focal plane of the lens is calculated using FFT; second, this distribution is converted to the curved hogel by adding an approximate compensation. For a cylindrical hologram of 16000×2000 pixels, the generation time is 788 ms. The second algorithm is based on the PII method and records the diffraction patterns produced by the pinholes. It requires only one calculation step, so it is more efficient, taking only 96 ms to generate a cylindrical hologram of the same 16000×2000 pixels. Both proposed methods can readily present view-dependent properties such as occlusion, shading, and glossiness, and the recorded object can be of any shape, not limited to a cylinder. Since the proposed methods use different kinds of waves to represent the real light field, the FFT-based method is better in depth of field, while the PII-based method is better in resolution. The simulation results confirm that our proposed methods realize fast CCGH generation, and the calculation time can be further reduced toward real-time generation through parallel computing. Moreover, the proposed methods can record 360° information of a real 3D scene with real cameras, and the recorded CCGHs can digitally reconstruct the captured objects from different perspectives and at different depths. Thus, the proposed methods also have application potential in real light field information acquisition, light field information storage, and real 3D object modeling.

Funding

National Natural Science Foundation of China (61805065); Major Science and Technology Projects in Anhui Province (No. 201903a05020057); Fundamental Research Funds for the Central Universities (JZ2021HGTB0077).

Acknowledgments

The authors thank anonymous reviewers for their thoughtful and helpful comments.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. D. Wang, C. Liu, C. Shen, Y. Xing, and Q. Wang, “Holographic Capture and Projection System of Real Object Based on Tunable Zoom Lens,” PhotoniX 1(1), 1–15 (2020). [CrossRef]  

2. X. Li, Y. Wang, Q. Wang, S. Kim, and X. Zhou, “Copyright Protection for Holographic Video Using Spatiotemporal Consistent Embedding Strategy,” IEEE Trans. Ind. Inf. 15(11), 6187–6197 (2019). [CrossRef]  

3. S. J. Hart and M. N. Dalton, “Display holography for medical tomography,” Practical Holography IV. International Society for Optics and Photonics 12, 116–135 (1990). [CrossRef]  

4. Z. Wang, G. Lv, Q. Feng, A. Wang, and H. Ming, “Resolution Priority Holographic Stereogram Based on Integral Imaging with Enhanced Depth Range,” Opt. Express 27(3), 2689–2702 (2019). [CrossRef]  

5. X. Zhang, G. Lv, Z. Wang, Z. Hu, S. Ding, and Q. Feng, “Resolution-Enhanced Holographic Stereogram Based on Integral Imaging Using an Intermediate-View Synthesis Technique,” Opt. Commun. 457, 124656 (2020). [CrossRef]  

6. X. Zhang, G. Lv, Z. Wang, P. Dai, D. Li, M. Guo, H. Xiao, and Q. Feng, “Holographic Display System with Enhanced Viewing Angle Using Boundary Folding Mirrors,” Opt. Commun. 482, 126580 (2021). [CrossRef]  

7. T. Jeong, “Cylindrical Holography and Some Proposed Applications,” J. Opt. Soc. Am. 57(11), 1396–1398 (1967). [CrossRef]  

8. O. D. D. Soares and J. C. A. Fernandes, “Cylindrical hologram of 360 degrees field of view,” Appl. Opt. 21(17), 3194–3196 (1982). [CrossRef]  

9. K. T. Lim, H. Liu, Y. Liu, and J. K. Yang, “Holographic colour prints for enhanced optical security by combined phase and amplitude control,” Nat. Commun. 10(1), 1–8 (2019). [CrossRef]  

10. Y. G. Kim, H. G. Rhee, and Y. S. Ghim, “Real-time method for fabricating 3D diffractive optical elements on curved surfaces using direct laser lithography,” Int J Adv Manuf Technol 114(5-6), 1497–1504 (2021). [CrossRef]  

11. G. Anton and D. Svyatoslav, “Cylindrical computer-generated hologram for displaying 3D images,” Opt. Express 26(17), 22160–22167 (2018). [CrossRef]  

12. G. Xue, Q. Zhang, J. Liu, Y. Wang, and M. Gu, “Flexible Holographic 3D Display with Wide Viewing Angle,” in Frontiers in Optics 2016, paper FTu5A.3 (2016).

13. J. Ratcliff, A. Supikov, S. Alfaro, and R. Azuma, “ThinVR: Heterogeneous microlens arrays for compact, 180 degree FOV VR near-eye displays,” IEEE Trans. Visual. Comput. Graphics 26(5), 1981–1990 (2020). [CrossRef]  

14. W. Y. Li, P. H. Chiu, T. H. Huang, J. K. Lu, Y. H. Lai, Y. S. Huang, C. T. Chuang, C. N. Yeh, and N. Sugiura, “9.3: The first flexible liquid crystal display applied for wearable smart device,” SID Symposium Digest of Technical Papers 46(1), 98–101 (2015).

15. T. Yamaguchi, T. Fujii, and H. Yoshikawa, “Fast Calculation Method for Computer-Generated Cylindrical Holograms,” Appl. Opt. 47(19), D63–D70 (2008). [CrossRef]  

16. Y. Sando, M. Itoh, and T. Yatagai, “Fast Calculation Method for Cylindrical Computer-Generated Holograms,” Opt. Express 13(5), 1418–1423 (2005). [CrossRef]  

17. B. J. Jackin and T. Yatagai, “Fast Calculation Method for Computer-Generated Cylindrical Hologram Based on Wave Propagation in Spectral Domain,” Opt. Express 18(25), 25546–25555 (2010). [CrossRef]  

18. Y. Zhao, J. Jeong, G. Li, J. Jeong, and N. Kim, “Fast Calculation Method of Computer-Generated Cylindrical Hologram Using Wave-Front Recording Surface,” Opt. Lett. 40(13), 3017–3020 (2015). [CrossRef]  

19. R. Kang, J. Liu, D. Pi, and X. Duan, “Fast Method for Calculating a Curved Hologram in a Holographic Display,” Opt. Express 28(8), 11290–11300 (2020). [CrossRef]  

20. T. Shimobaba, H. Nakayama, N. Masuda, and T. Ito, “Rapid Calculation Algorithm of Fresnel Computer-Generated-Hologram Using Look-up Table and Wavefront-Recording Plane Methods for Three-Dimensional Display,” Opt. Express 18(19), 19504–19509 (2010). [CrossRef]  

21. Y. Zhao, L. Cao, H. Zhang, D. Kong, and G. Jin, “Accurate Calculation of Computer-Generated Holograms Using Angular-Spectrum Layer-Oriented Method,” Opt. Express 23(20), 25440–25449 (2015). [CrossRef]  

22. K. Matsushima and S. Nakahara, “Extremely High-Definition Full-Parallax Computer-Generated Hologram Created by the Polygon-Based Method,” Appl. Opt. 48(34), H54–H63 (2009). [CrossRef]  

23. Z. Wang, L. Zhu, X. Zhang, P. Dai, G. Lv, Q. Feng, A. Wang, and H. Ming, “Computer-Generated Photorealistic Hologram Using Ray-Wavefront Conversion Based on the Additive Compressive Light Field Approach,” Opt. Lett. 45(3), 615–618 (2020). [CrossRef]  

24. Y. Ichihashi, R. Oi, T. Senoh, K. Yamamoto, and T. Kurita, “Real-Time Capture and Reconstruction System with Multiple GPUs for a 3D Live Scene by a Generation from 4 K IP Images to 8 K Holograms,” Opt. Express 20(19), 21645–21655 (2012). [CrossRef]  

25. Z. Wang, R. Chen, X. Zhang, G. Lv, Q. Feng, Z. Hu, H. Ming, and A. Wang, “Resolution-Enhanced Holographic Stereogram Based on Integral Imaging Using Moving Array Lenslet Technique,” Appl. Phys. Lett. 113(22), 221109 (2018). [CrossRef]  

26. Z. Wang, G. Lv, Q. Feng, A. Wang, and H. Ming, “Simple and Fast Calculation Algorithm for Computer-Generated Hologram Based on Integral Imaging Using Look-up Table,” Opt. Express 26(10), 13322–13330 (2018). [CrossRef]  

27. Z. Wang, G. Lv, Q. Feng, A. Wang, and H. Ming, “Enhanced Resolution of Holographic Stereograms by Moving or Diffusing a Virtual Pinhole Array,” Opt. Express 28(15), 22755–22766 (2020). [CrossRef]  

28. T. Shimobaba and T. Ito, “Random phase-free computer-generated hologram,” Opt. Express 23(7), 9549–9554 (2015). [CrossRef]  

29. M. Lucente, “Interactive computation of holograms using a look-up table,” J. Electron. Imaging 2(1), 28–34 (1993). [CrossRef]  

30. Q. Wang, C. Ji, L. Li, and H. Deng, “Dual-View Integral Imaging 3D Display by Using Orthogonal Polarizer Array and Polarization Switcher,” Opt. Express 24(1), 9–16 (2016). [CrossRef]  

31. W. Zhao, Q. Wang, A. Wang, and D. Li, “Auto stereoscopic Display Based on Two-Layer Lenticular Lenses,” Opt. Lett. 35(24), 4127–4129 (2010). [CrossRef]  

32. H. Kang, T. Yamaguchi, and H. Yoshikawa, “Accurate Phase-Added Stereogram to Improve the Coherent Stereogram,” Appl. Opt. 47(19), D44 (2008). [CrossRef]  

33. H. Zhang, Y. Zhao, L. Cao, and G. Jin, “Fully Computed Holographic Stereogram Based Algorithm for Computer-Generated Holograms with Accurate Depth Cues,” Opt. Express 23(4), 3901–3913 (2015). [CrossRef]  

34. Z. Hao, L. Cao, and G. Jin, “Computer-generated hologram with occlusion effect using layer-based processing,” Appl. Opt. 56(13), F138–F143 (2017). [CrossRef]  
