This paper reports a fast method for generating a 2048x2048 digital Fresnel hologram at a rate of over 100 frames per second. Briefly, the object wave of an image is nonuniformly sampled and generated on a wavefront recording plane (WRP) that is close to the object scene. The sampling interval at each point on the WRP image is then modulated according to the depth map. Subsequently, the WRP image is converted into a hologram. The hologram generated with our proposed method, which is referred to as the warped WRP (WWRP) hologram, is capable of presenting a 3-D object at a faster speed than existing methods.
© 2015 Optical Society of America
The exploration of fast generation of digital holograms has been an area of interest in the past two decades, with the ultimate objective of generating holograms of three-dimensional (3-D) object scenes at video rates (25 to 30 frames per second). Numerous research works have been conducted to simplify the computationally intensive hologram generation process. For example, moderate reduction in the computation time has been achieved with look-up tables [1–3], virtual windows [4], multi-rate filters [5], and patch models [6]. There are also techniques that utilize hardware devices to speed up some of the core processes [7–9]. The fastest approach attained is probably the wavefront recording plane (WRP) method [10,11]. Briefly, the object wave on a virtual 2-D WRP that is close to the object scene is derived. For each object point in the scene, only a small zone of the diffraction fringe pattern is determined. Subsequently, the WRP is converted into the hologram. For a sparse object scene (i.e., a limited number of object points), a hologram can be generated at a high frame rate. However, the computation time increases proportionally with the number of object points, restricting the generation of holograms to a small, or a coarsely sampled, object image. In this paper we propose a fast algorithm for hologram generation that is independent of the number of object points. The algorithm only involves a pair of re-sampling operations and 4 Fast Fourier Transform (FFT) operations. The hologram generated with our proposed method, which will be presented in the following sections, is capable of preserving the depth information of a dense 3-D object scene.
2. Generating the warped wavefront recording plane (WWRP) hologram
Our proposed method for generating the WWRP hologram comprises 4 stages, and the following terminology is adopted. The source 3-D object is modeled as a 3-D surface, with the intensity and depth of each object point represented by a planar intensity image and a depth map, respectively. The intensity image, the depth map, the hologram, and the WRP are assumed to be identical in size, comprising the same numbers of columns and rows of square pixels of identical dimension.
Stage 1: re-sampling (pre-warping) the object intensity image
Since the intensity image, the depth map, the hologram, and the WRP are digital images, the default sampling interval is uniform in both the horizontal and the vertical directions. In this stage a new image, referred to as the 'pre-warped image,' is obtained by sampling pixels from the original intensity image according to the depth map. In other words, the sampling interval of the pre-warped image can be nonuniform. The rationale, as well as the criteria of the pixel mapping between the original and the pre-warped images, will be explained later in this paper. For the time being, we simply interpret the pre-warped image as a modified version of the original image.
Stage 2: generation of the WRP
In the 2nd stage, a WRP is generated from the pre-warped image. The WRP is a hypothetical plane that is placed at a close distance from, and parallel to, the object plane. The object wave on the WRP is obtained by convolving the pre-warped image with the free-space impulse response over this short distance, as given in Eq. (1).
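The convolution in Eq. (1) can be carried out in the spectral domain with one forward and one inverse FFT. The following is a minimal sketch in Python/NumPy; the function names, the Fresnel form of the transfer function, and all numerical parameters are our own illustrative choices, not taken from the paper.

```python
import numpy as np

def fresnel_transfer(shape, pitch, wavelength, z):
    """Transfer function of free-space propagation over distance z
    (Fresnel approximation of the angular spectrum); for a fixed z
    this can be pre-computed once and reused for every frame."""
    ny, nx = shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    return np.exp(1j * 2 * np.pi * z / wavelength) * \
           np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))

def propagate(field, pitch, wavelength, z):
    """One FFT pair realizes the spatial convolution of Eq. (1)."""
    H = fresnel_transfer(field.shape, pitch, wavelength, z)
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Because the transfer function has unit modulus, propagating forward by z and then backward by -z recovers the input field, which is a convenient sanity check for an implementation.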
Stage 3: re-sampling (warping) the WRP
In this stage, we explain how the depth map can be incorporated into the WRP with our proposed method. We note that, owing to the close proximity between the WRP and the object image, each object point only affects a small neighboring region on the WRP. We further assume that the depth map is generally smooth, so that within a small neighborhood of an object point the depth value is practically constant. The depth of each object point within such a region can then be altered by changing the sampling interval of the corresponding region on the WRP. To illustrate this, we consider a simple scenario of a small region on the WRP. The diffraction fringe pattern in the region is mainly contributed by object points that are close to the region, with almost the same depth. Object points that are farther away have less effect, and are neglected. Suppose the sampling interval in the region is increased by a constant factor; both the WRP and its corresponding image will be scaled by the same amount. The modified WRP signal is given by Eq. (2), which can be expressed as Eq. (3). From Eq. (3), we arrive at Eqs. (4) and (5). Equation (5) indicates that, due to the stretching of the sampling interval, the effective depth of the object points corresponding to the diffraction patterns in the region is relocated to a new value. At the same time, the original source image is changed into a scaled version. However, it can be easily seen that the original image can be preserved if the source term in Eq. (5) is replaced with the pre-warped image. This is the principle behind deriving the pre-warped image in Stage 1. Referring back to Eq. (5), if the depth of the object scene covered by the region is to be changed to a new value, the sampling interval has to be adjusted by the factor given in Eq. (6). The re-sampling process is illustrated in Fig. 1, based on a uniform sampling interval of, say, 0.6 as an example. The same principle can be easily extended to 2-D sampling with non-uniform sampling intervals. From the figure, it can be seen that each sample in the warped WRP is mapped from one of the samples in the original WRP.
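The depth-rescaling effect of stretching the sampling interval can be verified numerically with a 1-D Fresnel zone pattern: reading the pattern for depth z at an interval stretched by a factor k is identical to reading the pattern for depth z/k^2 at the original interval. This is a minimal check of the scaling relation behind Eqs. (2)-(6); the wavelength, pixel pitch, and array size below are assumed values for illustration only.

```python
import numpy as np

wavelength = 633e-9   # assumed red-laser wavelength (not from the paper)
N = 256

def zone_plate(z, pitch):
    """1-D Fresnel zone pattern exp(j*pi*x^2 / (lambda*z)) sampled at `pitch`."""
    x = (np.arange(N) - N // 2) * pitch
    return np.exp(1j * np.pi * x ** 2 / (wavelength * z))

# Stretching the sampling interval by k rescales the effective depth by k^2:
# the pattern for depth z read at interval k*pitch matches the pattern for
# depth z/k^2 read at the original interval.
k = 2.0
pitch = 8e-6
assert np.allclose(zone_plate(0.1, k * pitch), zone_plate(0.1 / k ** 2, pitch))
```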
Equation (9) shows that the depth information has been incorporated into the warped WRP image. However, the WRP is derived from the pre-warped image instead of the original intensity image. To preserve the original image, we simply generate the pre-warped image with the inverse mapping, as given in Eq. (12).
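Both re-sampling steps reduce to a nearest-neighbour gather: each output pixel fetches one source pixel whose position is accumulated from the per-pixel sampling intervals. The sketch below shows the horizontal case only; the function name and the exact accumulation rule are illustrative assumptions, as the paper's precise mapping is defined by its Eqs. (8)-(12).

```python
import numpy as np

def warp_rows(image, interval):
    """Nearest-neighbour horizontal re-sampling. `interval` holds the
    local sampling interval (in pixels) for every position; output
    positions are accumulated intervals, so the operation is pure
    memory addressing with no arithmetic on the pixel values."""
    out = np.zeros_like(image)
    for y in range(image.shape[0]):
        pos = np.cumsum(interval[y]) - interval[y, 0]      # source coordinates
        src = np.clip(np.rint(pos).astype(int), 0, image.shape[1] - 1)
        out[y] = image[y, src]                             # gather (addressing only)
    return out
```

With a unit interval everywhere the mapping is the identity, while intervals larger (or smaller) than one locally stretch (or compress) the image, which is how the depth map modulates the WRP.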
Stage 4: converting WRP to a hologram
In the final stage of our proposed method, the WRP is converted into a hologram positioned at a fixed distance from the WRP. This can be accomplished by convolving the warped WRP image with the free-space impulse response, as given by Eq. (13). The convolution in Eqs. (1) and (13) is realized in the spectral domain with a pair of Fast Fourier Transform (FFT) operations, as shown in Eqs. (14) and (15). The computation loading of our proposed method is summarized in Table 1, and we explain the evaluation as follows. As the FFTs of the free-space impulse responses can be pre-computed in advance, Eqs. (14) and (15) can be realized with 4 FFT operations. Next, in Eq. (8), the sampling array can be deduced from the depth map with a small look-up table at a negligible amount of computation. The location of the new sample positions only involves 2 additions per pixel according to Eqs. (10) and (11). Moreover, in the computation of the sample positions, each column and each row (indexed with 'y' and 'x', respectively) can be evaluated independently of the others. As such, both of these processes can be realized in a parallel fashion, and the processes in Table 1 can be computed in less than 5 ms with a graphics processing unit (GPU). The re-sampling processes in Eqs. (9) and (12) are simply memory-addressing operations, and are basically computation-free in practice. In the above evaluation, extra time is involved in transferring the image data between the computing device and the source/destination units. However, these additional overheads are not directly related to our proposed method, and hence are not included as part of the computation loading.
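Putting the stages together, the whole pipeline costs exactly four FFTs plus two addressing-only re-sampling passes. The sketch below assumes illustrative distances, pitch, and wavelength, and pre-computes the two transfer-function spectra, as the paper suggests; the warp step is passed in as a callable so any Stage 1/Stage 3 gather can be plugged in.

```python
import numpy as np

def transfer(shape, pitch, wl, z):
    """Pre-computable Fresnel transfer function for propagation distance z."""
    fy = np.fft.fftfreq(shape[0], d=pitch)
    fx = np.fft.fftfreq(shape[1], d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    return np.exp(-1j * np.pi * wl * z * (FX ** 2 + FY ** 2))

# Assumed parameters for illustration (not the paper's values)
SHAPE, PITCH, WL = (512, 512), 8e-6, 633e-9
H1 = transfer(SHAPE, PITCH, WL, 0.002)   # short image-to-WRP distance
H2 = transfer(SHAPE, PITCH, WL, 0.1)     # WRP-to-hologram distance

def wwrp_hologram(prewarped, warp):
    """Eq. (14): image -> WRP (2 FFTs); Stage 3 warp (addressing only);
    Eq. (15): warped WRP -> hologram (2 FFTs)."""
    wrp = np.fft.ifft2(np.fft.fft2(prewarped) * H1)
    wrp = warp(wrp)
    return np.fft.ifft2(np.fft.fft2(wrp) * H2)
```

Since both transfer functions have unit modulus, the pipeline conserves the total field energy when the warp is the identity, which is a useful implementation check.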
3. Experimental results
Our proposed method is evaluated with a pair of 3-D models. Each model is represented by an intensity image and a depth map, as shown in Figs. 2(a)-2(d). The depth map shows the relative distance, with the nearest and the farthest distances from the view-point represented in black and white intensity, respectively. The first model 'A' is a wedge (progressively increasing depth from left to right) with a highly textural image, while the second model 'B' is a cone having the texture of the grid image, with the tip of the cone being nearest to the hologram. The depth range of both models is 0.02 m.
The size of the object image, the WRP, and the hologram are assumed to be identical, each comprising 2048x2048 pixels. The wavelength of the optical beam and the pixel size of the hologram are fixed, and the distance between the WRP and the hologram is set to 0.1 m. For each model, the following steps are conducted. First, Eq. (8) is applied to generate the sampling interval matrix from the depth map. Next, Eqs. (10) and (11) are employed to generate the revised sampling positions, which are used to derive the pre-warped image. Equation (14) is then applied to convert the pre-warped image into the WRP image, from which a warped WRP is generated with Eq. (9). Subsequently, Eq. (15) is applied to convert the warped WRP into the hologram, which is separated by a distance of 0.1 m from the WRP. To evaluate the hologram generated with our proposed method, we have computed the numerically reconstructed images at 3 selected focal planes at increasing distances from the WRP. The results are shown in Figs. 3(a)-3(c) and Figs. 4(a)-4(c). For model 'A', we observe that when the focal plane is nearest to the hologram, the textural patterns on the left side of the reconstructed image in Fig. 3(a) (the side closer to the hologram) are clearer than the rest of the image. The clear region moves to the middle in Fig. 3(b), and to the right in Fig. 3(c), as the focal distance is increased. Similar results are attained for model 'B'. In Fig. 4(a), the part of the grid pattern that is closest to the hologram is clearly reconstructed. The mid-section of the cone is clear at the reconstruction distance in Fig. 4(b), while the bottom section is reconstructed with clarity in Fig. 4(c). The above observations become even more apparent when the images are zoomed in. These evaluations show that the hologram generated by our proposed method is capable of preserving the depth information, as well as the intensity, of the source object.
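Numerical reconstruction at a chosen focal plane amounts to back-propagating the hologram with the conjugate transfer function and taking the intensity. The following is a standard sketch of that step (our assumption of the reconstruction procedure, since the paper does not spell it out); a point source should refocus to a sharp peak at its original location.

```python
import numpy as np

def reconstruct(hologram, pitch, wl, z):
    """Back-propagate the hologram by distance z and return the intensity
    at the focused plane, using the conjugate Fresnel transfer function."""
    fy = np.fft.fftfreq(hologram.shape[0], d=pitch)
    fx = np.fft.fftfreq(hologram.shape[1], d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    H = np.exp(1j * np.pi * wl * z * (FX ** 2 + FY ** 2))  # conjugate kernel
    return np.abs(np.fft.ifft2(np.fft.fft2(hologram) * H)) ** 2
```

Sweeping z over the object's depth range reproduces the focus/defocus behavior described above for Figs. 3 and 4: regions whose depth matches z appear sharp while the rest blur.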
In this paper, we have proposed a fast method for the generation of Fresnel holograms that only involves 2 re-sampling processes and 4 FFT operations. Compared with existing methods based on the WRP framework, our proposed method has the following advantages. First, the initial WRP is generated directly from a planar image instead of from individual object points. As such, the process can be realized swiftly with a pair of FFT operations, and the computation time is independent of the number of object points. Second, the depth information at each point of the object scene is incorporated into the initial WRP by adjusting the local sampling intervals. The amount of arithmetic calculation involved in the re-sampling process is insignificant, as compared with computing the WRP fringe patterns for individual object points. Third, there is no need to reserve a large look-up table to store pre-computed WRP fringe patterns. Fourth, the hologram is capable of representing a dense object scene without the need to down-sample the intensity image, hence preserving favorable quality in reconstructed images that contain high textural content. Our evaluation has demonstrated the generation of a 2048x2048 hologram, representing an image scene of similar size comprising complicated textures, in less than 10 ms (i.e., over 100 frames per second). On the downside, the re-sampling process in Eq. (12) imposes certain degradation on the source image, but as shown in the experimental results, the effect is not prominent for a depth range of 0.02 m. For a wider depth range, which involves a higher degree of re-sampling, the degradation will become progressively more obvious. The speed performance of our proposed method, as compared with the WRP method in [11], is about the same in the generation of a hologram for the same number of object points (based on the GPU adopted in our work), and the image quality is also similar.
However, as the number of object points increases, the number of parallel threads becomes insufficient for concurrent processing of all the object points, owing to the limited number of parallel processors in the GPU. As a result, the hologram generation task has to be conducted sequentially in multiple rounds, lowering the overall computation speed. In our method, the speed is fixed for a given hologram size and independent of the number of object points.
References and links
1. S.-C. Kim, J. M. Kim, and E.-S. Kim, “Effective memory reduction of the novel look-up table with one-dimensional sub-principle fringe patterns in computer-generated holograms,” Opt. Express 20(11), 12021–12034 (2012). [CrossRef] [PubMed]
4. T. Yamaguchi, G. Okabe, and H. Yoshikawa, “Real-time image plane full-color and full-parallax holographic video display system,” Opt. Eng. 46(12), 125801 (2007). [CrossRef]
7. K. Murano, T. Shimobaba, A. Sugiyama, N. Takada, T. Kakue, M. Oikawa, and T. Ito, “Fast computation of computer-generated hologram using Xeon Phi coprocessor,” Comput. Phys. Commun. 185(10), 2742–2757 (2014). [CrossRef]
8. A. Sugiyama, N. Masuda, M. Oikawa, N. Okada, T. Kakue, T. Shimobaba, and T. Ito, “Acceleration of computer-generated hologram by greatly reduced array of processor elements with data reduction,” Opt. Eng. 53(11), 113104 (2014). [CrossRef]
9. T. Shimobaba, T. Ito, N. Masuda, Y. Ichihashi, and N. Takada, “Fast calculation of computer-generated-hologram on AMD HD5000 series GPU and OpenCL,” Opt. Express 18(10), 9955–9960 (2010). [CrossRef] [PubMed]
10. T. Shimobaba, N. Okada, T. Kakue, N. Masuda, Y. Ichihashi, R. Oi, K. Yamamoto, and T. Ito, “Computer holography using wavefront recording method,” in Digital Holography and Three-Dimensional Imaging, OSA Technical Digest (online), OSA, paper DTu1A.2 (2013).
11. T. Shimobaba, H. Nakayama, N. Masuda, and T. Ito, “Rapid calculation algorithm of Fresnel computer-generated-hologram using look-up table and wavefront-recording plane methods for three-dimensional display,” Opt. Express 18(19), 19504–19509 (2010). [CrossRef] [PubMed]