A large-scale full-parallax computer-generated hologram (CGH) with four billion (4 G) pixels is created to reconstruct a fine true 3D image of a scene with occlusions. The polygon-based method numerically generates the object field of a surface object, whose shape is provided by a set of vertex data of polygonal facets, while the silhouette method makes it possible to reconstruct the occluded scene. A novel technique using a segmented frame buffer is presented for handling and propagating large wave fields even in the case where the whole wave field cannot be stored in memory. We demonstrate that the full-parallax CGH, calculated by the proposed method and fabricated by a laser lithography system, reconstructs a fine 3D image accompanied by a strong sensation of depth.
© 2009 Optical Society of America
1. Introduction
The technology of computer-generated holograms (CGHs) is the counterpart of computer graphics in holography. The technology has a long history and is sometimes referred to as the ultimate 3D technology, because CGHs not only produce a sensation of depth but also generate light from the objects themselves. However, currently available CGHs cannot yet produce fine true 3D images accompanied by a strong sensation of depth. Such fine CGHs commonly require the following two conditions: first, the CGHs must have a large viewing zone to acquire the autostereoscopic property, i.e., motion parallax; second, the dimensions of the CGHs must be large enough for the reconstructed 3D object to be observed with both naked eyes. Both of these conditions lead to an extremely large number of pixels for a CGH, because a large viewing zone requires high spatial resolution and large dimensions require many pixels at that resolution. In addition, scenes with occlusions should be reconstructed to give a CGH a strong sensation of depth, because the ability to handle occlusions is one of the most important mechanisms in the perception of 3D scenes.
Fine 3D images could not be produced by CGH technology because there was no practical technique to compute the object fields of such high-definition CGHs reconstructing 3D scenes with occlusions. Computation of the object fields must be performed for an extremely large number of sampling points within a reasonable span of time (typically a few hours to a few days), but this is very difficult to achieve even on a state-of-the-art computer. Research during the past few decades has centered on ray-oriented point-based methods, which are widely used for calculating wave fields. However, point-based methods are very time consuming, especially when full-parallax CGHs of surface objects are computed. Although many techniques to accelerate point-based methods have been proposed and developed in the past decade [1, 2, 3, 4, 5, 6, 7], the computation time or CGH size is still insufficient to produce fine full-parallax CGHs. Furthermore, there is no practical point-based algorithm for full-parallax handling of occlusions in 3D scenes. This remains an unresolved and fundamentally difficult problem for the point-based methods.
A field-oriented polygon-based method has been proposed [8] to overcome the limits of the point-based methods. In this technique, an object is composed of polygonal facets, and each polygon is regarded as a surface source of light. The field emitted by a polygon is referred to as the polygon field, and the field of an object is computed as the sum of the polygon fields. The numerical operations necessary for computing a polygon field are a double fast Fourier transform (FFT) and an interpolation for each polygon. Consequently, computation of a polygon field is slower than that of the spherical wave used in point-based methods. However, the number of polygons necessary for forming object surfaces is much smaller than the number of point sources of light. As a result, the total computation time of the object field by the polygon-based method is shorter than that of point-based methods. Several researchers have already proposed improvements on our method that compute faster but sacrifice texture mapping and uniform diffusiveness [9, 10].
In calculating object fields, light-shielding by obstacles should be realized to add an effect of mutual occlusion to the 3D scene reconstructed by the CGH. We have also proposed a practical solution to this problem. It is based on wave-optical consideration of field propagation [11, 12, 13]. This technique is referred to as silhouette masking or simply the silhouette method. Silhouette masking is applicable not only to polygon-based methods but also to point-based methods. However, silhouette masking best matches the polygon-based method because both methods use field propagation operations.
Although these techniques have advantages, the polygon-based methods have so far been unable to produce fine 3D images because of the difficulty of handling very large wave fields. Since the polygon-based methods use numerical field propagation, the size of the main memory needed for storing complex-valued wave fields limits the number of pixels of a CGH. In this study, we propose a technique to overcome this limit on the number of pixels in polygon-based methods. In the proposed technique, the large object field is divided into small segments. Only a few segments of the field are stored in the main memory at a time and processed by multiple processors. Field propagation is also performed on segmented fields by using the shifted Fresnel method [14].
This investigation is intended to produce a CGH by combining these techniques. We demonstrate the implementation of the large-scale CGH by using a laser lithography system. The CGH reconstructs fine true 3D images that can be appreciated as works of art.
2. 3D Scene and 3D Object
The CGH created in this research is named “The Venus” because the 3D object is similar to the famous statue of the Venus de Milo. The Venus statue is composed of 1396 polygons, and its shape is provided as a set of vertex coordinates of the polygons. The scene including the Venus statue and the hologram coordinates used in the computation are shown in Fig. 1. The hologram coordinate system is the world coordinate system of the 3D scene, in which the hologram is placed. The Venus statue is placed behind the hologram. In addition, wallpaper, a simple planar image featuring a binary check pattern, is placed behind the Venus statue to provide an occlusion and enhance the viewer’s sensation of depth. The wallpaper and the hologram have the same dimensions. Some parameters used for creating the Venus CGH are summarized in Table 1.
3. Polygon-Based Method for Computing 3D Objects
In the polygon-based method, a wave field, given in the tilted plane that includes the polygon, represents the polygonal surface source of light. The polygon field in a plane parallel to the hologram is computed from the wave field in the tilted plane by using rotational transformation of wave fields [15, 16]. This section summarizes the details of the polygon-based method proposed in [8].
3A. Local Coordinate System and Rotation Matrix
The procedure for computing the object field of a triangular prism is shown in Fig. 2 as an example. There are three polygons on the front face of the object.
A tilted local coordinate system is defined for each polygon, as shown in Fig. 2a. One more set of local coordinate systems, referred to as parallel local coordinates, is also defined for each polygon. All axes of the parallel local coordinate systems are parallel to those of the hologram coordinates, but the origins are shared with the corresponding tilted coordinate systems. The parallel local coordinates are transformed from or to the tilted local coordinates by a rotation matrix.
There is an arbitrariness in determining the rotation matrix $\mathbf{T}$. We use the Rodrigues rotation formula, $\mathbf{T} = \cos\theta\,\mathbf{I} + (1-\cos\theta)\,\mathbf{n}\mathbf{n}^{\mathrm{T}} + \sin\theta\,[\mathbf{n}]_{\times}$, where $\mathbf{n}$ is a unit vector along the rotation axis, $\theta$ is the rotation angle, $\mathbf{I}$ is the identity matrix, and $[\mathbf{n}]_{\times}$ is the cross-product matrix of $\mathbf{n}$. Since $\mathbf{T}$ is orthogonal, the inverse transformation is given by the transpose $\mathbf{T}^{\mathrm{T}}$.
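As an illustration, the Rodrigues formula above can be written directly in code. The following NumPy sketch is only illustrative (the paper's software was implemented in C++); the function name is an assumption:

```python
import numpy as np

def rodrigues_matrix(axis, theta):
    """Rotation matrix about a unit axis by angle theta (Rodrigues formula):
    T = cos(theta) I + (1 - cos(theta)) n n^T + sin(theta) [n]_x.
    """
    n = np.asarray(axis, dtype=float)
    n = n / np.linalg.norm(n)          # ensure a unit rotation axis
    K = np.array([[0.0, -n[2], n[1]],  # cross-product (skew-symmetric) matrix
                  [n[2], 0.0, -n[0]],
                  [-n[1], n[0], 0.0]])
    return (np.cos(theta) * np.eye(3)
            + (1.0 - np.cos(theta)) * np.outer(n, n)
            + np.sin(theta) * K)
```

Because the matrix is orthogonal, the inverse transformation needed to go back between the coordinate systems is simply its transpose.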
3B. Surface Functions
The distribution of complex amplitudes, referred to as a surface function, is defined in the polygon plane of the tilted local coordinates of the polygon. This function plays an important role in giving the polygon its appearance in the optical reconstruction. Examples of the surface functions of the triangular prism are shown in Fig. 2b.
The surface function for a polygon is generally given in the form $h(x, y) = a(x, y)\exp[i\phi(x, y)]$, where $a(x, y)$ is a real-valued amplitude and $\phi(x, y)$ is a phase distribution. In the shading model of [8], the amplitude has a different value dependent on the polygon. The texture of the polygon can be given as a modulation of the amplitude. In this case, $a$ is not constant but a function of $x$ and $y$.
If the surface function is a simple polygon-shaped distribution of real-valued amplitude, the surface function behaves like an aperture in the shape of the polygon. In this case, light from the polygon does not spread over the hologram. The amplitude of the surface function should therefore be multiplied by a phase distribution to give diffusiveness to the polygon field. Random functions are candidates for the diffusive phase. However, fully random functions are not appropriate, because random phases are discontinuous and have large spatial frequencies; as a result, fully random phases usually cause speckles in the reconstruction. In this paper, the speckle-free quasi-random phase distribution proposed as a digital diffuser for Fourier holograms [17] is used for the phase distribution.
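The specific speckle-free diffuser of [17] is not reproduced here; as a rough illustration of the underlying idea of a band-limited quasi-random phase, the following sketch low-pass filters a fully random phase so that its spatial frequency stays bounded. The function name and the cutoff parameter are assumptions:

```python
import numpy as np

def smooth_random_phase(shape, cutoff=0.2, seed=0):
    """Band-limited pseudo-random phase: a simple stand-in for a digital
    diffuser.  A fully random phase is low-pass filtered in Fourier space so
    that the phase varies smoothly and its spatial frequency stays bounded.

    cutoff: radius of the retained frequency band, as a fraction of Nyquist.
    """
    rng = np.random.default_rng(seed)
    phase = rng.uniform(-np.pi, np.pi, shape)       # fully random phase
    fy = np.fft.fftfreq(shape[0])[:, None]          # normalized frequencies
    fx = np.fft.fftfreq(shape[1])[None, :]
    lowpass = (fx**2 + fy**2) <= (0.5 * cutoff)**2  # keep only low frequencies
    spectrum = np.fft.fft2(np.exp(1j * phase)) * lowpass
    return np.angle(np.fft.ifft2(spectrum))         # smoothed phase in (-pi, pi]

# surface function sketch: amplitude times diffusive phase factor
# h = a * np.exp(1j * smooth_random_phase(a.shape))
```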
The wave field of a polygon is diffused by the diffuser phase. However, if the carrier frequency of the surface function is zero on the surface of the polygon, the emitted wave field propagates mainly in the direction normal to the polygon, and thus the polygon field cannot reach the hologram effectively. The center of the spectrum of the surface function should be shifted so that light travels along the optical axis of the parallel local coordinates and intersects the plane parallel to the hologram almost perpendicularly. This shift operation is a part of the rotational transformation described in the following section.
3C. Rotational Transformation and Spectrum Remapping
The polygons forming the object surfaces are usually not parallel to the hologram, and thus wave fields in the hologram plane cannot be obtained simply by using ordinary formulas for the propagation of wave fields between parallel planes, such as the Fresnel diffraction formula. Before translational propagation, the surface function must be rotationally transformed into a wave field given in a plane parallel to the hologram by using the formula for rotational transformation [15, 16].
Surface functions are first Fourier transformed, $H(u, v) = \mathcal{F}\{h(x, y)\}$, where $\mathcal{F}\{\cdot\}$ denotes the 2D Fourier transform and $(u, v)$ are the Fourier frequencies in the tilted plane.
Supposing that the transformation matrix for changing the parallel local coordinates into the tilted local coordinates is the matrix $\mathbf{T}$ defined in Subsection 3A, the spectrum in the parallel plane is obtained by remapping the Fourier frequencies of the tilted plane: $F(u, v) \cong \hat{F}(\hat{u}(u, v), \hat{v}(u, v))$, where $(\hat{u}, \hat{v}, \hat{w})^{\mathrm{T}} = \mathbf{T}(u, v, w(u, v))^{\mathrm{T}}$ and $w(u, v) = (\lambda^{-2} - u^2 - v^2)^{1/2}$ [8]. The spectrum shift described in Subsection 3B is included in this remapping. However, there are a few exceptions: for some polygons whose normal vectors are almost parallel to the hologram, fine adjustments need to be made to the shift values. Note that the nearly equal sign in the remapping implies that an interpolation is required, because coordinate rotation not only moves the origin but also distorts the sampling grid [16].
The object field is a superposition of the polygon fields in a plane that is parallel to the hologram. This parallel plane is referred to as the object plane, as shown in Fig. 1. Since the positions of the polygons are not the same, the polygon fields should be translationally propagated back or forth along the axis normal to the hologram before superposition. We use the angular spectrum method [18] for translational propagation. Supposing that the object plane is placed at $z = z_{\mathrm{obj}}$ and the origin of the local coordinates is positioned at $z = z_0$, translational propagation in the Fourier space is performed by multiplying the spectrum by the transfer function $\exp[i2\pi w(u, v)(z_{\mathrm{obj}} - z_0)]$, where $w(u, v) = (\lambda^{-2} - u^2 - v^2)^{1/2}$, as shown in Fig. 2c.
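The angular spectrum propagation described above can be sketched as follows. The grid parameters and function name are illustrative, not from the paper; evanescent components are simply discarded:

```python
import numpy as np

def angular_spectrum_propagate(field, dz, pitch, wavelength):
    """Translational propagation of a sampled field over distance dz by the
    angular spectrum method: the spectrum is multiplied by the transfer
    function exp(i 2 pi w dz), with w = sqrt(1/lambda^2 - u^2 - v^2).
    """
    ny, nx = field.shape
    u = np.fft.fftfreq(nx, d=pitch)[None, :]   # spatial frequencies (1/m)
    v = np.fft.fftfreq(ny, d=pitch)[:, None]
    w_sq = 1.0 / wavelength**2 - u**2 - v**2
    w = np.sqrt(np.maximum(w_sq, 0.0))
    # propagating components get the phase factor; evanescent ones are dropped
    H = np.where(w_sq > 0, np.exp(2j * np.pi * w * dz), 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

The transfer function has unit modulus for propagating components, so the energy of a band-limited field is conserved by this operation.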
Finally, the object field is given by the superposition of all polygon fields in the object plane, $O(x, y) = \sum_n f_n(x, y)$, where $f_n(x, y)$ is the field of polygon $n$ in the object plane, as shown in Fig. 2d. Note that the object plane is not the same as the hologram plane. The object plane should be placed so that it slices the object at the center, as shown in Fig. 1. The object field in the hologram plane is computed by translational propagation of the object field in the object plane.
The object field is not calculated directly in the hologram plane because calculating it in the object plane and then propagating it is faster. Since the polygons are closer to the object plane than to the hologram plane, the polygon fields are not diffracted as much in the object plane. In other words, the area over which a polygon field must be calculated is much smaller than it would be in the hologram plane. As a result, the object field can be computed much faster in the object plane than in the hologram plane.
Furthermore, there is another significant reason: light shielding of the wallpaper field is necessary to avoid a phantom image of the 3D object, as explained in Section 5.
4. Segmented Computation of Object Fields by the Polygon-Based Method
4A. Segmented Frame Buffers
The wave field of Eq. (15) is a distribution of complex-valued amplitudes and is generally treated as a sampled 2D array of two floating-point variables in the numerical implementation. This 2D array is referred to as the frame buffer. The memory size required for the frame buffer for the surface function and the parallel local field directly depends on the dimensions of the polygon. These are generally much smaller than the memory size for the object field if the object is constructed of curved surfaces, because curved surfaces are composed of many small polygons. In contrast, the memory size for the object field is excessively large.
Since two single-precision floating-point numbers represent a complex value in our implementation, the memory size necessary for the frame buffer is 8N bytes for an array of N sampling points. Therefore, the memory size required for storing the whole object field in the object plane is 32 Gbytes, because our CGH has four billion pixels. We note that this estimation is only for a single frame buffer. Other memory spaces required for running the program, such as the operating system area, code area, temporary frame buffers, I/O buffers, and so on, are not included. As a result, segmentation of the frame buffer is essential for producing the Venus CGH by using ordinary personal computers (PCs). The segmented structure of the frame buffer for the object field is shown in Fig. 3. The frame buffer is divided into rectangular segments of identical size. Only a few segments are loaded in memory, and the others are saved as files on a secondary storage medium such as a hard disk drive.
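The segmented frame buffer can be sketched as a small class that keeps segments as files on disk and loads them on demand. The class and file layout below are hypothetical, intended only to illustrate the load-process-save cycle described above:

```python
import os
import numpy as np

# A complex sample is two single-precision floats, i.e. 8 bytes per pixel,
# so a four-billion-pixel object field needs roughly 4e9 * 8 = 32e9 bytes.
BYTES_PER_SAMPLE = 8

class SegmentedFrameBuffer:
    """Minimal sketch of a segmented frame buffer: the field is divided into
    rectangular segments; only segments in use reside in memory, the rest
    are kept as files on disk (illustrative, not the authors' code)."""

    def __init__(self, seg_shape, directory):
        self.seg_shape = seg_shape
        self.directory = directory

    def _path(self, i, j):
        return os.path.join(self.directory, f"seg_{i}_{j}.npy")

    def load(self, i, j):
        """Load a segment from disk, or return a fresh zeroed segment."""
        path = self._path(i, j)
        if os.path.exists(path):
            return np.load(path)
        return np.zeros(self.seg_shape, dtype=np.complex64)

    def save(self, i, j, data):
        """Write the segment back to disk so it can be purged from memory."""
        np.save(self._path(i, j), data.astype(np.complex64))
```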
4B. Computation of Object Field by Using the Segmented Frame Buffer
The polygon fields stored in the small frame buffers are numerically propagated to the object plane, and then all the polygon fields are gathered and summed as described in Eqs. (13)–(15). The segmented frame buffer is used for the summation in the object plane. Polygon fields are summed over the few segments loaded in memory. When the summation process is finished in a segment, the segment is saved to a file. After the segment is purged from memory, the next segment is loaded and processed in memory. This process is parallelized; i.e., each processor handles one segment, and multiple segments are processed simultaneously by multiple processors sharing the memory. Therefore, the number of segments simultaneously processed in memory is the same as the number of processors used for computation.
In the summation process, discrimination of polygons is important for avoiding unnecessary computation of the polygon fields. Polygons whose fields do not touch the current segment should be culled when computing the object field of the segment. For example, in Fig. 3 a polygon whose diffraction area covers segment (2, 1), which is processed by CPU3, should be computed and totaled in that segment, whereas polygons whose fields do not reach the segment should not be processed by CPU3. This discrimination is achieved by estimating the maximum diffraction area of each polygon.
4C. Estimation of Maximum Diffraction Area of a Polygon
The Venus object is composed of 718 polygons. Since the processors scan all the polygons for each segment every time, discrimination should be made as fast as possible. It is not easy to calculate the exact diffraction area of a polygon in the object plane, and an exact area is unnecessary for discrimination. We use a simple approach to estimate the maximum diffraction area, as shown in Fig. 4. The maximum diffraction area of a vertex of a polygon can be obtained from the maximum diffraction angle $\theta_{\max} = \sin^{-1}[\lambda/(2\delta)]$, where $\delta$ is the sampling pitch of the field, which limits the spatial frequencies that the sampled field can carry.
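A conservative bounding-box estimate of the diffraction area can be sketched as follows: each vertex is expanded by the radius that the maximum diffraction angle permits at its distance from the object plane. The function names and the axis-aligned-box simplification are assumptions:

```python
import numpy as np

def max_diffraction_bbox(vertices, z_obj, pitch, wavelength):
    """Conservative bounding box of a polygon's diffraction area in the
    object plane.  The maximum diffraction angle follows from the sampling
    pitch (Nyquist limit): sin(theta_max) = wavelength / (2 * pitch).

    vertices: (n, 3) array of polygon vertices in hologram coordinates.
    z_obj: z position of the object plane.
    """
    v = np.asarray(vertices, dtype=float)
    sin_t = min(wavelength / (2.0 * pitch), 1.0)
    tan_t = np.inf if sin_t >= 1.0 else sin_t / np.sqrt(1.0 - sin_t**2)
    # each vertex spreads light into a disk of radius |z_obj - z| * tan(theta)
    r = np.abs(z_obj - v[:, 2]) * tan_t
    return (np.min(v[:, 0] - r), np.max(v[:, 0] + r),
            np.min(v[:, 1] - r), np.max(v[:, 1] + r))

def overlaps(bbox, seg_rect):
    """Cull test: does the diffraction bbox touch a segment rectangle?"""
    xmin, xmax, ymin, ymax = bbox
    sx0, sx1, sy0, sy1 = seg_rect
    return not (xmax < sx0 or xmin > sx1 or ymax < sy0 or ymin > sy1)
```

Only polygons whose bounding box overlaps the current segment need their fields computed and summed there.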
4D. Numerical Propagation by Using Segmented Frame Buffers
Wave fields are propagated numerically using the shifted Fresnel method [14]. The method is suited for propagation using segmented frame buffers, because the sampling area in the destination plane can be shifted from the sampling area in the source plane, as shown in Fig. 5.
Suppose that the wave field in a segment of the destination plane is computed by numerical propagation of the wave field in the source plane. If the destination plane is sufficiently far from the source plane, the wave fields in all segments of the source plane contribute to the wave field in the segment of the destination plane. Therefore, the wave field in a destination segment is computed by summing shifted-Fresnel propagations from every segment of the source plane.
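This segment-pairwise propagation can be organized as a double loop over destination and source segments. In the sketch below, `shifted_prop` is a placeholder for a shifted-Fresnel propagation kernel (the actual kernel is that of [14]; its signature here is hypothetical), and only the accumulation structure is shown:

```python
import numpy as np

def propagate_segmented(load_src, save_dst, n_seg, shifted_prop):
    """Sketch of long-distance propagation with segmented frame buffers.
    Every source segment contributes to every destination segment, so
    destination segment (p, q) is the sum of one shifted propagation from
    each source segment (m, n).

    load_src(m, n)       -> complex array of a source segment
    save_dst(p, q, data) -> store a finished destination segment
    shifted_prop(f, s)   -> placeholder propagation with segment offset s
    """
    for p in range(n_seg):
        for q in range(n_seg):
            acc = None
            for m in range(n_seg):
                for n in range(n_seg):
                    contrib = shifted_prop(load_src(m, n), (p - m, q - n))
                    acc = contrib if acc is None else acc + contrib
            save_dst(p, q, acc)
```

Because each destination segment is independent, the outer two loops parallelize naturally over processors, matching the segment-per-processor scheme of Subsection 4B.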
5. Light-Shielding by Silhouette Method
5A. Generation of Wallpaper Field
The wallpaper is a planar object and is given by a texture image. To give the texture image the appearance of wallpaper that is perpendicular to the optical axis and has the same dimensions as the hologram, the texture image is oversampled with the same sampling pitches as the hologram. The oversampled texture image is regarded as an amplitude distribution of a wave field and is handled by the same kind of segmented frame buffer as the object field. We multiply the oversampled texture image by the same diffusive phase as in Eq. (4) to give diffusiveness to the wallpaper. Hence, the wallpaper field is given by the oversampled texture amplitude multiplied by the diffusive phase factor.
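A minimal sketch of the wallpaper-field generation: the texture is oversampled by an integer factor and used as the amplitude of the field. A plain random phase stands in here for the diffuser actually used in the paper; function name and parameters are assumptions:

```python
import numpy as np

def wallpaper_field(texture, factor, seed=0):
    """Oversample a texture to the hologram sampling grid and attach a
    diffusive phase; the texture values act as the field amplitude."""
    rng = np.random.default_rng(seed)
    # integer oversampling: each texel becomes a factor x factor block
    amp = np.kron(texture.astype(float), np.ones((factor, factor)))
    phase = rng.uniform(-np.pi, np.pi, amp.shape)  # stand-in diffusive phase
    return amp * np.exp(1j * phase)

# e.g. a binary check pattern, as in the Venus scene
check = np.indices((8, 8)).sum(axis=0) % 2
field = wallpaper_field(check, factor=4)
```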
5B. Silhouette Masking
The wallpaper field generated according to Eq. (18) should be numerically propagated to the hologram plane and superimposed on the object field, which is also propagated to the hologram plane. However, if the wallpaper field is simply propagated and added to the object field, the Venus object would be reconstructed as a phantom image, because simple superposition allows light to pass through the Venus object. To shield the wallpaper behind the Venus object and prevent the Venus from becoming a phantom image, the propagation process should be divided into two stages, as shown in Fig. 6. The wallpaper field is first propagated to the object plane and then masked with the silhouette of the Venus object, as shown in Fig. 6a. The silhouette mask is simply obtained by orthogonal projection of the polygons forming the object onto the object plane; the mask is zero inside the projected silhouette and unity outside it. The object field shown in Fig. 6b is then superimposed on the masked wallpaper field; thus, the object field combined with the wallpaper field is the sum of the object field and the masked wallpaper field. The combined field shown in Fig. 6c is finally propagated to the hologram plane, and the field of the 3D scene is computed.
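Once the mask is available, the masking stage reduces to a one-line field operation. In this sketch the silhouette array is 1 inside the projected silhouette and 0 outside, so the wallpaper contribution is zeroed behind the object (function name is an assumption):

```python
import numpy as np

def silhouette_combine(object_field, wallpaper_field, silhouette):
    """Silhouette masking: the wallpaper field, already propagated to the
    object plane, is zeroed inside the object's projected silhouette before
    the object field is superimposed, so no background light passes through
    the object.  `silhouette` is 1 inside the silhouette, 0 outside.
    """
    return object_field + (1.0 - silhouette) * wallpaper_field
```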
6. Creation of “The Venus”
6A. Producing Fringe Pattern
The fringe intensity of the CGH is generated by numerical interference of the combined field with a reference field in the hologram plane, $I(x, y) = |O'(x, y) + R(x, y)|^2$, where $O'(x, y)$ is the combined field and $R(x, y)$ is the reference field.
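A sketch of the fringe generation: the combined field is interfered with a tilted plane reference wave and the intensity is binarized, mimicking the binary amplitude pattern produced by the lithography step described below. The off-axis angle, the mean-level threshold rule, and the function name are assumptions:

```python
import numpy as np

def binary_fringe(combined_field, theta, pitch, wavelength, threshold=None):
    """Interfere the combined field with a plane reference wave tilted by
    angle theta about the y axis, then binarize the resulting intensity
    I = |O' + R|^2 to obtain a binary amplitude fringe."""
    ny, nx = combined_field.shape
    x = (np.arange(nx) * pitch)[None, :]
    # plane reference wave with carrier frequency sin(theta)/lambda
    ref = np.exp(2j * np.pi * np.sin(theta) / wavelength * x)
    intensity = np.abs(combined_field + ref)**2
    if threshold is None:
        threshold = intensity.mean()   # simple mean-level binarization
    return (intensity >= threshold).astype(np.uint8)
```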
The DWL 66 laser lithography system made by Heidelberg Instruments GmbH was used for printing the fringe pattern onto photoresist coated on a chromium thin film on a fused-silica substrate. After development of the photoresist, the chromium thin film was etched to form the binary transmittance pattern. As a result, the fabricated CGH has a fringe of binary amplitude.
6B. Optical Reconstruction
Photographs of the optical reconstruction of the fabricated hologram are shown in Fig. 7. A He–Ne laser is used as the transmission light source. The photographs in Figs. 7a–7d, captured from various viewpoints, verify that the CGH reconstructs the 3D scene, because the occlusion of the Venus and the wallpaper changes when the viewpoint changes. This motion parallax creates a strong sensation of depth in the viewer’s perception.
The fabricated CGH can be reconstructed as a reflection hologram as well as a transmission hologram because of the high reflectance of the chromium thin film. Photographs of optical reconstruction in the reflection mode are shown in Fig. 8. An ordinary red LED is used for the reflection light source. The CGH also gives good reconstruction. However, the dimensions of the reconstructed objects are somewhat reduced in this case, because the LED works as a point source of light, whereas a plane wave is assumed as the reference wave.
7. Discussion
7A. Total Computation Time and the Bottleneck
The software for computing the CGH was implemented by using Intel C++ Compiler 10 and Visual Studio 2008. The Intel Math Kernel Library was used as the FFT package. Computation of the Venus CGH was executed on a PC with four AMD Opteron 852 processors and a shared memory of 32 Gbytes. The total computation time was approximately 45 h; the itemized computation times are shown in Fig. 9. The longest time was consumed by the numerical propagation using the shifted Fresnel method and the segmented frame buffer: 37 h, accounting for 82% of the total computation time.
Since the computation time of a 2D FFT is proportional to $N \log N$ for N sampling points, the computation time of the shifted Fresnel method, which is based on the fractional Fourier transform [19], also scales as $N \log N$ for a wave field sampled at N points without segmentation. With segmented frame buffers, every pair of source and destination segments requires one shifted-Fresnel propagation, so the total computation time is given approximately by the number of segment pairs multiplied by the cost of a segment-sized propagation.
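Under the segment-pairwise cost model sketched above (an assumed scaling relation, not measured data), the overhead of segmentation relative to a hypothetical single full-field transform can be estimated as follows:

```python
import numpy as np

def fft_time_units(n):
    """Relative cost of an FFT over n sampling points, up to a constant."""
    return n * np.log2(n)

# Illustration with assumed parameters: propagating an N-point field stored
# in S x S segments needs one shifted propagation per source/destination
# segment pair, i.e. (S*S)**2 transforms over N/(S*S) points each.
N = 2**32                      # a ~4-G-pixel field (illustrative)
S = 8                          # 8 x 8 segments (illustrative)
full = fft_time_units(N)
segmented = (S * S)**2 * fft_time_units(N / (S * S))
print(segmented / full)        # -> 52.0, the overhead factor in this model
```

The model counts only FFT work and ignores disk I/O, so it illustrates why the segmented propagation dominates the total computation time even though it makes the computation feasible at all.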
7B. Computation Time of Object Fields
The computation time of the object field is short compared with that of numerical propagation in creating the Venus CGH. However, when the number of polygons increases, as is needed to create more complex objects, or the number of segments decreases as a result of increasing the memory size installed in the PC, the computation time of the object field cannot be ignored. This computation time is the sum of the computation times of the individual polygon fields. The computation time of a polygon field is generally given by the sum of the times for the FFTs, the interpolation, and the translational propagation.
The computation times of the FFTs are proportional to $N_n \log N_n$, where $N_n$ is the number of sampling points of the field of polygon $n$. When the object plane is sufficiently close to the polygon, the number of sampling points of the surface function is the same as that of the polygon field. Computation times for interpolation and translational propagation are also governed by $N_n$.
As a result, the computation time of the polygon fields is determined by $N_n$, that is, the number of sampling points within the maximum diffraction area of the polygon in the object plane. This is the reason that the object plane should be placed close to the polygons. We note that in cases where the number of polygons of an object increases but the dimensions of the object do not change, the total computation time does not increase much, because an increase in the number of polygons leads to a reduction in the size of each polygon and thus a decrease in each value of $N_n$.
7C. Limitations of Silhouette Masking in the Venus CGH
The silhouette method adopted for the creation of the Venus CGH is a wave-optical algorithm for hidden-surface removal in CGHs. It is one of the lowest approximations of light shielding by obstacles [11]. This type of silhouette method shields only light that intersects a planar region given as the silhouette of the object in the object plane. Therefore, other light that does not intersect the planar silhouette region, but does intersect the 3D region of the Venus object, is not shielded. This means that the silhouette method does not work well if a viewer observes the side of the Venus. However, a viewer cannot take a side view of the Venus in our CGH, because we place the Venus behind the hologram. This is a constraint on the configuration of the 3D scene. In addition, this type of silhouette method cannot handle the self-occlusion of objects; i.e., self-occluded objects such as a torus may be reconstructed as partial phantom images.
Another type of silhouette approximation has been proposed [12]. In that method, light behind the 3D object is shielded not by the full silhouette of the object but by the individual silhouettes of the polygons. This type of silhouette method has the ability to correctly reconstruct self-occluded objects. However, it requires as many numerical propagations as there are polygons; therefore, it is not practical for high-definition CGHs such as our Venus. More rigorous light shielding by polygon surfaces without silhouette approximation has also been proposed [13], but that method is much more costly in computation time and is not applicable to a large-scale CGH.
8. Conclusion
A large-scale full-parallax CGH with 4 G pixels, named “The Venus,” is created by using the polygon-based and silhouette methods. The polygon-based method numerically generates the object fields in a shorter computation time than the conventional point-based methods, while the silhouette method makes it possible to reconstruct the occluded scene. These methods had previously suffered from the problem that the number of pixels is limited to a value dependent on the memory size installed in the PC, because numerical propagation of wave fields is necessary for these methods. To overcome this limitation, the technique of the segmented frame buffer and the shifted Fresnel method for numerical propagation are used in this research. These techniques also contribute to a reduction of computation time by making it easy to introduce high-level parallel processing. The Venus CGH was fabricated by using a laser lithography system. The optical reconstruction gives a fine true 3D image of the occluded scene, which creates a strong sensation of depth due to the motion parallax.
On the other hand, the shifted Fresnel method adopted in this investigation brings a constraint into the configuration of the 3D scene. A severe aliasing error occurs when the wave field of a large object is numerically propagated for a short distance by using the method. Future work will involve the development of a new method for numerical propagation, which is able to shift the sampling area and is applicable for a wide range of propagation distances.
The mesh data for the Venus object is provided courtesy of INRIA by the AIM@SHAPE Shape Repository. This work was supported by JSPS KAKENHI (21500114).
References
1. M. Lucente, “Interactive computation of holograms using a look-up table,” J. Electron. Imaging 2, 28–34 (1993). [CrossRef]
2. A. Ritter, J. Böttger, O. Deussen, M. König, and T. Strothotte, “Hardware-based rendering of full-parallax synthetic holograms,” Appl. Opt. 38, 1364–1369 (1999). [CrossRef]
3. K. Matsushima and M. Takai, “Recurrence formulas for fast creation of synthetic three-dimensional holograms,” Appl. Opt. 39, 6587–6594 (2000). [CrossRef]
4. H. Yoshikawa, S. Iwase, and T. Oneda, “Fast computation of Fresnel holograms employing difference,” Proc. SPIE 3956, 48–55 (2000).
5. T. Ito, N. Masuda, K. Yoshimura, A. Shiraki, T. Shimobaba, and T. Sugie, “Special-purpose computer HORN-5 for a real-time electroholography,” Opt. Express 13, 1923–1932 (2005). [CrossRef]
6. N. Masuda, T. Ito, T. Tanaka, A. Shiraki, and T. Sugie, “Computer generated holography using a graphics processing unit,” Opt. Express 14, 603–608 (2006). [CrossRef]
7. T. Shimobaba, A. Shiraki, Y. Ichihashi, N. Masuda, and T. Ito, “Interactive color electroholography using the FPGA technology and time division switching method,” IEICE Electron. Express 5, 271–277 (2008). [CrossRef]
8. K. Matsushima, “Computer-generated holograms for three- dimensional surface objects with shade and texture,” Appl. Opt. 44, 4607–4614 (2005). [CrossRef]
9. L. Ahrenberg, P. Benzie, M. Magnor, and J. Watson, “Computer generated holograms from three dimensional meshes using an analytic light transport model,” Appl. Opt. 47, 1567–1574 (2008). [CrossRef]
10. H. Kim, J. Hahn, and B. Lee, “Mathematical modeling of triangle-mesh-modeled three-dimensional surface objects for digital holography,” Appl. Opt. 47, D117–D127 (2008). [CrossRef]
11. K. Matsushima and A. Kondoh, “A wave optical algorithm for hidden-surface removal in digitally synthetic full-parallax holograms for three-dimensional objects,” Proc. SPIE 5290, 90–97 (2004).
12. A. Kondoh and K. Matsushima, “Hidden surface removal in full-parallax CGHs by silhouette approximation,” Syst. Comput. Jpn. 38(6), 53–61 (2007). [CrossRef]
13. K. Matsushima, “Exact hidden-surface removal in digitally synthetic full-parallax holograms,” Proc. SPIE 5742, 25–32 (2005).
14. R. P. Muffoletto, J. M. Tyler, and J. E. Tohline, “Shifted Fresnel diffraction for computational holography,” Opt. Express 15, 5631–5640 (2007). [CrossRef]
15. K. Matsushima, H. Schimmel, and F. Wyrowski, “Fast calculation method for optical diffraction on tilted planes by use of the angular spectrum of plane waves,” J. Opt. Soc. Am. A 20, 1755–1762 (2003). [CrossRef]
16. K. Matsushima, “Formulation of the rotational transformation of wave fields and their application to digital holography,” Appl. Opt. 47, D110–D116 (2008). [CrossRef]
17. R. Bräuer, F. Wyrowski, and O. Bryngdahl, “Diffusers in digital holography,” J. Opt. Soc. Am. A 8, 572–578 (1991). [CrossRef]
18. J. W. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, 1996), chap. 3.10.
19. D. H. Bailey and P. N. Swarztrauber, “The fractional Fourier transform and applications,” SIAM Rev. 33, 389–404 (1991). [CrossRef]