Recent developments in computer algorithms, image sensors, and microfabrication technologies make it possible to digitize the whole process of classical holography. This technique, referred to as digitized holography, allows us to create fine spatial three-dimensional (3D) images composed of virtual and real objects. In the technique, the wave field of real objects is captured over a wide area and at very high resolution using synthetic aperture digital holography. The captured field is incorporated into virtual 3D scenes that include two-dimensional (2D) digital images and 3D polygon mesh objects. The synthetic field is optically reconstructed using the technique of computer-generated holograms. The reconstructed 3D images present all depth cues like classical holograms but, unlike classical holograms, are digitally editable, archivable, and transmittable. The synthetic hologram, printed by a laser lithography system, has a wide viewing zone in full parallax and gives viewers a strong sensation of depth that has never been achieved by conventional 3D systems. An actual hologram, as well as the details of the technique, is presented to verify the proposed technique.
©2011 Optical Society of America
In classical holography, the wave field emitted from a real object is recorded on light-sensitive films in the form of a fringe pattern generated by optical interference with a reference wave. The wave field of the object is optically reconstructed by diffraction with the fringe pattern after chemical processing of the film. This is the three-dimensional (3D) spatial image produced by classical holography. Therefore, we need a real object to create a 3D image in classical holography. Classical holography makes it possible to reconstruct brilliant 3D images that provide almost all depth cues, because holograms reconstruct the light of the recorded 3D scene itself. However, these holograms cannot be stored digitally and transmitted through digital networks. It is also almost impossible to edit the 3D scene after recording the interference fringe. These features are useful for some specific purposes such as security but are inconvenient in 3D imaging.
Two types of techniques are continuously being developed to advance classical holography. One technique, commonly referred to as digital holography (DH) [1], captures the interference fringe pattern using digital image sensors. In DH, images are numerically reconstructed by digital processing of the captured fields. However, these are not 3D images but two-dimensional (2D) digital images displayed on a screen or printed on a piece of paper. Since the captured data contain the phase information of the light, this technique is mainly used for microscopy or some fields of metrology such as flow measurement.
The other technique is referred to as the generation of computer-generated holograms (CGHs) [2]. This technique numerically generates a fringe pattern using a digital computer and reconstructs wave fields of light by diffraction from the printed or displayed fringe pattern. In principle, this technique is capable of producing any light if one knows what light should be produced. In the case of CGHs, a real object is no longer required, yet all depth cues are reconstructed similarly to classical holograms. This is an ideal feature for modern 3D technology. Thus, 3D imaging with CGHs is sometimes referred to as the final 3D technology. However, for a long time, it was not possible to create fine CGHs of virtual 3D scenes such as those in modern computer graphics (CG). Instead of being used for 3D imaging, CGHs were treated as optical components such as optical filters. This was mainly due to the gigantic display resolution necessary to create 3D images using CGHs. Once the wave field is provided in the form of numerical data, a CGH can reconstruct it. However, it is extremely difficult, even using modern computers, to compute the high-definition wave field emitted from a virtual 3D scene. Printing or displaying 3D images using CGHs is also very difficult because of the extremely high definition necessary for reconstructing fine 3D images.
However, the recent development of polygon-based computer algorithms [3] allows us to calculate the high-definition wave field of a completely virtual 3D scene whose shape and properties are given by a numerical model. We have reported fully synthetic full-parallax computer holograms [4–9] that were printed with laser lithography equipment developed to fabricate photomasks and available in the market. These synthetic holograms are composed of more than a billion pixels and reconstruct brilliant 3D images of occluded virtual 3D scenes. The reconstructed 3D images are not motion pictures but stills, at least for now. However, the quality of the 3D images is comparable to that of classical holography. The reconstructed spatial 3D images are quite different from those of currently available 3D systems, which provide only binocular disparity; they give viewers a strong sensation of depth that has never been provided by conventional 3D systems. In this paper, we refer to the technique for creating such CGHs as computer holography and to the created hologram as a computer hologram. Figure 1 schematically shows the concept of computer holography.
Conventional CGHs or computer holograms reported so far mainly reconstruct virtual 3D scenes or objects. To reconstruct real objects through computer holography, three approaches can be adopted at this time. The easiest approach is to measure the shape of 3D objects using a laser rangefinder or 3D scanner and texture-map a photograph of the object onto the obtained polygon mesh [9,10]. However, the 3D image obtained by this approach may be regarded as a type of synthetic image rather than a real image. The second approach is to use a technique based on multiple viewpoint projection [11]. This may be the most promising approach, but, as far as we know, no high-quality hologram comparable to classical holograms has been reported for this technique.
The third approach is to capture real wave fields using DH. This has been attempted in order to reconstruct real objects through electro-holography [12,13]. However, electro-holography currently cannot reconstruct a fine 3D image in the first place; the reconstructed images are not comparable to classical holograms or high-definition computer holograms. It is theoretically possible to capture any object field using DH, but it is not easy in practice to capture real fields that meet the following two requirements for creating the high-definition computer holograms mentioned above. The first requirement is that the sampling interval of the captured field be sufficiently small to provide a large viewing zone in the optical reconstruction; the interval should not exceed one micron. The second requirement is a capturing area comparable in size to the created computer hologram. This also leads to a larger viewing zone and to the ability to reconstruct objects as large as the hologram itself. Both requirements lead to capturing the field on a very large number of pixels. Unfortunately, current image sensors available in the market do not meet these requirements.
To resolve the problem, we use lensless-Fourier synthetic aperture digital holography (LFSA-DH) [14,15], which is a type of DH using spherical reference waves. LFSA-DH resolves the sensor-related problems of capturing real fields and makes it possible to reconstruct 3D images of a real object through computer holography. This means that the whole process of classical holography is replaced by digital counterparts. Thus, we refer to this technique as digitized holography, as in Fig. 1. Digitized holography allows us to digitally edit, archive, transmit, and optically reconstruct the wave field of real existing 3D objects. In addition, the real wave field can be mixed with virtual 3D scenes composed of digital 2D images and polygon mesh 3D objects. The details of the technique are presented in this paper, and an actual high-definition computer hologram is demonstrated to verify the reconstruction of a mixed 3D scene including real and virtual objects.
2. Capturing Large-Scale Wave Fields
Capturing the wave fields of real existing objects using an image sensor is simply the digital counterpart of recording in classical holography. However, impressive synthetic holograms commonly need a display resolution of at least a billion pixels and a physical resolution of less than one micron. Wave fields captured by conventional DH do not meet these requirements, because even state-of-the-art sensors have no more than tens of millions of pixels and their resolution does not reach one micron. To resolve the problem, we use LFSA-DH [14,15].
2.A. Principle for Reducing Sampling Intervals
In LFSA-DH, the wave field of an object is obtained by Fourier transformation of the field captured by the image sensor using a phase-shifting technique [16], as shown in Fig. 2. Here, the sampling intervals of the Fourier-transformed field in the image plane are given by

δx = λd / (N_x Δx),  δy = λd / (N_y Δy),  (1)

where λ is the wavelength, d is the distance between the point source of the spherical reference wave and the sensor plane, N_x and N_y are the numbers of sensor pixels, and Δx and Δy are the pixel pitches of the sensor. The sampling intervals thus directly depend on the distance d and the sampling numbers N_x and N_y. This means that the sampling intervals can be controlled through these parameters so as to fit high-definition computer holography.
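The relation in Eq. (1) can be sketched numerically as follows (a minimal sketch; the function names and the example parameter values are illustrative assumptions, not the experimental values):

```python
def image_sampling_interval(wavelength, d, n_samples, sensor_pitch):
    """Sampling interval of the Fourier-transformed field in the image
    plane: delta = wavelength * d / (N * pitch), as in Eq. (1)."""
    return wavelength * d / (n_samples * sensor_pitch)

def distance_for_interval(wavelength, target_interval, n_samples, sensor_pitch):
    """Invert Eq. (1) for the free parameter d so that the field has a
    desired sampling interval after Fourier transformation."""
    return target_interval * n_samples * sensor_pitch / wavelength

# Example with assumed values: 632.8 nm wavelength, 2048 samples,
# 4.4 um sensor pixel pitch, and a target sampling interval of 0.8 um.
d = distance_for_interval(632.8e-9, 0.8e-6, 2048, 4.4e-6)
```

The second function mirrors how d is chosen in the experiment of section 2.C: the target interval is fixed first, and the reference-to-sensor distance is derived from it.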
2.B. Principle for Increasing the Sampling Cross Section
Since the sampling intervals decrease as the numbers of sensor pixels N_x and N_y increase, the synthetic aperture technique is used to increase the effective number of sensor pixels [14,15]. Here, the lensless-Fourier setup using a spherical reference wave has the advantage that the spatial frequency of the fringe does not increase at the edge of the sensor plane, unlike the case for plane reference waves.
In synthetic aperture DH, the image sensor is mechanically translated and captures the wave field at different positions. Here, the sensor shift is set to be smaller than the sensor area so that the fields captured at neighboring positions overlap, as shown in Fig. 3. This overlap area is used to avoid translation errors; i.e., the sensor positions are measured exactly using a correlation function of the captured fields. As a result, all captured fields can be integrated into a single large-scale wave field.
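The correlation-based registration of overlapping sub-fields can be sketched as follows (an illustrative sketch restricted to whole-pixel shifts; the function name and conventions are assumptions, not the authors' implementation):

```python
import numpy as np

def estimate_shift(field_a, field_b):
    """Estimate the integer-pixel translation of field_b relative to
    field_a from the peak of their cross-correlation, computed through
    FFTs; the overlap region of neighboring captures makes this peak
    well defined."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(field_a)) * np.fft.fft2(field_b))
    peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # Map peaks in the upper half of the cyclic range to negative shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```

In practice, sub-pixel refinement (e.g., by upsampling around the correlation peak) would be needed before stitching; the sketch above recovers only whole-pixel offsets.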
2.C. Experiment for Capturing the Large-Scale Wave Field through LFSA-DH
The experimental setup for capturing large-scale wave fields through LFSA-DH is shown in Fig. 4. The image sensor (Lumenera Lw625) is mechanically translated by a computer-controlled motor stage. The fringe pattern is captured three times at each position to obtain a complex wave field [16] using the phase shift provided by the mirror M3, which is mounted on a piezo phase-shifter.
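The recovery of a complex field from three captured fringe patterns can be sketched as follows (a minimal sketch assuming reference-phase shifts of 0, π/2, and π and a known reference wave; the function name is hypothetical):

```python
import numpy as np

def recover_object_field(i1, i2, i3, reference):
    """Recover the complex object field O from three fringe intensities
    I_k = |O + R * exp(1j * delta_k)|**2 with delta_k = 0, pi/2, pi.
    Expanding the three intensity equations gives
        Re(O * conj(R)) = (I1 - I3) / 4
        Im(O * conj(R)) = (2*I2 - I1 - I3) / 4,
    so O follows by dividing out conj(R) (assumed nonzero)."""
    o_rconj = ((i1 - i3) + 1j * (2 * i2 - i1 - i3)) / 4.0
    return o_rconj / np.conj(reference)
```

With a spherical reference wave, `reference` would be the known spherical phase sampled on the sensor grid; a unit-amplitude phase map is used in the test below for simplicity.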
Amplitude images of the captured and Fourier-transformed fields are shown in Figs. 5(a) and 5(b), respectively. The total field is obtained by stitching the individual fields captured at the different sensor positions. The parameters used for capturing, including the total cross section of the captured field, are summarized in Table 1. Here, d is the distance between SF3, which generates the reference spherical wave, and the sensor plane, as shown in Fig. 4. Since d is a free parameter, it is determined using Eq. (1) so that the field has exactly the required sampling intervals after Fourier transformation.
3. Editing a 3D Scene in Digitized Holography
Captured large-scale fields are incorporated into a 3D scene that includes virtual objects such as polygon mesh 3D objects and digital 2D images. These elements comprising the 3D scene are referred to as components in this section.
3.A. Configuration of a 3D Scene
The coordinate system used to design 3D scenes is shown in Fig. 6. The center of the hologram is positioned at the origin of the global coordinates. All of the real and virtual objects composing the 3D scene are given by their wave fields, i.e., distributions of complex amplitudes sampled in planes parallel to the hologram. This is true even for 3D objects; in this case, the object fields are computed from the CG model and then incorporated into the 3D scene in a given plane. The wave field of each component has its own local coordinates. The origin of the local coordinates, expressed in the global coordinates, defines the position of the component in the 3D scene and is determined by the designer of the scene. Computation of the whole wave field of the 3D scene begins with the component farthest from the hologram, in the plane z = z_1, and ends in the hologram plane at z = z_H; the whole field of the scene is the field resulting in the hologram plane after all N components have been processed in order of decreasing distance, where N is the total number of components. Note that, by the definition of the global coordinates, z_H = 0, and the position of the nearest component is given by z_N. This sequential calculation is necessary when employing the silhouette method [18,19,4] to shield the light behind each object and prevent individual objects from being reconstructed as see-through images.
3.B. Principle of Light Shielding Employing the Silhouette Method
The light behind a real existing object must be shielded to correctly reconstruct the occluded scene. The silhouette method proposed for light shielding in fully synthetic holography is applied to the real object. The principle of light shielding for captured fields is shown in Fig. 7. Since the incident field behind the captured object should be shielded over the cross section of the object, the incident field is multiplied by a binary mask that corresponds to the silhouette of the object. The captured field is then added to the masked background field. This process is exactly the same as in the case of the synthetic fields of virtual 2D and 3D objects [18,19]. This sequential light shielding is written as a recurrence formula:

g_n(x, y) = f_n(x, y) + M_n(x, y) P_n[g_{n-1}(x, y)],

where f_n and M_n are the wave field and binary silhouette mask of the n-th component, respectively, and P_n[·] denotes numerical propagation from the plane of the (n−1)-th component to that of the n-th component. The angular spectrum method [20] or band-limited angular spectrum method [21] is used for the numerical propagation if the memory installed in the computer is sufficient to store the whole field; otherwise, segmented propagation employing the off-axis propagation method [22,5] is required.
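One masking-and-propagation step of this recurrence can be sketched in Python (an illustrative sketch, not the authors' implementation; the plain angular spectrum method is used, the grid parameters are assumed, and evanescent components are simply discarded):

```python
import numpy as np

def angular_spectrum(field, wavelength, pitch, distance):
    """Propagate a sampled field over `distance` by the angular spectrum
    method: multiply the spatial-frequency spectrum by the free-space
    transfer function exp(1j * kz * distance)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.where(arg > 0.0, np.exp(1j * kz * distance), 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

def shield_step(background, silhouette_mask, object_field):
    """One step of the silhouette recurrence: the incident background
    field is blocked over the object's cross section (mask = 0 there,
    1 elsewhere) and the object's own field is added."""
    return silhouette_mask * background + object_field
```

The full sequence would propagate the running field from one component plane to the next and apply `shield_step` at each plane, starting from the farthest component and ending at the hologram.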
3.C. Extraction of a Silhouette Mask from the Captured Wave Field
It is expected that the silhouette mask of a real object can be extracted from the captured field, because the field retains the shape information of the object. However, in the amplitude image obtained from the captured field, the edge of the object is blurred by heavy defocusing, as shown in Fig. 5(b). This phenomenon is similar to blurring in macro photography, for which the depth of field is commonly small. Thus, an aperture should be used during capture so that the numerically reconstructed amplitude image is sharp. In digital holography, however, this can be achieved simply by clipping a small part of the captured field after capturing. Figure 8(b) shows the amplitude image yielded by Fourier transformation of a small part of the captured field in Fig. 8(a). It is verified that the blurring disappears and the image is sharp. The silhouette mask in Fig. 8(c) is obtained by binarizing and inverting the amplitude image in Fig. 8(b).
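The clipping-and-binarization procedure can be sketched as follows (an illustrative sketch; the clip size, the threshold relative to the peak amplitude, and the function name are assumptions):

```python
import numpy as np

def extract_silhouette_mask(captured_field, clip_size, rel_threshold):
    """Clip a small central part of the captured field (acting as a
    numerical aperture that enlarges the depth of field), Fourier-
    transform it to obtain a sharp amplitude image, then binarize and
    invert the amplitude: the mask is 0 over the object's cross section
    and 1 elsewhere."""
    ny, nx = captured_field.shape
    y0, x0 = (ny - clip_size) // 2, (nx - clip_size) // 2
    clipped = captured_field[y0:y0 + clip_size, x0:x0 + clip_size]
    amplitude = np.abs(np.fft.fftshift(np.fft.fft2(clipped)))
    return (amplitude < rel_threshold * amplitude.max()).astype(np.uint8)
```

Note that clipping reduces the sampling count of the resulting mask, so in practice the mask would be rescaled to the grid of the incident field before the multiplication in the silhouette recurrence.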
4. Computer Hologram of a Mixed 3D Scene Including Virtual and Real Objects
A computer hologram named “Bear II” is created using the captured field presented in section 2.C.
4.A. Mixed 3D Scene
A real existing object, a toy bear whose wave field is captured through LFSA-DH, is mixed with a virtual 3D scene. The design of the scene is shown in Fig. 9. Here, the bear appears twice in the scene; i.e., the same captured wave field is used twice. Virtual objects such as 2D wallpaper and 3D bees are arranged behind and in front of the two bears. Occlusion is correctly reconstructed between the two bears, as well as between the bears and the objects behind and in front of them, as if real objects were placed at those positions. This kind of editing of a 3D scene is impossible in classical holography; only digitized holography allows it.
4.B. Fabrication and Reconstruction of “Bear II”
After calculation of the whole wave field of the mixed 3D scene, the fringe pattern is generated by numerical interference with a reference wave and then quantized to produce a binary pattern. Finally, the binary amplitude hologram is fabricated using a laser lithography system. Bear II is composed of approximately four billion pixels. Since the pixel pitches are less than one micron, the hologram has a wide viewing angle in both the horizontal and vertical directions. The parameters used to create Bear II, including the pixel pitches and viewing angles, are summarized in Table 2.
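The fringe-generation and quantization step can be sketched as follows (a simplified sketch assuming a tilted plane reference wave and a mean-value threshold; the actual system's reference wave and quantization rule may differ):

```python
import numpy as np

def binary_fringe(scene_field, wavelength, pitch, ref_angle):
    """Numerically interfere the scene field with a tilted plane
    reference wave and quantize the resulting intensity to a 1-bit
    amplitude pattern suitable for binary laser lithography."""
    ny, nx = scene_field.shape
    y = (np.arange(ny) - ny / 2.0) * pitch
    # Plane wave tilted about the horizontal axis by ref_angle.
    ref_phase = 2.0 * np.pi / wavelength * np.sin(ref_angle) * y[:, np.newaxis]
    reference = np.exp(1j * ref_phase)
    intensity = np.abs(scene_field + reference) ** 2
    return (intensity > intensity.mean()).astype(np.uint8)
```

The threshold determines the duty ratio of the binary pattern; the mean of the intensity is used here only as a simple illustrative choice.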
Photographs and videos of the optical reconstruction of Bear II are shown in Fig. 10 and Fig. 11. It is verified that occlusion of the 3D scene is accurately reconstructed, with the appearance of the 3D scene varying as the point of view changes.
Occluded scenes are reconstructed by a silhouette-masking technique that shields the field behind the object. However, silhouette-masking is not a universally applicable technique for light shielding. For example, black shadows that are not seen from the in-line viewpoint appear around the object when it is viewed from an off-axis viewpoint, as shown in Fig. 12. This is most likely due to disagreement between the plane in which the real wave field is given and the plane in which the object has its maximum cross section. As shown in Fig. 13, viewers see the silhouette mask itself in this case; the background light cannot be seen even though it is not hidden by the object. In this case, however, we can easily resolve the problem by numerically propagating the field a short distance so that the field plane is placed exactly at the maximum cross section of the object.
Unfortunately, silhouette-masking does not work well in some cases, e.g., where the object has severe self-occlusion or where the silhouette shape of the object does not fit its cross section in any single plane.
We have proposed a technique called digitized holography. Using this technique, the wave field of a real object is captured into a personal computer using lensless-Fourier synthetic aperture digital holography. The captured field is incorporated into a virtual 3D scene and optically reconstructed by computer holography. This means that the whole process of classical holography is replaced with modern digital processing of wave fields. As a result, unlike in classical holography, the 3D images reconstructed by holography can be edited, stored, and transmitted by digital technology. The reconstructed 3D image is a spatial image like its counterpart in classical holography and thus conveys a strong depth impression to viewers.
The authors thank Mr. Nishi for his assistance in designing the 3D scene of Bear II. This work was supported by JSPS KAKENHI (21500114) and the Kansai University Research Grants: Grant-in-Aid for Encouragement of Scientists, 2011–2012.
1. J. W. Goodman and R. W. Lawrence, “Digital image formation from electronically detected holograms,” Appl. Phys. Lett. 11, 77–79 (1967). [CrossRef]
2. A. W. Lohmann and D. P. Paris, “Binary Fraunhofer holograms, generated by computer,” Appl. Opt. 6, 1739–1748 (1967). [CrossRef]
3. K. Matsushima, “Computer-generated holograms for three-dimensional surface objects with shade and texture,” Appl. Opt. 44, 4607–4614 (2005). [CrossRef]
4. K. Matsushima and S. Nakahara, “Extremely high-definition full-parallax computer-generated hologram created by the polygon-based method,” Appl. Opt. 48, H54–H63 (2009). [CrossRef]
5. K. Matsushima and S. Nakahara, “High-definition full-parallax CGHs created by using the polygon-based method and the shifted angular spectrum method,” Proc. SPIE 7619, 761913 (2010).
6. K. Matsushima, M. Nakamura, and S. Nakahara, “Novel techniques introduced into polygon-based high-definition CGHs,” in Topical Meeting on Digital Holography and Three-Dimensional Imaging (Optical Society of America, 2010), paper JMA10.
7. K. Matsushima, M. Nakamura, I. Kanaya, and S. Nakahara, “Computational holography: Real 3D by fast wave-field rendering in ultra-high resolution,” in Proceedings of SIGGRAPH Posters’ 2010 (2010).
8. K. Matsushima, “Wave-field rendering in computational holography,” in 2010 IEEE/ACIS 9th International Conference on Computer and Information Science (2010), pp. 846–851.
9. H. Nishi, K. Higashi, Y. Arima, K. Matsushima, and S. Nakahara, “New techniques for wave-field rendering of polygon-based high-definition CGHs,” Proc. SPIE 7957, 79571A (2011).
10. K. Matsushima, H. Nishi, and S. Nakahara, “Simple wave-field rendering for photorealistic reconstruction in polygon-based high-definition computer holography,” manuscript in preparation.
11. N. T. Shaked, B. Katz, and J. Rosen, “Review of three-dimensional holographic imaging by multiple-viewpoint-projection based methods,” Appl. Opt. 48, H120–H136 (2009). [CrossRef]
12. N. Hashimoto, K. Hoshino, and S. Morokawa, “Improved real-time holography system with LCDs,” Proc. SPIE 1667, 2–7 (1992).
13. K. Sato, “Record and display of color 3-D images by electronic holography,” in Topical Meeting on Digital Holography and Three-Dimensional Imaging (Optical Society of America, 2007), paper DWA2.
14. R. Binet, J. Colineau, and J.-C. Lehureau, “Short-range synthetic aperture imaging at 633 nm by digital holography,” Appl. Opt. 41, 4775–4782 (2002). [CrossRef]
15. T. Nakatsuji and K. Matsushima, “Free-viewpoint images captured using phase-shifting synthetic aperture digital holography,” Appl. Opt. 47, D136–D143 (2008). [CrossRef]
16. I. Yamaguchi and T. Zhang, “Phase-shifting digital holography,” Opt. Lett. 22, 1268–1270 (1997). [CrossRef]
17. Y. Takaki, H. Kawai, and H. Ohzu, “Hybrid holographic microscopy free of conjugate and zero-order images,” Appl. Opt. 38, 4990–4996 (1999). [CrossRef]
18. K. Matsushima and A. Kondoh, “A wave optical algorithm for hidden-surface removal in digitally synthetic full-parallax holograms for three-dimensional objects,” Proc. SPIE 5290, 90–97 (2004).
19. A. Kondoh and K. Matsushima, “Hidden surface removal in full-parallax CGHs by silhouette approximation,” Syst. Comput. Jpn. 38, 53–61 (2007). [CrossRef]
20. J. W. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, 1996), chap. 3.10.
21. K. Matsushima and T. Shimobaba, “Band-limited angular spectrum method for numerical simulation of free-space propagation in far and near fields,” Opt. Express 17, 19662–19673 (2009). [CrossRef]
22. K. Matsushima, “Shifted angular spectrum method for off-axis numerical propagation,” Opt. Express 18, 18453–18463 (2010). [CrossRef]