## Abstract

A novel technique for synthesizing a hologram of three-dimensional objects from multiple orthographic projection view images is proposed. The three-dimensional objects are captured under incoherent white illumination and their orthographic projection view images are obtained. The orthographic projection view images are multiplied by the corresponding phase terms and integrated to form a Fourier or Fresnel hologram. Using simple manipulation of the orthographic projection view images, it is also possible to shift the three-dimensional objects by an arbitrary amount along the three axes in the reconstruction space or invert their depths with respect to the given depth plane. The principle is verified experimentally.

© 2009 Optical Society of America

## 1. Introduction

Holography has been exploited for various applications since it was first proposed by Gabor in 1948. For three-dimensional (3D) displays, holography has been considered the ideal technique, since it can provide flawless 3D images with complete human depth cues. One problem that needs to be solved for realizing holographic 3D displays is the complicated hologram capture process. In order to get a hologram of 3D objects, we need to build a coherent optical system with laser illumination, which is generally complicated. Moreover, optical hologram acquisition is possible only for moderately sized objects and is not possible for distant objects or background scenes due to the difficulties of laser illumination. Computer generated holograms (CGH) can be one solution [1,2]. CGH calculates the hologram numerically, eliminating the necessity of a coherent optical system. CGH, however, requires the full 3D information of the objects to perform the hologram calculation. Therefore CGH is available only for computer-generated (CG) objects and not for objects in the real world. The hologram acquisition of 3D objects in the real world still requires a complicated coherent optical system.

In order to address this problem, various hologram generation methods from multiple view images taken under regular incoherent white illumination have been reported [3-9]. Mishina et al. proposed a calculation method for holograms from elemental images captured by integral photography [3]. They numerically simulated the 3D image integration process of integral photography using Fresnel diffraction theory and obtained a complex field of an integrated 3D image from the elemental images. Their method is based on the elemental image, which has the perspective projection geometry. Their method, however, is limited to Fresnel holograms only. Abookasis et al. and Sando et al. proposed hologram calculation methods from angular projections of 3D objects [4, 5]. The angular projection used in their method is an image that we can get after rotating the 3D object about its local coordinate origin and projecting it onto the central transverse plane, i.e. the XY plane [4]. In order to capture those images with a camera, we need to move the camera on a curved surface whose curvature radius is much larger than the object thickness [5]. N.T. Shaked et al., Abookasis et al., and Sando et al. extended these methods to make angular view point image acquisition easier using a lens array [6], to generate a Fresnel hologram [7, 8], or to generate a full-color Fourier hologram [9]. All this previous research, however, has not produced an exact hologram; it requires a small-angle approximation, which originates inherently from the projection geometry of the angular view images used. In particular, the Fresnel hologram method requires indirect synthesis [8] or generates not an exact but a modified hologram [7]. Also, in their methods the 3D location of the 3D image in the reconstruction space is fixed to the original location of the 3D object, and no manipulation methods have been proposed.

In this paper, we propose a novel method for generating a hologram from multiple view images. The unique point in the proposed method is the use of orthographic projection geometry instead of the perspective projection geometry or angular projection used in previous methods [3-9]. We first reported an initial idea for generating holograms using the orthographic projection images [10]. The use of the orthographic projection geometry enabled us to generate an exact Fourier hologram in a straightforward way without any approximations. Our previous report, however, was limited to the Fourier hologram only, and the orthographic projection images were obtained by a simulation. In this paper, we extend our previous method to generate not only a Fourier hologram but also a Fresnel hologram. The use of a lens array for capturing the orthographic projection images is also proposed and experimentally verified. Moreover, a novel method for manipulating the 3D image location in the reconstruction space is proposed. The shift of the reconstructed image in 3D space and the inversion along the depth direction are achieved by exchanging or shifting the orthographic projection view images. To the best of the authors' knowledge, this feature has not been addressed yet. In the following, we explain the principle of the proposed method and present the experimental results for its verification.

## 2. Orthographic projection geometry

Projection geometry is the geometrical relationship between 3D objects and the view image at the image plane. Figure 1 shows three different types of projection geometry. Figure 1(a) is the angular orthogonal projection geometry that is used by Abookasis et al. and Sando et al. [4,5]. The projection lines are parallel to each other and the image plane is slanted with a normal vector having angles *φ* and *θ* as defined in Fig. 1(a). The projection image coordinates (*x*_{p}, *y*_{p}) and the object point (*x*, *y*, *z*) are related by

$${x}_{p}=x\mathrm{cos}\phi -z\mathrm{sin}\phi ,$$

$${y}_{p}=y\mathrm{cos}\theta -z\mathrm{sin}\theta \mathrm{cos}\phi -x\mathrm{sin}\phi \mathrm{sin}\theta .$$

Figure 1(b) is the perspective projection geometry used by N.T. Shaked et al. [6]. The projection lines converge at a vanishing point that corresponds to the principal point of the camera lens. The projection image coordinates (*x*_{p}, *y*_{p}) in this projection geometry are given by

$${x}_{p}=\frac{f\left(x-{x}_{o}\right)}{z},\phantom{\rule{1em}{0ex}}{y}_{p}=\frac{f\left(y-{y}_{o}\right)}{z},$$

where (*x*_{o}, *y*_{o}) is the camera position and *f* is the focal length of the camera lens. Figure 1(c) is the orthographic projection geometry that we use in the proposed method. The projection lines are all parallel as in the angular orthogonal projection shown in Fig. 1(a), but the image plane is not slanted, unlike the angular orthogonal projection geometry. If we let **r** denote one of the projection lines, the angle *φ* shown in Fig. 1(c) is the angle that the projection of **r** onto the *x*-*z* plane makes with the *z*-axis. Similarly, the angle *θ* is the angle that the projection of **r** onto the *y*-*z* plane makes with the *z*-axis. The projection image coordinates (*x*_{p}, *y*_{p}) can be written as

$${x}_{p}=x+\frac{s}{l}z,\phantom{\rule{1em}{0ex}}{y}_{p}=y+\frac{t}{l}z,$$

where *s*, *t*, and *l* are defined as shown in Fig. 1(c) in order to represent the projection direction more conveniently [11,12]. The use of the orthographic projection geometry given by Eq. (3) leads to the exact Fourier or Fresnel hologram calculation and easy 3D manipulation of the reconstructed 3D images, as will be explained in the following sections.

One efficient way to capture the orthographic image is to use a lens array [11]. From a single capture using the lens array, a number of orthographic projection images can be obtained. Figure 2 shows the configuration. When a 3D object is imaged through the lens array, each elemental lens of the lens array forms an image of the 3D object at its focal plane, which is called an elemental image. If the pixels are collected from each elemental image at the same local position, then the assembled pixels form an orthographic projection image. For example, in Fig. 2, the pixel at the position of the red dot is collected from every elemental image to form an orthographic projection image with a projection angle of tan^{-1}(*s*_{1}/*l*)=tan^{-1}(*s*_{1}/*f*_{la}), where *f*_{la} is the focal length of the lens array. In the same manner, the pixels at the green dots are assembled to form another orthographic image with a projection angle of tan^{-1}(*s*_{2}/*l*). Since one pixel is extracted from each elemental image, the number of pixels in the synthesized orthographic image is the same as the number of elemental lenses in the lens array. The total number of synthesized orthographic images is given by the number of pixels in one elemental image.
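The pixel re-collection described above can be sketched in a few lines, assuming the elemental images are stacked in a 4-D array; the array layout, sizes, and the random stand-in data are illustrative assumptions:

```python
import numpy as np

# Sketch of orthographic-image synthesis from a lens-array capture:
# collecting the pixel at the same local position (i, j) out of every
# elemental image assembles one orthographic view whose projection angle
# is tan^-1(s/f_la), with s the pixel offset from the lens axis.

n_lens_h, n_lens_v = 67, 59        # elemental lenses (as in the experiment)
n_el = 41                          # pixels per elemental image
rng = np.random.default_rng(0)
# elemental images stored as [lens_row, lens_col, pixel_row, pixel_col]
elemental = rng.random((n_lens_v, n_lens_h, n_el, n_el))

def orthographic_view(elemental, i, j):
    """Collect pixel (i, j) of every elemental image into one view."""
    return elemental[:, :, i, j]

view = orthographic_view(elemental, 20, 20)
# one orthographic image has as many pixels as there are elemental lenses,
# and there are n_el x n_el distinct views in total
assert view.shape == (n_lens_v, n_lens_h)
```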

Although the lens array provides a convenient way to obtain orthographic projection images in a single capture, it should be noted that this method also brings several limitations. One limitation is the range of the projection angle. Due to the limited field of view and paraxial imaging of the lens array, the angular range that can be captured by the lens array is limited to a small value. The resolution of the captured orthographic projection images is also limited in the lens array method. The sampling rate of the captured orthographic projection images is determined dominantly by the elemental lens pitch, which may not be small enough to capture the fine details of the object. This low sampling rate limits the maximum object spatial bandwidth that can be processed in the Fourier and Fresnel hologram generation.

## 3. Fourier hologram generation using multiple orthographic view images

Figure 3 illustrates Fourier hologram generation using the orthographic projection images. The Fourier hologram of 3D objects is generated by the following steps. First, the orthographic projection images of the 3D objects are captured. The second step is the multiplication of each orthographic projection image by the phase factor of the slanted plane wave. The slanting angle of the plane wave is determined by the projection angle of the orthographic projection image. Third, the product of the multiplication is integrated into a single complex value. This complex value is the complex field of the 3D object at a single point in the Fourier plane. Finally, by repeating the above steps for all orthographic projection images, the entire Fourier hologram is acquired.

The procedure of the proposed method is intuitively straightforward. Generally, the light field of the 3D object is diffracted through Fresnel propagation and refracted by the lens, resulting in a redistributed light field in the Fourier plane. If we concentrate on the parallel rays as shown in Fig. 3, we can see that each point on the Fourier plane corresponds to the integration of one set of parallel rays. Since the orthographic projection image is the intensity distribution of the parallel projection rays, the complex field at a point in the Fourier plane can be calculated by integrating the orthographic projection image with the slanted plane-wave phase factor that accounts for the slanted projection angle.

Let us present the proposed method mathematically. If we denote the orthographic projection image corresponding to the projection angles *φ* and *θ*, or equivalently *s*, *t*, and *l* as defined in Fig. 1(c) and Fig. 3, as P_{s,t}(*x*_{p}, *y*_{p}), the proposed method calculates the Fourier hologram H by

$$H\left(s,t\right)=\int \int {P}_{s,t}\left({x}_{p},{y}_{p}\right)\mathrm{exp}\left[-j2\pi b\left({x}_{p}s+{y}_{p}t\right)\right]d{x}_{p}d{y}_{p},$$

where *l* is assumed to be a constant so that the orthographic projection direction is completely described by *s* and *t*, and *b* is a positive constant that will be determined later.

The Fourier hologram of the 3D object O(*x*,*y*,*z*) is given by [13]

$$H\left(u,v\right)=\int \int \int O\left(x,y,z\right)\mathrm{exp}\left[-j\frac{2\pi }{\lambda f}\left(xu+yv\right)\right]\mathrm{exp}\left[-j\frac{\pi z}{\lambda {f}^{2}}\left({u}^{2}+{v}^{2}\right)\right]dxdydz,$$

where *f* is the focal length of the lens and λ is the wavelength. Now we show that the proposed method given by Eq. (4) produces an exact Fourier hologram given by Eq. (5) without any approximation, using the single-source-point method [6]. Let us consider one infinitesimal object point with the size of (Δ*x*, Δ*y*, Δ*z*), located at coordinates (*x*, *y*, *z*), and having the value of O(*x*, *y*, *z*). The orthographic projection image P^{SSP}_{s,t}(*x*_{p}, *y*_{p}) that corresponds to this infinitesimal object point is given by Eq. (3) as

$${P}_{s,t}^{SSP}\left({x}_{p},{y}_{p}\right)=O\left(x,y,z\right)\delta \left({x}_{p}-x-\frac{zs}{l},{y}_{p}-y-\frac{zt}{l}\right)\mathrm{\Delta}x\mathrm{\Delta}y\mathrm{\Delta}z,$$

where *δ* is the Dirac delta impulse function. Substituting Eq. (6) into Eq. (4) leads to

$${H}^{SSP}\left(s,t\right)=\int \int O\left(x,y,z\right)\delta \left({x}_{p}-x-\frac{zs}{l},{y}_{p}-y-\frac{zt}{l}\right)\mathrm{\Delta}x\mathrm{\Delta}y\mathrm{\Delta}z\phantom{\rule{.2em}{0ex}}\times \mathrm{exp}\left[-j2\pi b\left({x}_{p}s+{y}_{p}t\right)\right]d{x}_{p}d{y}_{p}$$

$$\phantom{\rule{3.2em}{0ex}}=O\left(x,y,z\right)\mathrm{exp}\left[-j2\pi b\left(xs+yt+\frac{z}{l}{s}^{2}+\frac{z}{l}{t}^{2}\right)\right]\mathrm{\Delta}x\mathrm{\Delta}y\mathrm{\Delta}z,$$

where H^{SSP}(*s*,*t*) is the hologram that corresponds to the infinitesimal object point. The hologram for the entire 3D object scene is the volume integral of H^{SSP}(*s*,*t*) over all 3D object points. Hence we get

$$H\left(s,t\right)=\int \int \int O\left(x,y,z\right)\mathrm{exp}\left[-j2\pi b\left(xs+yt+\frac{z}{l}{s}^{2}+\frac{z}{l}{t}^{2}\right)\right]dxdydz.$$

In practice, the orthographic projection images are captured at discrete *s* and *t* values. If we rewrite Eq. (8) using continuous coordinates *u*=*Ms* and *v*=*Mt* at the Fourier hologram plane, finally we get

$$H\left(u,v\right)=\int \int \int O\left(x,y,z\right)\mathrm{exp}\left[-j2\pi b\left(\frac{xu+yv}{M}+\frac{z}{l{M}^{2}}\left({u}^{2}+{v}^{2}\right)\right)\right]dxdydz,$$

where *M* is a magnification factor. Equations (5) and (9) are the same, provided that

$$b=\frac{2}{\lambda l},\phantom{\rule{1em}{0ex}}M=\frac{2f}{l}.$$

Therefore, the exact Fourier hologram of the 3D object can be generated using the orthographic projection images. Note that no approximation is necessary in the process, unlike other methods using different projection geometry.
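The equivalence above can be checked numerically for ideal point sources, for which the orthographic projection of each point reduces to a shifted delta, so the integral in Eq. (4) collapses to a sum. A minimal sketch; the point cloud, wavelength, and *l* value are illustrative assumptions:

```python
import numpy as np

# Fourier-hologram synthesis from orthographic projections (Eq. (4)),
# compared against the closed-form single-source-point result (Eq. (7)).

lam, l = 532e-9, 3.3e-3        # wavelength and projection parameter l
b = 2.0 / (lam * l)            # Eq. (10)

# toy scene: a few ideal point sources (x, y, z, amplitude)
points = [(1e-4, -2e-4, 5e-4, 1.0), (-3e-4, 1e-4, -4e-4, 0.7)]

def hologram_from_projections(s, t):
    """Eq. (4): integrate each orthographic projection against a tilted
    plane-wave phase.  For an ideal point source the projection is a delta
    at (x + z*s/l, y + z*t/l), so the integral becomes a sum over points."""
    H = 0.0 + 0.0j
    for x, y, z, a in points:
        xp, yp = x + z * s / l, y + z * t / l
        H += a * np.exp(-2j * np.pi * b * (xp * s + yp * t))
    return H

def hologram_analytic(s, t):
    """Eqs. (7)-(8): the closed-form hologram of the same point sources."""
    H = 0.0 + 0.0j
    for x, y, z, a in points:
        H += a * np.exp(-2j * np.pi * b *
                        (x * s + y * t + z * (s**2 + t**2) / l))
    return H

s = t = 1e-4
assert abs(hologram_from_projections(s, t) - hologram_analytic(s, t)) < 1e-9
```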

## 4. Fresnel hologram generation using multiple orthographic view images

Figure 4 illustrates Fresnel hologram generation using the proposed method. The steps used for generating the Fresnel hologram in the proposed method are as follows. First, the orthographic projection images of the 3D objects are captured as before. Second, each orthographic projection image is shifted and multiplied by a constant phase term. The amount of the shift and the phase angle of the constant phase term are determined by the projection angle, or equivalently *s* and *t*, of each orthographic projection image. Finally, the Fresnel hologram is obtained by adding all shifted and multiplied orthographic projection images. As in the case of the Fourier hologram, the proposed method for Fresnel hologram generation is also intuitively straightforward. The orthographic projection image represents the intensity distribution of one set of the parallel rays penetrating the projection image plane. Since the parallel rays undergo the same amount of phase change and lateral position shift when they propagate a distance *D*, as shown in Fig. 4, we can estimate the complex field contributed by one set of the parallel rays by shifting the orthographic image laterally and multiplying it by a phase factor. By repeating this process for all sets of parallel rays, or equivalently for all orthographic projection images, and adding them, we can get the complex field of the 3D objects.

Letting the Fresnel hologram plane be at distance *D* from the projection image plane as shown in Fig. 4, the complex field *H*_{s,t}(*u*, *v*) contributed by an orthographic projection image *P*_{s,t}(*x*_{p}, *y*_{p}) of the projection angles *s* and *t* is calculated by the proposed method as

$${H}_{s,t}\left(u,v\right)={P}_{s,t}\left(u-\frac{csD}{l},v-\frac{ctD}{l}\right)\mathrm{exp}\left\{j2\pi b\left[{\left(\frac{s}{l}\right)}^{2}+{\left(\frac{t}{l}\right)}^{2}\right]\right\},$$

where *b* and *c* are the constants to be determined later. The proposed method generates the Fresnel hologram by adding all *H*_{s,t}(*u*, *v*), i.e.

$$H\left(u,v\right)=\sum _{s}\sum _{t}{H}_{s,t}\left(u,v\right)\mathrm{\Delta}s\mathrm{\Delta}t.$$

Now we show that the Fresnel hologram calculated using the orthographic projection images by Eq. (12) is equivalent to the Fresnel hologram of the 3D object, which is given by [13]

$$H\left(u,v\right)=\int \int \int O\left(x,y,z\right)\mathrm{exp}\left\{j\frac{\pi }{\lambda \left(D+z\right)}\left[{\left(u-x\right)}^{2}+{\left(v-y\right)}^{2}\right]\right\}dxdydz,$$

where a phase term exp[*jkz*] is omitted since we can assume an arbitrary phase distribution on the object surface. We start from Eq. (12). If we obtain the orthographic projection images with sufficiently small angular separation, or sufficiently small Δ*s* and Δ*t*, Eq. (12) can be represented in an integral form. From Eqs. (11) and (12), we get

$$H\left(u,v\right)=\int \int {P}_{s,t}\left(u-\frac{csD}{l},v-\frac{ctD}{l}\right)\mathrm{exp}\left\{j2\pi b\left[{\left(\frac{s}{l}\right)}^{2}+{\left(\frac{t}{l}\right)}^{2}\right]\right\}dsdt.$$

Again we consider an infinitesimal object point O(*x*, *y*, *z*) located at (*x*, *y*, *z*). Using Eq. (6), the hologram H^{SSP}(*u*,*v*) for this infinitesimal object point is given by

$${H}^{SSP}\left(u,v\right)=\int \int O\left(x,y,z\right)\delta \left(u-\frac{csD}{l}-x-\frac{sz}{l},v-\frac{ctD}{l}-y-\frac{tz}{l}\right)$$

$$\phantom{\rule{3.2em}{0ex}}\times \mathrm{\Delta}x\mathrm{\Delta}y\mathrm{\Delta}z\,\mathrm{exp}\left\{j2\pi b\left[{\left(\frac{s}{l}\right)}^{2}+{\left(\frac{t}{l}\right)}^{2}\right]\right\}dsdt$$

$$=\frac{{l}^{2}}{{\left(cD+z\right)}^{2}}O\left(x,y,z\right)\mathrm{exp}\left\{j\frac{2\pi b}{{\left(cD+z\right)}^{2}}\left[{\left(u-x\right)}^{2}+{\left(v-y\right)}^{2}\right]\right\}\mathrm{\Delta}x\mathrm{\Delta}y\mathrm{\Delta}z,$$

The hologram for the entire object scene is the volume integral of H^{SSP}(*u*,*v*) over all 3D object points. Hence we get

$$H\left(u,v\right)=\int \int \int \frac{{l}^{2}}{{\left(cD+z\right)}^{2}}O\left(x,y,z\right)\mathrm{exp}\left\{j\frac{2\pi b}{{\left(cD+z\right)}^{2}}\left[{\left(u-x\right)}^{2}+{\left(v-y\right)}^{2}\right]\right\}dxdydz.$$

If we assume that *cD*>>*z* and the object function O(*x*,*y*,*z*) is slowly varying in comparison to the quadratic phase exponential function in Eq. (16), we can approximate Eq. (16) to

$$H\left(u,v\right)\approx \int \int \int O\left(x,y,z\right)\mathrm{exp}\left\{j\frac{\pi }{\lambda \left(D+z\right)}\left[{\left(u-x\right)}^{2}+{\left(v-y\right)}^{2}\right]\right\}dxdydz,$$

where the constant term is ignored [13]. Equation (17) is the same as the Fresnel hologram of the 3D object given by Eq. (13), provided that

$$b=\frac{2D}{\lambda },\phantom{\rule{1em}{0ex}}c=2.$$

Therefore, the hologram generated by the proposed method is equivalent to the Fresnel hologram of the 3D object.
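The shift-and-add procedure of Eqs. (11) and (12) can be sketched numerically: each orthographic image is placed on the hologram plane with a lateral offset of *csD*/*l* and weighted by a constant phase. A minimal sketch with nearest-pixel placement; the grid sizes, pitches, and random stand-in images are illustrative assumptions:

```python
import numpy as np

# Fresnel hologram by shifting and adding orthographic images (Eqs. (11)-(12)).
lam, l, D = 532e-9, 3.3e-3, 0.35
b, c = 2 * D / lam, 2.0                      # Eq. (18)
pitch = 1e-3                                 # hologram pixel pitch (Delta u)

rng = np.random.default_rng(0)
n_view, n_pix = 5, 32                        # views per axis, pixels per view
ds = 12.2e-6                                 # sampling step of s and t

H = np.zeros((256, 256), dtype=complex)      # hologram accumulator
for i in range(n_view):
    for j in range(n_view):
        s = (i - n_view // 2) * ds
        t = (j - n_view // 2) * ds
        P = rng.random((n_pix, n_pix))       # stand-in orthographic image
        # constant phase weight of Eq. (11)
        phase = np.exp(2j * np.pi * b * ((s / l) ** 2 + (t / l) ** 2))
        # lateral shift c*s*D/l, rounded to whole hologram pixels
        du = int(round(c * s * D / l / pitch))
        dv = int(round(c * t * D / l / pitch))
        u0, v0 = 128 + du - n_pix // 2, 128 + dv - n_pix // 2
        H[v0:v0 + n_pix, u0:u0 + n_pix] += P * phase * ds * ds
```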

## 5. Hologram generation for 3D location shifted or depth inverted object

A simple method to shift the 3D location or invert the depth of the captured 3D objects is proposed in this section. The depth inversion and the location shift are achieved by modifying the captured orthographic projection images. With the modified orthographic projection images, the method for generating Fourier or Fresnel holograms is performed as explained in sections 3 and 4, resulting in a hologram for the depth inverted or 3D location shifted 3D objects. Figure 5 shows the concept.

The capability of the 3D location shift and the depth inversion comes from the simple relation between the projection image coordinates and the 3D object coordinates of the orthographic projection geometry, which is given by Eq. (3). First, the lateral shift of the 3D objects in the reconstruction volume is achieved by shifting all orthographic projection images by the same amount. From Eq. (3), shifting the orthographic projection image P_{s,t}(*x*_{p}, *y*_{p}) by (*δx*, *δy*) to form the modified orthographic projection image P'_{s,t}(*x*_{p}, *y*_{p})=P_{s,t}(*x*_{p}-*δx*, *y*_{p}-*δy*) as shown in Fig. 5(a) leads to shifted coordinates (*x*'_{p}, *y*'_{p}) given by

$${x\text{'}}_{p}={x}_{p}+\delta x=\left(x+\delta x\right)+zs/l,$$

$${y\text{'}}_{p}={y}_{p}+\delta y=\left(y+\delta y\right)+zt/l,$$

implying that the 3D image is reconstructed with lateral shift (*δx*, *δy*).

The longitudinal shift of the 3D image is achieved by shifting each orthographic image according to its projection view angle, i.e. *s* and *t*. For a longitudinal shift of *δz*, each orthographic projection image P_{s,t}(*x*_{p}, *y*_{p}) is shifted by (*δzs*/*l*, *δzt*/*l*) to form a modified orthographic projection image P'_{s,t}(*x*_{p}, *y*_{p})=P_{s,t}(*x*_{p}-*δzs*/*l*, *y*_{p}-*δzt*/*l*) as shown in Fig. 5(b), giving new coordinates (*x*'_{p}, *y*'_{p}) as

$${x\text{'}}_{p}={x}_{p}+\delta zs/l=x+\left(z+\delta z\right)s/l,$$

$${y\text{'}}_{p}={y}_{p}+\delta zt/l=y+\left(z+\delta z\right)t/l.$$

Equation (20) reveals that the new coordinates correspond to the shifted object depth *z*+*δz*. Note that shifting the orthographic projection images by a constant value results in the lateral shift of the 3D image, while a projection-angle-dependent shift, i.e. one proportional to *s* and *t*, provides the longitudinal shift of the 3D image.

Finally, the depth inversion is performed by exchanging each orthographic image of the projection angles *s* and *t* with the orthographic image corresponding to -*s* and -*t*, i.e. P'_{s,t}(*x*_{p}, *y*_{p})=P_{-s,-t}(*x*_{p}, *y*_{p}) as shown in Fig. 5(c). The new coordinates are given by

$${x\text{'}}_{p}=x-zs/l=x+\left(-z\right)s/l,$$

$${y\text{'}}_{p}=y-zt/l=y+\left(-z\right)t/l,$$

which indicates that the new coordinates correspond to the inverted depth -*z*, resulting in a depth inversion with respect to the *z*=0 plane. Depth inversion with respect to an arbitrary depth plane is also possible by sequentially applying the depth shift and the depth inversion. For example, shifting the depth by *δz*=-*d* using Eq. (20), inverting the depth with respect to the *z*=0 plane, and shifting the depth again by *δz*=*d* gives a 3D image whose depth is inverted with respect to the *z*=*d* plane.

## 6. Experimental results

We verified the proposed method experimentally. In the experiment, two plane objects ‘C’ and ‘B’ at different depths are captured using a lens array. From the elemental images, the orthographic images are synthesized by collecting the pixels at the same position in each elemental image [11]. Using the synthesized orthographic images, the Fourier and Fresnel holograms are generated with and without depth shift and depth inversion based on the proposed method. Finally, the holograms are numerically reconstructed at various depths.

The experimental setup used to capture the elemental images is shown in Fig. 6. The objects are located 30mm ('C') and 50mm ('B') away from the lens array. The lens array consists of identical elemental lenses of 1mm (H) × 1mm (V) lens pitch and *f*_{la}=3.3mm focal length. The number of valid elemental lenses is 67(H) × 59(V). The elemental images formed by the lens array are captured by a CCD of 3288(H) × 2470(V) resolution through an imaging lens system, a Nikon AF Nikkor 28-80mm.

Figure 7 shows the elemental images captured by the CCD. The resolution of each elemental image is 41(H) × 41(V) pixels. Since the elemental lens pitch is 1mm, the pixel size of the elemental image is given by Δ*s*=Δ*t*=1mm/41=24.4um. Due to the limited field of view of the elemental lens, each elemental image contains only a part of the object. Also, there are elemental images that do not contain any object image since the object is out of the field of view of the corresponding elemental lenses. Figure 8 shows the orthographic images generated using the elemental images of Fig. 7. In Fig. 8, it is observed that the disparity of the closer object, object 'C', is smaller than that of the farther object, object 'B', which confirms that the generated image has orthographic projection geometry [12]. The resolution of the generated orthographic image is 67(H) × 59(V) pixels. The total number of the generated orthographic images is 41(H) × 41(V), but only the 34(H) × 35(V) orthographic images in the central part are used in generating the holograms. The angular separation between the projection lines of the adjacent orthographic images is Δ*s*/*f*_{la}=Δ*t*/*f*_{la}=0.42° and the whole angular range is -7.2°~+7.2° for the horizontal direction and -7.2°~+7.6° for the vertical direction. The sampling rate of each orthographic image is given by the elemental lens pitch. Hence the object is sampled with a 1mm interval in the orthographic image.

The Fourier and Fresnel holograms are generated using the orthographic images shown in Fig. 8. Since the number of orthographic images, i.e. 34(H) × 35(V), is not sufficient and the sampling rate of each orthographic image, i.e. 1mm (H) × 1mm (V), is low, two techniques were used in the experiment. First, each orthographic image is repeated, doubling the number of orthographic images to 68(H) × 70(V). Note that the use of the intermediate view reconstruction (IVR) technique, which synthesizes an intermediate image by interpolating the neighboring images, can enhance the result [14,15]. In our experiment, however, the intermediate image is generated, for simplicity, by repetition without any interpolation. By this image repetition, the angular separation between the projection lines of the adjacent orthographic images is decreased from 0.42° to 0.21°, or equivalently the pixel size of the elemental image is reduced from Δ*s*=Δ*t*=24.4um to Δ*s*=Δ*t*=12.2um, while the whole angular range is maintained unchanged, i.e. -7.2°~+7.2° for the horizontal direction and -7.2°~+7.6° for the vertical direction.
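The view-doubling step above amounts to a plain repetition along both view axes, which halves the angular separation while leaving the overall angular range unchanged. A one-line sketch, where the array layout [t-index, s-index, V, H] is an illustrative assumption:

```python
import numpy as np

# Doubling the number of orthographic views by repetition (no interpolation):
# 34(H) x 35(V) views become 68(H) x 70(V).
views = np.zeros((35, 34, 59, 67))             # [t-index, s-index, V, H]
doubled = views.repeat(2, axis=0).repeat(2, axis=1)
assert doubled.shape == (70, 68, 59, 67)
```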

Second, different sets of parameters were used in the generation and the reconstruction of the holograms. In the case of the Fourier hologram, the parameters used in the generation process with Eqs. (4) and (10) are λ=1064um, *l*=*f*_{la}=3.3mm, *b*=2/(*λl*)=5.7×10^{5}, and Δ*s*=Δ*t*=12.2um. Note that the wavelength is set to a large value in order to alleviate any aliasing induced by the low sampling rate of the orthographic images in the generation process. Assuming the pixel pitch of the generated hologram is Δ*u*=Δ*v*=22.4um, the corresponding focal length *f* of the Fourier transform lens is given as *f*=*l*Δ*u*/(2Δ*s*)=3.03mm by Eq. (10). In the reconstruction stage, another set of parameters with more realistic values, i.e. λ=532nm, *f*=135.5mm, Δ*u*=Δ*v*=22.4um, is used. From Eq. (5), one can easily verify that the use of these different reconstruction parameters scales the lateral coordinates *x* and *y* of the object space, i.e. decreases the lateral size of the object, by a factor of 135.5/3.03=44.72 while leaving the axial coordinate *z* nearly unchanged. Note that these reconstruction parameters were chosen in the experiment such that the axial coordinate is kept unchanged for the purpose of a clear demonstration of the theory. One can choose a different focal length *f* of the Fourier transform lens to control the lateral and axial magnifications of the object space.

In the case of the Fresnel hologram, the parameters used in the generation process with Eqs. (11), (12), and (18) are λ=1064um, *l*=*f*_{la}=3.3mm, *D*=350mm, *b*=2*D*/λ=657.9, *c*=2, Δ*u*=Δ*v*=1mm, and Δ*s*=Δ*t*=12.2um. Again, the wavelength was set to a large value to avoid any aliasing in the generation process. Also note that Δ*u*=Δ*v*=1mm is the same as the elemental lens pitch of the lens array used in the experiment, since the lens pitch determines the sampling rate of the orthographic images. In the reconstruction stage, the wavelength and the pixel pitch of the hologram are changed to λ=532nm and Δ*u*=Δ*v*=22.4um. One can verify from Eq. (13) that these changes scale the lateral coordinates *x* and *y* of the object by a factor of $\sqrt{1064\mathrm{um}/532\mathrm{nm}}$ ≈ 1mm/22.4um ≈ 44.7 while leaving the axial coordinate *z* and the distance *D* unchanged. Note that the use of the different sets of parameters and the doubling of the orthographic projection images in our experiment are mainly due to the low sampling rate of the lens array method. If the orthographic images are obtained with a higher sampling rate using different methods, these processes will not be required.

Figure 9 shows the generated Fourier holograms. The resolution of the generated Fourier hologram is 68(H) × 70(V) pixels, which is the same as the number of the repeated orthographic images. The Fourier holograms are generated by the proposed method for three cases, i.e. (a) no shift (*δx*, *δy*, *δz*)=(0,0,0) and no depth inversion, (b) shift (*δx*, *δy*, *δz*)=(10mm,10mm,-20mm) and no depth inversion, and (c) depth shift
(*δx*, *δy*, *δz*)=(0,0,80mm) after depth inversion. Figure 10 shows the numerical reconstruction results. In the numerical reconstruction, the focal length of the Fourier transform lens is assumed to be 135.5mm as explained above. Using the Fresnel diffraction formula and the lens function [13], the intensity at 135.5mm+*z* from the Fourier transform lens is calculated. Figure 10(a) shows that the Fourier hologram generated with the proposed method can reconstruct two plane objects successfully with correct depth order. The effect of the lateral and depth shift is shown in Fig. 10(b). We can see that the lateral shift and the depth shift are reflected in the results, as desired. The depth inversion result is shown in Fig. 10(c). Note that the depths of the objects are originally 30mm for object ‘C’ and 50mm for object ‘B’. By depth inversion, they are transferred to -30mm for ‘C’ and -50mm for ‘B’. Then, by depth shifting by 80mm, they are brought back to 30mm for ‘B’ and 50mm for ‘C’. Figure 10(c) shows this final result. As expected, object ‘C’ is focused at 50mm and object ‘B’ is focused at 30mm, which reveals that the depth order is inverted.

Figures 11 and 12 show the Fresnel holograms generated by the proposed method and their numerical reconstruction results. The distance from the Fresnel hologram to the orthographic image plane, *D*, is set at 350mm. The resolution of the generated Fresnel holograms shown in Fig. 11 is 260(H) × 260(V) pixels, including a small zero padding around the active area. Note that, unlike the case of the Fourier holograms, the resolution of the generated Fresnel hologram is not the same as the number of the orthographic projection images. In the case of the Fresnel hologram, the orthographic images are shifted by *csD*/*l* and overlapped on the hologram plane as shown in Eqs. (11), (12) and Fig. 4. Hence the resolution of the generated hologram is determined by the area covered by the shifted orthographic images and the pixel size on the hologram plane. Since the distance is *D*=350mm, the angular range, i.e. tan^{-1}(*s*/*l*), is about -7.2°~+7.2°, and the size of one orthographic image is Δ*u* (value used in the generation process) × (number of pixels in one orthographic image) = 1mm × 67 = 67mm, the covered area on the hologram plane can be estimated by using the first term in Eq. (11) as around
67+350×2×2×tan(7.2°)≈244mm. The hologram pixel pitch used in the generation step is Δ*u*=1mm as explained before. Therefore, the resolution of the active area of the generated Fresnel hologram is around 244/1=244 pixels for the *u*-axis. A similar estimation gives 59+350×2×{tan(7.2°)+tan(7.6°)}≈241mm, i.e. 241 pixels, for the *v*-axis.
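The active-area estimate above can be reproduced in a few lines, following the numbers of this section (*c*=2, *D*=350mm, 1mm hologram pixel pitch):

```python
import math

# Active-area estimate of the generated Fresnel hologram:
# span = (orthographic image size) + c*D*(sum of the extreme tan values).
D, c = 350.0, 2.0                    # mm, Eq. (18) constant
u_span = 67 + c * D * (math.tan(math.radians(7.2)) + math.tan(math.radians(7.2)))
v_span = 59 + c * D * (math.tan(math.radians(7.2)) + math.tan(math.radians(7.6)))
print(round(u_span), round(v_span))  # prints "244 241" (pixels at 1 mm pitch)
```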

Using the Fresnel diffraction formula [13], the intensity image at 350mm+*z* from the Fresnel hologram plane is calculated. Figures 11 and 12 reveal that the proposed method successfully generates a Fresnel hologram of the 3D objects from their orthographic projection images; their lateral/axial shift and depth inversion can also be performed with the given set of orthographic projection images.

## 7. Conclusion

A novel method to generate Fourier and Fresnel holograms of 3D objects from their orthographic projection images is proposed. The lateral/axial shift and depth inversion of the 3D object can also be performed with the given set of the orthographic projection images using the proposed method, making it possible to locate 3D objects at any position in the reconstruction volume. The principle and the feasibility of the proposed method are verified experimentally by capturing the orthographic projection images using a lens array and generating Fourier and Fresnel holograms under various conditions. Consequently, the proposed method provides an efficient way to generate Fourier and Fresnel holograms of the real, existing 3D objects without any need for a coherent holographic capture process.

## Acknowledgment

This research was partly supported by the MKE (The Ministry of Knowledge Economy), Korea under the ITRC (Information Technology Research Center) Support program supervised by the IITA (Institute for Information Technology Advancement) (IITA-2009-C1090-0902-0018).

This work was partly supported by the grant of the Korean Ministry of Education, Science and Technology. (The Regional Core Research Program / Chungbuk BIT Research-Oriented University Consortium)

## References and links

**1. **A. W. Lohmann and D. P. Paris, “Binary Fraunhofer holograms generated by computer,” Appl. Opt. **6**, 1739–1748 (1967). [CrossRef]

**2. **J. P. Waters, “Holographic image synthesis utilizing theoretical methods,” Appl. Phys. Lett. **9**, 405–407 (1966). [CrossRef]

**3. **T. Mishina, M. Okui, and F. Okano, “Calculation of holograms from elemental images captured by integral photography,” Appl. Opt. **45**, 4026–4036 (2006). [CrossRef]

**4. **D. Abookasis and J. Rosen, “Computer-generated holograms of three-dimensional objects synthesized from their multiple angular viewpoints,” J. Opt. Soc. Am. A **20**, 1537–1545 (2003). [CrossRef]

**5. **Y. Sando, M. Itoh, and T. Yatagai, “Holographic three-dimensional display synthesized from three-dimensional Fourier spectra of real existing objects,” Opt. Lett. **28**, 2518–2520 (2003). [CrossRef]

**6. **N. T. Shaked, J. Rosen, and A. Stern, “Integral holography: white-light single-shot hologram acquisition,” Opt. Express **15**, 5754–5760 (2007), http://www.opticsinfobase.org/abstract.cfm?URI=oe-15-9-5754 [CrossRef]

**7. **N. T. Shaked and J. Rosen, “Modified Fresnel computer-generated hologram directly recorded by multiple-viewpoint projections,” Appl. Opt. **47**, D21–D27 (2008). [CrossRef]

**8. **D. Abookasis and J. Rosen, “Three types of computer-generated hologram synthesized from multiple angular viewpoints of a three-dimensional scene,” Appl. Opt. **45**, 6533–6538 (2006). [CrossRef]

**9. **Y. Sando, M. Itoh, and T. Yatagai, “Full-color computer-generated holograms using 3-D Fourier spectra,” Opt. Express **12**, 6246–6251 (2004), http://www.opticsinfobase.org/oe/abstract.cfm?uri=OE-12-25-6246 [CrossRef]

**10. **M.-S. Kim, G. Baasantseren, N. Kim, and J.-H. Park, “Hologram generation of 3D objects using multiple orthographic view images,” J. Opt. Soc. Korea **12**, 269–274 (2008). [CrossRef]

**11. **J.-H. Park, J. Kim, and B. Lee, “Three-dimensional optical correlator using a sub-image array,” Opt. Express **13**, 5116–5126 (2005), http://www.opticsinfobase.org/abstract.cfm?URI=oe-13-13-5116. [CrossRef]

**12. **J.-H. Park, S. Jung, H. Choi, Y. Kim, and B. Lee, “Depth extraction by use of a rectangular lens array and one-dimensional elemental image modification,” Appl. Opt. **43**, 4882–4895 (2004). [CrossRef]

**13. **J. W. Goodman, *Introduction to Fourier Optics*, 2nd ed. (McGraw-Hill, New York, 1996), Chaps. 4–5, pp. 66–105.

**14. **L. Zhang, D. Wang, and A. Vincent, “Adaptive reconstruction of intermediate views from stereoscopic images,” IEEE Trans. Circuits Syst. Video Technol. **16**, 102–113 (2006). [CrossRef]

**15. **J.-H. Park, G. Baasantseren, N. Kim, G. Park, J.-M. Kang, and B. Lee, “View image generation in perspective and orthographic projection geometry based on integral imaging,” Opt. Express **16**, 8800–8813 (2008), http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-16-12-8800 [CrossRef]