
Review of three-dimensional holographic imaging by multiple-viewpoint-projection based methods

Open Access

Abstract

Methods of generating multiple viewpoint projection holograms of three-dimensional (3-D) realistic objects illuminated by incoherent white light are reviewed in this paper. Using these methods, it is possible to obtain holograms with a simple digital camera, operating in regular light conditions. Thus, most disadvantages characterizing conventional digital holography, namely the need for a powerful, highly coherent laser and extreme stability of the optical system, are avoided. The proposed holographic processes are composed of two stages. In the first stage, regular intensity-based images of the 3-D scene are captured from multiple points of view by a simple digital camera. In the second stage, the acquired projections are digitally processed to yield the complex digital hologram of the 3-D scene, where no interference is involved in the process. For highly reflecting 3-D objects, the resulting hologram is equivalent to an optical hologram of the objects recorded from the central point of view. We first review various methods to acquire the multiple viewpoint projections. These include the use of a microlens array and a macrolens array, as well as digitally generated projections that are not acquired optically. Next, we show how to digitally process the acquired projections into Fourier, Fresnel, and image holograms. Additionally, to obtain certain advantages over the known types of holograms, the proposed hybrid optical-digital process can yield novel types of holograms such as the modified Fresnel hologram and the protected correlation hologram. The prospective goal of these methods is to facilitate the design of a simple and portable digital holographic camera that can be useful for a variety of practical applications, including 3-D video acquisition and various types of biomedical imaging. We review several of these applications to signify the advantages of multiple viewpoint projection holography.

© 2009 Optical Society of America

Data sets associated with this article are available at http://hdl.handle.net/10376/1444. Links such as View 1 that appear in figure captions and elsewhere will launch custom data views if ISP software is present.

1. Introduction

Holography is able to provide the most authentic three-dimensional (3-D) illusion to the human eye, with accurate depth cues, without the need for special viewing devices, and without causing eye strain [1, 2]. For visualization purposes, the holographically recorded 3-D image can easily be reconstructed optically (for example, by illuminating the hologram with coherent light). In addition, holograms have advantages as a storage medium for 3-D models, since the acquired 3-D information can be stored within holograms in a very dense, efficient, and encrypted way [the entire volume is stored in a two-dimensional (2-D) complex matrix]. However, the conventional optical holographic recording process requires a coherent, powerful laser source and an extreme stability of the optical system, as well as a long development process of the recorded hologram.

In digital holography [3], a digital camera is used as the recording medium instead of the photographic plate or film, and thus no chemical development process is needed. This allows real-time acquisition and digital processing and reconstruction of the acquired holograms. However, the other strict conditions required by the conventional optical holographic recording process remain the same. The practical meaning of these restrictions is that conventional digital holograms cannot be recorded outside a well-equipped optical laboratory. Therefore, many practical applications requiring 3-D imaging have not employed holography, notwithstanding its many attractive advantages described above.

By developing new holographic techniques for incoherently illuminated 3-D scenes, it is possible to eliminate most of the limitations of the laser-based holographic recording process. Old techniques of recording incoherent holograms are based on the fact that an object, illuminated by a spatially incoherent and quasi-monochromatic light, is composed of many points, each of which is self-spatially coherent and thus can create an interference pattern with the light coming from the mirrored image of the point [4, 5, 6, 7, 8, 9, 10].

A different approach for generating holograms of incoherently illuminated 3-D scenes is the scanning holography [11, 12, 13]. In this technique, digital Fresnel holograms are obtained, under spatially incoherent illumination, by scanning the 3-D scene with a pattern of a Fresnel zone plate, so that the light intensity at each scanning position is integrated by a point detector. Scanning holography is widely used and has been discussed in many articles. However, this technique requires time-consuming mechanical scanning and relatively complicated laser alignments. A possible solution to these problems could be Fresnel zone-plate coded holography [14], which is based on a motionless correlation between the objects and Fresnel zone plates. However, this technique has not yielded acceptable imaging performance in the optical regime.

Another motionless technique for obtaining digital incoherent Fresnel holograms, called FINCH (Fresnel incoherent correlation holography), has been recently proposed in Refs. [15, 16, 17, 18]. In this technique, the spatially incoherent and quasi-monochromatic light coming from the 3-D scene propagates through or is reflected from a diffractive optical element (DOE) and is recorded by a digital camera. The phase function of the DOE creates two spherical waves from each object point, so that the final intensity recorded by the camera is similar to the intensity of a conventional on-axis interference hologram. To avoid the twin-image problem [1, 2] in the hologram reconstruction, three different holograms, each with a different phase factor of the DOE, are recorded sequentially and superposed in the computer into the final Fresnel hologram. This technique is able to obtain reconstructed 3-D images with high resolution, without using physical scanning or complicated laser alignments as required in the scanning holographic technique. Still, the main disadvantage of the FINCH technique is that three different holograms of the same 3-D scene have to be acquired to yield the final hologram. Thus, the ability of this technique to acquire dynamic 3-D scenes (scenes that might change during the three hologram acquisitions) is limited. In addition, technical limitations of the spatial light modulator (SLM) used to display the three DOE phase functions may cause the hologram reconstruction to have a low signal-to-noise ratio, especially if the objects in the 3-D scene are complicated [19].

Multiple viewpoint projection (MVP) holography is a relatively new technique for generating digital and optical holograms of real-existing 3-D objects under regular “daylight” (temporally incoherent and spatially incoherent) illumination. To obtain these holograms, a conventional digital camera is used. The process does not require recording wave interference at all, and thus the twin-image problem is avoided. MVP holograms are generated by first acquiring two-dimensional (2-D) projections (regular intensity images) of a 3-D scene from various perspective points of view. Then, the acquired MVPs are digitally processed to yield the digital hologram of the scene. In contrast to the composite hologram [20, 21], an MVP hologram is essentially equivalent to a conventional digital hologram of the scene recorded from the central point of view (under the assumption that there are no phase objects in the 3-D scene). The MVP acquisition process is performed under incoherent illumination, where neither an extreme stability of the optical system nor a powerful, highly coherent laser source is required, as they are in conventional techniques of recording holograms.

Once the MVP hologram is generated, one can decide whether to reconstruct the recorded 3-D scene optically or digitally. One method of reconstructing the 3-D scene optically is to encode the MVP hologram into a computer-generated hologram (CGH) [1, 2, 22] with real and positive transparency values, and to print the CGH on a slide or display it on an SLM. Then, the recorded 3-D scene can be reconstructed by illuminating the CGH with a coherent beam [1, 23]. Alternatively, the recorded 3-D scene can be reconstructed digitally by a Fresnel backpropagator [3, 23], which actually means convolving the complex matrix of the MVP hologram with quadratic phase functions scaled according to the various axial reconstruction distances.
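
To make the digital option concrete, the following minimal numpy sketch implements such a Fresnel backpropagation as an FFT-based circular convolution with a quadratic phase kernel. The function name, the variable names, and the parameters (`wl` for the reconstruction wavelength, `dp` for the hologram pixel pitch) are our own illustration, not code from the cited references:

```python
import numpy as np

def fresnel_backpropagate(H, z_r, wl, dp):
    """Reconstruct the plane at axial distance z_r by convolving the
    complex hologram H with a scaled quadratic phase function."""
    Ny, Nx = H.shape
    m = np.arange(-(Ny // 2), Ny - Ny // 2)
    n = np.arange(-(Nx // 2), Nx - Nx // 2)
    M, N = np.meshgrid(m, n, indexing="ij")
    # Quadratic (Fresnel) phase kernel sampled on the camera pixel grid.
    kernel = np.exp(1j * np.pi * ((M * dp) ** 2 + (N * dp) ** 2) / (wl * z_r))
    # Circular convolution via the FFT; the magnitude is the in-focus image.
    S = np.fft.fft2(H) * np.fft.fft2(np.fft.ifftshift(kernel))
    return np.abs(np.fft.ifft2(S))
```

Scanning `z_r` over a range of distances reveals the different planes of the reconstructed volume, as described above.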

As mentioned above, the overall process of generating the MVP hologram can be divided into two main stages: the optical acquisition stage and the digital processing stage. In the optical acquisition stage, MVPs of the 3-D scene are captured, whereas in the digital processing stage, mathematical operations are performed on these perspective projections to yield a complex function representing the digital hologram of the 3-D scene. Section 2 reviews various MVP acquisition methods, whereas Section 3 reviews several methods of processing the acquired MVPs to different types of holograms. Section 4 reviews several possible applications for this technology, and Section 5 concludes the paper.

2. Optical Acquisition of the Multiple Viewpoint Projections

Capturing the MVPs can be performed in several ways, which include shifting the digital camera mechanically, utilizing a microlens array for acquiring viewpoint projections simultaneously, or acquiring only a small number of extreme projections and predicting the middle projections digitally using a 3-D interpolation algorithm. These MVP acquisition methods are reviewed in the following subsections.

2A. Shifting the Camera

In Refs. [24, 25, 26, 27, 28], the MVPs were acquired by shifting the camera mechanically, so that the camera captured only a single perspective projection at each location. This method was the first to be implemented for generating MVP holograms, as it does not require any special hardware or capturing algorithms. However, this acquisition process is quite slow. For example, a 2-D MVP hologram with 256×256 pixels requires 65,536 MVPs (since, as we show in Section 3, each perspective projection yields a single pixel in the 2-D MVP hologram). This is a large number of MVPs, and capturing them by manual mechanical movements is a slow and problematic procedure. A step motor has been used to make this process faster and more accurate. However, the mechanical movement of the camera, even when performed by a step motor, is not adequate when the object moves faster than one acquisition cycle, since the wrong MVPs would be captured in this situation. Therefore, alternative capturing methods are needed; these are described next.

2B. Spatial Multiplexing for Completely-Optically Acquiring Multiple Viewpoint Projections

To avoid mechanical movements of the camera, the acquisition of the MVPs can be accomplished by spatial multiplexing. Positioning several cameras in an array is one option. However, this method requires multiple cameras, and as a result the entire camera array is quite expensive. Instead, in Ref. [29], it was proposed to use a microlens array to capture a large number of MVPs simultaneously in a single camera shot. This acquisition method is similar in some respects to that usually used in the integral imaging field [30, 31]. However, in the present case the goal is to produce an incoherent hologram that can, for example, be reconstructed optically, without using the microlens array or the multiple elemental images again, as needed in integral imaging. Additionally, in contrast to MVP holography, in most integral imaging methods only a small part of the 3-D scene (a subimage) is viewed by each microlens in the array, whereas in our case the entire 3-D scene is viewed by each microlens. Reference [32] presents a different method, in which subimages, rather than full projections, are first captured by the microlens array and then used to generate the MVP hologram.

Figure 1a shows the experimental setup used for directly capturing the multiple full-scene projections by the microlens array. A plano–convex lens L1, positioned at a distance of its focal length f1 from the 3-D scene, is attached to the microlens array and utilized to collimate the beams coming from the 3-D scene and thus to increase the number of microlenses participating in the process. A spherical lens L2 projects the microlens array image plane upon the camera with a magnification of z2/z1. Then, the camera captures the entire microlens-array image plane in a single exposure and sends the image to the computer for the processing stage. Figure 1b shows several selected projections cut from different parts of the overall microlens-array image plane recorded by the camera. Part of the entire image plane is illustrated in View 1 and Media 1. As shown in Fig. 1b, the gap between the two letter objects changes depending on the location of the projection on the image plane. This is the effect that leads to the 3-D volumetric properties of the hologram obtained at the end of the digital process. Following the acquisition of the MVPs, different types of MVP holograms can be generated, according to the specific digital process carried out on the acquired projections. These possible digital processes are discussed in Section 3. Figure 1c shows the amplitude and the phase of the 2-D complex Fourier hologram (the digital process of which is described in Subsection 3B) that has been produced from the acquired projections. Reconstructing this hologram can be done optically, by illuminating it with a coherent converging spherical wave, or alternatively digitally, by first computing the hologram’s inverse Fourier transform (IFT) and then simulating a Fresnel propagation to reveal different planes along the optical axis of the 3-D reconstructed image. The two best-in-focus reconstructed planes are shown in Fig. 1d. The fact that in each reconstructed plane only a single letter, the “I” or the “H,” is in focus, whereas the other letter, the “H” or the “I,” respectively, is out of focus, validates that volume information is indeed encoded into the hologram. View 2, as well as Media 2 and Media 3, shows the continuous Fresnel propagation along the optical axis slice by slice and the entire reconstructed volume.
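
Since the camera records the entire microlens-array image plane in one exposure, the individual projections must be cut out of the recorded frame before any hologram can be computed. The following sketch assumes an ideal, perfectly aligned lenslet grid with equally sized elemental images (a real setup would need calibration of the lens centers); the function name and array shapes are our own assumptions:

```python
import numpy as np

def split_projections(image_plane, Vy, Vx):
    """Cut the recorded microlens-array image plane into its Vy x Vx
    elemental projections, returned as a (Vy, Vx, py, px) stack."""
    Ny, Nx = image_plane.shape
    py, px = Ny // Vy, Nx // Vx            # pixels per elemental image
    trimmed = image_plane[:Vy * py, :Vx * px]
    return trimmed.reshape(Vy, py, Vx, px).swapaxes(1, 2)
```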

In spite of the advantages of the method described above, generating a high-resolution hologram requires an array with more microlenses, because each projection yields only a single pixel in the 2-D complex matrix of the MVP hologram. However, since the total size of the microlens array is limited (as is the digital camera imager size), having more microlenses in the array leads to a smaller diameter for each microlens. This results in a low resolving power of each microlens (as well as in a low number of camera pixels assigned to the elemental image of each microlens), causing a poor imaging resolution of the perspective projections. On the other hand, having microlenses with larger diameters leads to a better projection resolution, but then fewer microlenses can be used, and hence there are fewer pixels in the resulting hologram.

2C. Shifting the Camera and Digital Postprocessing

To solve the problem mentioned at the end of the previous subsection, Ref. [33] has presented a possible way to reduce the number of MVPs, so that the camera acquires only a small number of perspective projections with high resolution, typically the extreme ones. Then the computer synthesizes the middle projections by using a 3-D interpolation algorithm called the view synthesis algorithm [34, 35]. Given two viewpoint projections, the algorithm first calculates the correspondence map, containing the displacements of each pair of corresponding pixels in the two given projections, and then predicts how the scene would look from new middle viewpoints by interpolating the locations and intensity values of the corresponding pixels in the two given projections. A schematic of the method is illustrated in Fig. 2a. View 3 and Media 4 show the entire set of MVPs generated by the process. Again, a relative “movement” of the different objects can be seen throughout the MVPs. However, this time, this “movement” is on the horizontal axis only, since the two original projections were optically acquired on the horizontal transverse axis. After synthesizing all the (optically and digitally obtained) projections, an MVP hologram was computed. The hologram shown in Fig. 2b is a one-dimensional (1-D) Fourier hologram generated by the process explained in Subsections 3A and 3B. The best-in-focus planes obtained from this hologram are shown in Fig. 2c, illustrating that, although only two perspective projections are optically acquired, a good-quality 3-D reconstructed image can still be obtained. View 4 as well as Media 5 and Media 6 show the continuous Fresnel propagation along the optical axis slice by slice and the entire reconstructed volume.
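
The following toy sketch illustrates the interpolation step of such a view synthesis, assuming the correspondence map has already been computed (for example, by a stereo-matching algorithm). It forward-warps each pair of corresponding pixels to a linearly interpolated column and blends their intensities; this is our own simplified illustration of the idea, not the algorithm of Refs. [34, 35], and it ignores occlusions and hole filling:

```python
import numpy as np

def synthesize_middle_view(left, right, corr_x, t):
    """Render a middle view at fraction t in [0, 1] between the two
    acquired projections. corr_x[y, x] is the integer column in `right`
    corresponding to pixel (y, x) in `left`."""
    Ny, Nx = left.shape
    middle = np.zeros((Ny, Nx))
    weight = np.zeros((Ny, Nx))
    ys, xs = np.mgrid[0:Ny, 0:Nx]
    cx = np.clip(corr_x, 0, Nx - 1)
    # Interpolate both the location and the intensity of each pixel pair.
    x_mid = np.clip(np.round((1 - t) * xs + t * cx).astype(int), 0, Nx - 1)
    intensity = (1 - t) * left + t * right[ys, cx]
    np.add.at(middle, (ys, x_mid), intensity)
    np.add.at(weight, (ys, x_mid), 1.0)
    return middle / np.maximum(weight, 1e-9)
```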

In Ref. [33], the camera was moved mechanically to acquire the MVPs. Therefore, in spite of reducing the number of acquired MVPs and capturing projections at an acceptable resolution, there is still a problem in utilizing this acquisition method for generating holograms of dynamic scenes.

2D. Spatial Multiplexing Using a Macrolens Array and Digital Postprocessing

To generate MVP holograms of dynamic scenes and still be able to get an acceptable resolution for each projection, we hybridized in Ref. [36] the two methods presented in Refs. [29, 33]. This was performed by building the 3×3 macrolens array shown in Fig. 3a. To produce this array, we used nine standard negative lenses, each of which had a focal length of 25 cm and an original diameter of 5 cm. To take advantage of the entire array size, each lens was cut into a square of 3.53×3.53 cm, and a 3×3 macrolens array of 10.6×10.6 cm was formed by attaching the cut lenses with an optical adhesive. Figure 3b shows the 3×3 projections acquired by the macrolens array in one camera exposure. The macrolens array provides a convenient way to capture a small number of projections on a 2-D grid simultaneously. Thus, it is possible to acquire the projections of a dynamic scene. After the projections are acquired, the view synthesis algorithm is applied, so that the middle projections are digitally synthesized. Then, a 2-D MVP hologram of the 3-D scene can be generated by applying one of the digital processes described in the next section to the entire set of perspective projections.

3. Hologram Generation by Digital Processing of the Multiple Viewpoint Projections

Following the acquisition of the MVPs, the digital processing stage is carried out to yield the digital hologram of the 3-D scene. Our group has shown [24, 25, 28, 29, 33, 36, 37, 38] that by processing the MVPs it is possible to generate Fourier, Fresnel, image, modified Fresnel, and protected correlation holograms. In addition, for each of the above types, it is possible to generate both 1-D and 2-D MVP holograms. This section reviews the digital processes to obtain each of these holograms.

3A. One-Dimensional and Two-Dimensional Holograms

Both 1-D MVP holograms [24, 33, 37, 38] and 2-D MVP holograms [25, 26, 27, 28, 29, 32, 36, 38] can be produced, depending on the MVP acquisition axes and the subsequent digital processing. The 1-D holograms are easier to produce, because in this case the MVPs are captured along a single transverse axis only, unlike the 2-D holograms, in which the MVPs are captured on a transverse 2-D grid. On the other hand, 2-D holograms have the advantage of encoding the 3-D information into the two transverse axes, and thus 2-D (rather than 1-D) volumetric effects are obtained in the 3-D image reconstructed from the hologram. In the case of the 2-D MVP hologram, the digital process includes the multiplication of each projection by a certain complex point spread function (PSF) and the summation of the resulting inner product into the corresponding pixel of the hologram matrix. In contrast, in the case of the 1-D MVP hologram, each row of a projection is multiplied by a 1-D PSF, and the resulting sums of the 1-D inner products form the corresponding column of the hologram matrix.

For a mathematical presentation of the digital process in the case of the 1-D MVP hologram, let us assume that $(2K+1)$ projections of the 3-D scene are acquired along the horizontal axis only. We number the projections by $m$, so that the middle projection is denoted by $m=0$, the rightmost projection by $m=K$, and the leftmost projection by $m=-K$. Then, to generate the 1-D MVP hologram of the scene, each row of the $m$th projection $P_m$ is multiplied by a certain PSF, and the products are summed into the $(m,n)$th pixel of a complex matrix as follows:

$$H_1(m,n)=\iint P_m(x_p,y_p)\,E_1(x_p,\,y_p-n\Delta_p)\,\mathrm{d}x_p\,\mathrm{d}y_p,\tag{1}$$
where $E_1(x_p,y_p)$ represents the generating PSF of the 1-D hologram, $x_p$ and $y_p$ are the axes of the projection plane, $n$ is the row number in the complex matrix $H_1$, and $\Delta_p$ is the pixel size of the digital camera. $E_1(x_p,y_p)$ is defined as follows:
$$E_1(x_p,y_p)=A_1(b_x x_p)\exp[i\,g_1(b_x x_p)]\,\delta(y_p),\tag{2}$$
where $A_1$ and $g_1$ are certain functions of $x_p$ that may be defined differently for each type of MVP hologram; $b_x$ is an adjustable parameter (with units that keep the arguments of $A_1$ and $g_1$ unitless), which may or may not depend on the projection index $m$; and $\delta$ is the Dirac delta function. According to Eq. (1), each projection contributes a different column to the complex matrix $H_1(m,n)$, which, at the end of the digital process, represents the 1-D MVP hologram of the 3-D scene.
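
In discrete form, the delta function in Eq. (2) simply pairs row $n$ of each projection with row $n$ of the hologram, so that Eq. (1) reduces to one inner product per row. A minimal sketch, with array shapes and names of our own choosing:

```python
import numpy as np

def mvp_hologram_1d(projections, psf_1d):
    """Discrete version of Eq. (1).

    projections : (num_views, Ny, Nx) stack of intensity images P_m
    psf_1d      : (Nx,) complex samples of A1(bx*xp) * exp(i*g1(bx*xp))
    Returns the (Ny, num_views) complex hologram H1: projection m fills
    column m, and the delta in y_p maps row n of P_m to row n of H1.
    """
    num_views, Ny, Nx = projections.shape
    H1 = np.empty((Ny, num_views), dtype=complex)
    for m in range(num_views):
        # The x_p integral becomes an inner product of each row with the PSF.
        H1[:, m] = projections[m] @ psf_1d
    return H1
```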

Excluding Fourier holograms [in which a preliminary 1-D IFT and a convolution with a 1-D quadratic phase function are used], to digitally obtain the reconstructed planar image $s_1(m,n;z_r)$ located at an axial distance $z_r$ from the 1-D hologram, the hologram is digitally convolved with a reconstructing PSF as follows:

$$s_1(m,n;z_r)=\big|H_1(m,n)\ast R_1(m,n;z_r)\big|,\tag{3}$$
where $\ast$ denotes a 1-D convolution, and $R_1(m,n;z_r)$ is the reconstructing PSF of the 1-D hologram. This PSF is given by
$$R_1(m,n;z_r)=A_1\!\left(\frac{m\Delta_p}{z_r}\right)\exp\!\left[-i\,g_1\!\left(\frac{m\Delta_p}{z_r}\right)\right]\delta(n\Delta_p),\tag{4}$$
where $A_1$ and $g_1$ are the same functions used for the generating PSF of the 1-D hologram [Eq. (2)], and the opposite phase sign implements the conjugation required for the reconstruction.

To present a similar mathematical description for the case of the 2-D MVP hologram, let us assume that $(2K+1)$ horizontal by $(2K+1)$ vertical MVPs are acquired on a 2-D transverse grid. We number the projections by $m$ and $n$, so that the middle projection is denoted by $(m,n)=(0,0)$, the upper-right projection by $(m,n)=(K,K)$, and the lower-left projection by $(m,n)=(-K,-K)$. Then, to generate a 2-D MVP hologram, the $(m,n)$th projection $P_{m,n}(x_p,y_p)$ is multiplied by a certain PSF, and the product is summed into the $(m,n)$th pixel of a complex matrix as follows:

$$H_2(m,n)=\iint P_{m,n}(x_p,y_p)\,E_2(x_p,y_p)\,\mathrm{d}x_p\,\mathrm{d}y_p,\tag{5}$$
where $E_2(x_p,y_p)$ represents the generating PSF of the 2-D hologram. This PSF is defined as follows:
$$E_2(x_p,y_p)=A_2(b_x x_p,\,b_y y_p)\exp[i\,g_2(b_x x_p,\,b_y y_p)],\tag{6}$$
where $A_2$ and $g_2$ are certain functions of $(x_p,y_p)$ that may be defined differently for every type of MVP hologram, as discussed below, and $b_x$ and $b_y$ are adjustable parameters (with units that keep the arguments of $A_2$ and $g_2$ unitless), which may or may not depend on $m$ and $n$. The process manifested by Eq. (5) is repeated for all the projections, but in contrast to the 1-D case, in the 2-D case each projection contributes a single pixel to the hologram matrix rather than a column of pixels. At the end of this digital process, the obtained 2-D complex matrix $H_2(m,n)$ represents the 2-D MVP hologram of the 3-D scene.
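
Discretely, Eq. (5) is just a weighted sum of each projection's pixels, yielding one complex number per viewpoint. A one-line numpy sketch (shapes and names are our assumptions):

```python
import numpy as np

def mvp_hologram_2d(projections, psf_2d):
    """Discrete version of Eq. (5).

    projections : (Vy, Vx, Ny, Nx) stack of intensity images P_{m,n}
    psf_2d      : (Ny, Nx) complex samples of the generating PSF E2
    Each projection, multiplied by the PSF and summed, contributes a
    single pixel, so the returned hologram H2 has shape (Vy, Vx).
    """
    return np.tensordot(projections, psf_2d, axes=([2, 3], [0, 1]))
```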

Excluding Fourier holograms (in which a preliminary 2-D IFT and a convolution with a 2-D quadratic phase function are used), the reconstructed planar image $s_2(m,n;z_r)$ located at an axial distance $z_r$ from the 2-D hologram is obtained by digitally convolving the hologram with a reconstructing PSF as follows:

$$s_2(m,n;z_r)=\big|H_2(m,n)\ast R_2(m,n;z_r)\big|,\tag{7}$$
where $\ast$ denotes a 2-D convolution this time, and $R_2(m,n;z_r)$ is the reconstructing PSF of the 2-D hologram. This PSF is given by
$$R_2(m,n;z_r)=A_2\!\left(\frac{m\Delta_p}{z_r},\frac{n\Delta_p}{z_r}\right)\exp\!\left[-i\,g_2\!\left(\frac{m\Delta_p}{z_r},\frac{n\Delta_p}{z_r}\right)\right],\tag{8}$$
where $A_2$ and $g_2$ are the same functions used for the generating PSF of the 2-D hologram [Eq. (6)]. The reconstructing PSFs given by Eqs. (4) and (8) are used for digital reconstruction. Optical reconstruction of Fourier, Fresnel, and image holograms can be performed by illuminating them with a coherent wave (a plane wave in the case of Fresnel and image holograms, and a converging spherical wave in the case of the Fourier hologram). Even if a new type of digital hologram is defined by the MVP technique, an optical reconstruction of this hologram can still be obtained by digitally converting it to a regular type of hologram (such as a Fourier, Fresnel, or image hologram). Another possibility for obtaining an optical reconstruction in these cases is to use an optical correlator [39].
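
A sketch of the digital reconstruction of Eqs. (7) and (8), assuming `A2` and `g2` are supplied as Python callables matching the generating PSF (our own framing of the computation, not code from the cited works):

```python
import numpy as np
from scipy.signal import fftconvolve

def reconstruct_2d(H2, A2, g2, z_r, dp):
    """Convolve the 2-D hologram with the scaled, phase-conjugated
    reconstructing PSF of Eq. (8) and return |...| as in Eq. (7).
    dp is the camera pixel size Delta_p."""
    Vy, Vx = H2.shape
    m = np.arange(-(Vy // 2), Vy - Vy // 2)
    n = np.arange(-(Vx // 2), Vx - Vx // 2)
    M, N = np.meshgrid(m, n, indexing="ij")
    u, v = M * dp / z_r, N * dp / z_r       # scaled, unitless arguments
    R2 = A2(u, v) * np.exp(-1j * g2(u, v))  # opposite phase sign
    return np.abs(fftconvolve(H2, R2, mode="same"))
```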

As an example of a specific reconstructing PSF, Fresnel propagation can be implemented digitally using Eq. (7), where in this case $R_2(m,n;z_r)$ is a 2-D quadratic phase function given by

$$R_2(m,n;z_r)=\exp\!\left[\frac{i\pi}{\lambda z_r}\Big((m\Delta_p)^2+(n\Delta_p)^2\Big)\right],\tag{9}$$
where $\lambda$ is the reconstruction wavelength.

After choosing the hologram dimensionality (1-D or 2-D), several types of MVP holograms can be generated, depending on the choice of the PSF used to multiply the MVPs. Several types of MVP holograms have been reported in the literature. These include Fourier holograms [24, 25, 26, 27, 28, 29, 33], Fresnel holograms [26, 27, 28], Fresnel–Fourier holograms [28], and image holograms [28]. In addition, it is possible to produce new types of holograms that offer various advantages over the known types [37, 38].

3B. Fourier Holograms

To generate a 1-D MVP Fourier hologram, Eq. (1) should be used, where the generating PSF of the 1-D Fourier hologram is defined as follows:

$$E_1(x_p,y_p)=\exp(i\,b\,m\,x_p)\,\delta(y_p),\tag{10}$$
where $b$ is an adjustable parameter. Similarly, to use the MVPs to generate a 2-D MVP Fourier hologram, Eq. (5) should be used, where the generating PSF of this hologram is given by
$$E_2(x_p,y_p)=\exp[i\,b\,(m\,x_p+n\,y_p)].\tag{11}$$

This means that the generation of MVP Fourier holograms is performed by multiplying each of the MVPs by a different linear phase function, with a frequency that depends on the relative position of the projection in the entire projection set ($m$ and $n$). In the mathematical proof of this method [24], it is shown that the method is valid only if we assume that the MVPs are acquired across a relatively narrow angular range (30–40°). An example of a 1-D MVP Fourier hologram is shown in Fig. 2b, and an example of a 2-D MVP Fourier hologram is shown in Fig. 1c.
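
A direct sketch of this Fourier-hologram process, multiplying each projection by its index-dependent linear phase [Eqs. (5) and (11)] and summing to one pixel (array shapes and names are our assumptions):

```python
import numpy as np

def mvp_fourier_hologram_2d(projections, b, dp):
    """2-D MVP Fourier hologram: the linear-phase frequency grows with
    the signed projection indices (m, n)."""
    Vy, Vx, Ny, Nx = projections.shape
    xp = (np.arange(Nx) - Nx // 2) * dp
    yp = (np.arange(Ny) - Ny // 2) * dp
    XP, YP = np.meshgrid(xp, yp)              # (Ny, Nx) grids of x_p and y_p
    H2 = np.empty((Vy, Vx), dtype=complex)
    for i in range(Vy):
        for j in range(Vx):
            m, n = j - Vx // 2, i - Vy // 2   # signed viewpoint indices
            E2 = np.exp(1j * b * (m * XP + n * YP))   # Eq. (11)
            H2[i, j] = np.sum(projections[i, j] * E2)
    return H2
```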

3C. Fresnel and Image Holograms

According to Refs. [27, 28], it is possible to generate Fresnel, image, and other types of holograms by using an MVP Fourier hologram generated beforehand. The basic idea is to take the MVP Fourier hologram obtained by the method elaborated in Subsection 3B and compute its IFT. Then, by a digital Fresnel propagation [a convolution with quadratic phase functions as defined by Eq. (9)], either a Fresnel hologram or an image hologram can be generated, depending on the propagation distance.

However, this method has several disadvantages. First, since the generation of the Fresnel hologram, or of other types of Fourier-based holograms, is performed indirectly, the resulting hologram is approximate and inaccurate, and the process requires redundant calculations. Digital errors may also accumulate during the various transformations. In addition, since the original Fourier hologram is limited to small angles of acquisition, the resulting holograms are limited to small angles of acquisition as well.

3D. Modified Fresnel Hologram (DIMFH)

To avoid these problems, the digital incoherent modified Fresnel hologram (DIMFH) has been proposed in Refs. [32, 37, 38]. This Fresnel hologram can be generated by processing the MVPs straightforwardly, without redundant calculations, approximations, or assumptions.

The 1-D DIMFH is generated by multiplying each projection by the 1-D quadratic phase function

$$E_1(x_p,y_p)=\exp\!\big(i\,2\pi b^2 x_p^2\big)\,\delta(y_p),\tag{12}$$
and summing the product, according to Eq. (1), into a column of the final hologram matrix. Similarly, the 2-D DIMFH processing is carried out by multiplying each projection by the 2-D quadratic phase function
$$E_2(x_p,y_p)=\exp\!\big[i\,2\pi b^2\big(x_p^2+y_p^2\big)\big],\tag{13}$$
and summing the product, according to Eq. (5), into the corresponding pixel of the final 2-D hologram. The DIMFH is obtained by multiplying all the MVPs by the same quadratic PSF, independently of the projection indices $m$ and $n$ (rather than by a changing PSF, as in the case of the MVP Fourier hologram). Since each projection is multiplied by the same spatial function, this process is a spatial correlation between the observed 3-D scene and the PSF. The movement required by the correlation operation occurs naturally due to the relative movement between the camera and the scene. The summation of the products of the various projections and the predetermined PSF completes the correlation operation, and an incoherent correlation hologram is generated. The 3-D information of the scene is stored in a Fresnel hologram due to the fact that each 2-D transverse function along the $z$ axis is effectively correlated with a differently scaled quadratic phase function having a different number of cycles. This is a consequence of the parallax effect, according to which, throughout the MVPs, different objects located at different axial distances have different “velocities” relative to the PSF, the farther objects “moving” slower than the closer ones. In contrast to other correlation holograms [40, 41], the present hologram is created from real existing objects illuminated by incoherent white light, and its generation does not involve wave interference.
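
Because the DIMFH uses one fixed PSF for all viewpoints, its generation reduces to the generic 2-D procedure sketched after Eq. (6), with the quadratic phase function of Eq. (13). A short sketch of the PSF construction (reusing the hypothetical `mvp_hologram_2d` from above):

```python
import numpy as np

def dimfh_psf_2d(Ny, Nx, b, dp):
    """Quadratic generating PSF of the 2-D DIMFH, Eq. (13). The same
    array multiplies every projection, turning Eq. (5) into a spatial
    correlation between the scene and this PSF."""
    xp = (np.arange(Nx) - Nx // 2) * dp
    yp = (np.arange(Ny) - Ny // 2) * dp
    XP, YP = np.meshgrid(xp, yp)
    return np.exp(1j * 2 * np.pi * b**2 * (XP**2 + YP**2))

# e.g.:  H2 = mvp_hologram_2d(projections, dimfh_psf_2d(Ny, Nx, b, dp))
```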

It has been shown in Refs. [28, 37] that the transverse and longitudinal magnifications of 1-D incoherent correlation holograms are given by

$$M_{1,x}=\frac{\Delta_p}{\alpha},\qquad M_{1,y}=\frac{f}{z_s}=M,\qquad M_{1,z}=\frac{\Delta_p}{b f \alpha},\tag{14}$$
where $\alpha$ is the gap between two adjacent projections, $f$ is the focal length of the imaging lens, $z_s$ is the axial coordinate of the inspected object, and $M$ is the magnification of the imaging lens. Similarly, the magnifications of 2-D incoherent correlation holograms are given by
$$M_{2,x}=M_{2,y}=\frac{\Delta_p}{\alpha},\qquad M_{2,z}=\frac{\Delta_p}{b f \alpha}.\tag{15}$$

It is evident from Eqs. (14) and (15) that, contrary to conventional imaging systems, the magnifications of correlation holograms along the acquisition axes ($x$ for the 1-D case and $x$–$y$ for the 2-D case) are independent of the axial positions of the objects in the 3-D scene. This behavior can be explained intuitively as follows. Although farther objects look smaller than closer objects in each captured projection, they also “move” slower throughout the projections because of the parallax effect. The slower “movement” broadens the correlation with the PSF in such a way that the reduction of the image size in each projection is precisely compensated by the growth of the correlation result. This feature can be useful for 3-D object recognition, as shown in Subsection 4C. However, this effect can also be eliminated during the reconstruction process. To do so, the reconstructed image should be scaled by $M/M_{1,x}=(f/z_s)/(\Delta_p/\alpha)=1/(b z_r)$ [since $z_r/z_s=\Delta_p/(b f \alpha)$] along the acquisition axes for each reconstructed plane located at $z_r$. This scaling process is demonstrated next for the 1-D case.
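
For instance, the per-plane rescaling can be a one-line resampling along the acquisition axis; the sketch below assumes `b` carries units of inverse length, so that the factor 1/(b*z_r) is dimensionless (function and parameter names are ours):

```python
from scipy.ndimage import zoom

def rescale_plane(plane, b, z_r):
    """Resample a reconstructed plane along the horizontal (acquisition)
    axis by M/M_x = 1/(b*z_r) to restore the natural perspective, in
    which farther objects appear smaller."""
    return zoom(plane, (1.0, 1.0 / (b * z_r)), order=1)
```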

To generate and reconstruct incoherent correlation holograms, we implemented, in Ref. [38], the optical system illustrated in Fig. 4a. Three equal-size USAF resolution charts were printed and positioned in the scene in front of a dark background and were illuminated by halogen white light. The digital camera was positioned on a micrometer slider and captured 1200 projections of the 3-D scene. Figure 4b shows the two most extreme projections and the central projection out of the 1200 projections captured by the camera. The full set of MVPs is shown in View 5 and Media 7. Based on these projections, we generated a 1-D DIMFH, according to Eqs. (1) and (12), by multiplying each projection by the 1-D quadratic phase function and summing the result into the corresponding column of the hologram complex matrix. The amplitude and phase of the resulting 1-D DIMFH are shown in Fig. 5a. According to Eqs. (3) and (4), the digital reconstruction of this 1-D DIMFH was performed by a 1-D convolution of the hologram with 1-D quadratic phase functions having a phase sign opposite to that of the generating PSF. The phase distributions of the three reconstructing PSFs and the corresponding best-in-focus reconstructed planes are shown in Figs. 5b and 5c, respectively. In each of the planes shown in Fig. 5c, a different USAF resolution chart is in focus, whereas the other two charts are out of focus. This validates the success of the holographic process of the 1-D DIMFH. As explained above, resampling (rescaling) these reconstructed planes along the horizontal axis is required for retaining the original aspect ratios of the objects. The resampled best-in-focus reconstructed planes are shown in Fig. 5d. Figure 5e shows the corresponding zoomed-in images of the best-in-focus charts. From this figure, one can conclude that the far objects in the 3-D scene have a reduced horizontal reconstruction resolution compared to the close objects in the scene.

3E. Protected Correlation Hologram (DIPCH)

The digital incoherent protected correlation hologram (DIPCH) [38] is another type of incoherent correlation hologram. This hologram has two advantages over the Fresnel hologram in general, and over the DIMFH in particular. First, since a random-constrained PSF is used to generate the hologram, only an authorized user who knows this PSF can reconstruct the scene encoded into the hologram. Thus, the DIPCH can be used as a method of encrypting the observed scene. Second, the reconstruction obtained from the DIPCH, compared to that of the DIMFH, has a significantly higher transverse resolution for the far objects in the 3-D scene.

The 1-D DIPCH process is still defined by Eqs. (1) and (3). However, this time the generating PSF is a space-limited random function that fulfills the constraint that its FT is a phase-only function. To find this PSF, we used the projection onto constraint sets (POCS) algorithm [42, 43, 44, 45]. The POCS algorithm used to find this PSF is illustrated in Fig. 6a. The POCS is an iterative algorithm that bounces between the PSF domain and its spatial-frequency spectrum domain, using an FT and an IFT. In each domain, the function is projected onto the corresponding constraint set. The two constraints of the POCS express the two properties required of the PSF of the DIPCH. First, the FT of the PSF should be a phase-only function. This requirement enables proper reconstruction of the DIPCH. Hence, the constraint of the POCS in the spectral domain is the set of all phase-only functions, and each Fourier-transformed PSF is projected onto this constraint by setting its magnitude distribution to the constant 1. The other property of the PSF is that it should be space-limited to a relatively narrow region close to, but outside, the origin. This condition reduces the reconstruction noise from the out-of-focus objects, because the overlap occurring during the cross correlation between the resampled space-limited reconstructing PSF and the hologram in out-of-focus regions is lower than the overlap in the case of a widespread PSF. Of course, the narrower the existence region of the PSF, the lower the noise. However, narrowing the existence region makes it difficult for the POCS algorithm to converge to a PSF that satisfies both constraints within an acceptable error. In any event, the constraint set in the PSF domain is the set of all complex functions that are identically zero in every pixel outside the predefined narrow existence region. The projection onto the constraint set in the PSF domain is performed by multiplying the PSF by a function that equals 1 inside the narrow existence region of the PSF and equals 0 elsewhere. In the case of the 1-D DIPCH, the random-constrained PSF is limited to a narrow strip of columns, whereas in the case of the 2-D DIPCH, this PSF is limited to a narrow ring. At the end of the process, the POCS algorithm yields a suitable random-constrained PSF that is used in the hologram generation process. Figures 6b and 6c show the phase distributions of the resulting PSFs of the 1-D and 2-D DIPCHs, respectively, after applying the POCS algorithm for each of these cases.
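
A compact sketch of this POCS iteration, for a PSF stored as a 2-D complex array with a boolean `support_mask` marking the allowed existence region (a strip of columns for the 1-D case, a narrow ring for the 2-D case). The initialization, iteration count, and names are our assumptions, and convergence should be monitored against both constraints in practice:

```python
import numpy as np

def pocs_psf(support_mask, num_iter=200, seed=0):
    """Alternately enforce a phase-only Fourier spectrum and a narrow
    spatial support, as in the POCS search for the DIPCH PSF."""
    rng = np.random.default_rng(seed)
    # Start from a random complex function restricted to the support.
    psf = support_mask * np.exp(2j * np.pi * rng.random(support_mask.shape))
    for _ in range(num_iter):
        spectrum = np.fft.fft2(psf)
        # Spectral projection: keep the phase, force unit magnitude.
        spectrum = np.exp(1j * np.angle(spectrum))
        psf = np.fft.ifft2(spectrum)
        # Spatial projection: zero everything outside the existence region.
        psf = psf * support_mask
    return psf
```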

Let us compare the reconstruction resolutions of the DIMFH and the DIPCH. Far objects captured by the DIMFH are reconstructed with a reduced resolution for two reasons: (a) Due to the parallax effect, farther objects “move” slower throughout the projections, and therefore they sample a magnified version of the generating PSF. This magnified version has a narrower bandwidth, and thus the reconstruction resolution of farther objects decreases. (b) The quadratic phase function used in the DIMFH has lower frequencies as one approaches its origin. Since far objects are correlated with the central part of the quadratic phase function along a range that becomes shorter as the object gets farther, the bandwidth of the DIMFH of far objects becomes even narrower, beyond the bandwidth reduction mentioned in (a). In contrast to the DIMFH, the spatial frequencies of the DIPCH’s PSF are distributed uniformly over its entire area. Therefore, the DIPCH sustains resolution reduction of far objects only due to reason (a). Hence, the images of far objects reconstructed from the DIPCH, besides being protected by the random-constrained PSF, also have a higher transverse resolution.

In Ref. [38], the same MVPs of the 1-D DIMFH [part of which are shown in Fig. 4b] were used to generate a 1-D DIPCH. As mentioned above, the digital process of the DIPCH includes multiplying each of the acquired projections by the random-constrained PSF computed by the POCS algorithm (Fig. 6). Each inner product is summed into a single column of the 1-D DIPCH, the amplitude and phase of which are shown in Fig. 7a. To reconstruct this hologram, the DIPCH is convolved with the conjugate of a scaled version of the same PSF used for the hologram generation. The phases of the three reconstructing PSFs yielding the best-in-focus reconstructed planes are shown in Fig. 7b. The corresponding reconstructed planes are shown in Fig. 7c. As before, in each of these planes, a different USAF chart is in focus, whereas the other two charts are out of focus. Once again, a resampling process has to be applied along the horizontal axis of the reconstructed planes to retain the original aspect ratios of the objects. Figure 7d shows these resampled best-in-focus reconstructed planes, whereas Fig. 7e shows the corresponding zoomed-in images of the in-focus charts in these three planes. Comparing Figs. 5e and 7e, we conclude that farther objects have a higher resolution in the DIPCH than in the DIMFH. This property signifies one advantage of the DIPCH over the DIMFH, while the other advantage of the DIPCH is that it is protected by the random-constrained PSF used for generating and reconstructing the hologram.

Tables 1 and 2 gather all the generating and reconstructing PSFs of the MVP holograms discussed in this paper, for the 1-D and 2-D cases, respectively.

4. Selected Applications

The prospective goal of the presented MVP holographic techniques is to yield a simple and portable digital holographic camera that works under regular illumination conditions and does not require special stability conditions. MVP holography might then be preferred whenever 3-D acquisition is required. This can make holography much more attractive to a variety of practical applications. We review several selected applications that signify the advantages of using MVP holography.

4A. Real-Time 3-D Video Acquisition and Video Conferencing

As mentioned above, the MVP technique yields a hologram that is essentially equivalent to an optical hologram and thus can easily be reconstructed optically by illuminating it with a coherent wave. Thus, real-time video acquisition in general, and video conferencing over the internet in particular, is one of the prospective applications of MVP holography. For the latter application, on one side of the network, MVPs of a person can be acquired (using one of the parallel acquisition methods described in Section 2). Then, these projections can be digitally converted into a compressed form of a 2-D complex matrix that is the digital hologram of this person’s 3-D image (using one of the methods described in Section 3). For visualization of the 3-D image on the other side of the network, only a 2-D complex matrix has to be transferred (rather than all the acquired MVPs or the entire volume). For this purpose, even the limited data transfer rate of a modest internet connection can be enough. On the other side of the network, the 3-D image of the person can be reconstructed by displaying the complex hologram on an SLM (after encoding it into a phase-only or an amplitude-only CGH [1, 2, 22], if required). Then, a coherent laser can be used to illuminate the SLM and project the 3-D image of the person from the first side of the network, where no special viewing device (e.g., glasses) is needed.

4B. Three-Dimensional Biomedical Imaging

MVP holography can be utilized for various biomedical imaging applications. The straightforward applications are remote surgeries and remote medical diagnoses, since, as described above, the MVP techniques can be used to capture 3-D objects in real time, transfer their 3-D images in a compressed holographic way, and then easily visualize the 3-D image on the remote side. Other applications that might be useful for biomedical imaging are 3-D fluorescence imaging and 3-D imaging through thin turbulent media (e.g., thin biological tissues).

As shown in Ref. [36], the fluorescence phenomenon, characterized by high sensitivity and low background noise [46, 47], can be utilized to acquire multicolor fluorescence holograms of 3-D scenes. Fluorescence is an incoherent radiation and thus suits the MVP holographic techniques. The experimental setup used to demonstrate this concept is shown in Fig. 8a. Several objects in the 3-D scene are labeled with fluorescent dyes and excited by blue light. Each time, a set of projections of a different fluorescent color emitted from the labeled objects is simultaneously acquired through the macrolens array shown in Fig. 3a and the corresponding chromatic filter. An additional set of projections of the nonfluorescent white light reflected from the scene is also acquired in a single camera exposure. The composite (red/green/gray) projections are shown in Fig. 8b. Then, a separate DIMFH is generated for each fluorescent color and for the nonfluorescent light. The magnitude and phase of one of these holograms are shown in Fig. 8c. Finally, all the images reconstructed from all the DIMFHs are digitally fused to yield a multicolor reconstruction of the 3-D scene. Figure 8d shows the best-in-focus reconstructed planes obtained at the axial distances of each of the three cube faces. A prospective application of the proposed method might be in vivo or ex vivo 3-D holographic imaging of biological objects that have been fluorescently labeled [46, 47].

As shown in Ref. [48], MVP holography can also be used to perform 3-D optical imaging of objects embedded in a scattering medium. This type of imaging is required for various applications, one of which is optical imaging for medical diagnostic purposes. In contrast to the widely used X-ray computed tomography (CT), medical optical imaging has the advantages of being nonionizing, safe, and inexpensive. Various optical coherence tomography (OCT) and spectroscopy techniques [49, 50] have been suggested as alternatives to CT. However, these methods usually require the use of low-coherence light sources and interferometric optical setups, which imposes practical limitations on the optical systems. One approach to obtaining 2-D imaging of objects embedded in biological tissues is the noninvasive optical imaging by speckle ensemble (NOISE) technique [51, 52]. In this technique, the biological tissues are illuminated by a laser and observed from multiple perspective points of view using a microlens array. The multiple perspective images are first centered and then summed into a 2-D image of the embedded objects. Another version of the NOISE technique performs the summation in the spectral domain [53]. In addition, a stereoscopic version of the NOISE technique has been proposed for obtaining 3-D stereoscopic imaging of the embedded objects [54]. Another method, performing coherent nonholographic integral imaging of 3-D objects embedded in a scattering medium, has been presented by Moon and Javidi [55].

However, as mentioned above, holography has several advantages over stereoscopy, as it is able to provide the most authentic 3-D illusion to the human eye, without the need for special viewing devices, and it is also able to hold the 3-D information in a more compressed way. This signifies the advantage of 3-D incoherent holographic imaging of objects that are hidden behind a scattering medium. The method proposed in Ref. [48] was inspired by the NOISE technique. However, this time we viewed the scene from different perspectives to obtain holographic imaging of the 3-D scene, rather than simple 2-D imaging. In addition, the capturing process was performed under incoherent white-light illumination, rather than under the laser light used in the NOISE technique.

The relevant experimental setup is shown in Fig. 9a. As shown in this figure, the incoherently illuminated scattering medium is mechanically rotated and vibrated, so that for each perspective viewpoint we acquire many speckled images through the medium and sum them into a relatively clear perspective projection of the hidden 3-D scene. Then, the smoothed MVPs are used to generate a DIMFH of the hidden 3-D scene.

For comparison purposes, three different sets of projections were acquired. The first set is shown in View 6 and Media 8, the middle projection of which is shown in Fig. 9b. This set was acquired without the presence of the diffuser (capturing the MVPs of the 3-D scene directly). The second set is shown in View 7 and Media 9, the middle projection of which is shown in Fig. 9c. This set was acquired through a stationary diffuser (without activating the electric motor). The third set is shown in View 8 and Media 10, the middle projection of which is shown in Fig. 9d. This set was acquired through the rotated/vibrated diffuser, where we averaged many speckled projections from the same point of view by increasing the exposure time of the digital camera. Based on these three sets of MVPs, three 1-D DIMFHs were generated. The magnitude and phase of one of these holograms are shown in Fig. 9e. Each of these three holograms was reconstructed digitally. Figures 9f, 9g, 9h, respectively, show the final (rescaled) best-in-focus reconstructed planes obtained from the DIMFHs of the 3-D scene without the presence of the diffuser, when the scene is hidden behind a stationary diffuser, and when the scene is hidden behind a rotated/vibrated diffuser (with projection averaging performed prior to the DIMFH generation). The corresponding continuous Fresnel propagations along the optical axis, slice by slice, and the resulting reconstructed volumes are shown in View 9, View 10, and View 11, as well as in Media 11/Media 12, Media 13/Media 14, and Media 15/Media 16, respectively. Figure 9h presents a significant improvement compared to Fig. 9g. Further improvement was obtained by performing a blind deconvolution according to the algorithm presented in Ref. [56]. This algorithm finds the best step edge in each reconstructed plane of the 3-D image, without prior knowledge of the hidden 3-D scene. In the next stage, the derivative of the monotonic-gradient area in the chosen step edge is calculated. Following the theorem that the derivative of the step response function is equal to the impulse response function, and assuming isotropic blurring statistics, the derivative of the step edge (selected at any direction) is a good approximation of the impulse response of the system. Finally, the improved reconstructed plane is obtained by Wiener filtering that uses the calculated impulse response function. The result of this process is an improved 3-D image, the best-in-focus planes of which are shown in Fig. 9i. The corresponding continuous Fresnel propagation along the optical axis, slice by slice, and the resulting reconstructed volume are shown in View 12, as well as in Media 17 and Media 18. A unique advantage of this blind deconvolution algorithm for digital holography is that it improves the images of focused objects in each reconstructed plane more than the images of the unfocused objects (the ones that are not located at the proper axial distance in the real 3-D scene). We believe that in the future the proposed holographic method might be useful for noninvasive, safe, simple, and low-cost 3-D medical imaging, including observing 3-D objects embedded in biological tissues.
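
Only the final filtering step of that blind-deconvolution scheme is sketched below: once the impulse response has been estimated (e.g., as the derivative of the selected step-edge profile), each reconstructed plane is Wiener filtered in the Fourier domain. The constant noise-to-signal ratio `nsr` and all names are our assumptions, not parameters from Ref. [56]:

```python
import numpy as np

def wiener_deblur(plane, psf, nsr=1e-2):
    """Deblur one reconstructed plane with a known blur PSF by Wiener
    filtering (a 1-D impulse response estimated from a step edge, e.g.
    h = np.diff(edge_profile), can be expanded to an isotropic 2-D psf)."""
    kernel = np.zeros_like(plane, dtype=float)
    ky, kx = psf.shape
    kernel[:ky, :kx] = psf / psf.sum()        # normalize the blur kernel
    kernel = np.roll(kernel, (-(ky // 2), -(kx // 2)), axis=(0, 1))
    K = np.fft.fft2(kernel)
    W = np.conj(K) / (np.abs(K) ** 2 + nsr)   # Wiener filter
    return np.real(np.fft.ifft2(np.fft.fft2(plane) * W))
```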

4C. Three-Dimensional Object Recognition

As mentioned in Subsection 3D, for the incoherent correlation holograms, a rescaling process should be applied to the reconstructed planes to retain the original perspective of the 3-D scene, so that farther objects look smaller in the hologram reconstruction, as usually happens in conventional imaging systems. However, as we have shown in Ref. [57], the constant-magnification effect can also be utilized to perform efficient optical 3-D object recognition, since one can use only a single 2-D filter to recognize (detect) all originally identical objects in the 3-D scene, without dependence on their original axial distances from the acquisition plane (i.e., no matter whether they are far from or close to the digital camera). This is an important improvement compared to other optical holographic and nonholographic 3-D object recognition methods [58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70], in which several matched filters had to be used to recognize the same objects located at different axial distances from the acquisition plane (e.g., Ref. [63]), or averaging methods had to be used to yield a scale-invariant filter at the expense of losing some of the correlation discrimination between objects that should be recognized and objects that should be rejected (e.g., Ref. [70]). Thus, the DIMFH can be used to perform optical 3-D object recognition efficiently. A nonholographic 3-D object recognition method with the similar advantage that the image size is independent of the axial distance has been demonstrated in Ref. [71].

The electro-optical setup used to demonstrate the principle of incoherent-holographic 3-D object recognition is shown in Fig. 10a. The 3-D scene contained three objects: two identical tiger models and one goat model. The set of 200×200 MVPs, partially shown in View 13 and Media 19, was acquired, where the goal was to use these MVPs to create an MVP hologram and then perform a convolution operation with a single 2-D complex filter to recognize both the close and the distant tigers and reject the goat. This was achieved by obtaining high correlation peaks at the corresponding locations of the tigers in the correlation space, and a low correlation peak at the corresponding location of the goat. Figures 10b, 10c, 10d show surface plots of the resulting correlation planes located at the axial reconstruction distances of the animal models, when the generated hologram is a DIMFH. As seen in Fig. 10b, a distinct correlation peak appears at the transverse position of the close tiger, whereas in Fig. 10d a distinct correlation peak appears at the transverse position of the distant tiger. On the other hand, all of the peaks that appear in Fig. 10c (obtained at the axial reconstruction distance of the goat) are lower than the tigers’ peaks [Figs. 10b, 10d]. Therefore, although only a single filter was used, the two tigers could easily be recognized by the correlation process, whereas the goat could be rejected, since its peak was low even at its own axial reconstruction distance [Fig. 10c]. Figure 10h displays three correlation plots along the optical axis of the 3-D correlation space, at the transverse locations of the three objects. These graphs show that the correlation peaks of the two tigers are well localized in the 3-D correlation space, although only a single filter was used in the correlation process. Thus, the method can reject the goat and recognize each of the two tigers, no matter whether the tigers are close to or far from the acquisition plane. For comparison, we also generated an MVP Fourier hologram of the 3-D scene and a phase-only filter matched to the close tiger. Figures 10e, 10f, 10g show the surface plots of the resulting correlation planes at the axial reconstruction distances of the animal models. As expected from this old method, only Fig. 10e, obtained at the axial reconstruction distance of the close tiger, contains a distinct correlation peak. On the other hand, as shown in Fig. 10g, the correlation plane obtained at the axial reconstruction distance of the distant tiger contains several low correlation peaks, and thus the distant tiger is wrongly rejected by the correlation process. Hence, in the old method, more than one filter is required to recognize the same objects located at different distances from the acquisition plane, which, as shown above, is not the case for the new (DIMFH-based) method. This conclusion is also evident from comparing Figs. 10h and 10i. The latter figure presents the correlation cross-section plots of the old method along the optical axis of the 3-D correlation space at the transverse locations of each of the three objects, demonstrating that only the close tiger can be recognized by the old method, whereas in the new method [Fig. 10h] both tigers are recognized by the correlation process.
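
The per-plane correlation underlying this experiment can be sketched as an FFT-based cross correlation of each reconstructed plane with the single 2-D filter; scanning the reconstruction distance z_r then builds the 3-D correlation space in which the peaks are sought. This is our own schematic rendering of the idea, not the exact processing of Ref. [57]:

```python
import numpy as np

def correlation_plane(plane, filt):
    """Cross-correlate one reconstructed plane with a 2-D complex filter;
    distinct peaks mark transverse positions of recognized objects."""
    F = np.fft.fft2(plane)
    G = np.fft.fft2(filt, s=plane.shape)   # zero-pad filter to plane size
    return np.abs(np.fft.ifft2(F * np.conj(G)))
```

Because the transverse magnification of the DIMFH is depth independent [Eq. (15)], the same filter serves all axial distances.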

5. Conclusions

In this paper, we have reviewed methods of generating different types of MVP holograms. The MVPs of the 3-D scene are captured by a simple digital camera, working under incoherent white-light illumination, and are then processed into a digital hologram of the scene by a simple digital process. Since no interference recording is needed, we avoid the conventional holographic recording requirements for a coherent laser source and for an extreme stability of the optical system. We also avoid the twin-image problem present in interference-based holography. Using MVP holography, holograms can be captured outside the laboratory, and many practical applications may benefit from the attractive advantages of holography compared to other 3-D acquisition methods. We have presented several different methods of capturing the MVPs, such as mechanical movement of the camera; the use of microlens, macrolens, or camera arrays; and the application of digital algorithms for interpolating the middle MVPs. The digital process applied to the acquired projections determines the type of hologram generated. It has been shown that it is possible to generate both 1-D and 2-D MVP holograms, and for each of them it is possible to generate regular types of holograms, such as Fourier, Fresnel, and image holograms, as well as new types of holograms, such as the DIMFH and the DIPCH. We have also reviewed selected applications of MVP holography. These include 3-D video conferencing, remote surgeries and remote medical diagnoses, fluorescence holography, holographic imaging behind a turbulent medium, and 3-D object recognition. These applications signify the importance of MVP holography to a variety of fields, in some of which holography has not yet been widely employed.


Table 1. Generating and Reconstructing PSFs of Various 1-D MVP Holograms


Table 2. Generating and Reconstructing PSFs of Various 2-D MVP Holograms


Fig. 1 Integral holography—MVP holography using a microlens array [29]. (a) Optical system for capturing the MVPs. (b) Several projections taken from different parts of the microlens array image plane captured by the camera. Larger part of this image plane is shown in View 1 and Media 1. (c) Magnitude (left) and phase (right) of the 2-D Fourier hologram obtained after performing the processing stage on the captured projections; (d) Best-in-focus reconstructed planes obtained by digital Fresnel propagation. Note that (b)–(d) are contrast-inverted. The continuous Fresnel propagation as 2-D slices and the entire reconstructed volume are shown in View 2, as well as in Media 2 and Media 3 (best-in-focus axial points are amplified).


Fig. 2 Synthetic projection holography (SPH)—optically acquiring only a small number of projections and synthesizing the middle MVPs by the view synthesis algorithm [33]: (a) Schematics of the experimental setup. Only two projections are optically acquired. The entire MVP set (including the synthesized projections) is shown in View 3 and Media 4. (b) Magnitude (left) and phase (right) of the 1-D Fourier hologram obtained from the final set of MVPs. (c) Best-in-focus reconstructed planes. The continuous Fresnel propagation as 2-D slices and the entire reconstructed volume are shown in View 4, as well as in Media 5 and Media 6 (best-in-focus axial points are amplified).
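The view synthesis step of ref. [33] relies on stereo correspondence [34]. As a rough Python illustration of the underlying idea only (not the algorithm of ref. [33]; the function name, the hole-filling rule, and the assumption of a precomputed per-pixel disparity map are ours), an intermediate view can be forward-warped from one of two rectified views:

```python
import numpy as np

def middle_view(left, right, disparity, t):
    """Toy synthesis of an intermediate view at fractional position t in [0, 1]
    between two rectified views, given a horizontal disparity map for the left
    view. Each left-view pixel is shifted toward the virtual camera position;
    unfilled pixels are crudely patched from the right view."""
    h, w = left.shape
    out = np.zeros_like(left)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            xt = int(round(x - t * disparity[y, x]))  # warp toward target view
            if 0 <= xt < w:
                out[y, xt] = left[y, x]
                filled[y, xt] = True
    out[~filled] = right[~filled]                     # crude hole filling
    return out
```

Repeating this for a sequence of t values produces the dense set of middle MVPs from only a few optically acquired projections.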


Fig. 3 Acquiring a small number of high-resolution projections in a single digital camera exposure using a macrolens array [36]: (a) Photo of the 3×3 macrolens array. (b) Image plane of the macrolens array captured by the camera in a single exposure.


Fig. 4 1-D MVP holography [38]: (a) Optical system for acquiring MVPs of a 3-D scene along the horizontal axis. (b) Several projections taken from the entire set of 1200 projections, which is shown in View 5 and Media 7.


Fig. 5 One-dimensional DIMFH results obtained from the MVP set, partially shown in Fig. 4b [38]: (a) Magnitude (left) and phase (right) of the hologram. (b) Phase distributions of the reconstructing PSFs used for obtaining the three best-in-focus reconstructed planes. (c) Corresponding three best-in-focus reconstructed planes along the optical axis. (d) Same as (c) but after the resampling along the horizontal axis. (e) Zoomed-in images of the corresponding best-in-focus reconstructed objects.


Fig. 6 Finding the PSF of the DIPCH [38]: (a) Schematics of the POCS algorithm used. (b), (c) Phase distribution of the random-constrained PSF of the: (b) 1-D DIPCH, and (c) 2-D DIPCH.
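Figure 6a outlines the POCS iteration only schematically. For readers unfamiliar with POCS [42–45], a minimal generic alternating-projection sketch in Python follows; the specific pair of constraint sets used here (prescribed random values on part of the axis, and a uniform spectral magnitude so the PSF is broadband and invertible) is our simplified reading of the DIPCH requirement in ref. [38], not the paper's exact constraints:

```python
import numpy as np

def pocs_random_psf(n=512, iters=200, seed=0):
    """Generic POCS loop: alternately project a 1-D function onto
    (1) a spatial set enforcing prescribed random values on part of the axis,
    and (2) a spectral set with uniform (broadband) magnitude."""
    rng = np.random.default_rng(seed)
    constrained = np.zeros(n, dtype=bool)
    constrained[: n // 2] = True                     # region carrying the random constraint
    prescribed = np.exp(2j * np.pi * rng.random(n))  # fixed random-phase values

    psf = prescribed.astype(complex).copy()          # initial guess satisfying set (1)
    for _ in range(iters):
        spec = np.fft.fft(psf)
        spec = np.exp(1j * np.angle(spec))           # project onto set (2): |spectrum| = 1
        psf = np.fft.ifft(spec)
        psf[constrained] = prescribed[constrained]   # project onto set (1)
    return psf
```

Each pass cannot increase the distance to either constraint set, so the iterate converges toward a PSF that is simultaneously random-constrained and broadband, which is what makes the resulting DIPCH both decryptable only with the correct PSF and reconstructable at all depths.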


Fig. 7 One-dimensional DIPCH results obtained from the MVP set, partially shown in Fig. 4b [38]: (a) Magnitude (left) and phase (right) of the hologram. (b) Phase distributions of the reconstructing PSFs used for obtaining the three best-in-focus reconstructed planes. (c) Corresponding three best-in-focus reconstructed planes along the optical axis. (d) Same as (c) but after the resampling along the horizontal axis. (e) Zoomed-in images of the corresponding best-in-focus reconstructed objects.


Fig. 8 Fluorescence 3-D imaging by MVP holography [36]: (a) Optical system for acquiring 3×3 perspective projections simultaneously using the macrolens array shown in Fig. 3. Some of the objects in the scene are fluorescently labeled. (b) Composite image plane of the macrolens array acquired by the camera. (c) Magnitude (left) and phase (right) of the nonfluorescence 2-D DIMFH. Two additional fluorescence 2-D DIMFHs are generated as well. (d) Three best-in-focus multicolor reconstructed planes. Viewing this figure in color online is highly recommended.


Fig. 9 MVP holography of a 3-D scene hidden behind a turbulent medium under incoherent illumination [48]: (a) Schematics of the experimental setup; (b)–(d) The middle perspective projection of the 3-D scene directly acquired by the digital camera (views and media references show the entire sets of 1100 projections each): (b) without a diffuser (View 6 and Media 8), (c) through a stationary diffuser (View 7 and Media 9), and (d) through a rotated/vibrated diffuser (View 8 and Media 10); (e) Magnitude (left) and phase (right) of the 1-D DIMFH generated without a diffuser. Similar DIMFHs were generated for the two other cases as well. (f)–(i) Best-in-focus reconstructed planes obtained from the DIMFHs that were generated (views and media references show the continuous Fresnel propagation as 2-D slices and the entire reconstructed volume; axial best-in-focus points are amplified): (f) without a diffuser (View 9 and Media 11 and Media 12), (g) through a stationary diffuser (View 10 and Media 13 and Media 14), (h) through a rotated/vibrated diffuser (View 11 and Media 15 and Media 16), and (i) after applying the blind deconvolution algorithm (View 12 and Media 17 and Media 18).


Fig. 10 Three-dimensional object recognition under white light using incoherent correlation holography (taking advantage of the constant-magnification feature of the DIMFH) [57]: (a) Schematics of the entire process. Part of the 200×200 projection set is shown in View 13 and Media 19. (b)–(g) Final correlation planes at the axial reconstruction distances of the: (b), (e) close tiger, (c), (f) goat, and (d), (g) distant tiger. Height axis: correlation intensity (A.U.). Ground axes: transverse coordinates. Higher correlation peaks indicate better object recognition. (b)–(d) DIMFH-based results: a single filter is used to recognize both tigers simultaneously and reject the goat. (e)–(g) MVP-Fourier-hologram-based results (comparison to the old method, which does not have a constant-magnification feature): by using a single filter, only the close tiger can be recognized, and both the distant tiger and the goat are rejected. (h), (i) Correlation values along the optical axis of the 3-D correlation space for each of the three objects. The axial distance points for each of the objects are circled: (h) DIMFH-based results (both tigers are recognized), (i) MVP-Fourier-hologram-based results (old method, only the close tiger is recognized).


1. P. Hariharan, Optical Holography, Principles, Techniques and Applications (Cambridge University Press, 1996).

2. R. J. Collier, C. B. Burckhardt, and L. H. Lin, Optical Holography (Academic, 1971).

3. U. Schnars and W. Juptner, Digital Holography, Digital Hologram Recording, Numerical Reconstruction and Related Techniques (Springer, 2005).

4. A. W. Lohmann, “Wavefront reconstruction for incoherent objects,” J. Opt. Soc. Am. 55, 1555–1556 (1965). [CrossRef]  

5. G. W. Stroke and R. C. Restrick, “Holography with spatially noncoherent light,” Appl. Phys. Lett. 7, 229–231 (1965). [CrossRef]  

6. H. R. Worthington, “Production of holograms with incoherent illumination,” J. Opt. Soc. Am. 56, 1397–1398 (1966). [CrossRef]  

7. G. Cochran, “New method of making Fresnel transforms with incoherent light,” J. Opt. Soc. Am. 56, 1513–1517 (1966). [CrossRef]  

8. P. J. Peters, “Incoherent holograms with a mercury light source,” Appl. Phys. Lett. 8, 209–210 (1966). [CrossRef]  

9. G. Sirat and D. Psaltis, “Conoscopic holography,” Opt. Lett. 10, 4–6 (1985). [CrossRef]  

10. A. S. Marathay, “Noncoherent-object hologram: its reconstruction and optical processing,” J. Opt. Soc. Am. A 4, 1861–1868 (1987). [CrossRef]

11. T.-C. Poon, K. B. Doh, B. W. Schilling, M. H. Wu, K. Shinoda, and Y. Suzuki, “Three-dimensional microscopy by optical scanning holography,” Opt. Eng. 34, 1338–1344 (1995).

12. G. Indebetouw, P. Klysubun, T. Kim, and T.-C. Poon, “Imaging properties of scanning holographic microscopy,” J. Opt. Soc. Am. A 17, 380–390 (2000). [CrossRef]  

13. B. W. Schilling, T.-C. Poon, G. Indebetouw, B. Storrie, K. Shinoda, Y. Suzuki, and M. H. Wu, “Three-dimensional holographic fluorescence microscopy,” Opt. Lett. 22, 1506–1508 (1997). [CrossRef]  

14. L. Mertz and N. O. Young, “Fresnel transformations of images,” Proceedings of Conference on Optical Instruments and Techniques, K. J. Habell, Ed. (Chapman & Hall, 1962).

15. J. Rosen and G. Brooker, “Digital spatially incoherent Fresnel holography,” Opt. Lett. 32, 912–914 (2007). [CrossRef]

16. J. Rosen and G. Brooker, “Fluorescence incoherent color holography,” Opt. Express 15, 2244–2250 (2007). [CrossRef]

17. J. Rosen and G. Brooker, “Non-scanning motionless fluorescence three-dimensional holographic microscopy,” Nat. Photon. 2, 190–195 (2008). [CrossRef]  

18. J. Rosen, G. Indebetouw, G. Brooker, and N. T. Shaked, “A review of incoherent digital Fresnel holography,” J. Holography Speckle 5, 124–140 (2009).

19. T.-C. Poon, “Holography: Scan-free three-dimensional imaging,” Nat. Photon. 2, 131–132 (2008). [CrossRef]

20. T. Yatagai, “Stereoscopic approach to 3-D display using computer-generated holograms,” Appl. Opt. 15, 2722–2729 (1976). [CrossRef]  

21. T. Mishina, M. Okui, and F. Okano, “Calculation of holograms from elemental images captured by integral photography,” Appl. Opt. 45, 4026–4036 (2006). [CrossRef]  

22. J. W. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, 1996), pp. 355–363.

23. T. Kreis, Handbook of Holographic Interferometry: Optical and Digital Methods (Wiley-VCH, 2005), Chap. 3.

24. Y. Li, D. Abookasis, and J. Rosen, “Computer-generated holograms of three-dimensional realistic objects recorded without wave interference,” Appl. Opt. 40, 2864–2870 (2001). [CrossRef]  

25. D. Abookasis and J. Rosen, “Computer-generated holograms of three-dimensional objects synthesized from their multiple angular viewpoints,” J. Opt. Soc. Am. A 20, 1537–1545 (2003). [CrossRef]  

26. Y. Sando, M. Itoh, and T. Yatagai, “Holographic three-dimensional display synthesized from three-dimensional Fourier spectra of real-existing objects,” Opt. Lett. 28, 2518–2520 (2003). [CrossRef]  

27. Y. Sando, M. Itoh, and T. Yatagai, “Full-color computer-generated holograms using 3-D Fourier spectra,” Opt. Express 12, 6246–6251 (2004). [CrossRef]

28. D. Abookasis and J. Rosen, “Three types of computer-generated hologram synthesized from multiple angular viewpoints of a three-dimensional scene,” Appl. Opt. 45, 6533–6538 (2006). [CrossRef]

29. N. T. Shaked, J. Rosen, and A. Stern, “Integral holography: white-light single-shot hologram acquisition,” Opt. Express 15, 5754–5760 (2007). [CrossRef]  

30. B. Lee, S. Jung, and J. H. Park, “Viewing-angle-enhanced integral imaging by lens switching,” Opt. Lett. 27, 818–820 (2002). [CrossRef]  

31. A. Stern and B. Javidi, “Three-dimensional sensing, visualization, and processing using integral imaging,” Proc. IEEE 94, 591–607 (2006). [CrossRef]

32. J.-H. Park, M.-S. Kim, G. Baasantseren, and N. Kim, “Fresnel and Fourier hologram generation using orthographic projection images,” Opt. Express 17, 6320–6334 (2009). [CrossRef]

33. B. Katz, N. T. Shaked, and J. Rosen, “Synthesizing computer generated holograms with reduced number of perspective projections,” Opt. Express 15, 13250–13255 (2007). [CrossRef]  

34. D. Scharstein, View Synthesis Using Stereo Vision, Vol. 1583 of Lecture Notes in Computer Science (Springer-Verlag, 1999), Chap. 2.

35. J.-S. Park, D.-C. Hwang, D.-H. Shin, and E.-S. Kim, “Enhanced-resolution computational integral imaging reconstruction using an intermediate-view reconstruction technique,” Opt. Eng. 45, 117004:1–7 (2006).

36. N. T. Shaked, B. Katz, and J. Rosen, “Fluorescence multicolor hologram recorded by using a macrolens array,” Opt. Lett. 33, 1461–1463 (2008). [CrossRef]  

37. N. T. Shaked and J. Rosen, “Modified Fresnel computer-generated hologram directly recorded by multiple-viewpoint projections,” Appl. Opt. 47, D21–D27 (2008). [CrossRef]

38. N. T. Shaked and J. Rosen, “Multiple-viewpoint projection holograms synthesized by spatially incoherent correlation with broadband functions,” J. Opt. Soc. Am. A 25, 2129–2138 (2008). [CrossRef]  

39. D. Abookasis and J. Rosen, “Digital correlation holograms implemented on a joint transform correlator,” Opt. Commun. 225, 31–37 (2003). [CrossRef]  

40. B. Javidi and A. Sergent, “Fully phase encoded key and biometrics for security verification,” Opt. Eng. 36, 935–942 (1997).

41. D. Abookasis, A. Batikoff, H. Famini, and J. Rosen, “Performance comparison of iterative algorithms for generating digital correlation holograms used in optical security systems,” Appl. Opt. 45, 4617–4624 (2006). [CrossRef]  

42. D. C. Youla and H. Webb, “Image restoration by the method of convex projections: part 1—theory,” IEEE Trans. Med. Imaging 1, 81–94 (1982). [CrossRef]

43. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21, 2758–2769 (1982). [CrossRef]

44. H. Stark, Image Recovery: Theory and Application (Academic, 1987), pp. 29–78 and 277–320.

45. J. Rosen and J. Shamir, “Application of the projection-onto-constraint-sets algorithm for optical pattern recognition,” Opt. Lett. 16, 752–754 (1991). [CrossRef]  

46. J. R. Lakowicz, Principles of Fluorescence Spectroscopy, 3rd ed. (Springer, 2006), Chap. 1.

47. T. Vo-Dinh, ed., Biomedical Photonics Handbook (CRC Press, 2003), Chap. 3.5.

48. N. T. Shaked, Y. Yitzhaky, and J. Rosen, “Incoherent holographic imaging through thin turbulent media,” Opt. Commun. 282, 1546–1550 (2009). [CrossRef]

49. T. Vo-Dinh, ed., Biomedical Photonics Handbook (CRC Press, 2003), Chaps. 13 and 16.

50. J. C. Hebden, S. R. Arridge, and D. T. Delpy, “Optical imaging in medicine: I. Experimental techniques,” Phys. Med. Biol. 42, 825–840 (1997). [CrossRef]  

51. J. Rosen and D. Abookasis, “Seeing through biological tissues using the fly eye principle,” Opt. Express 11, 3605–3611 (2003).

52. J. Rosen and D. Abookasis, “Noninvasive optical imaging by speckle ensemble,” Opt. Lett. 29, 253–255 (2004). [CrossRef]  

53. J. Rosen and D. Abookasis, “NOISE 2 imaging system: Seeing through scattering tissue by correlation with a point,” Opt. Lett. 29, 253 (2004). [CrossRef]  

54. D. Abookasis and J. Rosen, “Stereoscopic imaging through scattering media,” Opt. Lett. 31, 724–726 (2006). [CrossRef]  

55. I. Moon and B. Javidi, “Three-dimensional visualization of objects in scattering medium by use of computational integral imaging,” Opt. Express 16, 13080–13089 (2008). [CrossRef]  

56. O. Shacham, O. Haik, and Y. Yitzhaky, “Blind restoration of atmospherically degraded images by automatic best step edge detection,” Pattern Recogn. Lett. 28, 2094–2103 (2007).

57. N. T. Shaked, G. Segev, and J. Rosen, “Three-dimensional object recognition using a quasi-correlator invariant to imaging distances,” Opt. Express 16, 17148–17153 (2008). [CrossRef]  

58. R. Bamler and J. Hofer-Alfeis, “Three- and four-dimensional filter operations by coherent optics,” Opt. Acta 29, 747–757 (1982).

59. J. Rosen, “Three-dimensional optical Fourier transform and correlation,” Opt. Lett. 22, 964–966 (1997). [CrossRef]  

60. J. Rosen, “Three-dimensional electro-optical correlation,” J. Opt. Soc. Am. A 15, 430–436 (1998). [CrossRef]  

61. J. Rosen, “Three-dimensional joint transform correlator,” Appl. Opt. 37, 7538–7544 (1998). [CrossRef]  

62. Y. Li and J. Rosen, “Three-dimensional pattern recognition with a single two-dimensional synthetic reference function,” Appl. Opt. 39, 1251–1259 (2000). [CrossRef]  

63. Y. Li and J. Rosen, “Three-dimensional correlator with general complex filters,” Appl. Opt. 39, 6561–6572 (2000). [CrossRef]  

64. T.-C. Poon and T. Kim, “Optical image recognition of three-dimensional objects,” Appl. Opt. 38, 370–381 (1999). [CrossRef]

65. J. J. Esteve-Taboada, D. Mas, and J. Garcia, “Three-dimensional object recognition by Fourier transform profilometry,” Appl. Opt. 38, 4760–4765 (1999). [CrossRef]

66. Y. Li and J. Rosen, “Object recognition using three-dimensional optical quasi-correlation,” J. Opt. Soc. Am. A 19, 1755–1762 (2002). [CrossRef]

67. B. Javidi, R. Ponce-Díaz, and S.-H. Hong, “Three-dimensional recognition of occluded objects by using computational integral imaging,” Opt. Lett. 31, 1106–1108 (2006). [CrossRef]

68. J.-S. Park, D.-C. Hwang, D.-H. Shin, and E.-S. Kim, “Resolution-enhanced three-dimensional image correlator using computationally reconstructed integral images,” Opt. Commun. 276, 72–79 (2007). [CrossRef]  

69. D.-H. Shin and H. Yoo, “Scale-variant magnification for computational integral imaging and its application to 3D object correlator,” Opt. Express 16, 8855–8867 (2008). [CrossRef]  

70. Y. Li and J. Rosen, “Scale-invariant recognition of three-dimensional objects using quasi-correlator,” Appl. Opt. 42, 811–819 (2003). [CrossRef]

71. J.-H. Park, J. Kim, and B. Lee, “Three-dimensional optical correlator using a subimage array,” Opt. Express 13, 5116–5126 (2005). [CrossRef]  

Supplementary Material (19)

Media 1: AVI (14676 KB)     
Media 2: AVI (8772 KB)     
Media 3: AVI (8862 KB)     
Media 4: AVI (7639 KB)     
Media 5: AVI (14701 KB)     
Media 6: AVI (12129 KB)     
Media 7: AVI (14147 KB)     
Media 8: AVI (11746 KB)     
Media 9: AVI (14399 KB)     
Media 10: AVI (13470 KB)     
Media 11: AVI (7637 KB)     
Media 12: AVI (9136 KB)     
Media 13: AVI (10348 KB)     
Media 14: AVI (9971 KB)     
Media 15: AVI (7394 KB)     
Media 16: AVI (8309 KB)     
Media 17: AVI (9281 KB)     
Media 18: AVI (8594 KB)     
Media 19: AVI (4009 KB)     




Datasets

Datasets associated with ISP articles are stored in an online database called MIDAS. Clicking a "View" link in an Optica ISP article will launch the ISP software (if installed) and pull the relevant data from MIDAS. Visit MIDAS to browse and download the datasets directly. A package containing the PDF article and full datasets is available in MIDAS for offline viewing.



Equations (15)


$$H_1(m,n) = \iint P_m(x_p, y_p)\, E_1(x_p,\, y_p - n\Delta_p)\, \mathrm{d}x_p\, \mathrm{d}y_p, \tag{1}$$
$$E_1(x_p, y_p) = A_1(b_x x_p) \exp[i\, g_1(b_x x_p)]\, \delta(y_p), \tag{2}$$
$$s_1(m,n;z_r) = \bigl| H_1(m,n) * R_1(m,n;z_r) \bigr|, \tag{3}$$
$$R_1(m,n;z_r) = A_1\!\left(\frac{m\Delta_p}{z_r}\right) \exp\!\left[i\, g_1\!\left(\frac{m\Delta_p}{z_r}\right)\right] \delta(n\Delta_p), \tag{4}$$
$$H_2(m,n) = \iint P_{m,n}(x_p, y_p)\, E_2(x_p, y_p)\, \mathrm{d}x_p\, \mathrm{d}y_p, \tag{5}$$
$$E_2(x_p, y_p) = A_2(b_x x_p, b_y y_p) \exp[i\, g_2(b_x x_p, b_y y_p)], \tag{6}$$
$$s_2(m,n;z_r) = \bigl| H_2(m,n) * R_2(m,n;z_r) \bigr|, \tag{7}$$
$$R_2(m,n;z_r) = A_2\!\left(\frac{m\Delta_p}{z_r}, \frac{n\Delta_p}{z_r}\right) \exp\!\left[i\, g_2\!\left(\frac{m\Delta_p}{z_r}, \frac{n\Delta_p}{z_r}\right)\right], \tag{8}$$
$$R_2(m,n;z_r) = \exp\!\left[i\, \frac{(m\Delta_p)^2 + (n\Delta_p)^2}{z_r}\right]. \tag{9}$$
$$E_1(x_p, y_p) = \exp(i\, b\, m\, x_p)\, \delta(y_p), \tag{10}$$
$$E_2(x_p, y_p) = \exp[i\, b\,(m x_p + n y_p)]. \tag{11}$$
$$E_1(x_p, y_p) = \exp(i\, 2\pi b^2 x_p^2)\, \delta(y_p), \tag{12}$$
$$E_2(x_p, y_p) = \exp[i\, 2\pi b^2 (x_p^2 + y_p^2)], \tag{13}$$
$$M_{1,x} = \frac{\Delta_p}{\alpha}, \qquad M_{1,y} = \frac{f}{z_s} = M, \qquad M_{1,z} = \frac{\Delta_p}{b f \alpha}, \tag{14}$$
$$M_{2,x} = M_{2,y} = \frac{\Delta_p}{\alpha}, \qquad M_{2,z} = \frac{\Delta_p}{b f \alpha}. \tag{15}$$
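As a worked illustration of how the generating and reconstructing PSFs above fit together, the following Python sketch builds a 2-D DIMFH from a grid of projections via Eqs. (5) and (13) (the equations are numbered here (1)–(15) in order of appearance) and reconstructs one plane via Eqs. (7) and (9). The discrete sum replacing the integral, the array layout, and the FFT-based convolution are our assumptions, not code from the reviewed papers:

```python
import numpy as np

def dimfh_2d(projections, b):
    """Generate a 2-D DIMFH: each hologram pixel (m, n) is the inner product of
    projection P_{m,n} with the quadratic-phase PSF
    E_2(x, y) = exp[i 2 pi b^2 (x^2 + y^2)]  [Eqs. (5), (13)].
    `projections[m][n]` is assumed to be a 2-D intensity image (our indexing)."""
    M, N = len(projections), len(projections[0])
    ny, nx = projections[0][0].shape
    y, x = np.meshgrid(np.arange(ny) - ny / 2, np.arange(nx) - nx / 2,
                       indexing="ij")
    E2 = np.exp(1j * 2 * np.pi * b**2 * (x**2 + y**2))
    H = np.empty((M, N), dtype=complex)
    for m in range(M):
        for n in range(N):
            H[m, n] = np.sum(projections[m][n] * E2)  # discrete form of Eq. (5)
    return H

def reconstruct(H, dp, z_r):
    """Reconstruct the plane at z_r: s_2 = |H * R_2| with
    R_2 = exp[i ((m dp)^2 + (n dp)^2) / z_r]  [Eqs. (7), (9)];
    the convolution is computed circularly via FFTs."""
    M, N = H.shape
    n, m = np.meshgrid(np.arange(N) - N / 2, np.arange(M) - M / 2,
                       indexing="xy")
    R2 = np.exp(1j * ((m * dp) ** 2 + (n * dp) ** 2) / z_r)
    S = np.fft.ifft2(np.fft.fft2(H) * np.fft.fft2(R2))
    return np.abs(np.fft.fftshift(S))
```

Scanning z_r over a range of distances and stacking the resulting planes yields a reconstructed volume of the kind shown in the media files referenced in the figure captions.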