Compressive multiple view projection incoherent holography

Open Access

Abstract

Multiple view projection holography is a method for obtaining a digital hologram by recording different views of a 3D scene with a conventional digital camera; these views are then digitally processed to synthesize the hologram. The method requires a simple setup and operates under white-light illumination conditions. The multiple views are often generated by translating the camera, which usually involves a significant scanning effort. In this work we apply a compressive sensing approach to the multiple view projection holography acquisition process and demonstrate that the 3D scene can be accurately reconstructed from the highly subsampled Fourier hologram. It is also shown that the compressive sensing approach, combined with an appropriate system model, yields improved sectioning of the planes at different depths.

©2011 Optical Society of America

1. Introduction

Holography is a classical method [1] for recording the three-dimensional information of a scene. Most often, a hologram is acquired using highly coherent, high-power sources, such as lasers, in order to create the necessary wave interference. Coherent-light holography also requires thermal and mechanical stability of the optical setup. These factors often confine holographic recording to the laboratory. Thus, an ongoing effort is being made to develop incoherent-illumination holographic recording processes [2]. Notable implementation techniques include multiple view projections created either by a translating camera [3-7] or a lenslet array [8,9], optical scanning holography [10], and Fresnel incoherent correlation holography (FINCH) [11].

In the present work, we concentrate on the multiple view projection (MVP) holography technique. The acquisition process uses spatially and temporally incoherent "white" light, thus avoiding many of the drawbacks of traditional holographic recording. Any ordinary digital camera may be used as a recording device. The camera is usually translated, and during its movement it captures many views of the same scene from different angles. Each of these intensity images is a projection of the scene onto the CCD plane [2]. The different projections are then used to synthesize a digital hologram. One of the drawbacks of this MVP method is that it requires a significant scanning effort. For instance, in order to produce a 256×256 pixel hologram, 256×256 = 65,536 projections must be acquired. A lenslet array can be employed [8,9] to reduce the scanning effort. However, this solution suffers from the low resolving power of the optical system, which limits the quality of the hologram reconstructions. In [6] the scanning effort was reduced by recording only a small number of projections and synthesizing the rest using a view-synthesis stereo-vision algorithm. That technique has difficulty handling certain scenes and requires changes in the hologram generation process.

In this work we adopt the compressive sensing (CS) approach in order to dramatically reduce the scanning effort of the MVP holography acquisition step. It is demonstrated that the three-dimensional (3D) scene can be reconstructed accurately using a mere fraction of the projections. Conventionally, the 3D scene is reconstructed plane by plane, i.e., using a forward 2D-2D model relating each 2D object depth plane to the 2D hologram. However, we show that by utilizing a 3D-2D forward propagation model relating the entire object cube to the hologram plane, we can further exploit the sparsity of the information to obtain improved sectioning of the scene. As a result, white-light tomography from a reduced number of perspective projections of the scene is demonstrated, yielding a highly compact sensing process and scene representation.

We also note that one of the advantages of the multiple view approach is a higher axial resolution compared with a single-aperture holographic/imaging system. By applying CS to MVP holography, this inherent axial resolution gain is achieved with a reduced number of projections compared with regular multi-aperture imaging systems. It should also be mentioned that the proposed approaches do not require any hardware change at the sensing-system level; the entire complexity is transferred to the decoder.

The paper is organized as follows: Section 2 briefly reviews the MVP technique; Section 3 provides a short background on CS and applies it to the 2D-2D reconstruction problem, showing how CS can significantly reduce the scanning effort, storage and bandwidth requirements of the MVP hologram; simulation and experimental results support the method. In Section 4 we apply a slightly different approach, employing a 3D-2D forward propagation model to demonstrate highly efficient tomographic sectioning of the white-light illuminated 3D scene while keeping a highly compact representation of the scene. Conclusions are presented in Section 5.

2. Multiple view projection holography

The process of obtaining a digital hologram with the MVP method can be divided into an optical stage and a digital stage. In the optical stage, we translate a conventional digital camera across a scene, and during its movement we record different perspectives of the 3D scene. Each perspective of the scene can be characterized by a pair of angles $(\varphi_m,\theta_n)$. Let us denote the $mn$-th projection by $p_{mn}(x_p,y_p)$, where $x_p, y_p$ are the coordinates in the projection domain. In the digital stage each acquired projection is multiplied by a complex phase function, $f_{mn}=\exp\{j2\pi b(x_p\sin\varphi_m+y_p\sin\theta_n)\}$, where $b$ is a real constant whose role is to allow accurate reconstruction of the scene [3,4]. By integrating (digitally summing) the product of $p_{mn}$ and $f_{mn}$, i.e., $h(m,n)=\iint p_{mn}(x_p,y_p)\,f_{mn}\,dx_p\,dy_p$, we obtain a complex scalar for every projection $(\varphi_m,\theta_n)$. It can be shown [4] that $h(m,n)$ represents a Fourier hologram. Using similar techniques, other types of holograms can also be obtained [2,7].
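To make the digital stage concrete, the following minimal Python/NumPy sketch computes a single hologram coefficient $h(m,n)$ from one captured projection. The projection array, the grid pitch and the value of $b$ are hypothetical placeholders, not values taken from the paper.

```python
import numpy as np

def hologram_coefficient(p_mn, phi_m, theta_n, b=1.0, pitch=1.0):
    """Compute one MVP Fourier-hologram coefficient h(m, n).

    p_mn  : 2D array, intensity projection captured at angles (phi_m, theta_n)
    b     : real constant of the phase function f_mn (placeholder value)
    pitch : sampling interval of the projection grid (placeholder value)
    """
    ny, nx = p_mn.shape
    xp = (np.arange(nx) - nx / 2) * pitch          # projection-plane x coordinates
    yp = (np.arange(ny) - ny / 2) * pitch          # projection-plane y coordinates
    Xp, Yp = np.meshgrid(xp, yp)
    # complex phase function f_mn = exp{j*2*pi*b*(xp*sin(phi_m) + yp*sin(theta_n))}
    f_mn = np.exp(1j * 2 * np.pi * b * (Xp * np.sin(phi_m) + Yp * np.sin(theta_n)))
    # digital "integration": sum of the projection-phase product
    return np.sum(p_mn * f_mn)

# hypothetical usage: a random 64x64 projection at small viewing angles
p = np.random.rand(64, 64)
h_mn = hologram_coefficient(p, phi_m=np.deg2rad(0.5), theta_n=np.deg2rad(-0.3))
```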

If we perform a Fourier transform on $h(m,n)$, we obtain a reconstruction that corresponds only to the $z=0$ plane of the scene; details of the other planes will be out of focus. In order to reconstruct other planes, we must first multiply the hologram by a quadratic phase function corresponding to the plane $z_i$, as formulated by the following equation:

$$u_i(x,y)=\mathcal{F}^{-1}\left\{h(\nu_x,\nu_y)\exp\left[j\pi\lambda z_i\left(\nu_x^2+\nu_y^2\right)\right]\right\},\qquad(1)$$
where $u_i$ is the reconstructed plane, $\nu_x,\nu_y$ denote the spatial frequencies, $\lambda$ denotes the central wavelength and $\mathcal{F}$ represents the Fourier transform. Since the hologram synthesis and reconstruction are performed digitally, in the following expressions we denote all the reconstruction operators in discrete form. Therefore, Eq. (1) should be rewritten as:
$$u_i(p,q)=\sum_m\sum_n h(m,n)\exp\left\{j\pi\lambda z_i\left[(\Delta\nu_x m)^2+(\Delta\nu_y n)^2\right]\right\}\exp\left\{j2\pi\left(\frac{mp}{N_x}+\frac{nq}{N_y}\right)\right\},\qquad(2a)$$
$$\mathbf{u}_i=\mathbf{F}^{-1}\mathbf{Q}_{-\lambda^{2}z_i}\mathbf{h},\qquad(2b)$$
where $N_x$ and $N_y$ are the number of pixels in the x and y directions, respectively. From now on we assume $N_x=N_y=N$. Equation (2b) is simply a matrix-vector representation of (2a), where $\mathbf{u}_i$ is a lexicographically arranged $N^2\times 1$ vector representation of the corresponding object plane. Let $F$ be the $N\times N$ 1D discrete Fourier transform matrix, whose entries are $F_{m,p}=e^{-j2\pi mp/N}$; consequently $\mathbf{F}=F\otimes F$ is the $N^2\times N^2$ 2D discrete Fourier transform matrix, where $\otimes$ is the Kronecker product [12]. $\mathbf{F}^{-1}$ denotes the inverse (also the complex conjugate) of $\mathbf{F}$. The matrix $\mathbf{Q}_{-\lambda^{2}z_i}$ is an $N^2\times N^2$ diagonal matrix with the appropriate quadratic phase elements along its diagonal. It may also be noted that in the MVP holography setup, the hologram size $N_x\times N_y=N^2$ corresponds to the number of acquired projections.
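The FFT-based back-propagation of Eq. (2a) can be sketched as follows. This is a minimal illustration assuming the hologram is stored with its DC term at the array center; the wavelength, the frequency sampling interval $\Delta\nu$ and the reconstruction distance are placeholder values, and the sign convention simply follows Eq. (2a) as written.

```python
import numpy as np

def reconstruct_plane(h, z_i, wavelength, dnu):
    """Back-propagate a Fourier hologram to the plane at distance z_i (Eq. (2a)).

    h          : N x N complex hologram, DC term at the array center
    z_i        : reconstruction distance from the z = 0 plane
    wavelength : central wavelength of the white-light illumination
    dnu        : spatial-frequency sampling interval (Delta nu) of the hologram
    """
    N = h.shape[0]
    m = np.arange(N) - N // 2
    Vx, Vy = np.meshgrid(m * dnu, m * dnu)
    # quadratic phase factor of Eq. (2a): exp{j*pi*lambda*z_i*[(dnu*m)^2 + (dnu*n)^2]}
    Q = np.exp(1j * np.pi * wavelength * z_i * (Vx**2 + Vy**2))
    # the inverse 2D FFT implements the F^{-1} operator of Eq. (2b)
    return np.fft.ifft2(np.fft.ifftshift(h * Q))

# hypothetical usage with placeholder parameters
h = np.random.randn(256, 256) + 1j * np.random.randn(256, 256)
u_i = reconstruct_plane(h, z_i=0.30, wavelength=550e-9, dnu=1.0e3)
```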

Thus, it can be inferred from this section that MVP holography encodes the 3D scene in the form of a Fourier hologram, permitting its digital reconstruction using straightforward FFT-based numerical back-propagation.

3. Compressive sensing approach for reducing the number of projections

3.1 Basics of compressive sensing

Compressive sensing [13,14] has been established as a sensing paradigm in recent years. The CS mechanism seeks to capture the most essential signal information with the smallest number of measurements, i.e., to minimize the collection of redundant data in the acquisition step. Based on this relatively new theory, many works have emerged in the field of compressive imaging [15-21]. Compressive sensing relies on the assumption that the signal we want to acquire has a sparse representation in some arbitrary (known) basis. Practically, CS suggests that a signal can be reconstructed from only $M=O(K\log N)$ randomly chosen samples in Fourier space, where $K$ is the number of nonzero elements of the signal under an arbitrary sparsifying operator (e.g., Haar wavelet, total variation), and $N^2$ is the total number of object pixels [14]. The reconstruction is carried out by applying an algorithm that solves an $\ell_1$ norm minimization problem. Similar results exist for other (non-Fourier) spaces, but our interest in Fourier space comes from the fact that the present MVP method encodes the scene's data in Fourier space.

3.2 Compressive sampling multiple view projection holography

In this subsection, the reconstruction of different object planes from a subsampled Fourier hologram is formulated. As explained in Section 2, each value of the synthesized Fourier hologram, $h(m,n)$, corresponds to a captured projection of the scene. This means that in order to accurately reconstruct the different planes of the scene, only $2K\log N$ projections (Fourier samples) of the scene are needed. By doing so, we create a digital subsampled Fourier hologram with a fraction of the pixels and without any modification of the sensing hardware. We designate the subsampled Fourier hologram as $h_M$, or in its vector form as $\mathbf{h}_M$, where $M$ is the number of samples (projections). In order to reconstruct an object plane $\mathbf{u}_i$ at distance $z_i$ from the $z=0$ plane, given the subsampled Fourier hologram $\mathbf{h}_M$, we solve the following minimization problem:

$$\min_{\mathbf{u}_i}\left\{\left\|\mathbf{Q}_{\lambda^{2}z_i}\mathbf{F}\,\mathbf{u}_i-\mathbf{h}_M\right\|_2^2+\gamma\left\|\Psi_i\mathbf{u}_i\right\|_1\right\},\qquad(3)$$
where $\gamma$ is a regularization parameter that controls the tradeoff between the data fit and the sparsity level, $\|\cdot\|_p$ is the $\ell_p$ norm and $\Psi_i$ is an operator that promotes the sparsest representation of the plane $\mathbf{u}_i$, such as a Haar wavelet transform or total variation (TV). The sparsifying basis may be the same for all reconstructed planes, or specifically adapted to each object plane $\mathbf{u}_i$. The compressive multiple view projection (CMVP) holography approach may be summarized as follows [22]:

  • Acquire only ≈2K log N random projections of the 3D scene (instead of the $N^2=N_x\cdot N_y$ projections nominally required by the original MVP method).
  • Multiply each acquired projection $p_{mn}$ by its corresponding phase function $f_{mn}$ (see Section 2). The digital summation of each product yields a single Fourier hologram coefficient $h(m,n)$. We thus obtain an undersampled Fourier hologram from the ≈2K log N acquired projections (coefficients).
  • Reconstruct the depth planes of the 3D object using the $\ell_2$-$\ell_1$ norm minimization of Eq. (3) with an appropriate sparsifying operator (a minimal solver sketch follows this list).
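The following is a minimal proximal-gradient (ISTA-type) sketch of the $\ell_2$-$\ell_1$ problem of Eq. (3), using a single-level Haar transform as the sparsifying operator $\Psi$. It is only a simplified stand-in for a dedicated solver such as TwIST [23]; the step size, the value of $\gamma$, the iteration count and all placeholder inputs are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def haar2(x):
    """Single-level orthonormal 2D Haar transform (returns the four sub-bands)."""
    x00, x10, x01, x11 = x[0::2, 0::2], x[1::2, 0::2], x[0::2, 1::2], x[1::2, 1::2]
    return ((x00 + x10 + x01 + x11) / 2, (x00 - x10 + x01 - x11) / 2,
            (x00 + x10 - x01 - x11) / 2, (x00 - x10 - x01 + x11) / 2)

def ihaar2(a, h, v, d):
    """Inverse of haar2."""
    x = np.zeros((2 * a.shape[0], 2 * a.shape[1]), dtype=a.dtype)
    x[0::2, 0::2] = (a + h + v + d) / 2
    x[1::2, 0::2] = (a - h + v - d) / 2
    x[0::2, 1::2] = (a + h - v - d) / 2
    x[1::2, 1::2] = (a - h - v + d) / 2
    return x

def soft(c, t):
    """Complex soft-thresholding (proximal operator of the l1 norm)."""
    mag = np.abs(c)
    return c * np.maximum(mag - t, 0) / np.maximum(mag, 1e-12)

def cmvp_reconstruct(h_M, mask, Q, gamma=0.05, n_iter=100):
    """ISTA-style l2-l1 reconstruction of one depth plane (a stand-in for TwIST).

    h_M  : hologram with the unmeasured samples set to zero (unshifted FFT grid)
    mask : boolean array, True where a projection/Fourier sample was acquired
    Q    : quadratic phase factor of the chosen depth plane z_i (same grid as h_M)
    """
    u = np.zeros_like(h_M)
    for _ in range(n_iter):
        residual = mask * (Q * np.fft.fft2(u, norm="ortho")) - h_M            # A u - h_M
        u = u - np.fft.ifft2(np.conj(Q) * (mask * residual), norm="ortho")    # gradient step (unit step size)
        u = ihaar2(*[soft(c, gamma) for c in haar2(u)])                       # sparsify in the Haar domain
    return u

# hypothetical usage with placeholder inputs
N = 256
mask = np.ones((N, N), dtype=bool)                 # replace with the actual acquisition mask
Q = np.ones((N, N), dtype=complex)                 # replace with the quadratic phase of plane z_i
h_M = mask * np.fft.fft2(np.random.rand(N, N), norm="ortho")
u_rec = cmvp_reconstruct(h_M, mask, Q)
```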

A schematic illustration of the acquisition and hologram generation process is shown in Fig. 1. For convenience, only a scan along the x-axis is shown. The minimal angular distance between two adjacent projections is $\Delta\varphi$, and $z_o$ is the distance between the imaging system and the object. The length of the CCD's translation trajectory is $2L$. In Fig. 1 the synthesized Fourier hologram is heavily undersampled; this undersampling is the essence of the present work and is absent from general MVP holography.

Fig. 1 Illustration of CMVP hologram acquisition. Acquisition of only ≈K log Nx projections results in a heavily undersampled Fourier hologram. Each sample in the hologram plane corresponds to a nonuniformly randomly picked projection.

Basically, by moving the camera to sparse translation locations we subsample the Fourier plane, as pioneered by Candès et al. in [14]. However, unlike common uniform random subsampling [14], we use variable-density sampling of the Fourier space, so that more samples are taken at low frequencies, near the origin, and fewer samples are taken farther from the origin (as can be seen in Fig. 1). This random subsampling scheme has been shown to be more efficient than uniform random subsampling for CS in the Fourier [17] and Fresnel [21] domains.
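A variable-density random selection of projection locations can be sketched as follows. The radial decay law used for the sampling probability is an illustrative assumption and not necessarily the exact density profile used to generate Fig. 1.

```python
import numpy as np

def variable_density_mask(N, fraction=0.06, power=3.0, seed=0):
    """Pick a random subset of the N x N projection grid with higher density
    near the hologram origin (low spatial frequencies).

    fraction : fraction of the N*N projections to acquire
    power    : decay exponent of the sampling probability with radius
               (an illustrative choice, not the exact profile of the paper)
    """
    rng = np.random.default_rng(seed)
    y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
    r = np.hypot(x, y) / (N / 2)                       # normalized radius
    prob = 1.0 / (1.0 + r) ** power                    # denser sampling near the origin
    prob *= fraction * N * N / prob.sum()              # scale to the requested budget
    mask = rng.random((N, N)) < np.clip(prob, 0, 1)
    mask[N // 2, N // 2] = True                        # always keep the DC projection
    return np.fft.ifftshift(mask)                      # match an unshifted FFT grid

mask = variable_density_mask(256, fraction=0.06)       # ~6% of the projections, as in Fig. 2
```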

The presented method has several advantages over the technique in [6], which also demonstrated MVP holography with reduced sampling of the scene. The method in [6] requires distinct anchor points in order to interpolate the different perspectives of the scene from a small number of projections; consequently, textured and smooth scenes may require many more projections. The number and locations of these anchor points, as well as the number of projections, must be adapted to the particular 3D object. Our proposed method is free from this limitation; it is virtually universal, and its only assumption is that the scene has a sparse representation in some known basis, so it can handle any natural scene. Another advantage is that our method does not require any modification of the sensing hardware, unlike [6], where the hologram synthesis must be performed at the sensor level.

3.3 Simulation and experimental results

Simulation and experimental data were used to show the applicability of the method. A 3D scene was simulated with the letters B, G, U placed at different axial locations. We digitally generated a Fourier hologram according to the process described in Section 2. The generated hologram was 256×256 pixels in size, which corresponds to 256×256 projections. Afterwards, we subsampled the Fourier hologram according to the variable-density random sampling scheme described in subsection 3.2. We reconstructed the depth planes $u_i$ by solving Eq. (3) with the TwIST solver [23], where the sparsifying operator was chosen to be the Haar wavelet transform. The choice of the sparsifying operator affects both the quality of the reconstruction and the convergence rate; the Haar wavelet basis was chosen because of the piecewise-constant nature of the objects and its incoherence with the Fourier sensing basis [17]. In Fig. 2 the reconstruction from the undersampled Fourier hologram is compared with that from the complete Fourier hologram. It can be seen that by using only 6% of the projections, the different planes are accurately reconstructed.

Fig. 2 Reconstruction examples of the B and U planes of simulated data. (a) Reconstruction of the B plane from 100% of the projections. (b) CS reconstruction of the B plane from 6% of the projections. (c) Reconstruction of the U plane from 100% of the projections. (d) CS reconstruction of the U plane from 6% of the projections.

In the real experiment we used a 3D scene containing three cubes, each 3.5 cm × 3.5 cm × 3.5 cm in size. The distances along the optical axis between the imaging lens of the CCD camera and the first, middle and last cubes were 30 cm, 37 cm, and 40 cm, respectively. A Fourier hologram was synthesized from 400 1D projections captured along the x-axis; the 1D MVP algorithm and its differences from 2D MVP are detailed in [2]. The distance between the two most extreme projections along the CCD path was 4 cm and the interval between every two successive projections was 0.1 mm. Figure 3 demonstrates an accurate reconstruction from only 25% of the nominal number of acquired projections.

Fig. 3 CMVP reconstruction results of experimental data. (a) Reconstruction from 100% of the projections. (b) Reconstruction from 25% of the projections using the CS framework.

We point out that in this experiment the scanning is performed in only one dimension, so the achievable compression ratio is smaller than with 2D scanning. In fact, 25% subsampling obtained with 1D scanning is equivalent to about 6% subsampling in the simulated 2D scanning experiment (0.25×0.25 ≈ 0.06).

Thus, in this section we have demonstrated a reduction in the scanning effort by applying the CS framework to the acquisition of an incoherent MVP hologram. This also results in a major reduction in bandwidth and storage requirements. The reduction comes with no alteration of the system's hardware, and it is completely nonadaptive to the scene. The only assumption made is that the scene is sparse in some basis, which is a reasonable assumption for any natural scene.

4. Efficient depth sectioning with compressive multiple view projection holography

4.1 Applying a 3D-2D forward model

As seen in Figs. 2 and 3, the reconstruction obtained by digitally focusing on different object depth planes may be distorted by out-of-focus object points located in other planes. These disturbances are the result of an incomplete model of the system: the back-propagation model of Eq. (2) follows a 2D-2D model linking the hologram to a single depth plane while ignoring the other object planes. Clearly, a reconstruction based on this or any such 2D-2D model is subject to distortions whenever an object point disobeys the model, i.e., when the point is located in another depth plane. In order to avoid these distortions, a 3D-2D forward model relating all $N_{object}=N_x\times N_y\times N_z$ object voxels to the $N_{holo}=N_x\times N_y$ synthesized hologram points should be considered. Such an approach has recently been introduced for several types of coherent holography applications [24-27]. The 3D-2D model linking the contributions of the different object planes, located at distances $z_i$ from the $z=0$ plane, to the hologram is given by the following equation:

$$h(\nu_x,\nu_y)=\sum_{i=1}^{N_z}\exp\left[j\pi\lambda z_i\left(\nu_x^2+\nu_y^2\right)\right]\mathcal{F}\{u_i\}=\sum_{i=1}^{N_z}\mathcal{F}\left\{u_i\ast\exp\left[\frac{j\pi}{\lambda z_i}\left(x^2+y^2\right)\right]\right\},\qquad(4)$$
where each plane $u_i$ is separated by a distance $z_i$ from the $z=0$ plane, and a total of $N_z$ planes contribute to the hologram generation. Rewriting Eq. (4) in discrete matrix-vector form, as in Section 2, yields:

$$h(m,n)=\sum_{i=1}^{N_z}\mathbf{Q}_{\lambda^{2}z_i}\mathbf{F}\,\mathbf{u}_i.\qquad(5)$$

Using this form, the hologram generation model can be expressed as:

$$\mathbf{h}=\left[\mathbf{Q}_{\lambda^{2}z_1}\mathbf{F};\ \ldots;\ \mathbf{Q}_{\lambda^{2}z_{N_z}}\mathbf{F}\right]\left[\mathbf{u}_1;\ \ldots;\ \mathbf{u}_{N_z}\right]^{T}=\boldsymbol{\Phi}\,\mathbf{u}^{T}.\qquad(6)$$

Equations (5) and (6) represent the system forward model for the complete hologram $\mathbf{h}$. Here, the reconstruction is applied to the subsampled Fourier hologram $\mathbf{h}_M$, as described in Section 3. Therefore, we may formulate the reconstruction problem as follows:

$$\min_{\mathbf{u}}\left\{\left\|\mathbf{h}_M-\boldsymbol{\Phi}\,\mathbf{u}^{T}\right\|_2^2+\tau\left\|\mathbf{u}\right\|_{TV}\right\},\qquad(7)$$
where
$$\left\|\mathbf{u}\right\|_{TV}=\sum_{l}\sum_{i,j}\sqrt{\left(u_{i+1,j,l}-u_{i,j,l}\right)^2+\left(u_{i,j+1,l}-u_{i,j,l}\right)^2},\qquad(8)$$
as stated in [24,25]. In Eq. (7), $\tau$ is a regularization parameter that controls the tradeoff between the data fit and the sparsity level. Using Eqs. (7) and (8), we are able to seek the sparsest solution over the entire 3D cube, rather than in each plane separately. We thereby combine the subsampling shown in the previous section with the extended ability to perform tomographic image reconstruction. This approach may be called CMVP tomography (CMVPT). The procedure may be summarized as follows:

  • Acquire only ≈2K log N projections of the 3D scene.
  • Reconstruct the sparsest solution of the entire 3D data cube according to the problem formulation in Eq. (7). The reconstruction result is the collection of planes $[\mathbf{u}_{z_1};\mathbf{u}_{z_2};\ldots;\mathbf{u}_{z_{N_z}}]$ (the forward operator used in this formulation is sketched below).
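A sketch of the 3D-2D forward operator $\boldsymbol{\Phi}$ of Eq. (6) and of its adjoint is given below; together with the TV penalty of Eq. (8), these are the operators one would hand to a solver such as TwIST [23,24] in order to recover the full cube. The wavelength, frequency sampling interval and the all-ones mask are placeholder assumptions.

```python
import numpy as np

def quadratic_phases(N, z_list, wavelength, dnu):
    """Quadratic-phase factors Q_{lambda^2 z_i} of Eq. (5), one per depth plane
    (unshifted FFT frequency ordering)."""
    f = np.fft.fftfreq(N) * N * dnu                     # Delta nu * m for each frequency index m
    Vx, Vy = np.meshgrid(f, f)
    return [np.exp(1j * np.pi * wavelength * z * (Vx**2 + Vy**2)) for z in z_list]

def forward(u_cube, Qs, mask):
    """3D-to-2D forward operator Phi of Eq. (6): the propagated object planes are
    summed in the hologram plane and restricted to the acquired samples."""
    h = sum(Q * np.fft.fft2(u, norm="ortho") for u, Q in zip(u_cube, Qs))
    return mask * h

def adjoint(h, Qs, mask):
    """Adjoint (conjugate transpose) of forward: one back-propagation per depth plane."""
    hm = mask * h
    return np.stack([np.fft.ifft2(np.conj(Q) * hm, norm="ortho") for Q in Qs])

# hypothetical usage: a 3-plane cube at the depths of the experiment in subsection 4.2
N, z_list = 256, [0.30, 0.37, 0.40]
Qs = quadratic_phases(N, z_list, wavelength=550e-9, dnu=1.0e3)
mask = np.ones((N, N), dtype=bool)                      # or a variable-density mask as sketched above
u0 = np.zeros((len(z_list), N, N), dtype=complex)
h_M = forward(u0, Qs, mask)
```

These two operators define the data-fit gradient $\boldsymbol{\Phi}^{H}(\boldsymbol{\Phi}\mathbf{u}-\mathbf{h}_M)$ needed by any first-order $\ell_2$-TV solver.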

4.2 Experimental results

We again use the experimental data of Section 3. One hundred 1D projections are used to reconstruct 400×400×3 object voxels. Figure 4 demonstrates the sectioning of the 3D scene.

Fig. 4 Applying CMVP tomographic sectioning to experimental data. (a) Reconstruction from 100% of the projections. (b) Compressive holography approach applied to CMVP, with only 25% of the nominal number of projections.

Figure 4 exhibits the ability of the method to increase the contrast between in-focus and out-of-focus objects. The contrast is increased by approximately 4-5 times compared with regular back-propagation applied to the MVP-generated hologram (Fig. 3). The remaining unfocused data may be easily removed by applying thresholding or filtering techniques.

4.3 System resolution analysis

The theoretical analysis of the system's resolution limit is based on the fact that the moving CCD captures projections of the object from different directions. Consequently, the resolution is determined by the imaging system's parameters and by the hologram generation process. The imaging system's resolution is governed by either the optical resolution or the geometrical resolution (the size of the CCD pixel). If we denote the finite aperture radius of the imaging lens by $A$ and the distance from the object to the imaging system by $z_o$, the optical lateral resolution is given by $\lambda/\mathrm{NA}_{in}=\lambda z_o/A$, where NA stands for numerical aperture. The geometrical lateral resolution is approximated by projecting the pixel size $\Delta s$ onto the object plane, and is therefore given by $\Delta s/M_T$, where $M_T$ is the lateral magnification of the projections. Besides the imaging system's resolution, another limitation is introduced when reconstructing the object from the hologram. As shown in Section 2, every projection is multiplied by the phase factor $f_m=\exp\{j2\pi b\,x_p\sin\varphi_m\}$, and the minimal cycle of $f_m$ determines the lateral resolution limit. Assuming $x_{p,\max}=1$, the minimal cycle of $f_m$ is $N_p/(b\sin\varphi_{m,\max})$, where $N_p$ is the number of pixels in each projection along the x-axis. Consequently, the minimal cycle of $f_m$ in the object plane is $N_p\Delta s/(b\sin\varphi_{m,\max}M_T)$. From Fig. 1 the relation $\sin\varphi_{m,\max}=L/\sqrt{L^2+z_o^2}$ is obtained. Therefore, the resolution limit induced by the numerical aperture of the hologram is $N_p\Delta s\sqrt{L^2+z_o^2}/(bLM_T)$. Equation (9) summarizes the system's lateral resolution:

$$\Delta x=\max\left\{\Delta x_{optical},\ \Delta x_{geometrical},\ \Delta x_{hologram}\right\}=\max\left\{\frac{\lambda z_o}{A},\ \frac{\Delta s}{M_T},\ \frac{N_p\Delta s\sqrt{L^2+z_o^2}}{bLM_T}\right\}.\qquad(9)$$

The axial resolution of a single-aperture imaging system is given by $\Delta z_{SA}=\lambda/\mathrm{NA}^2=\Delta x\,z_o/A$. Since our system is based on multiple apertures, the axial resolution is determined by the maximal angular range of the setup, i.e., $\Delta z=\Delta x/\tan\varphi_{m,\max}$. Therefore, the axial resolution can be approximated as:

$$\Delta z_{MA}=\frac{z_o}{L}\,\Delta x.\qquad(10)$$

For the experimental data, the transversal resolution is approximately 0.25 cm and the axial resolution is approximately 3 cm, according to Eq. (9) and Eq. (10), respectively. Since fewer projections are taken within the given area, which is determined by the camera trajectory, CMVPT is much more effective than MVP in terms of axial resolution gain relative to acquisition effort (number of exposures). The axial resolution gain achieved by using a multi-aperture setup instead of a single-aperture setup is $\Delta z_{gain}=\Delta z_{SA}/\Delta z_{MA}=L/A$. Equation (11) expresses this gain divided by the number of projections required to reconstruct the scene using the CS framework:

$$\frac{\Delta z_{gain}}{\text{Number of projections}}=\frac{L/A}{K\log N}=\frac{N}{K\log N}.\qquad(11)$$

As a rule of thumb, as the number of pixels in an image grows, its sparsity level $K$ increases at a slower rate. Therefore, the term $N/K$ in Eq. (11) increases with the dimensionality of the problem, and in turn the ratio of the axial resolution gain to the number of projections grows accordingly. Hence, we obtain the superior axial resolution of the MVP method while reducing the scanning effort.
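For reference, the resolution limits of Eqs. (9) and (10) can be evaluated directly as below. The numerical values in the usage lines are placeholders chosen only to illustrate the calculation; they are not the exact parameters of the experiment, which are not all stated in the text.

```python
import numpy as np

def lateral_resolution(wavelength, z_o, A, ds, M_T, N_p, b, L):
    """Lateral resolution limit of Eq. (9): the worst of the optical,
    geometrical and hologram-synthesis limits."""
    dx_optical   = wavelength * z_o / A
    dx_geometric = ds / M_T
    dx_hologram  = N_p * ds * np.sqrt(L**2 + z_o**2) / (b * L * M_T)
    return max(dx_optical, dx_geometric, dx_hologram)

def axial_resolution(dx, z_o, L):
    """Multi-aperture axial resolution of Eq. (10): dz = (z_o / L) * dx."""
    return z_o / L * dx

# hypothetical usage with placeholder parameters (SI units, half-trajectory L = 2 cm)
dx = lateral_resolution(wavelength=550e-9, z_o=0.35, A=5e-3, ds=5e-6,
                        M_T=0.05, N_p=400, b=200.0, L=0.02)
dz = axial_resolution(dx, z_o=0.35, L=0.02)
```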

5. Conclusion

In this paper we have presented a simple and nonadaptive way to reduce the number of projections in incoherent MVP holography, while accurately reconstructing the 3D scene. Accurate reconstruction of the planes was made possible by applying compressive sensing theory. Simulation and experimental results exhibited accurate reconstruction from a mere 6% of the nominal number of projections. The practical implications are a reduction of the scanning effort in the acquisition step, as well as reduced hologram bandwidth and storage requirements. We have also demonstrated improved sectioning of the scene from a reduced number of projections by applying a proper 3D-2D model. This has allowed totally incoherent-light tomography to be performed while keeping a highly compact representation of the scene. All of these advantages require no hardware changes at the sensor level.

Acknowledgements

The authors would like to thank Barak Katz and Natan T. Shaked for providing the experimental data. This research was partially supported by the Israel Science Foundation (grant No. 1039/09) and Israel's Ministry of Science.

References and links

1. J. W. Goodman, Introduction to Fourier Optics, 3rd ed. (Roberts and Company Publishers, 2005).

2. N. T. Shaked, B. Katz, and J. Rosen, "Review of three-dimensional holographic imaging by multiple-viewpoint-projection based methods," Appl. Opt. 48(34), H120–H136 (2009).

3. Y. Li, D. Abookasis, and J. Rosen, "Computer-generated holograms of three-dimensional realistic objects recorded without wave interference," Appl. Opt. 40(17), 2864–2870 (2001).

4. D. Abookasis and J. Rosen, "Computer-generated holograms of three-dimensional objects synthesized from their multiple angular viewpoints," J. Opt. Soc. Am. A 20(8), 1537–1545 (2003).

5. Y. Sando, M. Itoh, and T. Yatagai, "Holographic three-dimensional display synthesized from three-dimensional Fourier spectra of real existing objects," Opt. Lett. 28(24), 2518–2520 (2003).

6. B. Katz, N. T. Shaked, and J. Rosen, "Synthesizing computer generated holograms with reduced number of perspective projections," Opt. Express 15(20), 13250–13255 (2007).

7. N. T. Shaked and J. Rosen, "Modified Fresnel computer-generated hologram directly recorded by multiple-viewpoint projections," Appl. Opt. 47(19), D21–D27 (2008).

8. N. T. Shaked, J. Rosen, and A. Stern, "Integral holography: white-light single-shot hologram acquisition," Opt. Express 15(9), 5754–5760 (2007).

9. N. Chen, J.-H. Park, and N. Kim, "Parameter analysis of integral Fourier hologram and its resolution enhancement," Opt. Express 18(3), 2152–2167 (2010).

10. G. Indebetouw, P. Klysubun, T. Kim, and T.-C. Poon, "Imaging properties of scanning holographic microscopy," J. Opt. Soc. Am. A 17(3), 380–390 (2000).

11. J. Rosen and G. Brooker, "Digital spatially incoherent Fresnel holography," Opt. Lett. 32(8), 912–914 (2007).

12. Y. Rivenson and A. Stern, "Compressed imaging with separable sensing operator," IEEE Signal Process. Lett. 16(6), 449–452 (2009).

13. D. L. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006).

14. E. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Inf. Theory 52(2), 489–509 (2006).

15. http://sites.google.com/site/igorcarron2/compressedsensinghardware

16. A. Stern, "Compressed imaging system with linear sensors," Opt. Lett. 32(21), 3077–3079 (2007).

17. M. Lustig, "Sparse MRI," Ph.D. dissertation (Department of Electrical Engineering, Stanford University, 2008).

18. S. Gazit, A. Szameit, Y. C. Eldar, and M. Segev, "Super-resolution and reconstruction of sparse sub-wavelength images," Opt. Express 17(26), 23920–23946 (2009).

19. A. Bourquard, F. Aguet, and M. Unser, "Optical imaging using binary sensors," Opt. Express 18(5), 4876–4888 (2010).

20. Y. Rivenson, A. Stern, and B. Javidi, "Single exposure super-resolution compressive imaging by double phase encoding," Opt. Express 18(14), 15094–15103 (2010).

21. Y. Rivenson, A. Stern, and B. Javidi, "Compressive Fresnel holography," J. Disp. Technol. 6(10), 506–509 (2010).

22. Y. Rivenson, A. Stern, and J. Rosen, "Compressive sensing approach for reducing the number of exposures in multiple view projection holography," in Frontiers in Optics, OSA Technical Digest (CD) (Optical Society of America, 2010), paper FThM2.

23. J. M. Bioucas-Dias and M. A. T. Figueiredo, "A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration," IEEE Trans. Image Process. 16(12), 2992–3004 (2007).

24. D. J. Brady, K. Choi, D. L. Marks, R. Horisaki, and S. Lim, "Compressive holography," Opt. Express 17(15), 13040–13049 (2009).

25. C. F. Cull, D. A. Wikner, J. N. Mait, M. Mattheiss, and D. J. Brady, "Millimeter-wave compressive holography," Appl. Opt. 49(19), E67–E82 (2010).

26. K. Choi, R. Horisaki, J. Hahn, S. Lim, D. L. Marks, T. J. Schulz, and D. J. Brady, "Compressive holography of diffuse objects," Appl. Opt. 49(34), H1–H10 (2010).

27. X. Zhang and E. Y. Lam, "Edge-preserving sectional image reconstruction in optical scanning holography," J. Opt. Soc. Am. A 27(7), 1630–1637 (2010).
