## Abstract

Multiple view projection holography is a method for obtaining a digital hologram by recording different views of a 3D scene with a conventional digital camera. These views are digitally manipulated to create the digital hologram. The method requires a simple setup and operates under white-light illumination conditions. The multiple views are often generated by camera translation, which usually involves a considerable scanning effort. In this work we apply a compressive sensing approach to the multiple view projection holography acquisition process and demonstrate that the 3D scene can be accurately reconstructed from the highly subsampled generated Fourier hologram. It is also shown that the compressive sensing approach, combined with an appropriate system model, yields improved sectioning of the planes at different depths.

© 2011 OSA

## 1. Introduction

Holography is a classical method [1] for recording the three-dimensional information of a scene. Most often a hologram is acquired using highly coherent, high-power sources, such as lasers, in order to create the necessary wave interference. Implementing coherent light holography requires both thermal and mechanical stability of the optical setup. All of these factors often confine holographic recording to the laboratory. Thus, an ongoing effort is being made to develop incoherent-illumination holographic recording processes [2]. We may name a few implementation techniques, such as multiple view projections created either by a translating camera [3-7] or a lenslet array [8,9], optical scanning holography [10], and Fresnel incoherent correlation holography (FINCH) [11].

In the present work, we concentrate on the multiple view projection (MVP) holography technique. The acquisition process uses spatially and temporally incoherent “white” light, thus avoiding many of the drawbacks of traditional holographic recording. Any ordinary digital camera may be used as the recording device. The camera is usually translated, and during its movement it captures many views of the same scene from different angles. Each of these intensity images is a projection of the scene onto the CCD plane [2]. The different projections are then used to synthesize a digital hologram. One of the drawbacks of this MVP method is that it requires a significant scanning effort. For instance, in order to produce a 256×256 pixel hologram, 256×256 = 65,536 projections should be acquired. A lenslet array can be employed [8,9] to eliminate the scanning effort. However, this solution suffers from the low resolving power of the optical system, which limits the quality of the hologram reconstruction. In [6] the scanning effort was reduced by recording only a small number of the projections and synthesizing the rest using a view-synthesis stereo vision algorithm. This technique encounters difficulties in handling certain scenes and requires changes in the hologram generation process.

In this work we adopt the compressive sensing (CS) approach in order to dramatically reduce the scanning effort in the acquisition step of MVP holography. It is demonstrated that the three dimensional (3D) scene can be reconstructed accurately using a mere fraction of the projections. Conventionally, the 3D scene is reconstructed plane by plane, i.e., via a 2D-2D forward model relating each 2D object depth plane to the 2D hologram. However, we show that by utilizing a 3D-2D forward propagation model relating the entire object cube to the hologram plane, we can further exploit the sparsity of the information to obtain improved sectioning of the scene. As a result, white light tomography from a reduced number of perspective projections of the scene is demonstrated. Therefore, a highly compact sensing process and representation of the scene is achieved.

We also note that one of the advantages of using a multiple view approach is higher axial resolution compared with a single aperture holographic / imaging system. Thus, by applying CS to MVP holography, the inherent axial resolution gain of MVP holography is achieved with a reduced number of projections compared with regular multi-aperture imaging systems. It should also be mentioned that the proposed approaches do not require any hardware change at the sensing system level; the entire complexity is transferred to the decoder.

The current paper is organized as follows: section 2 deals briefly with the MVP technique; section 3 provides a short background on CS and applies it to a 2D-2D reconstruction problem, showing how CS can significantly reduce the scanning effort, storage and bandwidth requirements of the MVP hologram; simulation and experimental results support the method. In section 4, we apply a slightly different approach, employing a 3D-2D forward propagation model in order to demonstrate highly efficient tomographic sectioning of the white-light illuminated 3D scene, while keeping a highly compact representation of the scene. Conclusions are presented in section 5.

## 2. Multiple view projection holography

The process of obtaining a digital hologram using the MVP method can be divided into optical and digital stages. In the optical stage, we translate a conventional digital camera across a scene, and during its movement we record different perspectives of the 3D scene. Each perspective of the scene can be characterized by a pair of angles $({\phi}_{m},{\theta}_{n})$. Let us denote the *mn*-th projection by ${p}_{mn}({x}_{p},{y}_{p})$, where ${x}_{p},{y}_{p}$ are the coordinates in the projection domain. In the digital stage each acquired projection is multiplied by a complex phase function, ${f}_{mn}=\mathrm{exp}\left\{-j2\pi b\left({x}_{p}\mathrm{sin}{\phi}_{m}+{y}_{p}\mathrm{sin}{\theta}_{n}\right)\right\}$, where *b* is some real constant whose role is to allow the accurate reconstruction of the scene [3,4]. By integrating (digitally summing) the product of ${p}_{mn}$ and ${f}_{mn}$, i.e., $h(m,n)={\displaystyle \iint {p}_{mn}({x}_{p},{y}_{p}){f}_{mn}d{x}_{p}d{y}_{p}}$, we obtain a complex scalar for every projection $({\phi}_{m},{\theta}_{n})$. It can be shown [4] that *h(m,n)* represents a Fourier hologram. Using similar techniques, other types of holograms can also be obtained [2,7].
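The digital stage above can be sketched in a few lines. A minimal illustration, assuming the projections are supplied as a 4D array and using an arbitrary value for the constant *b*; the function and parameter names are ours, not from the paper:

```python
import numpy as np

def synthesize_fourier_hologram(projections, phis, thetas, b=1.0):
    """One Fourier-hologram coefficient h(m, n) per captured projection.

    projections: (M, N, Ny, Nx) array of intensity images p_mn(x_p, y_p)
    phis, thetas: view angles (radians) of the two scan axes
    b: the real constant of the phase function f_mn (illustrative value)
    """
    M, N, Ny, Nx = projections.shape
    xp = np.arange(Nx) - Nx // 2          # projection-plane coordinates x_p
    yp = np.arange(Ny) - Ny // 2          # and y_p
    XP, YP = np.meshgrid(xp, yp)
    h = np.zeros((M, N), dtype=complex)
    for m in range(M):
        for n in range(N):
            # complex phase function f_mn = exp{-j*2*pi*b*(x_p*sin(phi_m) + y_p*sin(theta_n))}
            f = np.exp(-2j * np.pi * b * (XP * np.sin(phis[m]) + YP * np.sin(thetas[n])))
            # digital summation of p_mn * f_mn yields one hologram sample
            h[m, n] = np.sum(projections[m, n] * f)
    return h
```

For the on-axis projection ($\phi = \theta = 0$) the phase function reduces to unity, so that hologram sample is simply the sum of the projection's intensities.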

If we perform a Fourier transform on *h(m,n)*, we obtain a reconstruction which corresponds only to the *z* = 0 plane of the scene; details of the other planes will be out of focus. In order to reconstruct other planes, we should multiply the hologram by a quadratic phase function corresponding to the *z _{i}* plane. This is formulated by the following equation:

$${u}_{{z}_{i}}(x,y)={\mathcal{F}}^{-1}\left\{h({\nu}_{x},{\nu}_{y})\mathrm{exp}\left[-j\pi {\lambda}^{2}{z}_{i}\left({\nu}_{x}^{2}+{\nu}_{y}^{2}\right)\right]\right\},\qquad (1)$$

where ${u}_{{z}_{i}}$ is the reconstructed plane, ${\nu}_{x},{\nu}_{y}$ indicate spatial frequencies, λ denotes the central wavelength and ${\mathcal{F}}^{-1}$ represents the inverse Fourier transform. Since the hologram synthesis and reconstruction are performed digitally, in the following expressions we denote all the reconstruction operators in discrete form. Therefore, Eq. (1) should be rewritten as the following expression:

$${u}_{{z}_{i}}(k,l)={\displaystyle \sum _{m=0}^{{N}_{x}-1}{\displaystyle \sum _{n=0}^{{N}_{y}-1}h(m,n)\mathrm{exp}\left[-j\pi {\lambda}^{2}{z}_{i}\left({m}^{2}+{n}^{2}\right)\right]\mathrm{exp}\left[j2\pi \left(\frac{mk}{{N}_{x}}+\frac{nl}{{N}_{y}}\right)\right]}},\qquad (2a)$$

$${\mathbf{u}}_{{z}_{i}}={\mathbf{F}}^{-1}{Q}_{-{\lambda}^{2}{z}_{i}}\mathbf{h},\qquad (2b)$$

where *N _{x}* and *N _{y}* are the number of pixels in the *x* and *y* directions, respectively. From now on, we will assume that ${N}_{x}={N}_{y}=N$. Equation (2b) is simply a matrix-vector representation of (2a), where ${\mathbf{u}}_{{z}_{i}}$ is a lexicographically arranged ${N}^{2}\times 1$ vector representation of the corresponding object plane. Let *F* be the $N\times N$ 1D discrete Fourier transform matrix, whose entries are ${F}_{m,p}={e}^{-j2\pi mp/N}$; consequently $\mathbf{F}=F\otimes F$ is the ${N}^{2}\times {N}^{2}$ 2D discrete Fourier transform matrix, where ⊗ is the Kronecker product [12]. ${\mathbf{F}}^{-1}$ symbolizes the inverse (also the complex conjugate, up to scale) of $\mathbf{F}$. The matrix ${Q}_{-{\lambda}^{2}{z}_{i}}$ is an ${N}^{2}\times {N}^{2}$ diagonal matrix with the appropriate quadratic phase elements along its diagonal. It may also be noted that in the MVP holography setup, the hologram size ${N}_{x}\times {N}_{y}={N}^{2}$ corresponds to the number of acquired projections.
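The separable structure $\mathbf{F}=F\otimes F$ can be verified numerically: applying the Kronecker-product matrix to a lexicographically ordered image is the same as taking its 2D FFT. A small sanity check (the size is arbitrary):

```python
import numpy as np

N = 8
m = np.arange(N)
# 1D DFT matrix, entries F[m, p] = exp(-j*2*pi*m*p/N)
F1 = np.exp(-2j * np.pi * np.outer(m, m) / N)
# 2D DFT matrix as a Kronecker product of two 1D DFT matrices
F2 = np.kron(F1, F1)                      # N^2 x N^2

u = np.arange(N * N, dtype=float).reshape(N, N)
# kron(F1, F1) acting on the row-major vectorized image == 2D FFT
lhs = F2 @ u.ravel()
rhs = np.fft.fft2(u).ravel()
print(np.allclose(lhs, rhs))              # True
```

This separability is what makes the sensing operator cheap to apply in practice: the dense ${N}^{2}\times {N}^{2}$ matrix is never formed, and FFTs are used instead [12].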

Thus, it can be inferred from this section that MVP holography encodes the 3D scene in the form of a Fourier hologram, permitting its digital reconstruction using straightforward FFT-based numerical back propagation.
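As a concrete illustration of this FFT-based back propagation, a minimal refocusing routine in the spirit of Eq. (1); the exact frequency scaling of the quadratic phase is our assumption:

```python
import numpy as np

def refocus(h, lam, z_i):
    """Back-propagate a Fourier hologram h to the depth plane z_i via
    u = IFFT2{ h * exp(-j*pi*lam^2*z_i*(vx^2 + vy^2)) }."""
    N = h.shape[0]
    v = np.fft.fftfreq(N)                          # spatial frequencies
    VX, VY = np.meshgrid(v, v)
    Q = np.exp(-1j * np.pi * lam**2 * z_i * (VX**2 + VY**2))  # quadratic phase
    return np.fft.ifft2(h * Q)

# For z_i = 0 the quadratic phase vanishes and the routine reduces to a
# plain inverse FFT, reconstructing the z = 0 plane of the scene.
```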

## 3. Compressive sensing approach for reducing the number of projections

#### 3.1 Basics of compressive sensing

Compressive sensing [13,14] was established as a sensing paradigm in recent years. The CS mechanism seeks to capture the most essential signal information with the smallest number of measurements, i.e., to minimize the collection of redundant data in the acquisition step. Based on this relatively new theory, many works have already sprung up in the field of compressive imaging [15-21]. Compressive sensing relies on the assumption that the signal we want to acquire has a sparse representation in some (known) basis. Practically, CS suggests that a signal can be reconstructed from only $M=O(K\mathrm{log}N)$ randomly chosen samples in Fourier space, where *K* is the number of nonzero elements of the signal under an arbitrary sparsifying operator (e.g., Haar wavelet, total variation), and *N*^{2} is the total number of object pixels [14]. The reconstruction is carried out by applying an algorithm solving an ℓ_{1} norm minimization problem. Similar results for other (non-Fourier) spaces also exist, but our interest in Fourier space comes from the fact that the present MVP method encodes the scene's data in Fourier space.

#### 3.2 Compressive sampling multiple view projection holography

In this subsection, the reconstruction problem of the different object planes from a subsampled Fourier hologram is formulated. As explained in section 2, each value of the synthesized Fourier hologram, *h(m,n)*, corresponds to a captured projection of the scene. This means that in order to reconstruct accurately the different planes of the scene, only about $2K\mathrm{log}N$ projections (Fourier samples) of the scene are needed. By doing so, we create a digital subsampled Fourier hologram with a fraction of the pixels and without any modification of the sensing hardware. We designate the subsampled Fourier hologram as ${h}^{M}$, or in its vector form as ${\mathbf{h}}^{M}$, where *M* is the number of samples (projections). In order to reconstruct an object plane ${\mathbf{u}}_{{z}_{i}}$ at distance ${z}_{i}$ from the *z* = 0 plane, given the subsampled Fourier hologram ${h}^{M}$, we solve the following equation:

$${\hat{\mathbf{u}}}_{{z}_{i}}=\underset{{\mathbf{u}}_{{z}_{i}}}{\mathrm{arg}\mathrm{min}}{\Vert {\mathbf{h}}^{M}-{\left({Q}_{{\lambda}^{2}{z}_{i}}\mathbf{F}{\mathbf{u}}_{{z}_{i}}\right)}^{M}\Vert}_{2}^{2}+\gamma {\Vert {\Psi}_{i}{\mathbf{u}}_{{z}_{i}}\Vert}_{1},\qquad (3)$$

where the superscript *M* denotes restriction to the *M* sampled hologram coefficients, *γ* is a regularization parameter which controls the tradeoff between the data fit and the sparsity level, ${\Vert \cdot \Vert}_{p}$ is the ℓ_{p}-norm and ${\Psi}_{i}$ is an operator which promotes the sparsest representation of the plane ${\mathbf{u}}_{{z}_{i}}$, such as a Haar wavelet or total variation (TV). The sparsifying basis may be the same for all reconstructed planes, or specifically adapted for each object plane ${\mathbf{u}}_{{z}_{i}}$. The compressive multiple view projection (CMVP) holography approach may be summarized as follows [22]:

- Acquire only ≈2*K*log*N* random projections of the 3D scene (instead of ${N}^{2}={N}_{x}\cdot {N}_{y}$ projections, which is the nominal number of projections required for the original MVP method).
- Multiply each acquired projection ${p}_{mn}$ by its corresponding phase function ${f}_{mn}$ (see section 2). The digital summation of each product yields a single Fourier hologram coefficient *h*(*m*,*n*). We thus obtain an undersampled Fourier hologram from the ≈2*K*log*N* acquired projections (coefficients).
- Reconstruct the depth planes of the 3D object using an ℓ_{2}-ℓ_{1} norm minimization (Eq. (3)) with an appropriate sparsifying operator.
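The three steps above can be sketched end to end on a toy scene. The sketch below assumes sparsity in the canonical basis and uses plain ISTA soft-thresholding in place of the Haar-wavelet / TwIST machinery used in the paper; the mask density, step size and threshold are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32

# Toy sparse object plane (up to 10 nonzero pixels out of N^2)
u = np.zeros((N, N))
u[rng.integers(0, N, 10), rng.integers(0, N, 10)] = 1.0

F  = lambda x: np.fft.fft2(x, norm="ortho")    # unitary 2D DFT
Fi = lambda x: np.fft.ifft2(x, norm="ortho")

# Steps 1-2: keep ~35% of the Fourier "hologram" coefficients at random
mask = rng.random((N, N)) < 0.35
hM = mask * F(u)

# Step 3: l2-l1 minimization by ISTA (gradient step + soft threshold)
def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

x = np.zeros((N, N))
for _ in range(400):
    grad = np.real(Fi(mask * (F(x) - hM)))     # gradient of the data-fit term
    x = soft(x - grad, 0.01)                   # l1 proximal step (shrinkage)
```

With enough random samples (on the order of *K*log*N*), the recovered `x` matches the sparse plane up to the small shrinkage bias introduced by the ℓ_{1} penalty.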

A schematic illustration of the acquisition and hologram generation process is shown in Fig. 1. For convenience, only a scan along the *x*-axis is shown. The minimal angular distance between two adjacent projections is $\Delta \phi $, and *z _{o}* is the distance between the imaging system and the object. The length of the CCD's translation trajectory is $2L$. In Fig. 1 the synthesized Fourier hologram is heavily undersampled; this underlies the present work, but not general MVP holography.

Basically, by moving the camera to sparse translation locations we subsample the Fourier plane, as pioneered by Candès *et al.* in [14]. However, unlike the common uniform random subsampling [14], we use a variable density sampling of the Fourier space, so that more samples are taken at low frequencies, near the origin, and fewer samples are taken as we move away from the origin (as can be seen in Fig. 1). This random subsampling scheme has been shown to be more efficient than uniform random subsampling for CS in the Fourier [17] or Fresnel [21] domains.
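A variable density mask of this kind can be generated, for example, by drawing each Fourier sample with a probability that decays with radial frequency; the exponential profile and decay rate below are our choices, not the paper's:

```python
import numpy as np

def variable_density_mask(N, frac=0.06, decay=3.0, seed=0):
    """Boolean Fourier-plane mask keeping ~frac of the samples,
    with higher sampling probability near the DC origin."""
    rng = np.random.default_rng(seed)
    f = np.fft.fftfreq(N)
    FX, FY = np.meshgrid(f, f)
    R = np.hypot(FX, FY)                      # radial spatial frequency
    p = np.exp(-decay * R / R.max())          # exponential fall-off from DC
    p *= frac * N * N / p.sum()               # normalize the expected count
    return rng.random((N, N)) < np.minimum(p, 1.0)

mask = variable_density_mask(256, frac=0.06)  # keep ~6% of the projections
```

In the MVP setting, each retained mask entry corresponds to one camera position that must actually be visited, so the mask directly encodes the reduced scanning schedule.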

The presented method has several advantages over the technique in [6], which also demonstrated MVP holography with reduced sampling of the scene. The method in [6] requires distinct anchor points in order to interpolate the different perspectives of the scene from a small number of projections. Consequently, textured and smooth scenes may require many more projections. The number and locations of these anchor points, as well as the number of projections, should be adapted to the particular 3D object. Our proposed method is free from this limitation; it is virtually universal, and its only assumption is that the scene has a sparse representation in some known basis, hence it can handle any natural scene. Another advantage is that the method does not require any modification of the sensing hardware, unlike in [6], where the hologram synthesis should be performed at the sensor level.

#### 3.3 Simulation and experimental results

Simulation and experimental data were used in order to show the applicability of the method. A 3D scene was simulated with the letters B, G, U placed at different axial locations. We digitally generated a Fourier hologram according to the process described in section 2. The generated hologram was 256×256 pixels in size, which corresponds to 256×256 projections. Afterwards, we subsampled the Fourier hologram according to the variable density random sampling scheme mentioned in subsection 3.2. We reconstructed the depth planes ${\mathbf{u}}_{{z}_{i}}$ by solving Eq. (3) using the TwIST solver [23], where the sparsifying operator was chosen as the Haar wavelet transform. The choice of the sparsifying operator affects the quality of the reconstruction and the convergence rate. The Haar wavelet basis was chosen because of the piecewise constant nature of the objects and its incoherence with the Fourier sensing basis [17]. In Fig. 2 the reconstruction resulting from the undersampled Fourier hologram is compared with that from the complete Fourier hologram. As can be seen in Fig. 2, by using only 6% of the projections, the different planes are accurately reconstructed.

In the real experiment we used a 3D scene containing three cubes, each 3.5 cm × 3.5 cm × 3.5 cm in size. The distances along the optical axis between the imaging lens of the CCD camera and the first, middle and last cubes were 30 cm, 37 cm, and 40 cm, respectively. A Fourier hologram was synthesized from 400 1D projections captured along the *x*-axis. The 1D MVP algorithm and its difference from 2D MVP are detailed in [2]. The distance between the two most extreme projections along the CCD path was 4 cm and the interval between every two successive projections was 0.1 mm. In Fig. 3 an accurate reconstruction is demonstrated from only 25% of the nominal number of acquired projections.

We point out that in this experiment the scanning is performed in only one dimension, and therefore the achievable subsampling ratio is more modest than with 2D scanning. In fact, the 25% subsampling obtained with 1D scanning is equivalent to the 6% 2D subsampling obtained in the simulated 2D scanning experiment (0.25×0.25≈0.06).

Thus, in this section we have demonstrated a reduction in the scanning effort by applying the CS framework to the acquisition process of an incoherent MVP hologram. This also results in a major reduction in bandwidth and storage requirements. The reduction comes with no alteration of the system's hardware, and it is completely nonadaptive to the scene. The only assumption made is that the scene is sparse in some basis, which is a reasonable assumption for any natural scene.

## 4. Efficient depth sectioning with compressive multiple view projection holography

#### 4.1 Applying a 3D-2D forward model

As seen in Figs. 2 and 3, the reconstruction obtained by focusing digitally on different object depth planes may be distorted by out-of-focus object points located in other object planes. These disturbances are the result of an incomplete model of the system, because the back propagation of Eq. (2) follows a 2D-2D model linking the hologram to a single depth plane and ignoring the other object planes. Clearly, reconstruction using this or any such 2D-2D model is subject to distortions if an object point disobeys the model, i.e., is located in another depth plane. In order to avoid these distortions, a 3D-2D forward model relating all the ${N}_{object}={N}_{x}\times {N}_{y}\times {N}_{z}$ voxels to the synthesized ${N}_{holo}={N}_{x}\times {N}_{y}$ hologram points should be considered. Such an approach has recently been introduced for different types of coherent holography applications [24-27]. The 3D-2D model linking the contribution of the different object planes, located at distances *z _{i}* from the *z* = 0 plane, is given in the following equation:

$$h({\nu}_{x},{\nu}_{y})={\displaystyle \sum _{i=1}^{{N}_{z}}\mathcal{F}\left\{{u}_{{z}_{i}}(x,y)\right\}\mathrm{exp}\left[j\pi {\lambda}^{2}{z}_{i}\left({\nu}_{x}^{2}+{\nu}_{y}^{2}\right)\right]},\qquad (4)$$

where ${u}_{{z}_{i}}$ is separated by a distance ${z}_{i}$ from the *z* = 0 plane, and there is a total number of ${N}_{z}$ planes which contribute to the hologram generation. Rewriting Eq. (4) in discrete vector-matrix form, as in section 2, yields the following:

$$\mathbf{h}={\displaystyle \sum _{i=1}^{{N}_{z}}{Q}_{{\lambda}^{2}{z}_{i}}\mathbf{F}{\mathbf{u}}_{{z}_{i}}}.\qquad (5)$$

When using this form we may express the model of the hologram generation as the following:

$$\mathbf{h}=\left[{Q}_{{\lambda}^{2}{z}_{1}}\mathbf{F},{Q}_{{\lambda}^{2}{z}_{2}}\mathbf{F},\dots ,{Q}_{{\lambda}^{2}{z}_{{N}_{z}}}\mathbf{F}\right]\mathbf{f}=\mathbf{A}\mathbf{f}.\qquad (6)$$

Equations (5) and (6) represent a system forward model for the complete hologram, **h**. Here the reconstruction is applied to the subsampled Fourier hologram, ${\mathbf{h}}^{M}$, as described in section 3. Therefore, we may formulate our reconstruction problem as follows:

$$\hat{\mathbf{f}}=\underset{\mathbf{f}}{\mathrm{arg}\mathrm{min}}{\Vert {\mathbf{h}}^{M}-{\left(\mathbf{A}\mathbf{f}\right)}^{M}\Vert}_{2}^{2}+\tau {\Vert \Psi \mathbf{f}\Vert}_{1},\qquad (7)$$

$$\mathbf{f}=\left[{\mathbf{u}}_{{z}_{1}};{\mathbf{u}}_{{z}_{2}};\dots ;{\mathbf{u}}_{{z}_{{N}_{z}}}\right],\qquad (8)$$

where *τ* is a regularization parameter which controls the tradeoff between the data fit and the sparsity level. Using Eq. (7) and Eq. (8), we are able to look for the sparsest solution over the entire 3D cube, rather than in each plane separately. We thereby combine the subsampling shown in the previous section with the extended ability to apply tomographic image reconstruction. This approach can be named CMVP tomography (CMVPT). The procedure may be summarized as follows:

- Acquire only ≈2*K*log*N* projections of the 3D scene.
- Reconstruct the sparsest solution of the entire 3D data cube according to the problem formulation in Eq. (7). The reconstruction result is the collection of planes $[{u}_{{z}_{1}};{u}_{{z}_{2}};\dots ;{u}_{{z}_{{N}_{z}}}]$.
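The stacked operator $\mathbf{A}$ of Eq. (6) need never be formed explicitly; it can be applied with FFTs plane by plane. A minimal sketch of the forward model and its adjoint, where the wavelength, depths and frequency scaling are illustrative assumptions:

```python
import numpy as np

N = 16
lam = 0.5e-6                                  # central wavelength [m]
z_planes = [0.0, 0.03, 0.06]                  # depth planes z_i [m]

def Q(z):
    """Diagonal quadratic-phase factor Q_{lam^2 z} in the Fourier plane."""
    v = np.fft.fftfreq(N)
    VX, VY = np.meshgrid(v, v)
    return np.exp(1j * np.pi * lam**2 * z * (VX**2 + VY**2))

def forward(cube):
    """h = sum_i Q_{lam^2 z_i} F u_{z_i}: the 3D-2D model of Eqs. (4)-(6)."""
    return sum(Q(z) * np.fft.fft2(cube[i], norm="ortho")
               for i, z in enumerate(z_planes))

def adjoint(h):
    """A^H h: back-propagate the hologram to every depth plane."""
    return np.stack([np.fft.ifft2(np.conj(Q(z)) * h, norm="ortho")
                     for z in z_planes])
```

A solver for Eq. (7) only needs these two callables; the adjoint identity $\langle \mathbf{A}\mathbf{f},\mathbf{h}\rangle =\langle \mathbf{f},{\mathbf{A}}^{H}\mathbf{h}\rangle$ can serve as a unit test of the implementation.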

#### 4.2 Experimental results

We again use the experimental data shown in section 3. One hundred 1D projections are used in order to reconstruct 400×400×3 object voxels. Figure 4 demonstrates the sectioning of the 3D scene.

Figure 4 exhibits the ability of the method to increase the contrast between in-focus and out-of-focus objects. The contrast between the in-focus and out-of-focus objects is increased by a factor of approximately 4-5, compared to regular back propagation applied to the MVP generated hologram (Fig. 3). The remaining unfocused residue may then be easily removed by applying thresholding or filtering techniques.

#### 4.3 System resolution analysis

The theoretical analysis of the system's resolution limit is based upon the fact that the moving CCD captures projections of the object from different directions. Consequently, the resolution is determined by the imaging system's parameters and by the hologram generation process. The imaging system's resolution is governed by the optical or geometrical resolution (the size of the CCD pixel). If we denote the finite aperture radius of the imaging lens as *A* and the distance from the object to the imaging system as *z _{o}*, the optical lateral resolution is given by $\lambda /N{A}_{in}=\lambda {z}_{o}/A$, where NA stands for numerical aperture. The geometrical lateral resolution is approximated by projecting the pixel size, $\Delta s$, onto the object plane, and is therefore given by $\Delta s/{M}_{T}$, where ${M}_{T}$ is the lateral magnification of the projections. Besides the imaging system's resolution, another limitation is introduced when attempting to reconstruct the object from the hologram. As shown in section 2, every projection is multiplied by the phase factor ${f}_{m}=\mathrm{exp}\left\{-j2\pi b{x}_{p}\mathrm{sin}{\phi}_{m}\right\}$. The minimal cycle of ${f}_{m}$ determines the lateral resolution limit. Assuming ${x}_{p,\mathrm{max}}=1$, the minimal cycle of ${f}_{m}$ is ${N}_{p}/\left(b\mathrm{sin}{\phi}_{m,\mathrm{max}}\right)$, where ${N}_{p}$ is the number of pixels in each projection across the *x*-axis. Consequently, the minimal cycle of ${f}_{m}$ in the object plane is ${N}_{p}\Delta s/\left(b\mathrm{sin}{\phi}_{m,\mathrm{max}}{M}_{T}\right)$. From Fig. 1 the following relation can be obtained: $\mathrm{sin}{\phi}_{m,\mathrm{max}}=L/\sqrt{{L}^{2}+{z}_{o}^{2}}$. Therefore, the resolution limit induced by the NA of the hologram is ${N}_{p}\Delta s\sqrt{{L}^{2}+{z}_{o}^{2}}/\left(bL{M}_{T}\right)$. Equation (9) concludes the discussion of the system's lateral resolution:

$$\Delta x=\mathrm{max}\left\{\frac{\lambda {z}_{o}}{A},\ \frac{\Delta s}{{M}_{T}},\ \frac{{N}_{p}\Delta s\sqrt{{L}^{2}+{z}_{o}^{2}}}{bL{M}_{T}}\right\}.\qquad (9)$$

The axial resolution of a single aperture imaging system is given by $\Delta {z}_{SA}=\lambda /N{A}^{2}=\Delta x\cdot {z}_{o}/A$. Since our system is multi-aperture based, the axial resolution is determined by the maximal angular range of the setup, i.e., $\Delta z=\Delta x/\mathrm{tan}{\phi}_{m,\mathrm{max}}$. Therefore, the axial resolution can be approximated as follows:

$$\Delta {z}_{MA}=\frac{\Delta x}{\mathrm{tan}{\phi}_{m,\mathrm{max}}}=\frac{\Delta x\cdot {z}_{o}}{L}.\qquad (10)$$

From the experimental data, the transversal resolution is approximately 0.25 cm, and the axial resolution is approximately 3 cm, according to Eq. (9) and Eq. (10), respectively. Since fewer projections are taken within the given area, which is determined by the camera trajectory, the CMVPT is much more effective than MVP in terms of axial resolution gain relative to acquisition effort (number of exposures). The axial resolution gain is achieved by using a multi-aperture setup instead of a single aperture setup, and is expressed as $\Delta {z}_{gain}=\Delta {z}_{SA}/\Delta {z}_{MA}=L/A$. Equation (11) shows this gain divided by the number of projections required to reconstruct the scene under the CS framework, relative to the same quantity for a full MVP scan:

$$\frac{\Delta {z}_{gain}/\left(2K\mathrm{log}N\right)}{\Delta {z}_{gain}/{N}^{2}}=\frac{{N}^{2}}{2K\mathrm{log}N}=\frac{N}{K}\cdot \frac{N}{2\mathrm{log}N}.\qquad (11)$$
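Plugging the experimental parameters of section 3.3 into Eq. (10) reproduces the quoted order of magnitude. The values of *L* (half the 4 cm trajectory) and *z _{o}* (distance to the nearest cube) are our reading of the experiment; the aperture radius *A* below is purely hypothetical:

```python
# Axial resolution of the multi-aperture setup, Eq. (10): dz_MA = dx * z_o / L
dx = 0.25          # lateral resolution from the experiment [cm]
L = 2.0            # half of the 4 cm CCD translation path [cm] (assumed)
z_o = 30.0         # distance to the nearest cube [cm] (assumed)

dz_MA = dx * z_o / L          # 3.75 cm, the same order as the quoted ~3 cm

# Axial resolution gain over a single-aperture system: dz_SA/dz_MA = L/A
A = 0.5            # hypothetical lens aperture radius [cm]
gain = L / A       # gain of 4 for these illustrative numbers
print(dz_MA, gain)            # 3.75 4.0
```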

As a rule of thumb, as the number of pixels in an image grows, its number of significant coefficients *K* grows at a slower rate. Therefore, the term *N*/*K* of Eq. (11) increases as the dimensionality of the problem increases, and in turn, the ratio of the axial resolution gain relative to the number of projections grows accordingly. Hence, we are able to obtain the superior axial resolution of the MVP method while reducing the scanning effort.

## 5. Conclusion

In this paper we have presented a simple and nonadaptive way to reduce the number of projections in incoherent MVP holography while accurately reconstructing the 3D scene. Accurate reconstruction of the planes was made possible by applying compressive sensing theory. Simulation and experimental results exhibited accurate reconstruction from a mere 6% of the nominal number of projections. The practical implications are a reduction of the scanning effort in the acquisition step, as well as reduced hologram bandwidth and storage requirements. We have also demonstrated improved sectioning of the scene from a reduced number of projections by applying a proper 3D-2D model. This has allowed us to perform totally incoherent light tomography while keeping a highly compact representation of the scene. All of these advantages require no hardware changes at the sensor level.

## Acknowledgements

The authors would like to thank Barak Katz and Natan T. Shaked for providing the experimental data. This research was partially supported by the Israel Science Foundation (grant No.1039/09), and Israel's Ministry of Science.

## References and links

**1. **J. W. Goodman, *Introduction to Fourier Optics*, 3rd ed. (Roberts and Company Publishers, 2005).

**2. **N. T. Shaked, B. Katz, and J. Rosen, “Review of three-dimensional holographic imaging by multiple-viewpoint-projection based methods,” Appl. Opt. **48**(34), H120–H136 (2009). [CrossRef]

**3. **Y. Li, D. Abookasis, and J. Rosen, “Computer-generated holograms of three-dimensional realistic objects recorded without wave interference,” Appl. Opt. **40**(17), 2864–2870 (2001). [CrossRef]

**4. **D. Abookasis and J. Rosen, “Computer-generated holograms of three-dimensional objects synthesized from their multiple angular viewpoints,” J. Opt. Soc. Am. A **20**(8), 1537–1545 (2003). [CrossRef]

**5. **Y. Sando, M. Itoh, and T. Yatagai, “Holographic three-dimensional display synthesized from three-dimensional Fourier spectra of real existing objects,” Opt. Lett. **28**(24), 2518–2520 (2003). [CrossRef]

**6. **B. Katz, N. T. Shaked, and J. Rosen, “Synthesizing computer generated holograms with reduced number of perspective projections,” Opt. Express **15**(20), 13250–13255 (2007). [CrossRef]

**7. **N. T. Shaked and J. Rosen, “Modified Fresnel computer-generated hologram directly recorded by multiple-viewpoint projections,” Appl. Opt. **47**(19), D21–D27 (2008). [CrossRef]

**8. **N. T. Shaked, J. Rosen, and A. Stern, “Integral holography: white-light single-shot hologram acquisition,” Opt. Express **15**(9), 5754–5760 (2007). [CrossRef]

**9. **N. Chen, J.-H. Park, and N. Kim, “Parameter analysis of integral Fourier hologram and its resolution enhancement,” Opt. Express **18**(3), 2152–2167 (2010). [CrossRef]

**10. **G. Indebetouw, P. Klysubun, T. Kim, and T.-C. Poon, “Imaging properties of scanning holographic microscopy,” J. Opt. Soc. Am. A **17**(3), 380–390 (2000). [CrossRef]

**11. **J. Rosen and G. Brooker, “Digital spatially incoherent Fresnel holography,” Opt. Lett. **32**(8), 912–914 (2007). [CrossRef]

**12. **Y. Rivenson and A. Stern, “Compressed imaging with separable sensing operator,” IEEE Signal Process. Lett. **16**(6), 449–452 (2009). [CrossRef]

**13. **D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory **52**(4), 1289–1306 (2006). [CrossRef]

**14. **E. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory **52**(2), 489–509 (2006). [CrossRef]

**15. **http://sites.google.com/site/igorcarron2/compressedsensinghardware.

**16. **A. Stern, “Compressed imaging system with linear sensors,” Opt. Lett. **32**(21), 3077–3079 (2007). [CrossRef]

**17. **M. Lustig, “Sparse MRI,” Ph.D. dissertation, Dept. Elect. Eng., Stanford Univ., Palo Alto, CA, 2008.

**18. **S. Gazit, A. Szameit, Y. C. Eldar, and M. Segev, “Super-resolution and reconstruction of sparse sub-wavelength images,” Opt. Express **17**(26), 23920–23946 (2009). [CrossRef]

**19. **A. Bourquard, F. Aguet, and M. Unser, “Optical imaging using binary sensors,” Opt. Express **18**(5), 4876–4888 (2010). [CrossRef]

**20. **Y. Rivenson, A. Stern, and B. Javidi, “Single exposure super-resolution compressive imaging by double phase encoding,” Opt. Express **18**(14), 15094–15103 (2010). [CrossRef]

**21. **Y. Rivenson, A. Stern, and B. Javidi, “Compressive Fresnel holography,” J. Disp. Technol. **6**(10), 506–509 (2010). [CrossRef]

**22. **Y. Rivenson, A. Stern, and J. Rosen, “Compressive Sensing Approach for Reducing the Number of Exposures in Multiple View Projection Holography,” in *Frontiers in Optics*, OSA Technical Digest (CD) (Optical Society of America, 2010), paper FThM2.

**23. **J. M. Bioucas-Dias and M. A. T. Figueiredo, “A new twist: two-step iterative shrinkage/thresholding algorithms for image restoration,” IEEE Trans. Image Process. **16**(12), 2992–3004 (2007). [CrossRef]

**24. **D. J. Brady, K. Choi, D. L. Marks, R. Horisaki, and S. Lim, “Compressive holography,” Opt. Express **17**(15), 13040–13049 (2009). [CrossRef]

**25. **C. F. Cull, D. A. Wikner, J. N. Mait, M. Mattheiss, and D. J. Brady, “Millimeter-wave compressive holography,” Appl. Opt. **49**(19), E67–E82 (2010). [CrossRef]

**26. **K. Choi, R. Horisaki, J. Hahn, S. Lim, D. L. Marks, T. J. Schulz, and D. J. Brady, “Compressive holography of diffuse objects,” Appl. Opt. **49**(34), H1–H10 (2010). [CrossRef]

**27. **X. Zhang and E. Y. Lam, “Edge-preserving sectional image reconstruction in optical scanning holography,” J. Opt. Soc. Am. A **27**(7), 1630–1637 (2010). [CrossRef]