Compressive holographic video

Abstract

Compressed sensing has been discussed separately in the spatial and temporal domains. Compressive holography has been introduced as a method that allows 3D tomographic reconstruction at different depths from a single 2D image. Coded exposure is a temporal compressed sensing method for high-speed video acquisition. In this work, we combine compressive holography and coded exposure techniques and extend the discussion to 4D reconstruction in space and time from one coded captured image. In our prototype, digital in-line holography is used for imaging macroscopic, fast-moving objects, and the pixel-wise temporal modulation is implemented by a digital micromirror device. We demonstrate 10× temporal super-resolution with recovery of multiple depths from a single image. Two examples are presented: recording subtle vibrations and tracking small particles within 5 ms.

© 2017 Optical Society of America

1. Introduction

Recent years have witnessed a great interest in exploiting the redundant nature of signals. The redundancy of acquired signals provides the opportunity to sample data in a compressive manner. Candès et al. [1,2] and Donoho [3] have discussed the high probability of reconstructing signals with high fidelity from few random measurements, provided that the signals are sparse or compressible in a known basis. Since then, the theory of compressed sensing (CS) has been widely applied to computational imaging. Lustig et al. [4] described the natural fit of CS to magnetic resonance imaging (MRI). Gan [5] proposed a block compressed sensing method for natural images, which is applicable to low-power, low-resolution imaging devices. Brady et al. [6] showed that holography can be viewed as a simple spatial encoder for CS and demonstrated 3D tomography from 2D holographic data.

Gabor’s invention of holography in 1948 [7] provided an effective method for recording and reconstructing a 3D light field from a captured 2D hologram. The use of a CCD camera to digitally record holographic interference patterns has made digital holography (DH) an emerging technology with a variety of imaging applications, such as particle imaging and tracking in biomedical microscopy [8–12] and physical process profiling and measurement [13–17]. Digital Gabor/in-line holography (DIH) is a simple, lensless, yet effective setup for capturing holograms. The simplicity of DIH is balanced by the requirement that objects be small enough to avoid occluding the reference beam significantly [18]. Extensive discussions and applications of DIH have focused on microscopic imaging, i.e., small and fast-moving objects [19–22]. The tracking of fast movements usually entails multiple exposures [8,10,15,16,23,24]. Temporal resolution is usually limited to the 10–100 millisecond range, and little research has been conducted on temporal compression. However, in recent years, CS has proved a useful tool for increasing the spatial information encoded in DH [6,25]. Rivenson et al. [26] discussed the application of CS to digital Fresnel holography. Liu et al. [27,28] and Song et al. [29] improved subpixel accuracy for object localization and enhanced spatial resolution (super-resolution). Furthermore, CS theory has proven successful for recovering scenes under holographic microscopic tomography [30], off-axis frequency-shifting holography [31], as well as millimeter-wave holography [32]. Coded apertures have also been used together with CS to provide robust solutions for snapshot phase retrieval [33,34]. In view of recent research in CS and DH, several natural questions arise: Can we extend coded aperture to coded exposure? Can we exploit the unused pixels in exchange for increased temporal resolution? Since holography is naturally suited to recovering depth information, a further research question is whether 4D space-time information can be extracted from 2D data within the CS framework.

Similar discussions have been initiated in the incoherent imaging regime. Leveraging multiplexing schemes in the temporal domain, e.g. coded exposure, has been demonstrated as an effective hardware strategy for exploiting spatiotemporal trade-offs in modern cameras. High speed sensors usually require high light sensitivity and large bandwidth due to their limited on-board memory. In 2006, Raskar et al. [35] pioneered the concept of coded exposure when they introduced the flutter shutter camera for motion deblurring. The technique requires knowledge of motion magnitude/direction and cannot handle general scenes exhibiting complex motion. Bub et al. [36] designed a high speed imaging system using a DMD (digital micromirror device) for temporal pixel multiplexing. Gupta et al. [37] showed how per-pixel temporal modulation allows a flexible post-capture spatiotemporal resolution trade-off. Reddy et al. [38] used sparse representations (spatial) and brightness constancy (temporal) to preserve spatial resolution while achieving higher temporal resolution. Liu et al. [39] used an over-complete dictionary to sparsely represent time-varying scenes. Koller et al. [40] discussed several mask patterns and proposed a translational photomask to encode scene movements, extending the work of [41]. These methods have proved successful for reconstructing fast moving scenes by combining cheap low frame-rate cameras with fast spatio-temporal modulating elements. While all of these techniques enable high speed reconstruction of 2D motion, incorporating holographic capture offers the potential to extend the capabilities to 3D motion. Moreover, in many holography setups, the energy from each scene point is distributed across the entire detector, so that each pixel contains partial information about the entire scene. This offers the potential for improved performance relative to incoherent architectures.

Our work exploits both spatial and temporal redundancy in natural scenes and generalizes to a 4D (3D position plus time) system model. We show that by combining digital holography and coded exposure techniques within a CS framework, it is feasible to reconstruct a 4D moving scene from a single 2D hologram. We demonstrate a temporal super-resolution of 10×. Note that this increase in frame rate can be achieved for any sensor, regardless of its native frame rate, as long as the spatio-temporal modulator operates at a higher frame rate. We anticipate approximately 1 cm resolution with optical sectioning. As a test case, we focus on macroscopic scenes exhibiting fast motion of small objects (vibrating hairs, small particles, etc.).

2. Generalized system model

Digital Gabor holography requires no separation of the reference beam and the object beam. The object is illuminated by a single beam and the portion that is not scattered by the object serves as the reference beam. This concept leads to a simple experimental setup but demands limited object sizes so that the reference beam is not excessively disturbed. In this case, the imaging process is a recording of the diffraction pattern of a 2D aperture.

2.1. Diffraction theory

We first model diffraction in a 2D aperture case. According to the Fresnel–Kirchhoff diffraction formula [18], the field E(x, y; z) at each observation point in a 2D plane can be written as

$$E(x, y; z) = \frac{ik}{2\pi} \int_{\Sigma_0} E_0(x_0, y_0)\, \frac{\exp(ikr)}{r}\, dx_0\, dy_0 \approx \frac{ik}{2\pi z} \int_{\Sigma_0} E_0(x_0, y_0)\, \exp\!\left\{ ik \left[ (x - x_0)^2 + (y - y_0)^2 + z^2 \right]^{\frac{1}{2}} \right\} dx_0\, dy_0, \tag{1}$$
where r denotes the distance from (x0, y0) at the input plane Σ0, with input field E0(x0, y0), to (x, y) at the output plane, i.e., $r = \left[ (x - x_0)^2 + (y - y_0)^2 + z^2 \right]^{\frac{1}{2}}$. In the second line of Eq. (1) we apply the paraxial approximation $r \approx z$ in the denominator, but not in the exponent. The integral then becomes a convolution,
$$E(x, y; z) = H * E_0, \tag{2}$$
with the kernel
$$H(x, y; z) = \frac{ik}{2\pi z} \exp\!\left[ ik \left( x^2 + y^2 + z^2 \right)^{\frac{1}{2}} \right]. \tag{3}$$

The kernel H is also referred to as the point spread function (PSF). Since the propagation is along the z-axis, the form of the kernel is determined by the propagation distance z.
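For concreteness, the propagation of Eqs. (1)–(3) can be evaluated numerically. The Python fragment below is a minimal sketch, not the authors' code: it samples the kernel of Eq. (3) on a pixel grid and evaluates the convolution of Eq. (2) with FFTs. The function names, the square-grid assumption, and the circular-convolution treatment of boundaries are ours.

```python
import numpy as np

def psf_kernel(nx, ny, dx, z, wavelength):
    """Sampled propagation kernel H(x, y; z) of Eq. (3) on an nx-by-ny grid
    with pixel pitch dx (all lengths in the same units)."""
    k = 2.0 * np.pi / wavelength
    x = (np.arange(nx) - nx // 2) * dx
    y = (np.arange(ny) - ny // 2) * dx
    X, Y = np.meshgrid(x, y, indexing="ij")
    r = np.sqrt(X**2 + Y**2 + z**2)
    return (1j * k / (2.0 * np.pi * z)) * np.exp(1j * k * r)

def propagate(field, dx, z, wavelength):
    """Evaluate E = H * E0 (Eq. (2)) as a circular convolution via the FFT.
    The dx**2 factor stands in for the area element dx0*dy0 of Eq. (1)."""
    H = psf_kernel(field.shape[0], field.shape[1], dx, z, wavelength)
    return np.fft.ifft2(np.fft.fft2(field) * np.fft.fft2(np.fft.ifftshift(H))) * dx**2
```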

2.2. 4D model

We now extend our analysis to a 4-dimensional model. As illustrated in Fig. 1, consider a 4D field V(x, y, z, t) that propagates along the positive z-direction. Along the propagation path, a high-speed coded mask M(x, y, t) is located at z1, and a sensor is placed at the sensing plane z2. In one frame, the sensor captures the intensity of the field during an exposure time Δt. The volume can be discretized into Nd planes, with the n-th plane at a distance dn from the observation plane at z0. In Gabor holography, the object beam and the reference beam overlap with each other, which requires the objects to be sparse so that the occlusion of the reference beam is negligible. Under this assumption, the field V is the sum of the object field O and the constant reference field R. Thus, the field at z0 is

$$E_0(x, y, t; z_0) = \sum_{n=1}^{N_d} H_{d_n} * O(x, y, z_0 - d_n, t) + R, \tag{4}$$
where $H_{d_n}$ denotes the convolution kernel for propagation distance $d_n$.

Fig. 1 4D holographic model. E0(x, y, t; z0): projection of a 4D field at z0, the n-th depth plane has a distance of dn to z0; M(x, y, t): temporal coded mask located at z1; G(x, y, Δt): captured image with an integral over Δt. The sensor is located at z2.

At the sensing plane, during one exposure time Δt, the sensed image can be expressed as an integral of the intensity I of the field as

$$G(x, y, \Delta t) = \int_{t_0}^{t_0 + \Delta t} I(x, y, t; z_2)\, dt = \int_{t_0}^{t_0 + \Delta t} \left| H_{z_2 z_1} * \left\{ M(x, y, t) \left[ H_{z_1 z_0} * E_0(x, y, t; z_0) \right] \right\} \right|^2 dt, \tag{5}$$
where $H_{z_1 z_0}$ denotes the propagation (convolution kernel) from z0 to z1, $H_{z_2 z_1}$ denotes the propagation (convolution kernel) from z1 to z2, and M(x, y, t) denotes a time-variant mask located at z1. Equation (5) describes the continuous form of the sensing process. However, the mask operates in discrete form at a high frame rate. Suppose that during each sensor exposure Δt the coded mask changes T times at equal intervals of τ = Δt/T. Then the discretized form of G is
$$\begin{aligned}
G(x, y, \Delta t) &= \sum_{i=0}^{T-1} \int_{t_i}^{t_{i+1}} \left| H_{z_2 z_1} * \left\{ M(x, y, t) \left[ H_{z_1 z_0} * E_0(x, y, t; z_0) \right] \right\} \right|^2 dt \\
&= \tau \sum_{i=0}^{T-1} \left| H_{z_2 z_1} * \left\{ M(x, y, t_i) \left[ H_{z_1 z_0} * E_0(x, y, t_i; z_0) \right] \right\} \right|^2 \\
&= \tau \sum_{i=0}^{T-1} \left| H_{z_2 z_1} * \left\{ M(x, y, t_i) \left[ H_{z_1 z_0} * \left( \sum_{n=1}^{N_d} H_{d_n} * O(x, y, z_0 - d_n, t_i) + R \right) \right] \right\} \right|^2 \\
&= \tau \sum_{i=0}^{T-1} \left| O_{c,i} + R_{c,i} \right|^2, \tag{6}
\end{aligned}$$
where $O_{c,i}$ and $R_{c,i}$ denote the transformed object and reference fields at the capture plane z2 for each time frame i.
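To make the discretized forward model of Eqs. (4)–(6) concrete, the sketch below simulates a single coded exposure. It is an illustration only: it reuses the hypothetical `propagate` helper above, assumes a unit-amplitude reference (R = 1), and all array shapes and parameter names are our own assumptions rather than the authors' implementation.

```python
import numpy as np

def coded_exposure(objects, depths, masks, dx, z10, z21, wavelength, tau, R=1.0):
    """Simulate one captured frame G(x, y, Delta t) following Eq. (6).

    objects: complex array (T, Nd, Nx, Ny), object field O per sub-frame and depth;
    depths:  length-Nd distances d_n to the observation plane z0;
    masks:   binary array (T, Nx, Ny), DMD pattern M(x, y, t_i) per sub-frame;
    z10, z21: propagation distances z0 -> z1 (to the mask) and z1 -> z2 (to the sensor).
    """
    T, Nd = objects.shape[0], objects.shape[1]
    G = np.zeros(objects.shape[2:])
    for i in range(T):
        # Eq. (4): superpose all depth planes at z0 and add the reference beam.
        E0 = sum(propagate(objects[i, n], dx, depths[n], wavelength) for n in range(Nd)) + R
        # Eq. (6): propagate to the mask plane, modulate, propagate to the sensor,
        # then accumulate this sub-frame's intensity weighted by tau.
        E1 = masks[i] * propagate(E0, dx, z10, wavelength)
        E2 = propagate(E1, dx, z21, wavelength)
        G += tau * np.abs(E2) ** 2
    return G
```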

The captured intensity can then be expanded as $I = |O_c + R_c|^2 = O_c R_c^* + O_c^* R_c + |O_c|^2 + |R_c|^2$ (the time-frame index i is omitted here). In [6], Brady et al. neglected the nonlinearity imposed by the squared magnitude and treated the two terms $|O_c|^2 + |R_c|^2$ (often referred to as the squared object term and the zero-order/DC term) as noise in the measurement model, showing that they can be suppressed algorithmically by a CS reconstruction algorithm. In this work, we follow the same approach, and the measured intensity can be expressed as

$$I = \left\{ O_c R_c^* + O_c^* R_c \right\} + |O_c|^2 + |R_c|^2 = 2\,\mathrm{Re}\{ O_c R_c^* \} + E_E, \tag{7}$$
where $E_E$ combines $|O_c|^2$ and $|R_c|^2$ into a single error term. Without loss of generality, we may further assume the reference to be 1, so the intensity can be written as $I = 2\,\mathrm{Re}\{O_c\} + E_E$. In experiment, we approximate the error term by recording a background image and subtracting it from the scene image before reconstruction.

We now assume that the sensor pixels have the same dimensions as the mask pixels, so that the unknown field O has spatial dimensions $N_{Mx} \times N_{My}$, depth dimension $N_d$, and temporal dimension T. Further, if we represent the convolution operations in Eq. (6) as circulant matrices, we obtain the compact form

$$g = 2\, S_T\, \mathrm{Re}\!\left\{ H_{T, z_{21}} \left\{ M_T \left[ H_{T, z_{10}} \left( H_{T, d_n}\, o \right) \right] \right\} \right\} + e + n = A(o) + e + n, \tag{8}$$
where the notation and dimensions of the introduced variables are summarized in Table 1 and A(·) describes the complete forward model. Specifically, $S_T = [I_0, \ldots, I_{T-1}]$ represents summation over time, where each $I_i$, $i = 0, \ldots, T-1$, is an identity matrix of size $(N_{Mx} N_{My}) \times (N_{Mx} N_{My})$; $M_T = \mathrm{bldg}(M_0, M_1, \ldots, M_{T-1})$ is a block-diagonal matrix with each block $M_i$ a diagonal matrix whose diagonal elements encode the corresponding pixel operations, e.g. 0 or 1; $H_{T,z} = [H_{0,z}, H_{1,z}, \ldots, H_{T-1,z}]$, where z stands for $z_{21}$ or $z_{10}$, and each $H_{i,z}$ is a circulant matrix of size $(N_{Mx} N_{My}) \times (N_{Mx} N_{My})$ corresponding to the convolution kernel H(x, y; z); $H_{T,d_n} = \mathrm{bldg}(H_{0,d_n}, H_{1,d_n}, \ldots, H_{T-1,d_n})$, where $H_{i,d_n} = [H_{i,1}, H_{i,2}, \ldots, H_{i,N_d}]$ represents the summation over depths.

Table 1. Analysis of all the variables appearing in Eq. (8).

In order to reconstruct the 4D volume, an optimization problem is formed as

$$\hat{o} = \underset{o}{\operatorname{argmin}}\; \frac{1}{2} \left\| g - A(o) \right\|_2^2 + \lambda\, \Phi(o), \tag{9}$$
where λ > 0 is a regularization parameter and Φ(·) is a regularizer on the unknown 4D field o.

In this work, we employ total variation (TV) as the regularization function, defined as

$$\Phi(o) = \| o \|_{TV} = \sum_{t=0}^{T-1} \sum_{n=1}^{N_d} \sum_{x=1}^{N_{Mx}} \sum_{y=1}^{N_{My}} \left| (\nabla O)_{x, y, n, t} \right|, \tag{10}$$
where o is the vectorized version of the unknown 4D object field $O : N_{Mx} \times N_{My} \times N_d \times T$. Equation (10) is a generalized 4D TV regularizer. However, the choice of regularizer may vary with the purpose of the reconstruction and/or the properties of the scene. In experiment, a 3D TV over (x, y, n) is used for resolving depths (Section 3.2), i.e., $\Phi_{x,y,n}(o) = \sum_n \sum_x \sum_y |(\nabla O)_{x,y,n}|$; TV over the temporal domain is added for recovering subtle movement (Section 3.4), i.e., $\Phi_{x,y,t,n}(o) = \sum_n \sum_t \sum_x \sum_y |(\nabla O_n)_{x,y,t}|$. Also note that independent regularization parameters may be chosen for the spatial (x, y, n) and temporal (t) dimensions. We used the two-step iterative shrinkage/thresholding (TwIST) algorithm [42] for reconstruction.
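The paper solves Eq. (9) with TwIST [42]. As a simplified stand-in, the sketch below uses a plain proximal-gradient (ISTA-style) loop with a TV denoising step from scikit-image; the matrix-free operators `A` and `A_adjoint`, the step size, and the iteration count are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def reconstruct_tv(g, A, A_adjoint, shape, lam=0.01, step=1.0, n_iter=200):
    """Approximate Eq. (9): alternate a gradient step on 0.5*||g - A(o)||^2
    with a TV proximal (denoising) step acting on the 4D volume.

    A:         callable mapping a volume of `shape` to a 2D measurement (Eq. (8));
    A_adjoint: callable implementing the adjoint of A.
    """
    o = np.zeros(shape)
    for _ in range(n_iter):
        residual = A(o) - g                              # data-fit residual
        o = o - step * A_adjoint(residual)               # gradient step
        o = denoise_tv_chambolle(o, weight=lam * step)   # TV proximal step
    return o
```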

3. Experimental

3.1. Setup

Figure 2 shows the schematic of the experimental setup. The illumination is produced by a 532 nm diode laser driven by a pulse generator. The input beam passes through a neutral density (ND) filter and is expanded and collimated by a collimating lens set (plano-convex lenses, 300 mm/35 mm = 8.57× magnification; the ND filter is omitted in Fig. 2). All lenses in this setup are from Thorlabs (LSB04). A digital micromirror device (DMD) performs pixel-wise temporal modulation of the light field, similar to [36]. For our experiments, we used the DLP® LightCrafter 4500™ from Texas Instruments Inc. The light engine includes a 0.45-inch DMD with more than one million mirrors, each 7.6 μm in pitch, arranged in 912 columns by 1140 rows in a diamond pixel-array geometry [43]. The DMD is placed approximately 70 mm away from the objects. An objective lens (a single lens with focal length 125 mm and aperture diameter 2.54 cm) is placed in front of the CMOS monochromatic sensor and aligned with the DMD so that it images the DMD plane onto the sensor. The lens introduces a quadratic phase factor inside the integral of Eq. (1); thus, if the sensor is placed a distance of 2f from the OL, the phase is the same as at −2f from the lens, and $H_{z_2 z_1}$ in Eq. (6) reduces to the identity. We used a CMOS monochromatic sensor (Point Grey GS3-U3-23S6M) with a resolution of 1920 × 1200 and a pixel pitch of 5.86 μm. The key factor is the synchronization between the DMD and the sensor. Each DMD pattern can be projected as fast as every PT = 500 μs with an effective pattern exposure of Pd = 250 μs. After N patterns are projected, a trigger signal is sent to the camera, which controls the shutter and results in a single exposure.
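The timing arithmetic behind the DMD–camera synchronization is summarized in the short sketch below, using the numbers quoted above (PT = 500 μs, Pd = 250 μs, and N = 10 patterns per exposure as in Section 3.4); the helper name and returned quantities are ours, not part of the authors' setup code.

```python
def exposure_timing(n_patterns=10, pattern_period_us=500.0, pattern_exposure_us=250.0):
    """One coded exposure: the camera integrates while the DMD cycles through
    n_patterns masks, then the DMD triggers the next camera frame."""
    camera_exposure_ms = n_patterns * pattern_period_us / 1000.0   # 10 x 500 us = 5 ms
    reconstructed_fps = 1e6 / pattern_period_us                    # one sub-frame per 500 us
    duty_cycle = pattern_exposure_us / pattern_period_us           # light throughput per pattern
    return camera_exposure_ms, reconstructed_fps, duty_cycle

print(exposure_timing())  # (5.0, 2000.0, 0.5)
```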

Fig. 2 Schematic of the experimental setup. (PG: Pulse Generator; DL: Diode Laser; CL: Collimating Lens; DMD: Digital Micromirror Device; OL: Objective Lens.) A trigger signal generated from the DMD is sent to the camera for exposure. The minimum time between successive DMD mask patterns is PT = 500 μs, with a pattern exposure of Pd = 250 μs. The camera is triggered every N patterns (N is equal to T in the preceding text).

3.2. Subsampling holograms

We begin our experiments by examining the reconstruction performance of subsampled holograms. Recovery of a 3D object field from a 2D hologram has been demonstrated in previous work [6]; the recovery can be treated as inference of high-dimensional data from undersampled measurements. Figure 3 shows the experimental results of 3D recovery with pixel-wise subsampling. For this experiment, we captured two static hairs from craft fur (see below) placed 7.1 cm and 10.1 cm away from the DMD. Figure 3(a) shows the captured image. To preprocess the captured hologram, we first capture an image on the sensor with no object placed in the field of view; we refer to this as the background image. Note that this captured image corresponds to the term $|R_c|^2$ in Eq. (7). We then subtract the background image from the hologram, down-sample the result to 960 × 600, and crop the central 285 × 285 ROI around the object. Figure 3(b) shows the captured image of one pattern from the DMD. Each pattern randomly selects 10% of the entire image. To avoid aliasing artifacts caused by the diamond-shaped sampling pattern of the DMD, we group 4 × 4 adjacent DMD pixels into a single superpixel [43]. Since we image the DMD plane directly, the resolution is defined by the DMD. In these reconstructions we design the regularizer to be $\Phi_{x,y,n}(o) = \sum_n \sum_x \sum_y |(\nabla O)_{x,y,n}|$, as our focus is the depth. In order to form the matrix A of Eq. (8), we capture images of the mask with no object present; these captured images are divided by the background image to remove the effect of beam non-uniformity. Figure 3(c) shows the subsampled hologram. Figures 3(d) and 3(e) compare reconstructions from the full hologram and the subsampled hologram. The image (285 × 285) was reconstructed into a 3D volume (285 × 285 × 120) with a depth range from 65 mm to 108 mm. Shown are the images reconstructed at the depth planes corresponding to the locations of the two hairs. To quantify the performance in terms of depth resolution, we used the block variance [44] around an edge pixel of the cross section of each of the two hairs. Higher variance implies higher contrast and, thus, higher resolution. The block variance was computed within a window of 21 × 21 pixels, highlighted in blue and red in Figs. 3(d) and 3(e). Figure 3(f) shows the normalized variance versus depth from the sensor. Two principal peaks are observed, which can be interpreted as the focus distances of the two hairs. The peak around d1 is strong in all four curves because the object located there is larger than the other. As can be seen, using only 10% of the data degrades both the BP and CS reconstruction resolutions, and in the 10% BP case it is even harder to locate the second object because of the impact of the mask pattern. This can also be observed in the left panel of Fig. 3(e), where the back-propagation of the mask severely affects the objects. The variance falls off quickly away from focus in the CS reconstructions, which reflects the denoising effect as well as the optical sectioning power of CS. In the 10% reconstruction, the intermediate volume between the two objects is not denoised as well as in the 100% case. This shows that greater subsampling factors reduce the effective depth resolution.
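A minimal sketch of the mask generation and preprocessing described above: 4 × 4 DMD mirrors are grouped into superpixels and 10% of the superpixels are randomly turned on, the hologram is background-subtracted, and mask calibration images are divided by the background. The array sizes match the text; the function names and the small ε guard against division by zero are our assumptions.

```python
import numpy as np

def random_superpixel_mask(n_rows=1140, n_cols=912, block=4, fraction=0.1, seed=None):
    """Binary DMD pattern with 4x4-mirror superpixels, a `fraction` of which are on."""
    rng = np.random.default_rng(seed)
    sr, sc = n_rows // block, n_cols // block
    super_mask = (rng.random((sr, sc)) < fraction).astype(float)
    return np.kron(super_mask, np.ones((block, block)))  # expand superpixels to mirrors

def subtract_background(hologram, background):
    """Remove the reference/DC contribution before reconstruction (Section 3.2)."""
    return hologram - background

def normalize_by_background(mask_image, background, eps=1e-6):
    """Divide a captured mask calibration image by the background image to
    remove beam non-uniformity when building the forward operator A."""
    return mask_image / np.maximum(background, eps)
```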

Fig. 3 Subsampling holograms (background subtracted). (a) Hologram of two static hairs 7.1 cm and 10.1 cm away from the sensor. (b) DMD mask, 10%, uniformly random (background divided). (c) Subsampled hologram. (d) Comparison of reconstructions from the back-propagation (BP) and compressed sensing (CS) methods using the full hologram. (e) The same comparison using the 10% subsampled hologram. (f) Normalized variance vs. distance along the z direction. Blue series: BP; red series: CS; solid curves: 100% hologram; dashed curves: 10% hologram. (See Visualization 1 and Visualization 2.)

3.3. Temporal multiplexing

In the previous section, we analyzed the effect of subsampling on reconstruction performance for compressive holography. Here we show how to use the excess pixel bandwidth of the sensor to increase temporal resolution. A simulation experiment was carried out to quantitatively analyze our imaging system (Fig. 4). As shown in Fig. 4(a), two layers of objects (Peranema at different scales) were used as a test case. Each layer had 256 × 256 pixels, and the pixel pitch was set so that the whole scene (9.85 × 9.85 mm) was approximately the size of the DMD. The first object was placed 70 mm away from the sensor; the second was placed a variable distance dz behind the first. A spatiotemporal subsampling mask was displayed on the DMD: when n time frames are required, each frame has 1/n of the pixels randomly selected and displayed, so that the sum of the n frames is the full-resolution scene image. In simulation, we omitted the propagation between the DMD and the sensor. For reconstruction, we compared back-propagation and compressed sensing, inserting 4 intermediate planes between the two objects to obtain a better reconstruction. The results are shown in Figs. 4(b) and 4(c). In Fig. 4(b), the peak signal-to-noise ratio, PSNR = 10 log10(peak value² / MSE), where MSE is the mean-squared error between the reconstruction and the input object field, was used to measure the reconstruction performance. The PSNR is computed over the 4D volume, which can also be treated as an average over the time frames; the higher the PSNR, the better the reconstruction fidelity. One operating point from Fig. 4(b), marked with a red circle, is shown in Fig. 4(c) to convey the visual meaning of the PSNR values. It can be seen that lower subsampling rates degrade reconstruction performance, and PSNR also decreases as the object spacing decreases.
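The PSNR metric used in Fig. 4(b) can be computed as below; taking the peak value as the maximum of the reference volume is one common convention and an assumption on our part.

```python
import numpy as np

def psnr(reconstruction, reference):
    """PSNR = 10*log10(peak^2 / MSE), evaluated over the whole 4D volume."""
    mse = np.mean((np.asarray(reconstruction) - np.asarray(reference)) ** 2)
    peak = np.max(np.abs(reference))
    return 10.0 * np.log10(peak ** 2 / mse)
```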

Fig. 4 Performance simulations. (a) Scenario: two Peranema of different sizes moving at different planes (separated by dz); a single image is simulated at the sensor plane. (b) Space-time performance. The horizontal axis indicates the spacing between the two objects. “100%”: full resolution; “50%”: 50% of the pixels are randomly sampled at each time frame, corresponding to a temporal increase of 2×; “20%”: temporal increase of 5×; “10%”: temporal increase of 10×. Solid lines represent CS results and dashed lines represent BP results. PSNR in dB. (c) Reconstruction results at depth d1, corresponding to the point marked with a red circle in (b).

3.4. Spatiotemporal recovery for fast-moving objects

We present two illustrative examples aimed at observing subtle vibrations and tracking small but fast-moving particles.

Figure 5 shows a reconstruction result demonstrating a 10× increase in temporal resolution. The captured image contains several strands of hair blown by an air conditioner. From this single captured image, we reconstruct 2 depth slices and 10 frames of video. In the case of small lateral movement, i.e., vibration, it is feasible to apply total variation in the time domain as well, i.e., $\Phi_{x,y,t,n}(o) = \sum_n \sum_t \sum_x \sum_y |(\nabla O_n)_{x,y,t}|$. In this case, the depths of the objects are pre-determined. For ease of comparison, 3 time frames (3rd, 6th, 9th) are shown for both back-propagation and compressed sensing. In terms of depth, our CS result shows well-separated objects at different depth layers, while the back-propagation method fails to achieve optical sectioning. The movement of the objects is also recovered in the CS result. The reconstruction was performed on a computer with an Intel Core i5 CPU at 3.2 GHz and 24 GB of RAM. The data processing takes about 2.4 hours for A of size (960 × 600) × (960 × 600 × 2 × 10). The code was written in Matlab 2015b.

Fig. 5 Reconstruction results from a single image (moving hairs). 10 frames of video and two depth frames are reconstructed from a single captured hologram. Due to space constraints, 3 video frames (3rd, 6th, 9th) and two depths (d1 = 73mm, d2 = 111mm) are presented (see Visualization 3 and Visualization 4).

Figure 6 shows another reconstruction result, for several falling flakes of glitter. The glitter flakes in Fig. 6(a) were about 1 mm in size and were dropped in a range of 60 mm to 80 mm from the sensor; the flakes were also blown by an air conditioner. Figure 6(b) shows the single captured image, and Fig. 6(c) shows the preprocessed image after background subtraction. In this case, the glitter flakes were moving at high speed and there was no overlap between consecutive frames for the same flake, so each frame was recovered independently. For each frame, a depth range was estimated with 120 layers. The regularizer is designed to be $\Phi_{x,y,t,n}(o) = \sum_n \sum_t \sum_x \sum_y |(\nabla O_n)_{x,y,t}|$. Figure 6(d) shows a reconstruction map of 2 depths and 4 time frames; the downward and leftward motion of two glitter flakes can be observed. A refocusing method similar to [44] was used: we scanned the reconstructed image with a 21 × 21 window and computed the normalized variance to obtain the in-focus depth information. If the normalized variance at defocused depths was higher than 0.5, the pixel was rejected as background/noise, and adjacent pixels with similar variance profiles were treated as a single particle. Figure 6(e) shows the normalized variance for two particles at d1 and d2; these particles are tracked at the two locations indicated by the arrows in Fig. 6(d). The overall tracking results are shown in Figs. 6(f) and 6(g). Seven particles with 4D motion were detected within 5 ms. In Fig. 6(f), the temporal transitions are represented by arrows of 10 different colors. Figure 6(g) shows a velocity chart of the 7 particles. The velocity of each particle was computed as $v(t_n) = [d(t_{n+1}) - d(t_{n-1})]/(2\Delta t)$, where $d(t_n)$ denotes the 3D location at the n-th time frame and Δt = 500 μs. The velocities of the particles range from 0.7 m/s to 5.5 m/s. Each time frame was processed separately, and the data processing for each frame takes about 7 hours for A of size (960 × 600) × (960 × 600 × 60).
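The focus metric and velocity estimate described above can be sketched as follows: a 21 × 21 block variance is scanned over depth to find each particle's focal plane, and velocities are taken by central differences with Δt = 500 μs. The thresholding and particle-grouping logic of the full pipeline is omitted; names and defaults are our assumptions.

```python
import numpy as np

def block_variance(image, center, window=21):
    """Variance of a window x window patch around `center`, used as a focus metric."""
    r, c = center
    h = window // 2
    return image[r - h:r + h + 1, c - h:c + h + 1].var()

def focus_depth(volume, center, window=21):
    """Depth index that maximizes the normalized block variance (cf. [44])."""
    v = np.array([block_variance(depth_slice, center, window) for depth_slice in volume])
    v = v / v.max()
    return int(np.argmax(v)), v

def particle_velocity(track, dt=500e-6):
    """Central-difference velocity v(t_n) = [d(t_{n+1}) - d(t_{n-1})] / (2 dt)
    for a particle track given as an (N, 3) array of 3D positions per frame."""
    track = np.asarray(track, dtype=float)
    return (track[2:] - track[:-2]) / (2.0 * dt)
```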

Fig. 6 Reconstruction results from a single image (falling glitter flakes). (a) Glitter flakes; (b) captured image; (c) normalized image; (d) reconstruction map, showing 2 depths and 4 of the 10 time frames; (e) normalized variance plots for two particles at d1 and d2; (f) 4D particle tracking; (g) velocity plot over the time range 500 μs to 4000 μs.

4. Discussion

We have demonstrated two illustrative cases in which 4D spatio-temporal data is recovered from a single 2D measurement. In the case of vibrating hairs, 2 depth layers and 10 video frames were recovered, a spatio-temporal compression of 20×. In the case of falling glitter flakes, a 4D volume was reconstructed to track the motion of small particles, a spatio-temporal compression of 120 × 10. We call our technique “compressive holographic video” to emphasize the compressive sampling approach to the acquisition of spatio-temporal information. Our technique affords a significant reduction in space-time sampling, enabling 4D events to be acquired from a single captured image.

In our general model, a coded aperture is placed between the scene and the sensor; in our prototype implementation we use a DMD as the coded aperture and image it directly onto the sensor. While non-trivial to implement, it is in principle possible to fabricate a CMOS sensor with pixel-wise coded exposure control. The prototype shows that it is possible to simultaneously exceed the native capture rate of the imager and recover multiple depths with reasonable depth resolution. In this paper we presented, as an example, a temporal increase factor of 10×; a factor of up to 24× is possible with the DMD we used. By means of a spatio-temporal modulator, one can significantly increase the effective frame rate of the sensor: the recovered frame rate is then set by the modulator’s frame rate rather than by the sensor. The coded-exposure technique thus enables high-speed imaging with a simple low frame-rate camera, while digital in-line holography brings the capability of 3D tomographic imaging with a simple experimental setup. Our compressive holographic video technique is also closely related to the phase retrieval problems commonly faced in holographic microscopy: our space-time subsampling can be viewed as a sequence of coded apertures applied to a spatiotemporally varying optical field. In the future we plan to explore the connections between our CS reconstruction approach and the methods introduced in [33]. While not explored in this paper, we believe that adding defocus between the coded-aperture plane and the sensor may be beneficial for phase retrieval tasks, as in [33]. In this work, we focus on a proof-of-principle demonstration of compressive holographic video; in the future, we hope to explore a diverse set of mask designs, as well as techniques for mask optimization.

Funding

National Science Foundation (NSF) CAREER grant IIS-1453192;

Office of Naval Research (ONR) grant 1(GG010550)//N00014-14-1-0741;

Office of Naval Research (ONR) grant #N00014-15-1-2735.

Acknowledgments

The authors are grateful for the constructive discussions with Dr. Roarke Horstmeyer, Donghun Ryu, and the reviewers.

References and links

1. E. Candès and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inform. Theory 52(2), 489–509 (2006). [CrossRef]

2. E. Candès and T. Tao, “Near-optimal signal recovery from random projections: Universal encoding strategies?” IEEE Trans. Inform. Theory 52(12), 5406–5425 (2006). [CrossRef]

3. D. L. Donoho, “Compressed sensing,” IEEE Trans. Inform. Theory 52(4), 1289–1306 (2006). [CrossRef]

4. M. Lustig, D. L. Donoho, J. M. Santos, and J. M. Pauly, “Compressed sensing MRI,” IEEE Signal Processing Magazine 25(2), 72–82 (2008). [CrossRef]  

5. L. Gan, “Block compressed sensing of natural images,” in International Conference on Digital Signal Processing (2007), pp. 403–406.

6. D. Brady, K. Choi, D. Marks, R. Horisaki, and S. Lim, “Compressive holography,” Opt. Express 17(15), 13040–13049 (2009) [CrossRef]   [PubMed]  

7. D. Gabor, “A new microscopic principle,” Nature 161(4098), 777–778 (1948). [CrossRef]   [PubMed]  

8. P. Memmolo, L. Miccio, M. Paturzo, G. Di Caprio, G. Coppola, P. A. Netti, and P. Ferraro, “Recent advances in holographic 3D particle tracking,” Adv. Opt. Photon. 7(4), 713–755 (2015). [CrossRef]  

9. L. Xu, X. Peng, J. Miao, and A. K. Asundi, “Studies of digital microscopic holography with applications to microstructure testing,” Appl. Opt. 40(28), 5046–5051 (2001). [CrossRef]  

10. T. Su, L. Xue, and A. Ozcan, “High-throughput lensfree 3D tracking of human sperms reveals rare statistics of helical trajectories,” Proc. Natl. Acad. Sci. U. S. A. 109(40), 16018–16022 (2012). [CrossRef]   [PubMed]  

11. Q. Lü, Y. Chen, R. Yuan, B. Ge, Y. Gao, and Y. Zhang, “Trajectory and velocity measurement of a particle in spray by digital holography,” Appl. Opt. 48, 7000–7007 (2009). [CrossRef]   [PubMed]  

12. L. Dixon, F. C. Cheong, and D. G. Grier, “Holographic deconvolution microscopy for high-resolution particle tracking,” Opt. Express 19, 16410–16417 (2011). [CrossRef]   [PubMed]  

13. M. J. Saxton and K. Jacobson, “Single-particle tracking: applications to membrane dynamics,” Ann. Rev. Biophys. Biomolecular Structure 26(1), 373–399 (1997). [CrossRef]  

14. J. Katz and J. Sheng, “Applications of holography in fluid mechanics and particle dynamics,” Ann. Rev. Fluid Mech. 42, 531–555 (2010). [CrossRef]  

15. L. Tian, N. Loomis, J. A. Domínguez-Caballero, and G. Barbastathis, “Quantitative measurement of size and three-dimensional position of fast-moving bubbles in air-water mixture flows using digital holography,” Appl. Opt. 49(9), 1549–1554 (2010). [CrossRef]   [PubMed]  

16. W. Xu, M. H. Jericho, H. J. Kreuzer, and I. A. Meinertzhagen, “Tracking particles in four dimensions with in-line holographic microscopy,” Opt. Lett. 28(3), 164–166 (2003). [CrossRef]   [PubMed]  

17. B. J. Nilsson and T. E. Carlsson, “Simultaneous measurement of shape and deformation using digital light-in-flight recording by holography,” Opt. Eng. 39, 244–253 (2000). [CrossRef]  

18. M. K. Kim, Digital Holographic Microscopy (Springer, 2011). [CrossRef]  

19. J. Garcia-Sucerquia, W. Xu, S. K. Jericho, P. Klages, M. H. Jericho, and H. J. Kreuzer, “Digital in-line holographic microscopy,” Appl. Opt. 45(5), 836–850 (2006). [CrossRef]   [PubMed]  

20. W. Xu, M. H. Jericho, I. A. Meinertzhagen, and H. J. Kreuzer, “Digital in-line holography for biological applications,” Proc. Natl. Acad. Sci. 98(20), 11301–11305 (2001). [CrossRef]   [PubMed]  

21. W. Chen, L. Tian, S. Rehman, Z. Zhang, H. P. Lee, and G. Barbastathis, “Empirical concentration bounds for compressive holographic bubble imaging based on a Mie scattering model,” Opt. Express 23(4), 4715–4725 (2015). [CrossRef]   [PubMed]  

22. X. Yu, J. Hong, C. Liu, and M. K. Kim, “Review of digital holographic microscopy for three-dimensional profiling and tracking,” Opt. Eng. 53(11), 112306 (2014). [CrossRef]  

23. N. Salah, G. Godard, D. Lebrun, P. Paranthoën, D. Allano, and S. Coëtmellec, “Application of multiple exposure digital in-line holography to particle tracking in a Bénard–von Kármán vortex flow,” Meas. Sci. Technol. 19(7), 074001 (2008). [CrossRef]  

24. X. Yu, J. Hong, C. Liu, and M. K. Kim, “Review of digital holographic microscopy for three dimensional profiling and tracking,” Opt. Eng. 53, 112306 (2014). [CrossRef]  

25. S. Lim, D. L. Marks, and D. J. Brady, “Sampling and processing for compressive holography,” Appl. Opt. 50, H75–H86 (2011). [CrossRef]   [PubMed]  

26. Y. Rivenson, A. Stern, and B. Javidi, “Compressive Fresnel holography,” J. Disp. Technol. 6(10), 506–509 (2010). [CrossRef]  

27. Y. Liu, L. Tian, J. W. Lee, H. Y. Huang, M. S. Triantafyllou, and G. Barbastathis, “Scanning-free compressive holography for object localization with subpixel accuracy,” Opt. Lett. 37(16), 3357–3359 (2012). [CrossRef]  

28. Y. Liu, L. Tian, C. Hsieh, and G. Barbastathis, “Compressive holographic two-dimensional localization with 1/302 subpixel accuracy,” Opt. Express 22, 9774–9782 (2014). [CrossRef]   [PubMed]  

29. J. Song, C. L. Swisher, H. Im, S. Jeong, D. Pathania, Y. Iwamoto, M. Pivovarov, R. Weissleder, and H. Lee, “Sparsity-based pixel super resolution for lens-free digital in-line holography,” Sci. Rep. 6, 24681 (2016).

30. J. Hahn, S. Lim, K. Choi, R. Horisaki, and D. J. Brady, “Video-rate compressive holographic microscopic tomography,” Opt. Express 19, 7289–7298 (2011). [CrossRef]   [PubMed]  

31. M. M. Marim, M. Atlan, E. Angelini, and J.-C. Olivo-Martin, “Compressed sensing with off-axis frequency-shifting holography,” Opt. Lett. 35, 871–873 (2010). [CrossRef]   [PubMed]  

32. C. F. Cull, D. A. Wikner, J. N. Mait, M. Mattheiss, and D. J. Brady, “Millimeter-wave compressive holography,” Appl. Opt. 49, E67–E82 (2010). [CrossRef]   [PubMed]  

33. R. Horisaki, Y. Ogura, M. Aino, and J. Tanida, “Single-shot phase imaging with a coded aperture,” Opt. Lett. 39(22), 6466–6469 (2014). [CrossRef]   [PubMed]  

34. R. Egami, R. Horisaki, L. Tian, and J. Tanida, “Relaxation of mask design for single-shot phase imaging with a coded aperture,” Appl. Opt. 55(8), 1830–1837 (2016). [CrossRef]   [PubMed]  

35. R. Raskar, A. Agrawal, and J. Tumblin, “Coded exposure photography: motion deblurring using fluttered shutter,” ACM Trans. Graphics (TOG) 25(3), 795–804 (2006). [CrossRef]  

36. G. Bub, M. Tecza, M. Helmes, P. Lee, and P. Kohl, “Temporal pixel multiplexing for simultaneous high-speed, high-resolution imaging,” Nat. Methods 7, 209–211 (2010). [CrossRef]   [PubMed]  

37. M. Gupta, A. Agrawal, A. Veeraraghavan, and S. G. Narasimhan, “Flexible voxels for motion-aware videography,” in European Conference on Computer Vision (2010), pp. 100–114.

38. D. Reddy, A. Veeraraghavan, and R. Chellappa, “P2C2: Programmable pixel compressive camera for high speed imaging,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR, 2011), pp. 329–336.

39. D. Liu, J. Gu, Y. Hitomi, M. Gupta, T. Mitsunaga, and S. K. Nayar, “Efficient space-time sampling with pixel-wise coded exposure for high-speed imaging,” IEEE Trans. Pattern Anal. Machine Intelligence 36(2), 248–260 (2014). [CrossRef]  

40. R. Koller, L. Schmid, N. Matsuda, T. Niederberger, L. Spinoulas, O. Cossairt, G. Schuster, and A. K. Katsaggelos, “High spatio-temporal resolution video with compressed sensing,” Opt. Express 23(12), 15992–16007 (2015). [CrossRef]   [PubMed]  

41. P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, “Coded aperture compressive temporal imaging,” Opt. Express 21, 10526–10545 (2013). [CrossRef]   [PubMed]  

42. J. M. Bioucas-Dias and M. A. Figueiredo, “A new twist: two-step iterative shrinkage/thresholding algorithms for image restoration,” IEEE Trans. Image Process. 16, 2992–3004 (2007). [CrossRef]   [PubMed]  

43. “DLP LightCrafterTM 4500,” http://www.ti.com/lsds/ti/dlp/advanced-light-control/microarray-greater-than-1million-lightcrafter4500.page. Accessed: 2016-06-01.

44. C. P. McElhinney, J. B. McDonald, A. Castro, Y. Frauel, B. Javidi, and T. J. Naughton, “Depth-independent segmentation of macroscopic three-dimensional objects encoded in single perspectives of digital holograms,” Opt. Lett. 32, 1229–1231 (2007). [CrossRef]   [PubMed]  

Supplementary Material (4)

NameDescription
Visualization 1: MOV (30525 KB)      Tomographic reconstruction, full resolution
Visualization 2: MOV (30525 KB)      Tomographic reconstruction, 10% hologram
Visualization 3: AVI (210 KB)      4D reconstruction example, at d1
Visualization 4: AVI (171 KB)      4D reconstruction example, at d2
