## Abstract

On-chip holographic video is a convenient way to monitor biological samples simultaneously at high spatial resolution and over a wide field-of-view. However, due to the limited readout rate of digital detector arrays, one often faces a tradeoff between the per-frame pixel count and frame rate of the captured video. In this report, we propose a subsampled phase retrieval (SPR) algorithm to overcome this spatial-temporal tradeoff in holographic video. Compared to traditional phase retrieval approaches, our SPR algorithm uses over an order of magnitude fewer pixel measurements while maintaining suitable reconstruction quality. We use an on-chip holographic video setup with pixel sub-sampling to experimentally demonstrate a factor of 5.5 increase in sensor frame rate while monitoring the *in vivo* movement of *Peranema* microorganisms.

© 2017 Optical Society of America

## 1. Introduction

Lensless on-chip imaging offers the ability to simultaneously obtain a high resolution image over a wide field-of-view (FOV) in a simple optical setup. In a lensless on-chip imaging experiment, one typically places a sample within several millimeters of a digital sensor. By illuminating the sample with a spatially coherent light source, a diffraction pattern is formed at the nearby sensor, which may be captured as a digital in-line hologram. A phase retrieval algorithm typically recovers the sample’s amplitude and phase, at high fidelity, from the recorded hologram intensities [1].

Lensless on-chip holographic imaging has been widely used to investigate biological and chemical phenomena at the micro and/or nano scale. Recent examples include high resolution and wide field-of-view imaging of malaria-infected cells [2], dense pathology slides [3], and nanometer-scale viruses [4]. While these samples are primarily stationary over time, it is also possible to monitor in-vivo dynamic phenomena using lensless holographic video. On-chip examples of monitoring biophysical processes include discovering the spiral trajectories of sperm [5], the formation of endothelial cells into microvessels [6], and analyzing single-cell motility [7].

The total pixel count of the reconstructed images from these setups (i.e., the system space-bandwidth product) is simply set by the effective pixel count of the detector array. As detector array sizes grow into the regime of hundreds of megapixels, a limited detector array readout rate will eventually limit the rate of high-speed lensless image acquisition. A tradeoff space thus emerges between the spatial and temporal resolution of a lensless on-chip imaging experiment: images can be acquired either at high resolution or at high frame rates, but currently not both.

The same tradeoff space also currently impacts video capture in conventional cameras. The limited speed of sensor hardware for pixel readout, analog-to-digital conversion, and a constrained on-board memory together form a data bottleneck. To overcome this limitation, many high speed camera sensors now offer a multitude of video frame rates at different image resolutions. A typical example is the recent Casio EX-F1 camera, which trades off image resolution and frame rate in an inversely proportional manner, offering 2.07 megapixels (MP) at 30 frames per second (fps), 0.20 MP at 300 fps, 0.08 MP at 600 fps, and 0.03 MP at 1200 fps [8]. Here, the sensor data rate faces an approximate upper bound of 65 MP per second.
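As a quick arithmetic check on these figures, the quoted modes multiply out to a nearly constant pixel throughput. A short Python sketch (values taken directly from the text above; the variable names are our own):

```python
# Casio EX-F1 readout modes quoted above: (megapixels, frames per second).
modes_mp_fps = [(2.07, 30), (0.20, 300), (0.08, 600), (0.03, 1200)]

# Pixel throughput of each mode, in megapixels per second; every mode
# stays just under the ~65 MP/s sensor readout bound.
throughput = [mp * fps for mp, fps in modes_mp_fps]
```

Evaluating `throughput` gives roughly 62, 60, 48 and 36 MP/s, illustrating that resolution and frame rate trade off against a fixed readout budget.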

A number of different coding strategies were recently proposed to overcome this data readout limit in standard video. For example, offsetting the exposure time of interleaved pixels may simultaneously provide high-speed video and high-resolution imaging [9]. A similar strategy may be applied to the interleaved frames from a camera array [10]. Alternatively, the incident light may be coded into a spatio-temporal pattern, either using a spatial light modulator [11, 12], global shutter [13] or translating mask [14]. Subsequently, an inversion algorithm, typically operating within a compressive sensing framework that assumes scene sparsity, can recover a high-resolution and high-speed video [15]. This strategy was most recently applied with a streak camera to create videos of light propagation resolved down to picosecond time scales [16].

Similar coding strategies may also help overcome the space-time resolution tradeoff in lensless holographic imaging. Unlike traditional video, however, the operation of a lensless holographic setup is fundamentally connected to its phase-retrieval algorithm. An ideal strategy to improve lensless image readout rates would operate in tandem with phase retrieval [17]. As with the compressive video recovery schemes above, phase retrieval must also assume some prior knowledge about the imaged sample to ensure accurate algorithm convergence. Examples include a known finite sample support [18,19], sparsity [20], non-negativity or an intensity histogram [21]. Several recent works examine how sample sparsity permits accurate sample reconstruction from a limited number of holographic measurements [15,22–30]. To the best of our knowledge, no work has yet examined whether prior knowledge of sample support alone may also relax required in-line holographic image readout rates, nor has any demonstrated that such a modified phase retrieval process can improve the frame rate of on-chip holographic video.

Here, we present a simple lensless on-chip imaging method and associated sub-sampled phase retrieval (SPR) algorithm that aims to simultaneously offer high resolution over both space and time. Our approach selectively reads off a limited subset of pixels per image frame. This reduces our per-frame data output, which equivalently increases the imaging system’s achievable video frame rate, assuming a fixed sensor readout rate. We then recover accurate, high-resolution maps of sample amplitude and phase using just our sparse set of measured intensities, along with a bootstrapped estimate of the sample support obtained using a well-known algorithm [31]. We demonstrate how this subsampling strategy can reduce the number of measured pixels in each image frame by up to a factor of 30 with minimal impact upon image fidelity (less than a doubling in recovery error) for several realistic objects. We additionally show that the SPR algorithm offers a factor of 5–6 experimental speed-up in video frame rate in an *in vivo* on-chip experiment.

Here is an outline for the rest of this paper. First, we review the process of phase retrieval for in-line holography. Second, we introduce our proposed sub-sampling strategy. Third, we test our new measurement and reconstruction method in simulation, showing that our subsampled phase retrieval (SPR) technique outperforms the naive approach of image interpolation. Fourth, we demonstrate the successful operation of SPR in two on-chip imaging experiments, including an *in vivo* imaging experiment that demonstrates a 9× reduction in sampling requirements while imaging motile *Peranema* protists.

## 2. Background and theory

#### 2.1. In-line holography

A simple schematic of an in-line holography setup is shown in Fig. 1(a). Here, we assume a distant point source illuminates a thin sample with a quasi-monochromatic, spatially coherent plane wave. While not done so here, it is straightforward to take into account the effects of partially coherent sample illumination [32]. The optical field immediately after the sample, *f* (*x*, *y*) = *A*(*x*, *y*)*e*^{*iϕ*(*x*,*y*)}, offers a direct indication of the sample’s absorptivity within its amplitude *A*(*x*, *y*), and of its optical thickness within its phase *e*^{*iϕ*(*x*,*y*)}.

The sample field *f* (*x*, *y*) then propagates a distance *d* to the detector plane, which contains an array of pixels. Directly above this plane, we denote the resulting complex field as *h*(*x _{a}*, *y _{a}*), the hologram field, where (*x _{a}*, *y _{a}*) are the spatial coordinates at this plane. Given sufficient distance between the sample and detector plane, it is possible to perform holography with a reference beam. In this work, we consider reference-free holographic imaging scenarios, which instead rely upon a phase retrieval algorithm to recover the complex field at the sample plane. In such reference-free phase retrieval, as commonly utilized in Coherent Diffraction Imaging, the detected intensity image is not necessarily regarded as a hologram, but instead as the intensity of a diffraction pattern [18]. Such a reference-free computational approach is common in other on-chip imaging setups.

We may describe the diffraction of the sample field *f* into the hologram *h* using a propagation operator, *P _{d}*[·]. Neglecting evanescent field effects and assuming this propagation is lossless, *P _{d}* is invertible, and its inverse ${P}_{d}^{-1}[\cdot ]$ represents time-reversed propagation from the detector plane back to the sample plane. The pixel array at the detector plane only detects the intensity of the hologram field:

${\left|h({x}^{\prime},{y}^{\prime})\right|}^{2}={\left|{P}_{d}[f(x,y)]\right|}^{2},$ (1)

where (*x′*, *y′*) are discretized versions of the detector plane coordinates (*x _{a}*, *y _{a}*), making |*h*(*x′*, *y′*)|^{2} a discrete function. We may assume the propagation operator *P _{d}* also includes the effects of arbitrary pixel discretization. The goal of phase retrieval is to recover an accurate estimate of the complex sample transmission function, *f* (*x*, *y*), from the measured set of intensities, |*h*(*x′*, *y′*)|^{2}.
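For concreteness, one common numerical choice for *P _{d}*[·] is the angular spectrum method. Below is an illustrative Python sketch of such a propagator (our own naming and implementation, not code from the original setup); a negative distance *d* gives the inverse operator ${P}_{d}^{-1}[\cdot ]$:

```python
import numpy as np

def angular_spectrum_propagate(field, d, wavelength, pixel_size):
    """Propagate a sampled complex field a distance d (meters) via the
    angular spectrum method; negative d performs back-propagation."""
    n, m = field.shape
    fx = np.fft.fftfreq(m, pixel_size)  # spatial frequencies along x
    fy = np.fft.fftfreq(n, pixel_size)  # spatial frequencies along y
    FX, FY = np.meshgrid(fx, fy)
    # Argument of the propagation phase; negative values are evanescent.
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    # Unit-modulus transfer function; evanescent components are suppressed.
    H = np.exp(2j * np.pi * d * np.sqrt(np.maximum(arg, 0.0))) * (arg >= 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Because the transfer function has unit modulus over all propagating frequencies, forward propagation followed by back-propagation recovers a band-limited field, matching the lossless, invertible model assumed above.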

#### 2.2. Standard in-line holographic phase retrieval

Phase retrieval algorithms compute the complex sample field from the measured diffraction pattern intensity through an iterative process [19]. Here, we adopt the simple error reduction (ER) algorithm [19]. It is also possible to use one of many other closely related strategies [33], including the hybrid input-output algorithm, or other more advanced solvers [34]. Phase retrieval iteratively projects an initial estimate of *f* onto two constraints in two different domains. In-line holography typically uses for its first constraint the object’s support in the sample plane, and for its second constraint the measured hologram intensities in the detector plane.

An outline of the phase retrieval algorithm for this “standard” case is diagrammed in Fig. 2. After initiating an initial complex sample estimate *g*_{0}(*x*, *y*) at the sample plane, ER first digitally propagates it to the detector plane: *G _{k}*(*x′*, *y′*) = *P _{d}*[*g _{k}*(*x*, *y*)]. Here, *k* denotes the *k*th iterative loop, for 0 ≤ *k* ≤ *n* iterations. We use capital letters to denote our estimate at the detector plane, and lower case letters to denote it at the sample plane. We perform digital propagation using the angular spectrum method. Next, ER enforces the intensity constraint. It replaces the amplitudes of *G _{k}*(*x′*, *y′*) with the experimentally measured amplitudes at the detector, |*h*(*x′*, *y′*)|:

${G}_{k}^{\prime}({x}^{\prime},{y}^{\prime})=\left|h({x}^{\prime},{y}^{\prime})\right|\frac{{G}_{k}({x}^{\prime},{y}^{\prime})}{\left|{G}_{k}({x}^{\prime},{y}^{\prime})\right|},\quad ({x}^{\prime},{y}^{\prime})\in D,$ (2)

where *D* represents the set of all pixels in the detector array and ${G}_{k}^{\prime}$ is the updated hologram estimate. We may equivalently represent this estimate update as ${G}_{k}^{\prime}({x}^{\prime},{y}^{\prime})=|h({x}^{\prime},{y}^{\prime})|{e}^{i{\varphi}_{k}({x}^{\prime},{y}^{\prime})}$, which makes clear that the intensity constraint step leaves the phase of the current hologram estimate, *ϕ _{k}*(*x′*, *y′*), unchanged. Third, ER propagates this intensity-constrained hologram estimate back to the sample plane: ${g}_{k}^{\prime}(x,y)={P}_{d}^{-1}[{G}_{k}^{\prime}({x}^{\prime},{y}^{\prime})]$. Fourth, ER applies a sample support constraint. It leaves unchanged all values within a defined subset of pixels, *S _{k}*, which typically represents the interior of a collection of cells or an organism of interest. However, it assumes that outside of this interior support area the sample exhibits a uniform absorptivity (i.e., the illumination light primarily passes to the detector unchanged). ER thus sets pixels outside of this support area to a uniform background value *b*:

${g}_{k+1}(x,y)=\left\{\begin{array}{ll}{g}_{k}^{\prime}(x,y), & (x,y)\in {S}_{k}\\ b, & (x,y)\notin {S}_{k}.\end{array}\right.$ (3)

In this last step the iteration counter value *k* increments for the next iteration. The above ER loop runs for a fixed number of *n* iterations, or until some convergence criterion is satisfied. The complex algorithm output, *g _{n}*(*x*, *y*), typically offers an accurate estimate of the amplitude and phase of the original optical field *f* (*x*, *y*) at the sample plane.
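The four ER steps above can be sketched in a few lines of Python (a minimal illustration under our own naming; the `propagate` callable stands in for *P _{d}* and its inverse, and the support and background are held fixed here for simplicity):

```python
import numpy as np

def error_reduction(h_amp, support, propagate, n_iters=100, b=1.0):
    """Minimal ER loop sketch.

    h_amp     : measured hologram amplitudes |h(x', y')|
    support   : boolean mask S (True inside the sample support)
    propagate : callable propagate(field, direction), direction=+1 for
                sample -> detector and -1 for detector -> sample
    b         : assumed uniform background value outside the support
    """
    g = np.full(h_amp.shape, b, dtype=complex)   # initial estimate g_0
    for _ in range(n_iters):
        G = propagate(g, +1)                     # 1: to detector plane
        G = h_amp * np.exp(1j * np.angle(G))     # 2: intensity constraint
        g = propagate(G, -1)                     # 3: back to sample plane
        g = np.where(support, g, b)              # 4: support constraint
    return g
```

In practice the support mask and background value are updated during the iterations, as described next.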

Since we rarely know the exact support of each sample a priori, we use two recent insights to ensure the constraint in Eq. 3 encourages successful algorithm convergence. First, we adaptively update the assigned background value, *b*, each iteration. Following [32], we set $b=\u3008\left|{g}_{k}^{\prime}(x,y)\right|\u3009r(x,y)/\u3008r(x,y)\u3009$ at iteration *k*, where 〈·〉 denotes the mean over all pixels and *r*(*x*, *y*) is a fixed reference measurement formed by back-propagating a set of reference hologram amplitudes, |*h _{r}*(*x′*, *y′*)| (which do not contain diffracted light from any cells or sample structure). The reference amplitudes |*h _{r}*| may be acquired before the experiment, or simply selected from a region of the hologram where no sample structure is present.

Second, to improve the accuracy of Eq. 3, we also vary the set of pixels defining the sample support each iteration, *S _{k}*. We update *S _{k}* with the “shrink-wrap” method [31]. At a given iteration, this method first blurs and then thresholds the current sample estimate to form a new support border. Blurring helps smooth noise to regularize the support area, and also encourages algorithm stability. Unless otherwise stated, our shrink-wrap implementation uses a Gaussian blur kernel of 5^{2} pixels, a normalized threshold value of 0.15, and updates the support every tenth iteration. The initial guess of the support follows the same routine.
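A minimal sketch of this support update is shown below (illustrative Python, our own naming; thresholding the blurred deviation from the background value is our assumption about how the estimate is converted to a mask):

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Isotropic Gaussian blur via FFT convolution (periodic boundaries)."""
    n, m = img.shape
    # Circular distances from the kernel origin, so the kernel wraps
    # correctly for FFT-based convolution.
    y = np.minimum(np.arange(n), n - np.arange(n))[:, None]
    x = np.minimum(np.arange(m), m - np.arange(m))[None, :]
    kern = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    kern /= kern.sum()
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kern)))

def shrink_wrap(g, b, sigma=5.0, threshold=0.15):
    """Shrink-wrap support update sketch: blur the deviation of the current
    estimate from the background b, then keep pixels above a normalized
    threshold (0.15 of the maximum, as in the text)."""
    dev = gaussian_blur(np.abs(g - b), sigma)
    return dev > threshold * dev.max()
```

The returned boolean mask plays the role of *S _{k}* in Eq. 3.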

We show an example simulation of standard ER phase retrieval in Fig. 3. Our simulated sample is 150 × 150 pixels of measured amplitudes and phases from a set of 5 polystyrene microspheres, shown in Fig. 3(a)–(b), acquired using an alternative phase retrieval approach [35, 36]. Assuming a lensless imaging setup that approximately matches our experimental parameters (150^{2} pixels, pixel size = 2.2 *µ*m, *d* = 1 mm), we then simulate the formation of a single in-line hologram, of which we detect only the intensity (Fig. 3(c)). From this hologram, we apply the standard ER phase retrieval algorithm, along with shrink-wrap support estimation, to recover the complex sample estimate in Fig. 3(d). Fig. 3(f) shows the final sample support. We note that our reconstruction offers quantitatively accurate amplitude and phase *within* each microsphere, but sets the complex field to a constant value in all “background” areas outside of each sphere.

#### 2.3. Subsampled phase retrieval (SPR)

Our SPR algorithm makes one small but critical change to the ER phase retrieval workflow. Instead of measuring the hologram amplitude with all pixels in the digital detector array (the entire set of pixels *D*), SPR uses only a subset of available pixels, *R* ⊂ *D*. We replace the estimated hologram amplitude only at these pixel locations, and otherwise leave the estimated complex values unchanged at all other pixel locations. The intensity constraint step of our SPR algorithm thus takes the form,

${G}_{k}^{\prime}({x}^{\prime},{y}^{\prime})=\left\{\begin{array}{ll}\left|h({x}^{\prime},{y}^{\prime})\right|{e}^{i{\varphi}_{k}({x}^{\prime},{y}^{\prime})}, & ({x}^{\prime},{y}^{\prime})\in R\\ {G}_{k}({x}^{\prime},{y}^{\prime}), & ({x}^{\prime},{y}^{\prime})\notin R.\end{array}\right.$ (4)

SPR uses the new constraint in Eq. 4 in place of the intensity constraint of Eq. 2. All other steps of SPR match the standard ER pipeline. For initialization, the subsampled pixel set *R* is first interpolated to fill the unsampled area, so as to be the same size as *D*; in our case, we use nearest-neighbor interpolation. The interpolated image is then used as the input for our subsampled phase retrieval. In a digital sensor with a variably addressed pixel readout scheme, SPR only requires |*R*| measured intensity values. This reduction in data readout leads to a proportional increase in the detector frame rate. For a given subset size |*R*|, the maximum expected frame rate speedup with SPR is |*D*|/|*R*|. In this work, we examine multiple data reduction factors ranging from |*D*|/|*R*| = 4 to |*D*|/|*R*| = 36. Furthermore, we investigate two subsampling geometries, rectilinear and random subsampling, as diagrammed in Fig. 1(b)–(c). Rectilinear subsampling periodically skips the readout of a fixed number of pixels along *x* and *y*, and can currently be achieved by modifying recently available CMOS pixel arrays (see the Experiments section for details).
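The modified constraint of Eq. 4, together with a rectilinear subsampling mask, can be sketched as follows (illustrative Python, our own naming):

```python
import numpy as np

def spr_intensity_constraint(G, h_amp, mask):
    """Eq. 4: replace amplitudes only at the measured pixel subset R
    (mask == True); elsewhere the current estimate passes through."""
    constrained = h_amp * np.exp(1j * np.angle(G))
    return np.where(mask, constrained, G)

def rectilinear_mask(shape, step):
    """Rectilinear subsampling: keep every `step`-th pixel along x and y,
    giving a data reduction factor of |D|/|R| = step**2."""
    m = np.zeros(shape, bool)
    m[::step, ::step] = True
    return m
```

Dropping this function into the ER loop in place of the full-frame intensity replacement is the only change SPR requires.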

We also test SPR using a random subsampling strategy, which helps us directly compare our approach to the related method of compressive holography [24, 29]. Like SPR, the framework of compressive sensing (CS) [22] can also estimate a complex signal from fewer measurements than originally required by Shannon’s sampling theorem. For in-line holography, this offers an alternative means to read out fewer pixels per image, and thus potentially achieve a higher frame rate, while maintaining an accurate reconstruction. CS has two primary requirements. First, the signal must exhibit sparsity, a property requiring that most of the signal energy is contained within just a few coefficients in some transform domain. Within the context of our lensless imaging setup, compressive sensing might require that our sample exhibit a near-zero amplitude at most pixels in the sample plane, or perhaps exhibit a spatial gradient that is mostly zero (i.e., contains only a few sharp edges). Second, CS requires the sensing matrix and the sparsifying basis of the unknown signal to be mutually incoherent, which is quantified by the restricted isometry property [23]. Thus, many sparsity-based holography setups rely upon a semi-random sampling strategy, similar to the second subsampling strategy that we consider.

While SPR does not directly assume the imaged sample is sparse, it does assume a finite spatial support, which we must indirectly acquire. As supported by prior on-chip holography experiments (e.g., [5–7]), we find this support assumption to be realistic for most biological samples of interest. Cells, sperm, embryos and other micro-organisms, for example, all have a well-defined border. Given that there is a certain amount of overlap in assuming sparsity versus a finite support (that is, both assumptions set a number of the coefficients used to describe the sample to a constant value), we believe that many of the above arguments proving accurate recovery of sparse samples might also extend to samples with finite support. We do not attempt any formal proof of this claim here. Instead, we now demonstrate that SPR is a very effective strategy in practical experiments.

Finally, it is important to note that this subsampling strategy is not equivalent to sampling the hologram with proportionally larger pixels and attempting to computationally improve resolution. Larger pixels will not only encounter aliasing issues, but also fail to realize the goal of SPR: to rely more upon the estimated sample support during image reconstruction by leaving a large fraction of hologram values unconstrained. Compared to previous interpolation schemes [17, 37–39], the important distinctions of SPR are that: 1) the interpolation is performed in a transform domain (i.e., the Fresnel transform), and 2) interpolation is achieved via constrained alternating minimization, as opposed to local neighborhood operations on each pixel. The support constraint at the sample plane is vital for filling in both the unknown amplitude and phase of the unmeasured hologram pixels (i.e., filling in the white pixels in Fig. 1(b)). Due to the sample support constraint, these unknown amplitudes may take on values that are dramatically different from immediately neighboring pixel values, which an image interpolation strategy could not recognize or account for.

## 3. Simulations

To quantify the effectiveness of SPR in simulation, we assume a digital detector containing 150^{2} pixels (each 2.2 *µ*m wide) with each sample placed 1 mm above the detector plane (as in Fig. 3). We assume a spatially coherent, quasi-monochromatic source at λ = 632 nm illuminates the sample with a plane wave. Our first simulated sample is the same set of 15 *µ*m polystyrene microspheres from Fig. 3. Now, instead of constraining our sample estimate with amplitudes measured at all detector pixels, we follow Eq. 4 and only select a subset *R* of the detector pixels, for three different subsampling ratios: |*D*|/|*R*| = 9, 25 and 36.

The resulting amplitudes and phases of each SPR reconstruction are shown in Fig. 4. It is clear that one may still reconstruct the amplitude and phase of this particular sample quite accurately with over an order of magnitude less data per image (i.e., after only reading out values from 1 out of every 36 pixels, either across a grid or randomly). We also compare SPR against the naive approach to subsampled phase retrieval: instead of selectively applying the intensity constraint to a subset of pixels, we use image interpolation to infer a “full-resolution” hologram estimate and apply standard phase retrieval (i.e., constraining all pixels with Eq. 2 each iteration). We display the results of this third “interpolation” strategy in Fig. 4(c), where we apply cubic interpolation to the rectilinearly subsampled intensity data (i.e., to fill in the white pixels in Fig. 1(b) before running the ER algorithm). This exercise highlights that image interpolation appears to be a viable strategy, but certain artifacts begin to appear within the reconstructed phase, especially at higher rates of subsampling.

To quantitatively compare SPR against the interpolation strategy, we compute the normalized mean-squared error (NMSE) between each reconstruction in Fig. 4 and our ground-truth simulation sample, *t*(*x*, *y*). The NMSE metric takes the form:

$\text{NMSE}=\frac{{\sum}_{(x,y)\in {S}_{n}}{\left|t(x,y)-\gamma \phantom{\rule{0.1em}{0ex}}{g}_{n}(x,y)\right|}^{2}}{{\sum}_{(x,y)\in {S}_{n}}{\left|t(x,y)\right|}^{2}}.$ (5)

Here, *g _{n}*(*x*, *y*) is the recovered sample’s amplitude and phase (after *n* = 500 iterations) for each of the three strategies outlined in Fig. 4 and *S _{n}* is the final sample support (no background pixels contribute to our final error metric). The constant parameter *γ* is defined as,

$\gamma ={\sum}_{(x,y)\in {S}_{n}}t(x,y)\phantom{\rule{0.1em}{0ex}}{g}_{n}^{\ast}(x,y)\phantom{\rule{0.2em}{0ex}}\bigg/{\sum}_{(x,y)\in {S}_{n}}{\left|{g}_{n}(x,y)\right|}^{2}.$ (6)

The microsphere simulation NMSE is in Fig. 5(a) for subsampling factors ranging from |*D*|/|*R*| = 1 to 36. Both rectilinear and semi-random subsampling offer similar performance. For low amounts of subsampling, the interpolation strategy matches the performance of SPR. However, for high amounts of sub-sampling (|*D*|/|*R*| > 9), SPR has a lower NMSE. After this critical rate, the interpolated values no longer faithfully reproduce the hologram amplitude, and simply leaving the unknown amplitudes unchanged every iteration via SPR becomes a better recovery strategy.
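For reference, this error metric can be computed as below (an illustrative Python sketch under our own naming; we take *γ* as the least-squares complex scale factor between reconstruction and ground truth, which is our assumption about its definition):

```python
import numpy as np

def nmse(g_n, t, support, eps=1e-12):
    """Normalized mean-squared error between recovery g_n and ground truth
    t, evaluated only over the final support S_n, with a global complex
    scale gamma chosen to minimize the residual."""
    g, tt = g_n[support], t[support]
    gamma = np.sum(tt * np.conj(g)) / (np.sum(np.abs(g)**2) + eps)
    return np.sum(np.abs(tt - gamma * g)**2) / np.sum(np.abs(tt)**2)
```

With this choice of *γ*, the metric is insensitive to any global amplitude scaling or global phase offset of the reconstruction, which phase retrieval cannot determine.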

To test the performance of SPR for samples with a sharply delineated support boundary, we attempt a second simulation using the “circular” sample in Fig. 7. The sample and simulated detector now contain 256^{2} pixels. Again implementing SPR with rectilinear subsampling and semi-random subsampling leads to the reconstructions in Fig. 7(a) and (b), respectively, with the interpolation PR results in Fig. 7(c). This example clearly highlights that SPR easily outperforms interpolation when the sample exhibits a well-defined support boundary. Both the recovered amplitudes and phases are better approximated for all data reduction factors ranging from 1 to 36. As shown in the NMSE plot in Fig. 5(b), less than 3% of the original hologram image is required to recover the primary phase features of this particular sample, with only a 2× increase in NMSE. We conclude that, especially for samples exhibiting a well-defined support boundary, SPR can reduce per-image pixel readout by over an order of magnitude with minimal impact upon reconstruction fidelity.

## 4. Experiments

We now present two experimental verifications of SPR. The first experiment uses the on-chip imaging setup for hologram capture with an Aptina CMOS sensor (Aptina MT9M002, monochromatic) containing 2560×1920 pixels, each 2.2 *µ*m wide. The second experiment uses the same setup with a different sensor (IDS uEyeLE UI-148xLE, monochromatic), which offers a rectilinear sub-sampling mode for an increase in frame rate. We position each sample a distance *d* above the active pixel layer of the CMOS detector (*d* is different in each experiment) and use a red LED (Thorlabs M625L3, 625 nm center wavelength, 16 nm spectral bandwidth) placed approximately 150 mm above the sample for illumination. To increase the spatial coherence of the LED, we also place a 100 *µ*m pinhole directly in front of the LED active area. We manually choose the distance between sample and detector by sweeping through many different depths. As for the support constraint, the modified shrink-wrap method successfully yields the support from the measured holograms at all subsampling factors.

First, we verify the ability of SPR to measure quantitative phase by imaging a fixed sample of polystyrene microspheres (30 *µ*m in diameter, refractive index *n _{m}* = 1.5875, immersed in oil with refractive index *n _{o}* = 1.595). We first capture a full-resolution hologram of a large distribution of microspheres. One microsphere of interest, from a 300×300 pixel region, is shown in Fig. 8(a). We digitally subsample this measured hologram in two different geometries (rectilinear and semi-random, as outlined in Section 2) at the following subsampling rates: |*D*|/|*R*| = 9, 16, and 25. Then, we input these subsampled images into our SPR algorithm and run *n* = 200 iterations to recover the microsphere reconstructions shown in Fig. 8(b).

Although up to 96% of the original hologram image remains unused, SPR still accurately recovers the phase shift induced by each microsphere, as shown for example in Fig. 8(b). Here, we also attempt sample reconstruction after first performing cubic image interpolation on the sub-sampled holograms, which results in an unpredictable shift in the phase centroid. Finally, we quantify the accuracy of SPR by comparing its reconstructed phase to the known phase shift induced by an ideal sphere in Fig. 8(c1–c3). Here, we select the experimental phase shift values Δ*ϕ* from along one row of pixels through the center of each sphere (dashed line). The known microsphere phase shift is determined by the optical path length difference of a wave passing through a 30 *µ*m circle with an index difference of *n _{o}* − *n _{m}* = 0.0075. From this experiment, we conclude that SPR maintains an accurate measure of quantitative phase.

Given this quantitative accuracy, we next use SPR with an *in vivo* biological specimen. A collection of *Peranema*, microorganisms that are primarily transparent and belong to the euglenoid family, is placed in medium onto a standard microscope slide. We position the slide approximately 1910 *µ*m above the sensor and capture a series of holograms, a cropped example of which is shown in Fig. 9(a). First, we select a 300^{2} pixel region of the hologram and perform standard ER phase retrieval to reconstruct the sample amplitude and phase shown in Fig. 9(b)–(c).

For ground-truth comparison, we also place the same sample of *Peranema* beneath a microscope with a 10× objective and capture the intensity image in Fig. 9(d). Although the locations of each microorganism differ from those in the reconstruction images due to their unpredictable movement, the structures of their main bodies qualitatively match. Two other qualitative points are worth noting: first, our reconstructed phase shows a clear boundary between the front and back section of each microorganism, consistent with the presence of their basal body. Second, our reconstructed images cannot resolve the microorganism flagellum (i.e., tail), which is primarily due to our system’s limited resolution. We experimentally determined the tail width to be approximately 2 *µ*m, which is close to the 2.2 *µ*m pixel size of the digital sensor. Future experiments may resolve the flagellum by using a sensor with smaller pixels or with additional processing (see the Discussion section).

Next, we operate the second CMOS sensor, which provides a rectilinear subsampling mode, to capture holographic movies of microorganisms moving over time. We test 3 different sub-sampling factors: |*D*|/|*R*| = 1, 4, and 9. For this particular sensor, these subsampled data rates correspond to the ability to increase the sensor frame rate by a factor of 1, 3.1 and 5.5, respectively (from 4.4 FPS for no subsampling to 24.8 FPS for 9× subsampling). For each frame, we apply the SPR algorithm to recover the amplitude and phase of each *Peranema* across the entire sensor. Example insets of the recovered amplitude and phase of a single *Peranema* are displayed in Fig. 10. Supplemental videos, provided as Visualization 1, Visualization 2 and Visualization 3, demonstrate how our subsampling strategy offers videos with much smoother motion between consecutive frames, which is not captured in the full-resolution reconstructions. Again, the support from the subsampled holograms in each frame is generated by the modified shrink-wrap algorithm, which plays an important role in the holographic video reconstruction of the *Peranema*.

Finally, to quantitatively verify the accuracy of our SPR reconstructions, we compare the average width and thickness of a collection of *Peranema* bodies as measured from a single reconstructed frame (with |*D*|/|*R*| = 9) to those measured from a standard microscope image. The mean value (MV) and standard deviation (SD) of the width of the *Peranema* from the SPR reconstructed frame are 50.137 *µ*m and 6.344 *µ*m, respectively, which closely matches the microscope image (MV = 47.818 *µ*m and SD = 2.511 *µ*m, width labeled in Fig. 9(d1)). Similarly, for the reconstructed thickness we have MV = 11.579 *µ*m and SD = 1.607 *µ*m for the SPR reconstruction, whereas the microscope image yields MV = 12.937 *µ*m and SD = 2.786 *µ*m. Both width and thickness match within one standard deviation.

## 5. Discussion and future work

As we demonstrated both in simulation and experiment, SPR can dramatically reduce the number of measurements per frame in on-chip holography while still maintaining suitable reconstruction quality. Using a sensor that achieves a higher frame rate via pixel sub-sampling, we demonstrated a 5.5× speedup in holographic video of moving microorganisms. Our experimental work demonstrates that SPR is quite resilient to unknown sensor and shot noise. Placed in the context of alternative compressive holography schemes, SPR is simple, computationally efficient and accurate.

Several steps may help further improve the accuracy of subsampling. The primary challenge faced with live biological specimen imaging was correctly determining the sample support, which begins from a somewhat arbitrary starting point. A modified shrink-wrap algorithm, which could incorporate prior knowledge of, e.g., the *Peranema* body shape, would certainly help improve performance. In addition, the challenge of support identification becomes increasingly difficult when measuring from fewer pixels (i.e., with larger subsampling). Thus, a practical implementation might employ a bootstrapped approach, where the imaging process begins with a larger number of measured pixels per image and then forms a model of the expected sample support to use in later reconstructions with fewer measured pixels. Despite limitations in our experimental hardware, we successfully demonstrated our sub-sampled phase retrieval technique applied to real experimental data, with a clear improvement in the fidelity of captured motion. SPR would ideally benefit from a fully addressable pixel readout scheme. In addition, given the similarities between SPR and compressed sensing, we would like to compare these two techniques side by side in future work.

We believe SPR offers a useful conceptual starting point for more advanced procedures. First, SPR currently does not consider the redundant nature of the video signal over time. Adopting the insights gained by SPR into a more general approach to optimize phase retrieval over both space and time will likely lead to additional video speedup. Methods such as optical flow may provide a good path forward in this regard. Second, SPR is capable of removing objects that are not in focus, which offers a means to simultaneously achieve optical sectioning. Third, the effectiveness of SPR indicates that it might also be useful for X-ray imaging and coherent diffraction imaging, as well as related techniques for ptychography.

## Funding

Z.W., K.H. and O.C. acknowledge the following funding: NSF CAREER award IIS-1453192, ONR award 1(GG010550)//N00014-14-1-0741, ONR award N00014-15-1-2735, and DARPA award (G001534-7510)//HR0011-16-C-0028.

G.Z. acknowledges funding in part by NSF 1555986, NIH R21EB022378, and NIH R03EB022144.

R.H. acknowledges financial support from the Einstein Foundation Berlin.

## Acknowledgments

We are thankful for the discussions and generous help from Dr. Xiaoze Ou, Dr. Mooseok Jang and Jaebum Chung. We also thank Prof. Changhuei Yang at Caltech and Prof. Do Young Noh at GIST for their insights and the kind use of their equipment for experimental development. D.R. acknowledges a Caltech Electrical Engineering Department Fellowship.

## References and links

**1. **D. Tseng, O. Mudanyali, C. Oztoprak, S.O. Isikman, I. Sencan, O. Yaglidere, and A. Ozcan, “Lensfree Microscopy on a Cell-phone,” Lab Chip **10**, 1787–1792 (2010). [CrossRef] [PubMed]

**2. **W. Bishara, U. Sikora, O. Mudanyali, T. Su, O. Yaglidere, S. Luckhart, and A. Ozcan, “Holographic pixel super-resolution in portable lensless on-chip microscopy using a fiber-optic array,” Lab Chip **11**, 1276–1279 (2011). [CrossRef] [PubMed]

**3. **A. Greenbaum, Y. Zhang, A. Feizi, P. Chung, W. Luo, S.R. Kandukuri, and A. Ozcan, “Wide-field Computational Imaging of Pathology Slides using Lensfree On-Chip Microscopy,” Sci. Trans. Med. **6**, 267ra175 (2014). [CrossRef]

**4. **E. McLeod, T.U. Dincer, M. Veli, Y.N. Ertas, C. Nguyen, W. Luo, A. Greenbaum, A. Feizi, and A. Ozcan, “High-Throughput and Label-Free Single Nanoparticle Sizing Based on Time-Resolved On-Chip Microscopy,” ACS Nano **9**, 3265–3273 (2015). [CrossRef] [PubMed]

**5. **T-W. Su, L. Xue, and A. Ozcan, “High-throughput lensfree 3D tracking of human sperms reveals rare statistics of helical trajectories,” Proc. Natl. Acad. Sci. **109**, 16018–16022 (2012). [CrossRef] [PubMed]

**6. **J. Weidling, S.O. Isikman, A. Greenbaum, A. Ozcan, and E. Botvinick, “Lensfree Computational Imaging of Capillary Morphogenesis within 3D Substrates,” J. Biomed. Opt. **17**, 126018 (2012). [CrossRef]

**7. **I. Pushkarsky, Y. Lyb, W. Weaver, T-W. Su, O. Mudanyali, A. Ozcan, and D. Di Carlo, “Automated single-cell motility analysis on a chip using lensfree microscopy,” Sci. Rep. **4**, 4717 (2014). [PubMed]

**8. **Casio cameras, http://www.casio.com/products/archive/Digital_Cameras/High-Speed/EX-F1/, (accessed June 2015).

**9. **G. Bub, M. Tecza, M. Helmes, P. Lee, and P. Kohl, “Temporal pixel multiplexing for simultaneous high-speed, high-resolution imaging,” Nat. Methods **7**(3), 209–211 (2010). [CrossRef] [PubMed]

**10. **A. Agrawal, M. Gupta, A. Veeraraghavan, and S. G. Narasimhan, “Optimal Coded Sampling for Temporal Super-Resolution,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2011).

**11. **D. Reddy, A. Veeraraghavan, and R. Chellappa, “P2C2: Programmable Pixel Compressive Camera for High Speed Imaging,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2011).

**12. **D. Liu, J. Gu, Y. Hitomi, M. Gupta, T. Mistunaga, and S. K. Nayar, “Efficient space-time sampling with pixel-wise coded exposure for high-speed imaging,” IEEE Trans. Pattern Anal. Mach. Intell. **36**(2), 248–260 (2014). [CrossRef]

**13. **J. Holloway, A. C. Sankaranarayanan, A. Veeraraghavan, and S. Tambe, “Flutter Shutter Video Camera for Compressive Sensing of Videos,” in IEEE Conference on Computational Photography (ICCP) (2012).

**14. **P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, “Coded aperture compressive temporal imaging,” Opt. Express **21**(9), 10526–10545 (2013). [CrossRef] [PubMed]

**15. **Z. Wang, L. Spinoulas, K. He, L. Tian, O. Cossairt, A. K. Katsaggelos, and H. Chen, “Compressive holographic video,” Opt. Express **25**(1), 250–262 (2017). [CrossRef] [PubMed]

**16. **L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature **516**, 74–77 (2014). [CrossRef]

**17. **S. Hrivňak, J. Uličný, L. Mikeš, A. Cecilia, E. Hamann, T. Baumbach, L. Švéda, Z. Zápražný, D. Korytár, E. Gimenez-Navarro, U. H. Wagner, C. Rau, H. Greven, and P. Vagovič, “Single-distance phase retrieval algorithm for Bragg Magnifier microscope,” Opt. Express **24**, 27753–27762 (2016). [CrossRef]

**18. **J. R. Fienup, “Reconstruction of an object from the modulus of its Fourier transform,” Opt. Lett. **3**, 27–29 (1978). [CrossRef] [PubMed]

**19. **J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. **21**, 2758–2769 (1982). [CrossRef]

**20. **Y. Rivenson, Y. Wu, H. Wang, Y. Zhang, A. Feizi, and A. Ozcan, “Sparsity-based multi-height phase recovery in holographic microscopy,” Sci. Rep. **6**, 37862 (2016). [CrossRef] [PubMed]

**21. **V. Elser, “Phase retrieval by iterated projections,” J. Opt. Soc. Am. A **20**(1), 40–55 (2003). [CrossRef]

**22. **E. J. Candes and M. B. Wakin, “An introduction to compressive sampling,” IEEE Sig. Proc. Mag. **25**, 21–30 (2008). [CrossRef]

**23. **E. J. Candes, “The restricted isometry property and its implications for compressed sensing,” Comptes Rendus Mathematique **346**(9), 589–592 (2008). [CrossRef]

**24. **D. J. Brady, K. Choi, D. L. Marks, R. Horisaki, and S. Lim, “Compressive holography,” Opt. Express **17**(15), 13040–13049 (2009). [CrossRef] [PubMed]

**25. **L. Denis, D. Lorenz, E. Thiebaut, C. Fournier, and D. Trede, “Inline hologram reconstruction with sparsity constraints,” Opt. Lett. **34**(22), 3475–3477 (2009). [CrossRef] [PubMed]

**26. **A. Szameit, Y. Shechtman, E. Osherovich, E. Bullkich, P. Sidorenko, H. Dana, S. Steiner, E. B. Kley, S. Gazit, T. Cohen-Hyams, S. Shoham, M. Zibulevsky, I. Yavneh, Y. C. Eldar, O. Cohen, and M. Segev, “Sparsity-based single-shot subwavelength coherent diffractive imaging,” Nat. Mater. **11**(5), 455–459 (2012). [CrossRef] [PubMed]

**27. **J. Hahn, S. Lim, K. Choi, R. Horisaki, and D. J. Brady, “Video-rate compressive holographic microscopic tomography,” Opt. Express **19**(8), 7289–7298 (2011). [CrossRef] [PubMed]

**28. **K. He, M. K. Sharma, and O. Cossairt, “High dynamic range coherent imaging using compressed sensing,” Opt. Express **23**(24), 30904–30916 (2015). [CrossRef] [PubMed]

**29. **Y. Rivenson, A. Stern, and B. Javidi, “Compressive Fresnel holography,” J. Display Technol. **6**(10), 506–509 (2010). [CrossRef]

**30. **S. Dong, Z. Bian, R. Shiradkar, and G. Zheng, “Sparsely sampled Fourier ptychography,” Opt. Express **22**(5), 5455–5464 (2014). [CrossRef] [PubMed]

**31. **S. Marchesini, H. He, H. N. Chapman, S. P. Hau-Riege, A. Noy, M. R. Howells, U. Weierstall, and J. C. H. Spence, “X-ray image reconstruction from a diffraction pattern alone,” Phys. Rev. B **68**, 140101 (2003). [CrossRef]

**32. **O. Mudanyali, D. Tseng, C. Oh, S. O. Isikman, I. Sencan, W. Bishara, C. Oztoprak, S. Seo, B. Khademhosseinia, and A. Ozcan, “Compact, light-weight and cost-effective microscope based on lensless incoherent holography for telemedicine applications,” Lab Chip **10**, 1417–1428 (2010). [CrossRef] [PubMed]

**33. **S. Marchesini, “A unified evaluation of iterative projection algorithms for phase retrieval,” Rev. Sci. Instrum. **78**, 011301 (2007). [CrossRef]

**34. **K. Jaganathan, Y. C. Eldar, and B. Hassibi, “Phase retrieval: An overview of recent developments,” arXiv preprint arXiv:1510.07713 (2016).

**35. **G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic imaging,” Nat. Photonics **7**, 739–745 (2013). [CrossRef]

**36. **X. Ou, R. Horstmeyer, C. Yang, and G. Zheng, “Quantitative phase imaging via Fourier ptychographic microscopy,” Opt. Lett. **38**(22), 4845–4848 (2013). [CrossRef] [PubMed]

**37. **S. G. Podorov, A. I. Bishop, D. M. Paganin, and K. M. Pavlov, “Re-sampling of inline holographic images for improved reconstruction resolution,” arXiv preprint arXiv:0911.0520 (2009).

**38. **M. Wang and J. Wu, “Iterative digital in-line holographic reconstruction with improved resolution by data interpolation,” in *SPIE/COS Photonics Asia* (International Society for Optics and Photonics, 2014), p. 927110.

**39. **S. Feng, M. Wang, and J. Wu, “Digital in-line holographic microscope based on the grating illumination with improved resolution by interpolation,” in *SPIE/COS Photonics Asia* (International Society for Optics and Photonics, 2016), p. 1002205.

**40. **R. Horstmeyer, R. Y. Chen, X. Ou, B. Ames, J. A. Tropp, and C. Yang, “Solving ptychography with a convex relaxation,” New J. Phys. **17**(5), 053044 (2015). [CrossRef] [PubMed]