Abstract

Holography encodes the three-dimensional (3D) information of a sample in the form of an intensity-only recording. However, to decode the original sample image from its hologram(s), autofocusing and phase recovery are needed, which are in general cumbersome and time-consuming to perform digitally. Here we demonstrate a convolutional neural network (CNN)-based approach that simultaneously performs autofocusing and phase recovery to significantly extend the depth of field (DOF) and improve the reconstruction speed in holographic imaging. For this, a CNN is trained by using pairs of randomly defocused back-propagated holograms and their corresponding in-focus phase-recovered images. After this training phase, the CNN takes a single back-propagated hologram of a 3D sample as input to rapidly achieve phase recovery and reconstruct an in-focus image of the sample over a significantly extended DOF. This deep-learning-based DOF extension method is non-iterative and significantly improves the algorithm time complexity of holographic image reconstruction from O(nm) to O(1), where n refers to the number of individual object points or particles within the sample volume, and m represents the focusing search space within which each object point or particle needs to be individually focused. These results highlight some of the unique opportunities created by data-enabled statistical image reconstruction methods powered by machine learning, and we believe that the presented approach can be broadly applicable to computationally extend the DOF of other imaging modalities.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. INTRODUCTION

Holography [1–8] encodes the three-dimensional (3D) information of a sample through interference of the object’s scattered light with a reference wave. Through this interference process, the intensity of a hologram that is recorded by, e.g., an image sensor, contains both the amplitude and phase information of the sample. Retrieval of this object information over a 3D sample space has been the subject of numerous holographic imaging techniques [2–4,9–12]. In a holographic image reconstruction process, there are two major steps. One of these is phase recovery, which is required since only the intensity information of the holographic pattern is recorded in a given digital hologram. In general, for an off-axis holographic imaging system [3,4,11,12], this phase-recovery step can be achieved more easily than in an in-line holography setup, at the cost of a reduction in the space-bandwidth product of the imaging system. For in-line holography, on the other hand, iterative phase-recovery approaches that utilize measurement diversity and/or prior information regarding the sample have been developed [6,7,13–22]. Regardless of the specific holographic setup that is employed, phase recovery needs to be performed to eliminate the twin-image- and self-interference-related spatial artifacts in the reconstructed phase and amplitude images of the sample.

The other crucial step in holographic image reconstruction is autofocusing, where the sample-to-sensor distances (i.e., relative heights) of different parts of the 3D object need to be numerically estimated. Autofocusing accuracy is vital to the quality of the reconstructed holographic image such that the phase-recovered optical field can be back-propagated to the correct object locations in 3D. Conventionally, to perform autofocusing, the hologram is digitally propagated to a set of axial distances, where a focusing criterion is evaluated at each resulting complex-valued image. This step is ideally performed after the phase-recovery step, but can also be applied before it, which might reduce the focusing accuracy [23]. Various autofocusing criteria have been successfully used in holographic imaging, including, e.g., the Tamura coefficient [24], the Gini index [25], and others [23,26–31]. Regardless of the specific focusing criterion that is used, and even with smart search strategies [32], the autofocusing step requires numerical back-propagation of optical fields and evaluation of a criterion at typically >10–20 axial distances, which is time-consuming for even a small field of view (FOV). Furthermore, if the sample has multiple objects at different depths, this procedure needs to be repeated for every object in the FOV.
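To make this cost concrete, the following is a minimal sketch of such a brute-force focus search (not the implementation used in this work), assuming NumPy, an angular spectrum propagator, the Tamura-of-the-gradient sharpness criterion [23], and placeholder function names and sign conventions:

```python
import numpy as np

def angular_spectrum_propagate(field, z, wavelength, dx):
    """Free-space propagation of a complex field over a distance z (angular spectrum method)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = np.maximum(1.0 / wavelength**2 - FX**2 - FY**2, 0.0)  # drop evanescent components
    H = np.exp(1j * 2.0 * np.pi * z * np.sqrt(arg))             # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

def tamura_of_gradient(field):
    """'Tamura of the gradient' criterion: sqrt(std/mean) of the amplitude-gradient magnitude."""
    gy, gx = np.gradient(np.abs(field))
    g = np.sqrt(gx**2 + gy**2)
    return np.sqrt(g.std() / g.mean())

def autofocus(hologram_amplitude, z_candidates, wavelength, dx):
    """Brute-force focus search: one back-propagation and one criterion evaluation per
    candidate distance (m evaluations), repeated per object in the FOV -> O(n*m) overall."""
    scores = [tamura_of_gradient(
                  angular_spectrum_propagate(hologram_amplitude.astype(np.complex128),
                                             -z, wavelength, dx))  # back-propagate by -z
              for z in z_candidates]
    return z_candidates[int(np.argmax(scores))]
```

For instance, a call such as `autofocus(np.sqrt(holo_intensity), np.linspace(0.8e-3, 1.2e-3, 20), 532e-9, 1.12e-6)` would require 20 propagations and criterion evaluations per object; the wavelength and pixel pitch here are placeholders, not the values used in our experiments.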

Some recent work has utilized deep learning to achieve autofocusing. Ren et al. formulated autofocusing as a classification problem and used a convolutional neural network (CNN) to provide rough estimates of the focusing distance, with each classification class (i.e., bin) having an axial range of 3 mm, which is more appropriate for imaging systems that do not need precise knowledge of the axial distance of each object [33]. As another example, Shimobaba et al. used a CNN regression model to achieve continuous autofocusing, also with a relatively coarse focusing accuracy of >5 mm [34]. In parallel to these recent results, CNN-based phase-recovery methods that use a single intensity-only hologram to reconstruct a two-dimensional object’s image have also been demonstrated [35–38]. However, in these former approaches, the neural networks were trained with in-focus images, where the sample-to-sensor (hologram) distances were precisely known a priori based on the imaging setup or were separately determined based on an autofocusing criterion. As a result, the reconstruction quality degraded rapidly outside the system depth of field (DOF); for example, for high-resolution imaging of a pathology slide (tissue section), a 4 μm deviation from the correct focus distance resulted in loss of resolution and distorted the sub-cellular structural details [38].

Here, we demonstrate a deep-learning-based holographic image reconstruction method that performs both autofocusing and phase recovery at the same time using a single hologram intensity, which significantly extends the DOF of the reconstructed image compared to previous approaches, while also improving the algorithm time complexity of holographic image reconstruction from O(nm) to O(1). We term this approach HIDEF (holographic imaging using deep learning for extended focus); it relies on training a CNN not only with in-focus image patches but also with randomly defocused holographic images, along with their corresponding in-focus, phase-recovered images used as reference. Overall, HIDEF significantly boosts the computational efficiency and the reconstruction speed of high-resolution holographic imaging by simultaneously performing autofocusing and phase recovery, and it increases the robustness of the image reconstruction process to potential misalignments in the optical setup by extending the DOF of the reconstructed images.

In addition to digital holography, the same deep-learning-based approach can also be applied to improve the DOF of incoherent imaging modalities including, e.g., fluorescence microscopy. This work and its results highlight some of the exciting opportunities created by deep-learning-based statistical image reconstruction approaches that provide unique solutions to challenging imaging problems, enabled by the availability of high-quality image data.

2. METHODS

The architecture of the neural network that we used is shown in Fig. 1. This CNN architecture is inspired by U-Net [39], and it consists of a down-sampling path as well as a symmetric up-sampling path (see Supplement 1 for a detailed description of the network). Through a chain of down-sampling operations, the network learns to capture and separate the true-image and twin-image spatial features of a holographic input field at different scales [22]. Additional shortcut paths (blue arrows in Fig. 1) are also included to pass the information forward through residual connections, which are useful to increase the training speed of the network [40]. This CNN architecture is implemented using TensorFlow, an open-source deep learning software package [41]. During the training phase, the CNN minimizes the l1-norm distance of the network output from the target/reference images and iteratively updates the network’s weights and biases using the adaptive moment estimation (Adam) optimizer [42] with a learning rate of 10^−4. Alternatively, a loss function that is based on the l2-norm distance could also be used to obtain similar results, with a potential penalty on spatial resolution. For each image data set, the ratio of training to cross-validation images was set to 14:3. The training and blind testing of the network were performed on a PC with a six-core 3.60 GHz CPU, 16 GB of RAM, and an Nvidia GeForce GTX 1080Ti GPU. On average, the training process takes ~40 h for, e.g., 200,000 iterations, corresponding to 100 epochs. After the training, the network inference time for a hologram patch of 512 × 512 pixels (with phase and amplitude channels) is <0.2 s.
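As an illustration of this training setup, below is a minimal TensorFlow/Keras sketch of a U-Net-style network with residual connections, trained with an l1-norm loss and the Adam optimizer at a learning rate of 10^−4. The layer counts, channel numbers, choice of two input/output channels, and function names are illustrative assumptions and do not reproduce the exact HIDEF architecture detailed in Supplement 1:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def res_block(x, ch):
    """Two 3x3 convolutions with a shortcut (residual) connection."""
    shortcut = layers.Conv2D(ch, 1, padding="same")(x)   # match channel count for the add
    y = layers.Conv2D(ch, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(ch, 3, padding="same")(y)
    return layers.Activation("relu")(layers.Add()([y, shortcut]))

def build_unet_like(size=512, base_ch=32, depth=4):
    """U-Net-style network with 2 input/output channels (e.g., the amplitude and phase,
    or real and imaginary, parts of the back-propagated field)."""
    inp = layers.Input((size, size, 2))
    x, skips = inp, []
    for d in range(depth):                                # down-sampling (decomposition) path
        x = res_block(x, base_ch * 2**d)
        skips.append(x)
        x = layers.MaxPool2D(2)(x)
    x = res_block(x, base_ch * 2**depth)
    for d in reversed(range(depth)):                      # symmetric up-sampling (expansion) path
        x = layers.Conv2DTranspose(base_ch * 2**d, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skips[d]])           # channel concatenation across the "U"
        x = res_block(x, base_ch * 2**d)
    out = layers.Conv2D(2, 1, padding="same")(x)          # in-focus, phase-recovered output
    return Model(inp, out)

model = build_unet_like()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="mean_absolute_error")                 # l1-norm distance to the reference
# model.fit(defocused_backprop_fields, in_focus_mhpr_targets,
#           epochs=100, validation_split=3/17)            # mirrors the 14:3 train/validation split
```

The commented-out fit call indicates how randomly defocused inputs would be paired with their phase-recovered reference targets; the 3/17 validation split mirrors the 14:3 ratio mentioned above.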


Fig. 1. HIDEF CNN, after its training, simultaneously achieves phase recovery and autofocusing, significantly extending the DOF of holographic image reconstruction. The network has a down-sampling decomposition path (green arrows) and a symmetric up-sampling expansion path (red arrows). The blue arrows mark the paths that skip through the convolutional layers (defining the residual connections). The numbers in italics represent the number of the input and output channels in these blocks at different levels. The orange arrows represent the connections between the down-sampling and up-sampling paths, where the channels of the output from the down-sampling block are concatenated with the output from the corresponding up-sampling block, doubling the channel numbers (see Supplement 1 for further details). ASP, angular spectrum propagation.


3. RESULTS AND DISCUSSION

To demonstrate the success of HIDEF, in our initial set of experiments, we used aerosols that were captured by a soft impactor surface and imaged by an on-chip holographic microscope, where the optical field scattered by each aerosol interferes with the directly transmitted light, forming an in-line hologram that is sampled using a CMOS imager without the use of any lenses [32]. The captured aerosols on the substrate are dispersed at multiple depths (z2) as a result of varying particle mass, flow speed, and flow direction during the air sampling period [32]. Based on this setup, the training image data set had 176 digitally cropped non-overlapping regions that only contained particles located at the same depth, which were further augmented fourfold to 704 regions by rotating them by 0, 90, 180, and 270 deg. For each region, we used a single hologram intensity and back-propagated it to 81 random distances, spanning an axial range of −100 to +100 µm away from the correct global focus, determined by autofocusing using the Tamura of the gradient criterion [23]. We then used these complex-valued fields as the input to the network. The target images used in the training phase (i.e., the reference images corresponding to the same samples) were reconstructed using multi-height phase recovery (MH-PR), which utilized eight different in-line holograms of the sample, captured at different z2 distances, to iteratively recover the phase information of the sample after an initial autofocusing step was performed for each height [14,43].
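A minimal sketch of how one such training pair might be generated is given below, reusing the angular spectrum propagator from the earlier sketch; the function names, the choice of real/imaginary input channels, and the sign convention are illustrative assumptions rather than the exact preprocessing used here:

```python
import numpy as np
# Reuses angular_spectrum_propagate() from the autofocusing sketch above.

def make_training_pair(hologram_amplitude, mhpr_target, z_focus, wavelength, dx,
                       dz_max=100e-6, rng=np.random.default_rng()):
    """Returns one randomly defocused network input and its in-focus, phase-recovered target.
    dz_max = 100 um reflects the +/-100 um defocus range used for this data set."""
    dz = rng.uniform(-dz_max, dz_max)
    field = angular_spectrum_propagate(hologram_amplitude.astype(np.complex128),
                                       -(z_focus + dz), wavelength, dx)
    x = np.stack([np.real(field), np.imag(field)], axis=-1)   # two-channel network input
    return x, mhpr_target, dz

def rotations_fourfold(patch):
    """Fourfold data augmentation by 0, 90, 180, and 270 deg rotations."""
    return [np.rot90(patch, k) for k in range(4)]
```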

After this training phase, we next blindly tested the HIDEF network on samples that had no overlap with the training or validation sets; these samples contained particles spread across different depths per image FOV. Figure 2 illustrates the success of HIDEF and how it simultaneously achieves an extended DOF and phase recovery. For a given in-line hologram of the captured aerosols [Fig. 2(a)], we first back-propagate the hologram intensity to a coarse distance of z2 = 1 mm away from the active area of the CMOS imager, which is roughly determined based on the effective substrate thickness used in the experiment. This initial back-propagated hologram yields a strong twin image because of the short propagation distance (1 mm) and the missing phase information. This complex-valued field, containing both the true and twin images, is then fed to the CNN. The output of the CNN is shown in Fig. 2(a), which demonstrates the extended DOF of HIDEF with various aerosols, spread over an axial range of 90 μm, that are all brought into focus at the network output. In addition to bringing all the particles contained in a single hologram to a sharp focus, the network also performed phase recovery, resulting in phase and amplitude images that are free from twin-image- and self-interference-related artifacts. Figures 2(b) and 2(c) also compare the results of the network output with respect to a standard MH-PR approach that used eight in-line holograms to iteratively retrieve the phase information of the sample. These comparisons clearly demonstrate both the significantly extended DOF and the phase-recovery performance of HIDEF, achieved using a single hologram intensity with a non-iterative inference time of <0.2 s. In comparison, the iterative MH-PR approach took 4 s for phase recovery and an additional 2.4 s for autofocusing to the individual objects at eight planes, totaling 6.4 s for the same FOV and object volume, i.e., >30-fold slower compared to HIDEF.
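A minimal sketch of this single-pass reconstruction step (again reusing the propagation routine from the earlier sketch) is shown below; the wavelength, pixel pitch, and function names are placeholder assumptions:

```python
import numpy as np
# Reuses angular_spectrum_propagate() from the earlier sketch; `model` is the trained network.

def reconstruct_extended_dof(hologram_intensity, model, z_coarse=1e-3,
                             wavelength=532e-9, dx=1.12e-6):
    """Single-shot reconstruction: one coarse back-propagation (~1 mm) followed by one
    network forward pass, i.e., O(1) processing per hologram patch."""
    amp = np.sqrt(np.maximum(hologram_intensity, 0.0))        # measured hologram amplitude
    field = angular_spectrum_propagate(amp.astype(np.complex128), -z_coarse, wavelength, dx)
    x = np.stack([np.real(field), np.imag(field)], axis=-1)[np.newaxis]  # shape (1, H, W, 2)
    y = model.predict(x, verbose=0)[0]
    return y[..., 0] + 1j * y[..., 1]                         # in-focus, phase-recovered field
```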


Fig. 2. Extended-DOF reconstruction of aerosols at different depths using HIDEF. (a) After its training, HIDEF CNN brings all the particles within the FOV into focus while also performing phase recovery. Each particle’s depth is color-coded with respect to the back-propagation distance (1 mm), as shown with the color bar on the right. (b) As a comparison, MH-PR images of the same FOV show that some of the particles come into focus at different depths and become invisible or distorted at other depths. For each particle’s arrow, the same color-coding is used as in (a). (c) The enhanced DOF of HIDEF is illustrated by tracking a particle’s amplitude full width at half-maximum (FWHM) as a function of the axial defocus distance (see Supplement 1 for details). HIDEF preserves the particle’s FWHM diameter and its correct image across a large DOF of >0.2 mm, which is expected since it was trained for this range of defocus (±0.1 mm). On the other hand, MH-PR results show a much more limited DOF, as also confirmed with the same particle’s amplitude images at different defocus distances, reported at the bottom. Also see Visualization 1 and Visualization 2 for a detailed comparison.


In these results, we used a coarse back-propagation step of 1 mm before feeding the CNN with a complex-valued field. An important feature of our approach is that this back-propagation distance, z2, does not need to be precise. In Visualization 1, we demonstrate the stability of the HIDEF output image as we vary the initial back-propagation distance, providing the same extended-DOF image regardless of the initial z2 selection. This is very much expected since the network was trained with defocused holograms spanning an axial defocus (dz) range of ±0.1 mm. For this specific FOV shown in Visualization 1, all the aerosols that were randomly spread in 3D experienced a defocus amount that is limited to ±0.1 mm (with respect to their correct axial distance in the sample volume). Beyond this range of defocusing, the HIDEF network cannot perform reliable image reconstruction since it was not trained for that [see, e.g., |dz| > 120 μm in Fig. 2(c)]. In fact, outside of its training range, HIDEF starts to hallucinate features, as illustrated in Visualization 2, which covers a much larger axial defocus range of −1 mm ≤ dz ≤ +1 mm, beyond what the network was trained for.

Interestingly, although the network was only trained with globally defocused hologram patches that only contain particles at the same depth/plane, it learned to individually focus various particles that lie at different depths within the same FOV [see Fig. 2(a)]. Based on this observation, one can argue that the HIDEF network does not perform the physical equivalent of free-space back-propagation of a certain hologram FOV to a focus plane. Instead, it statistically learns both in-focus and out-of-focus features of the input field, segments the out-of-focus parts, and replaces them with in-focus features in a parallel manner for a given hologram FOV. From an algorithm time-complexity perspective, this is a fixed processing time for a given hologram patch, i.e., a complexity of O(1), instead of the conventional O(nm), where n defines the number of individual object points or particles within the 3D sample volume, and m is the discrete focusing search space.

Based on the above argument, if the network statistically learns both in-focus and out-of-focus features of the sample, one might expect this approach to be limited to relatively sparse objects (such as the one shown in Fig. 2) that let the network learn out-of-focus sample features within the axial defocusing range used in the training. To test this hypothesis with non-sparse samples, we next applied HIDEF to the holograms of spatially connected objects such as tissue slices, where there is no opening or empty region within the sample plane. For this goal, based on the CNN architecture shown in Fig. 1, we trained the network with 1,119 hologram patches (corresponding to breast tissue sections used in histopathology), which were randomly propagated to 41 distances spanning an axial defocus range of −100 to +100 µm with respect to the focus plane. In this training phase, we used MH-PR images as our target/reference. Our blind testing results after the training of the network are summarized in Fig. 3 and Visualization 3, which clearly demonstrate that HIDEF can simultaneously perform both phase recovery and autofocusing for an arbitrary, non-sparse, and connected sample. In Fig. 3, we also see that the MH-PR images naturally exhibit a limited DOF: even at an axial defocus of 5 μm, some of the fine features at the tissue level are distorted. With more axial defocus, the MH-PR results show significant artificial ripples and loss of further details. HIDEF, on the other hand, is very robust to axial defocusing, and is capable of correctly focusing the entire image and its fine features while also rejecting the twin-image artifact at different defocus distances, up to the range that it was trained for (±0.1 mm); see Visualization 3.


Fig. 3. Comparison of HIDEF results against free-space back-propagation (CNN Input) and MH-PR (MH Phase Recovered) results, as a function of axial defocus distance (dz). The test sample is a thin section of a human breast tissue sample. The first two columns use a single intensity hologram, whereas the third column (MH-PR) uses eight in-line holograms of the same sample, acquired at different heights. These results clearly demonstrate that the HIDEF network simultaneously performs phase recovery and autofocusing over the axial defocus range that it was trained for (i.e., |dz| ≤ 100 μm in this case). Outside this training range (marked with red dz values), the network output is not reliable. See Visualization 3 and Visualization 4 for a detailed comparison. Scale bar: 20 µm.


However, as illustrated in Visualization 4 and Fig. 3, beyond its training range, HIDEF starts to hallucinate and create false features. A similar behavior is also observed in Visualization 2 for the aerosol images. There are several messages that one can take from these observations: the network does not learn or generalize a specific physical process such as wave propagation, hologram formation, or light interference; if it were to generalize such physical processes, one would not see sudden appearances of completely unrelated spatial features at the network output as one gradually goes outside the axial defocus range that it was trained for. For example, if one compares the network output within and outside the training range (see Visualization 3 and Visualization 4), there is no physical smearing or diffraction-related smoothing effect as one continues to defocus into a range that the network was not trained for. In this defocus range that is “new” to the network, it still produces relatively sharp but unrelated features, which indicates that it is not learning or generalizing the physics of wave propagation or interference. In fact, this highlights a unique aspect of the deep-learning-based and data-driven holographic image reconstruction framework that is presented here: the output images of the network are driven by the image transformation that the neural network was trained for (between the input and gold standard label images), and this learned transformation creates output images that deviate from wave-equation-based (physics-driven) solutions. An important example of this is shown in Fig. 2, where the network rapidly performed phase recovery and autofocusing on all the particles that lie at different depths within the sample volume, bringing all the particles into focus at the output image. This image transformation from the network input to the output deviates from wave-equation-based solutions that propagate a phase-recovered field to different planes, where some particles will be in focus and some others will be out of focus, driven by free-space wave propagation.

To further quantify the improvements made by HIDEF, next we compared the amplitude of the network output image against the MH-PR result at the correct focus of the tissue section and used the structural similarity (SSIM) index for this comparison, defined as [44]

$$\mathrm{SSIM}(U_1,U_2)=\frac{(2\mu_1\mu_2+C_1)(2\sigma_{1,2}+C_2)}{(\mu_1^2+\mu_2^2+C_1)(\sigma_1^2+\sigma_2^2+C_2)},$$
where U1 is the image to be evaluated, and U2 is the reference image, which in this case is the autofocused MH-PR result using eight in-line holograms. μp and σp are the mean and standard deviation of image Up (p = 1, 2), respectively, σ1,2 is the cross variance between the two images, and C1, C2 are stabilization constants used to prevent division by a small denominator. Based on these definitions, Fig. 4 shows the mean SSIM index calculated across an axial defocus range of −200 to +200 µm, averaged across 180 different breast tissue FOVs that were blindly tested. Consistent with the qualitative comparison reported in Fig. 3 and Visualization 3, HIDEF yields SSIM values that are significantly higher than those of the hologram intensities back-propagated to the exact focus distances, owing to the phase-recovery capability of the network. Furthermore, as shown in Fig. 4, compared to a CNN (with the same network architecture) that is trained using only in-focus holograms (with exact z2 values), HIDEF has a much higher SSIM index for defocused holograms across a large DOF of 0.2 mm. Interestingly, the network that is trained with in-focus holograms beats HIDEF at only one point in Fig. 4, i.e., for dz = 0 μm, which is expected, as this is what it was specifically trained for. However, this small difference in SSIM (0.78 versus 0.76) is visually negligible (see Visualization 3, the frame at dz = 0 μm).
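For reference, a minimal sketch of this global SSIM computation, following the equation above, is given below; the constants k1, k2, the dynamic range, and the global (non-windowed) form are assumptions of this illustration:

```python
import numpy as np

def ssim_global(u1, u2, dynamic_range=1.0, k1=0.01, k2=0.03):
    """Global SSIM of the image u1 against the reference u2, per the equation above;
    k1, k2 and the dynamic range set the stabilization constants C1, C2 (assumed values)."""
    c1, c2 = (k1 * dynamic_range) ** 2, (k2 * dynamic_range) ** 2
    mu1, mu2 = u1.mean(), u2.mean()
    s1, s2 = u1.std(), u2.std()
    s12 = ((u1 - mu1) * (u2 - mu2)).mean()                    # sigma_{1,2} cross term
    return ((2 * mu1 * mu2 + c1) * (2 * s12 + c2)) / \
           ((mu1**2 + mu2**2 + c1) * (s1**2 + s2**2 + c2))
```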

 figure: Fig. 4.

Fig. 4. SSIM values as a function of the axial defocus distance. Each one of these SSIM curves is averaged over 180 test FOVs (512 × 512 pixels) corresponding to thin sections of a human breast tissue sample. The results confirm the extended DOF of the HIDEF network output images, up to the axial defocus range that it was trained for.


Our results reported so far demonstrate the unique capabilities of HIDEF to simultaneously perform phase recovery and autofocusing, yielding at least an order of magnitude increase in the DOF of the reconstructed images, as also confirmed by Figs. 2 and 3 and Visualizations 1–3. To further extend the DOF of the neural network output beyond 0.2 mm, one can use a larger network (with more layers, weights, and biases) and/or more training data, containing severely defocused images as part of its learning phase. Certainly, the proof-of-concept DOF enhancement reported here is not an ultimate limit for the presented approach. In fact, to better emphasize this opportunity, we also trained a third neural network, following the HIDEF architecture of Fig. 1, with a training image set that contained randomly defocused holograms of breast tissue sections, with an axial defocus range of −0.2 to +0.2 mm. The performance comparison of this new network against the previous one (demonstrated in Fig. 3) is reported in Fig. 4. As shown in this comparison, by using a training image set that included even more defocused holograms, we were able to significantly extend the axial defocus range to 0.4 mm (i.e., ±0.2 mm) over which the HIDEF network successfully performed both autofocusing and phase recovery in the same output image.

In summary, we believe that the results reported in this work provide compelling evidence for some of the unique opportunities created by statistical image reconstruction methods enabled especially by deep learning, and the approach presented here can be widely applicable to computationally extend the DOF of other imaging modalities, including, e.g., fluorescence microscopy.

Funding

National Science Foundation (NSF); Howard Hughes Medical Institute (HHMI); Army Research Office (ARO).

Acknowledgment

The Ozcan Research Group at UCLA acknowledges the support of the NSF Engineering Research Center (ERC, PATHS-UP), the ARO, the ARO Life Sciences Division, the National Science Foundation (NSF) CBET Division Biophotonics Program, the NSF Emerging Frontiers in Research and Innovation (EFRI) Award, the NSF INSPIRE Award, NSF Partnerships for Innovation: Building Innovation Capacity (PFI:BIC) Program, the NIH, the HHMI, the Vodafone Americas Foundation, the Mary Kay Foundation, the Steven & Alexandra Cohen Foundation, and KAUST.

 

See Supplement 1 for supporting content.

REFERENCES

1. J. W. Goodman, Introduction to Fourier Optics, 3rd ed. (Roberts and Company, 2005).

2. W. Xu, M. H. Jericho, I. A. Meinertzhagen, and H. J. Kreuzer, “Digital in-line holography for biological applications,” Proc. Natl. Acad. Sci. USA 98, 11301–11305 (2001). [CrossRef]  

3. P. Marquet, B. Rappaz, P. J. Magistretti, E. Cuche, Y. Emery, T. Colomb, and C. Depeursinge, “Digital holographic microscopy: a noninvasive contrast imaging technique allowing quantitative visualization of living cells with subwavelength axial accuracy,” Opt. Lett. 30, 468–470 (2005). [CrossRef]  

4. G. Popescu, Y. Park, W. Choi, R. R. Dasari, M. S. Feld, and K. Badizadegan, “Imaging red blood cell dynamics by quantitative phase microscopy,” Blood Cells. Mol. Dis. 41, 10–16 (2008). [CrossRef]  

5. A. Greenbaum, W. Luo, T.-W. Su, Z. Göröcs, L. Xue, S. O. Isikman, A. F. Coskun, O. Mudanyali, and A. Ozcan, “Imaging without lenses: achievements and remaining challenges of wide-field on-chip microscopy,” Nat. Methods 9, 889–895 (2012). [CrossRef]  

6. O. Mudanyali, D. Tseng, C. Oh, S. O. Isikman, I. Sencan, W. Bishara, C. Oztoprak, S. Seo, B. Khademhosseini, and A. Ozcan, “Compact, light-weight and cost-effective microscope based on lensless incoherent holography for telemedicine applications,” Lab. Chip 10, 1417–1428 (2010). [CrossRef]  

7. Y. Wu and A. Ozcan, “Lensless digital holographic microscopy and its applications in biomedicine and environmental monitoring,” Methods 136, 4–16 (2017). [CrossRef]  

8. E. McLeod and A. Ozcan, “Unconventional methods of imaging: computational microscopy and compact implementations,” Rep. Prog. Phys. 79, 076001 (2016). [CrossRef]  

9. T.-W. Su, L. Xue, and A. Ozcan, “High-throughput lensfree 3D tracking of human sperms reveals rare statistics of helical trajectories,” Proc. Natl. Acad. Sci. USA 109, 16018–16022 (2012). [CrossRef]  

10. S. O. Isikman, W. Bishara, S. Mavandadi, W. Y. Frank, S. Feng, R. Lau, and A. Ozcan, “Lens-free optical tomographic microscope with a large imaging volume on a chip,” Proc. Natl. Acad. Sci. USA 108, 7296–7301 (2011). [CrossRef]  

11. E. Cuche, P. Marquet, and C. Depeursinge, “Spatial filtering for zero-order and twin-image elimination in digital off-axis holography,” Appl. Opt. 39, 4070–4075 (2000). [CrossRef]  

12. V. Bianco, B. Mandracchia, V. Marchesano, V. Pagliarulo, F. Olivieri, S. Coppola, M. Paturzo, and P. Ferraro, “Endowing a plain fluidic chip with micro-optics: a holographic microscope slide,” Light: Sci. Appl. 6, e17055 (2017). [CrossRef]  

13. J. Fienup, “Phase retrieval algorithms—a comparison,” Appl. Opt. 21, 2758–2769 (1982). [CrossRef]  

14. A. Greenbaum and A. Ozcan, “Maskless imaging of dense samples using pixel super-resolution based multi-height lensfree on-chip microscopy,” Opt. Express 20, 3129–3143 (2012). [CrossRef]  

15. A. Greenbaum, Y. Zhang, A. Feizi, P.-L. Chung, W. Luo, S. R. Kandukuri, and A. Ozcan, “Wide-field computational imaging of pathology slides using lens-free on-chip microscopy,” Sci. Transl. Med. 6, 267ra175 (2014). [CrossRef]  

16. L. J. Allen and M. P. Oxley, “Phase retrieval from series of images obtained by defocus variation,” Opt. Commun. 199, 65–75 (2001). [CrossRef]  

17. P. Almoro, G. Pedrini, and W. Osten, “Complete wavefront reconstruction using sequential intensity measurements of a volume speckle field,” Appl. Opt. 45, 8596 (2006). [CrossRef]  

18. W. Luo, A. Greenbaum, Y. Zhang, and A. Ozcan, “Synthetic aperture-based on-chip microscopy,” Light: Sci. Appl. 4, e261 (2015). [CrossRef]  

19. P. Bao, G. Situ, G. Pedrini, and W. Osten, “Lensless phase microscopy using phase retrieval with multiple illumination wavelengths,” Appl. Opt. 51, 5486–5494 (2012). [CrossRef]  

20. J. Min, B. Yao, M. Zhou, R. Guo, M. Lei, Y. Yang, D. Dan, S. Yan, and T. Peng, “Phase retrieval without unwrapping by single-shot dual-wavelength digital holography,” J. Opt. 16, 125409 (2014). [CrossRef]  

21. Y. Wu, Y. Zhang, W. Luo, and A. Ozcan, “Demosaiced pixel super-resolution for multiplexed holographic color imaging,” Sci. Rep. 6, 28601 (2016). [CrossRef]  

22. Y. Rivenson, Y. Wu, H. Wang, Y. Zhang, A. Feizi, and A. Ozcan, “Sparsity-based multi-height phase recovery in holographic microscopy,” Sci. Rep. 6, 37862 (2016). [CrossRef]  

23. Y. Zhang, H. Wang, Y. Wu, M. Tamamitsu, and A. Ozcan, “Edge sparsity criterion for robust holographic autofocusing,” Opt. Lett. 42, 3824–3827 (2017). [CrossRef]  

24. P. Memmolo, C. Distante, M. Paturzo, A. Finizio, P. Ferraro, and B. Javidi, “Automatic focusing in digital holography and its application to stretched holograms,” Opt. Lett. 36, 1945–1947 (2011). [CrossRef]  

25. P. Memmolo, M. Paturzo, B. Javidi, P. A. Netti, and P. Ferraro, “Refocusing criterion via sparsity measurements in digital holography,” Opt. Lett. 39, 4719–4722 (2014). [CrossRef]  

26. P. Langehanenberg, B. Kemper, D. Dirksen, and G. von Bally, “Autofocusing in digital holographic phase contrast microscopy on pure phase objects for live cell imaging,” Appl. Opt. 47, D176–D182 (2008). [CrossRef]  

27. F. Dubois, C. Schockaert, N. Callens, and C. Yourassowsky, “Focus plane detection criteria in digital holography microscopy by amplitude analysis,” Opt. Express 14, 5895–5908 (2006). [CrossRef]  

28. M. Liebling and M. Unser, “Autofocus for digital Fresnel holograms by use of a Fresnelet-sparsity criterion,” J. Opt. Soc. Am. A 21, 2424–2430 (2004). [CrossRef]  

29. F. Dubois, A. E. Mallahi, J. Dohet-Eraly, and C. Yourassowsky, “Refocus criterion for both phase and amplitude objects in digital holographic microscopy,” Opt. Lett. 39, 4286–4289 (2014). [CrossRef]  

30. M. Lyu, C. Yuan, D. Li, and G. Situ, “Fast autofocusing in digital holography using the magnitude differential,” Appl. Opt. 56, F152–F157 (2017). [CrossRef]  

31. M. Tamamitsu, Y. Zhang, H. Wang, Y. Wu, and A. Ozcan, “Comparison of Gini index and Tamura coefficient for holographic autofocusing based on the edge sparsity of the complex optical wavefront,” arXiv:1708.08055 (2017).

32. Y.-C. Wu, A. Shiledar, Y.-C. Li, J. Wong, S. Feng, X. Chen, C. Chen, K. Jin, S. Janamian, Z. Yang, Z. S. Ballard, Z. Göröcs, A. Feizi, and A. Ozcan, “Air quality monitoring using mobile microscopy and machine learning,” Light: Sci. Appl. 6, e17046 (2017). [CrossRef]  

33. Z. Ren, Z. Xu, and E. Y. Lam, “Autofocusing in digital holography using deep learning,” in Three-Dimensional and Multidimensional Microscopy: Image Acquisition and Processing XXV (International Society for Optics and Photonics, 2018), Vol. 10499, p. 104991V.

34. T. Shimobaba, T. Kakue, and T. Ito, “Convolutional neural network-based regression for depth prediction in digital holography,” arXiv:1802.00664 (2018).

35. Y. Rivenson, Y. Zhang, H. Gunaydin, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” arXiv:1705.04286 (2017).

36. A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” arXiv:1702.08516 (2017).

37. A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” Optica 4, 1117–1125 (2017). [CrossRef]  

38. Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light Sci. Appl. 7, 17141 (2018). [CrossRef]  

39. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation,” arXiv:1505.04597 (2015).

40. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 770–778.

41. M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, and M. Isard, “TensorFlow: a system for large-scale machine learning,” in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (2016), Vol. 16, pp. 265–283.

42. D. P. Kingma and J. Ba, “Adam: a method for stochastic optimization,” arXiv:1412.6980 (2014).

43. W. Luo, Y. Zhang, Z. Göröcs, A. Feizi, and A. Ozcan, “Propagation phasor approach for holographic image reconstruction,” Sci. Rep. 6, 22738 (2016). [CrossRef]  

44. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004). [CrossRef]  


Isikman, S. O.

A. Greenbaum, W. Luo, T.-W. Su, Z. Göröcs, L. Xue, S. O. Isikman, A. F. Coskun, O. Mudanyali, and A. Ozcan, “Imaging without lenses: achievements and remaining challenges of wide-field on-chip microscopy,” Nat. Methods 9, 889–895 (2012).
[Crossref]

S. O. Isikman, W. Bishara, S. Mavandadi, W. Y. Frank, S. Feng, R. Lau, and A. Ozcan, “Lens-free optical tomographic microscope with a large imaging volume on a chip,” Proc. Natl. Acad. Sci. USA 108, 7296–7301 (2011).
[Crossref]

O. Mudanyali, D. Tseng, C. Oh, S. O. Isikman, I. Sencan, W. Bishara, C. Oztoprak, S. Seo, B. Khademhosseini, and A. Ozcan, “Compact, light-weight and cost-effective microscope based on lensless incoherent holography for telemedicine applications,” Lab. Chip 10, 1417–1428 (2010).
[Crossref]

Ito, T.

T. Shimobaba, T. Kakue, and T. Ito, “Convolutional neural network-based regression for depth prediction in digital holography,” ArXiv1802.00664 Cs Eess (2018).

Janamian, S.

Y.-C. Wu, A. Shiledar, Y.-C. Li, J. Wong, S. Feng, X. Chen, C. Chen, K. Jin, S. Janamian, Z. Yang, Z. S. Ballard, Z. Göröcs, A. Feizi, and A. Ozcan, “Air quality monitoring using mobile microscopy and machine learning,” Light: Sci. Appl. 6, e17046 (2017).
[Crossref]

Javidi, B.

Jericho, M. H.

W. Xu, M. H. Jericho, I. A. Meinertzhagen, and H. J. Kreuzer, “Digital in-line holography for biological applications,” Proc. Natl. Acad. Sci. USA 98, 11301–11305 (2001).
[Crossref]

Jin, K.

Y.-C. Wu, A. Shiledar, Y.-C. Li, J. Wong, S. Feng, X. Chen, C. Chen, K. Jin, S. Janamian, Z. Yang, Z. S. Ballard, Z. Göröcs, A. Feizi, and A. Ozcan, “Air quality monitoring using mobile microscopy and machine learning,” Light: Sci. Appl. 6, e17046 (2017).
[Crossref]

Kakue, T.

T. Shimobaba, T. Kakue, and T. Ito, “Convolutional neural network-based regression for depth prediction in digital holography,” ArXiv1802.00664 Cs Eess (2018).

Kandukuri, S. R.

A. Greenbaum, Y. Zhang, A. Feizi, P.-L. Chung, W. Luo, S. R. Kandukuri, and A. Ozcan, “Wide-field computational imaging of pathology slides using lens-free on-chip microscopy,” Sci. Transl. Med. 6, 267ra175 (2014).
[Crossref]

Kemper, B.

Khademhosseini, B.

O. Mudanyali, D. Tseng, C. Oh, S. O. Isikman, I. Sencan, W. Bishara, C. Oztoprak, S. Seo, B. Khademhosseini, and A. Ozcan, “Compact, light-weight and cost-effective microscope based on lensless incoherent holography for telemedicine applications,” Lab. Chip 10, 1417–1428 (2010).
[Crossref]

Kingma, D. P.

D. P. Kingma and J. Ba, “Adam: A Method for Stochastic Optimization,” ArXiv E-Prints 1412, arXiv:1412.6980 (2014).

Kreuzer, H. J.

W. Xu, M. H. Jericho, I. A. Meinertzhagen, and H. J. Kreuzer, “Digital in-line holography for biological applications,” Proc. Natl. Acad. Sci. USA 98, 11301–11305 (2001).
[Crossref]

Lam, E. Y.

Z. Ren, Z. Xu, and E. Y. Lam, “Autofocusing in digital holography using deep learning,” in Three-Dimensional and Multidimensional Microscopy: Image Acquisition and Processing XXV (International Society for Optics and Photonics, 2018), Vol. 10499, p. 104991V.

Langehanenberg, P.

Lau, R.

S. O. Isikman, W. Bishara, S. Mavandadi, W. Y. Frank, S. Feng, R. Lau, and A. Ozcan, “Lens-free optical tomographic microscope with a large imaging volume on a chip,” Proc. Natl. Acad. Sci. USA 108, 7296–7301 (2011).
[Crossref]

Lee, J.

A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” Optica 4, 1117–1125 (2017).
[Crossref]

A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” ArXiv1702.08516 Phys. (2017).

Lei, M.

J. Min, B. Yao, M. Zhou, R. Guo, M. Lei, Y. Yang, D. Dan, S. Yan, and T. Peng, “Phase retrieval without unwrapping by single-shot dual-wavelength digital holography,” J. Opt. 16, 125409 (2014).
[Crossref]

Li, D.

Li, S.

A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” Optica 4, 1117–1125 (2017).
[Crossref]

A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” ArXiv1702.08516 Phys. (2017).

Li, Y.-C.

Y.-C. Wu, A. Shiledar, Y.-C. Li, J. Wong, S. Feng, X. Chen, C. Chen, K. Jin, S. Janamian, Z. Yang, Z. S. Ballard, Z. Göröcs, A. Feizi, and A. Ozcan, “Air quality monitoring using mobile microscopy and machine learning,” Light: Sci. Appl. 6, e17046 (2017).
[Crossref]

Liebling, M.

Luo, W.

Y. Wu, Y. Zhang, W. Luo, and A. Ozcan, “Demosaiced pixel super-resolution for multiplexed holographic color imaging,” Sci. Rep. 6, 28601 (2016).
[Crossref]

W. Luo, Y. Zhang, Z. Göröcs, A. Feizi, and A. Ozcan, “Propagation phasor approach for holographic image reconstruction,” Sci. Rep. 6, 22738 (2016).
[Crossref]

W. Luo, A. Greenbaum, Y. Zhang, and A. Ozcan, “Synthetic aperture-based on-chip microscopy,” Light: Sci. Appl. 4, e261 (2015).
[Crossref]

A. Greenbaum, Y. Zhang, A. Feizi, P.-L. Chung, W. Luo, S. R. Kandukuri, and A. Ozcan, “Wide-field computational imaging of pathology slides using lens-free on-chip microscopy,” Sci. Transl. Med. 6, 267ra175 (2014).
[Crossref]

A. Greenbaum, W. Luo, T.-W. Su, Z. Göröcs, L. Xue, S. O. Isikman, A. F. Coskun, O. Mudanyali, and A. Ozcan, “Imaging without lenses: achievements and remaining challenges of wide-field on-chip microscopy,” Nat. Methods 9, 889–895 (2012).
[Crossref]

Lyu, M.

Magistretti, P. J.

Mallahi, A. E.

Mandracchia, B.

V. Bianco, B. Mandracchia, V. Marchesano, V. Pagliarulo, F. Olivieri, S. Coppola, M. Paturzo, and P. Ferraro, “Endowing a plain fluidic chip with micro-optics: a holographic microscope slide,” Light: Sci. Appl. 6, e17055 (2017).
[Crossref]

Marchesano, V.

V. Bianco, B. Mandracchia, V. Marchesano, V. Pagliarulo, F. Olivieri, S. Coppola, M. Paturzo, and P. Ferraro, “Endowing a plain fluidic chip with micro-optics: a holographic microscope slide,” Light: Sci. Appl. 6, e17055 (2017).
[Crossref]

Marquet, P.

Mavandadi, S.

S. O. Isikman, W. Bishara, S. Mavandadi, W. Y. Frank, S. Feng, R. Lau, and A. Ozcan, “Lens-free optical tomographic microscope with a large imaging volume on a chip,” Proc. Natl. Acad. Sci. USA 108, 7296–7301 (2011).
[Crossref]

McLeod, E.

E. McLeod and A. Ozcan, “Unconventional methods of imaging: computational microscopy and compact implementations,” Rep. Prog. Phys. 79, 076001 (2016).
[Crossref]

Meinertzhagen, I. A.

W. Xu, M. H. Jericho, I. A. Meinertzhagen, and H. J. Kreuzer, “Digital in-line holography for biological applications,” Proc. Natl. Acad. Sci. USA 98, 11301–11305 (2001).
[Crossref]

Memmolo, P.

Min, J.

J. Min, B. Yao, M. Zhou, R. Guo, M. Lei, Y. Yang, D. Dan, S. Yan, and T. Peng, “Phase retrieval without unwrapping by single-shot dual-wavelength digital holography,” J. Opt. 16, 125409 (2014).
[Crossref]

Mudanyali, O.

A. Greenbaum, W. Luo, T.-W. Su, Z. Göröcs, L. Xue, S. O. Isikman, A. F. Coskun, O. Mudanyali, and A. Ozcan, “Imaging without lenses: achievements and remaining challenges of wide-field on-chip microscopy,” Nat. Methods 9, 889–895 (2012).
[Crossref]

O. Mudanyali, D. Tseng, C. Oh, S. O. Isikman, I. Sencan, W. Bishara, C. Oztoprak, S. Seo, B. Khademhosseini, and A. Ozcan, “Compact, light-weight and cost-effective microscope based on lensless incoherent holography for telemedicine applications,” Lab. Chip 10, 1417–1428 (2010).
[Crossref]

Netti, P. A.

Oh, C.

O. Mudanyali, D. Tseng, C. Oh, S. O. Isikman, I. Sencan, W. Bishara, C. Oztoprak, S. Seo, B. Khademhosseini, and A. Ozcan, “Compact, light-weight and cost-effective microscope based on lensless incoherent holography for telemedicine applications,” Lab. Chip 10, 1417–1428 (2010).
[Crossref]

Olivieri, F.

V. Bianco, B. Mandracchia, V. Marchesano, V. Pagliarulo, F. Olivieri, S. Coppola, M. Paturzo, and P. Ferraro, “Endowing a plain fluidic chip with micro-optics: a holographic microscope slide,” Light: Sci. Appl. 6, e17055 (2017).
[Crossref]

Osten, W.

Oxley, M. P.

L. J. Allen and M. P. Oxley, “Phase retrieval from series of images obtained by defocus variation,” Opt. Commun. 199, 65–75 (2001).
[Crossref]

Ozcan, A.

Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light Sci. Appl. 7, 17141 (2018).
[Crossref]

Y.-C. Wu, A. Shiledar, Y.-C. Li, J. Wong, S. Feng, X. Chen, C. Chen, K. Jin, S. Janamian, Z. Yang, Z. S. Ballard, Z. Göröcs, A. Feizi, and A. Ozcan, “Air quality monitoring using mobile microscopy and machine learning,” Light: Sci. Appl. 6, e17046 (2017).
[Crossref]

Y. Zhang, H. Wang, Y. Wu, M. Tamamitsu, and A. Ozcan, “Edge sparsity criterion for robust holographic autofocusing,” Opt. Lett. 42, 3824–3827 (2017).
[Crossref]

Y. Wu and A. Ozcan, “Lensless digital holographic microscopy and its applications in biomedicine and environmental monitoring,” Methods 136, 4–16 (2017).
[Crossref]

E. McLeod and A. Ozcan, “Unconventional methods of imaging: computational microscopy and compact implementations,” Rep. Prog. Phys. 79, 076001 (2016).
[Crossref]

Y. Wu, Y. Zhang, W. Luo, and A. Ozcan, “Demosaiced pixel super-resolution for multiplexed holographic color imaging,” Sci. Rep. 6, 28601 (2016).
[Crossref]

Y. Rivenson, Y. Wu, H. Wang, Y. Zhang, A. Feizi, and A. Ozcan, “Sparsity-based multi-height phase recovery in holographic microscopy,” Sci. Rep. 6, srep37862 (2016).
[Crossref]

W. Luo, Y. Zhang, Z. Göröcs, A. Feizi, and A. Ozcan, “Propagation phasor approach for holographic image reconstruction,” Sci. Rep. 6, 22738 (2016).
[Crossref]

W. Luo, A. Greenbaum, Y. Zhang, and A. Ozcan, “Synthetic aperture-based on-chip microscopy,” Light: Sci. Appl. 4, e261 (2015).
[Crossref]

A. Greenbaum, Y. Zhang, A. Feizi, P.-L. Chung, W. Luo, S. R. Kandukuri, and A. Ozcan, “Wide-field computational imaging of pathology slides using lens-free on-chip microscopy,” Sci. Transl. Med. 6, 267ra175 (2014).
[Crossref]

A. Greenbaum and A. Ozcan, “Maskless imaging of dense samples using pixel super-resolution based multi-height lensfree on-chip microscopy,” Opt. Express 20, 3129–3143 (2012).
[Crossref]

T.-W. Su, L. Xue, and A. Ozcan, “High-throughput lensfree 3D tracking of human sperms reveals rare statistics of helical trajectories,” Proc. Natl. Acad. Sci. USA 109, 16018–16022 (2012).
[Crossref]

A. Greenbaum, W. Luo, T.-W. Su, Z. Göröcs, L. Xue, S. O. Isikman, A. F. Coskun, O. Mudanyali, and A. Ozcan, “Imaging without lenses: achievements and remaining challenges of wide-field on-chip microscopy,” Nat. Methods 9, 889–895 (2012).
[Crossref]

S. O. Isikman, W. Bishara, S. Mavandadi, W. Y. Frank, S. Feng, R. Lau, and A. Ozcan, “Lens-free optical tomographic microscope with a large imaging volume on a chip,” Proc. Natl. Acad. Sci. USA 108, 7296–7301 (2011).
[Crossref]

O. Mudanyali, D. Tseng, C. Oh, S. O. Isikman, I. Sencan, W. Bishara, C. Oztoprak, S. Seo, B. Khademhosseini, and A. Ozcan, “Compact, light-weight and cost-effective microscope based on lensless incoherent holography for telemedicine applications,” Lab. Chip 10, 1417–1428 (2010).
[Crossref]

Y. Rivenson, Y. Zhang, H. Gunaydin, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” ArXiv1705.04286 Phys. (2017).

M. Tamamitsu, Y. Zhang, H. Wang, Y. Wu, and A. Ozcan, “Comparison of Gini index and Tamura coefficient for holographic autofocusing based on the edge sparsity of the complex optical wavefront,” ArXiv1708.08055 Phys. (2017).

Oztoprak, C.

O. Mudanyali, D. Tseng, C. Oh, S. O. Isikman, I. Sencan, W. Bishara, C. Oztoprak, S. Seo, B. Khademhosseini, and A. Ozcan, “Compact, light-weight and cost-effective microscope based on lensless incoherent holography for telemedicine applications,” Lab. Chip 10, 1417–1428 (2010).
[Crossref]

Pagliarulo, V.

V. Bianco, B. Mandracchia, V. Marchesano, V. Pagliarulo, F. Olivieri, S. Coppola, M. Paturzo, and P. Ferraro, “Endowing a plain fluidic chip with micro-optics: a holographic microscope slide,” Light: Sci. Appl. 6, e17055 (2017).
[Crossref]

Park, Y.

G. Popescu, Y. Park, W. Choi, R. R. Dasari, M. S. Feld, and K. Badizadegan, “Imaging red blood cell dynamics by quantitative phase microscopy,” Blood Cells. Mol. Dis. 41, 10–16 (2008).
[Crossref]

Paturzo, M.

Pedrini, G.

Peng, T.

J. Min, B. Yao, M. Zhou, R. Guo, M. Lei, Y. Yang, D. Dan, S. Yan, and T. Peng, “Phase retrieval without unwrapping by single-shot dual-wavelength digital holography,” J. Opt. 16, 125409 (2014).
[Crossref]

Popescu, G.

G. Popescu, Y. Park, W. Choi, R. R. Dasari, M. S. Feld, and K. Badizadegan, “Imaging red blood cell dynamics by quantitative phase microscopy,” Blood Cells. Mol. Dis. 41, 10–16 (2008).
[Crossref]

Rappaz, B.

Ren, S.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 770–778.

Ren, Z.

Z. Ren, Z. Xu, and E. Y. Lam, “Autofocusing in digital holography using deep learning,” in Three-Dimensional and Multidimensional Microscopy: Image Acquisition and Processing XXV (International Society for Optics and Photonics, 2018), Vol. 10499, p. 104991V.

Rivenson, Y.

Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light Sci. Appl. 7, 17141 (2018).
[Crossref]

Y. Rivenson, Y. Wu, H. Wang, Y. Zhang, A. Feizi, and A. Ozcan, “Sparsity-based multi-height phase recovery in holographic microscopy,” Sci. Rep. 6, srep37862 (2016).
[Crossref]

Y. Rivenson, Y. Zhang, H. Gunaydin, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” ArXiv1705.04286 Phys. (2017).

Ronneberger, O.

O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation,” ArXiv1505.04597 Cs (2015).

Schockaert, C.

Sencan, I.

O. Mudanyali, D. Tseng, C. Oh, S. O. Isikman, I. Sencan, W. Bishara, C. Oztoprak, S. Seo, B. Khademhosseini, and A. Ozcan, “Compact, light-weight and cost-effective microscope based on lensless incoherent holography for telemedicine applications,” Lab. Chip 10, 1417–1428 (2010).
[Crossref]

Seo, S.

O. Mudanyali, D. Tseng, C. Oh, S. O. Isikman, I. Sencan, W. Bishara, C. Oztoprak, S. Seo, B. Khademhosseini, and A. Ozcan, “Compact, light-weight and cost-effective microscope based on lensless incoherent holography for telemedicine applications,” Lab. Chip 10, 1417–1428 (2010).
[Crossref]

Sheikh, H. R.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004).
[Crossref]

Shiledar, A.

Y.-C. Wu, A. Shiledar, Y.-C. Li, J. Wong, S. Feng, X. Chen, C. Chen, K. Jin, S. Janamian, Z. Yang, Z. S. Ballard, Z. Göröcs, A. Feizi, and A. Ozcan, “Air quality monitoring using mobile microscopy and machine learning,” Light: Sci. Appl. 6, e17046 (2017).
[Crossref]

Shimobaba, T.

T. Shimobaba, T. Kakue, and T. Ito, “Convolutional neural network-based regression for depth prediction in digital holography,” ArXiv1802.00664 Cs Eess (2018).

Simoncelli, E. P.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004).
[Crossref]

Sinha, A.

A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” Optica 4, 1117–1125 (2017).
[Crossref]

A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” ArXiv1702.08516 Phys. (2017).

Situ, G.

Su, T.-W.

A. Greenbaum, W. Luo, T.-W. Su, Z. Göröcs, L. Xue, S. O. Isikman, A. F. Coskun, O. Mudanyali, and A. Ozcan, “Imaging without lenses: achievements and remaining challenges of wide-field on-chip microscopy,” Nat. Methods 9, 889–895 (2012).
[Crossref]

T.-W. Su, L. Xue, and A. Ozcan, “High-throughput lensfree 3D tracking of human sperms reveals rare statistics of helical trajectories,” Proc. Natl. Acad. Sci. USA 109, 16018–16022 (2012).
[Crossref]

Sun, J.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 770–778.

Tamamitsu, M.

Y. Zhang, H. Wang, Y. Wu, M. Tamamitsu, and A. Ozcan, “Edge sparsity criterion for robust holographic autofocusing,” Opt. Lett. 42, 3824–3827 (2017).
[Crossref]

M. Tamamitsu, Y. Zhang, H. Wang, Y. Wu, and A. Ozcan, “Comparison of Gini index and Tamura coefficient for holographic autofocusing based on the edge sparsity of the complex optical wavefront,” ArXiv1708.08055 Phys. (2017).

Teng, D.

Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light Sci. Appl. 7, 17141 (2018).
[Crossref]

Y. Rivenson, Y. Zhang, H. Gunaydin, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” ArXiv1705.04286 Phys. (2017).

Tseng, D.

O. Mudanyali, D. Tseng, C. Oh, S. O. Isikman, I. Sencan, W. Bishara, C. Oztoprak, S. Seo, B. Khademhosseini, and A. Ozcan, “Compact, light-weight and cost-effective microscope based on lensless incoherent holography for telemedicine applications,” Lab. Chip 10, 1417–1428 (2010).
[Crossref]

Unser, M.

von Bally, G.

Wang, H.

Y. Zhang, H. Wang, Y. Wu, M. Tamamitsu, and A. Ozcan, “Edge sparsity criterion for robust holographic autofocusing,” Opt. Lett. 42, 3824–3827 (2017).
[Crossref]

Y. Rivenson, Y. Wu, H. Wang, Y. Zhang, A. Feizi, and A. Ozcan, “Sparsity-based multi-height phase recovery in holographic microscopy,” Sci. Rep. 6, srep37862 (2016).
[Crossref]

M. Tamamitsu, Y. Zhang, H. Wang, Y. Wu, and A. Ozcan, “Comparison of Gini index and Tamura coefficient for holographic autofocusing based on the edge sparsity of the complex optical wavefront,” ArXiv1708.08055 Phys. (2017).

Wang, Z.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004).
[Crossref]

Wong, J.

Y.-C. Wu, A. Shiledar, Y.-C. Li, J. Wong, S. Feng, X. Chen, C. Chen, K. Jin, S. Janamian, Z. Yang, Z. S. Ballard, Z. Göröcs, A. Feizi, and A. Ozcan, “Air quality monitoring using mobile microscopy and machine learning,” Light: Sci. Appl. 6, e17046 (2017).
[Crossref]

Wu, Y.

Y. Zhang, H. Wang, Y. Wu, M. Tamamitsu, and A. Ozcan, “Edge sparsity criterion for robust holographic autofocusing,” Opt. Lett. 42, 3824–3827 (2017).
[Crossref]

Y. Wu and A. Ozcan, “Lensless digital holographic microscopy and its applications in biomedicine and environmental monitoring,” Methods 136, 4–16 (2017).
[Crossref]

Y. Wu, Y. Zhang, W. Luo, and A. Ozcan, “Demosaiced pixel super-resolution for multiplexed holographic color imaging,” Sci. Rep. 6, 28601 (2016).
[Crossref]

Y. Rivenson, Y. Wu, H. Wang, Y. Zhang, A. Feizi, and A. Ozcan, “Sparsity-based multi-height phase recovery in holographic microscopy,” Sci. Rep. 6, srep37862 (2016).
[Crossref]

M. Tamamitsu, Y. Zhang, H. Wang, Y. Wu, and A. Ozcan, “Comparison of Gini index and Tamura coefficient for holographic autofocusing based on the edge sparsity of the complex optical wavefront,” ArXiv1708.08055 Phys. (2017).

Wu, Y.-C.

Y.-C. Wu, A. Shiledar, Y.-C. Li, J. Wong, S. Feng, X. Chen, C. Chen, K. Jin, S. Janamian, Z. Yang, Z. S. Ballard, Z. Göröcs, A. Feizi, and A. Ozcan, “Air quality monitoring using mobile microscopy and machine learning,” Light: Sci. Appl. 6, e17046 (2017).
[Crossref]

Xu, W.

W. Xu, M. H. Jericho, I. A. Meinertzhagen, and H. J. Kreuzer, “Digital in-line holography for biological applications,” Proc. Natl. Acad. Sci. USA 98, 11301–11305 (2001).
[Crossref]

Xu, Z.

Z. Ren, Z. Xu, and E. Y. Lam, “Autofocusing in digital holography using deep learning,” in Three-Dimensional and Multidimensional Microscopy: Image Acquisition and Processing XXV (International Society for Optics and Photonics, 2018), Vol. 10499, p. 104991V.

Xue, L.

A. Greenbaum, W. Luo, T.-W. Su, Z. Göröcs, L. Xue, S. O. Isikman, A. F. Coskun, O. Mudanyali, and A. Ozcan, “Imaging without lenses: achievements and remaining challenges of wide-field on-chip microscopy,” Nat. Methods 9, 889–895 (2012).
[Crossref]

T.-W. Su, L. Xue, and A. Ozcan, “High-throughput lensfree 3D tracking of human sperms reveals rare statistics of helical trajectories,” Proc. Natl. Acad. Sci. USA 109, 16018–16022 (2012).
[Crossref]

Yan, S.

J. Min, B. Yao, M. Zhou, R. Guo, M. Lei, Y. Yang, D. Dan, S. Yan, and T. Peng, “Phase retrieval without unwrapping by single-shot dual-wavelength digital holography,” J. Opt. 16, 125409 (2014).
[Crossref]

Yang, Y.

J. Min, B. Yao, M. Zhou, R. Guo, M. Lei, Y. Yang, D. Dan, S. Yan, and T. Peng, “Phase retrieval without unwrapping by single-shot dual-wavelength digital holography,” J. Opt. 16, 125409 (2014).
[Crossref]

Yang, Z.

Y.-C. Wu, A. Shiledar, Y.-C. Li, J. Wong, S. Feng, X. Chen, C. Chen, K. Jin, S. Janamian, Z. Yang, Z. S. Ballard, Z. Göröcs, A. Feizi, and A. Ozcan, “Air quality monitoring using mobile microscopy and machine learning,” Light: Sci. Appl. 6, e17046 (2017).
[Crossref]

Yao, B.

J. Min, B. Yao, M. Zhou, R. Guo, M. Lei, Y. Yang, D. Dan, S. Yan, and T. Peng, “Phase retrieval without unwrapping by single-shot dual-wavelength digital holography,” J. Opt. 16, 125409 (2014).
[Crossref]

Yourassowsky, C.

Yuan, C.

Zhang, X.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 770–778.

Zhang, Y.

Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light Sci. Appl. 7, 17141 (2018).
[Crossref]

Y. Zhang, H. Wang, Y. Wu, M. Tamamitsu, and A. Ozcan, “Edge sparsity criterion for robust holographic autofocusing,” Opt. Lett. 42, 3824–3827 (2017).
[Crossref]

Y. Wu, Y. Zhang, W. Luo, and A. Ozcan, “Demosaiced pixel super-resolution for multiplexed holographic color imaging,” Sci. Rep. 6, 28601 (2016).
[Crossref]

Y. Rivenson, Y. Wu, H. Wang, Y. Zhang, A. Feizi, and A. Ozcan, “Sparsity-based multi-height phase recovery in holographic microscopy,” Sci. Rep. 6, srep37862 (2016).
[Crossref]

W. Luo, Y. Zhang, Z. Göröcs, A. Feizi, and A. Ozcan, “Propagation phasor approach for holographic image reconstruction,” Sci. Rep. 6, 22738 (2016).
[Crossref]

W. Luo, A. Greenbaum, Y. Zhang, and A. Ozcan, “Synthetic aperture-based on-chip microscopy,” Light: Sci. Appl. 4, e261 (2015).
[Crossref]

A. Greenbaum, Y. Zhang, A. Feizi, P.-L. Chung, W. Luo, S. R. Kandukuri, and A. Ozcan, “Wide-field computational imaging of pathology slides using lens-free on-chip microscopy,” Sci. Transl. Med. 6, 267ra175 (2014).
[Crossref]

M. Tamamitsu, Y. Zhang, H. Wang, Y. Wu, and A. Ozcan, “Comparison of Gini index and Tamura coefficient for holographic autofocusing based on the edge sparsity of the complex optical wavefront,” ArXiv1708.08055 Phys. (2017).

Y. Rivenson, Y. Zhang, H. Gunaydin, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” ArXiv1705.04286 Phys. (2017).

Zhou, M.

J. Min, B. Yao, M. Zhou, R. Guo, M. Lei, Y. Yang, D. Dan, S. Yan, and T. Peng, “Phase retrieval without unwrapping by single-shot dual-wavelength digital holography,” J. Opt. 16, 125409 (2014).
[Crossref]

Appl. Opt. (6)

Blood Cells. Mol. Dis. (1)

G. Popescu, Y. Park, W. Choi, R. R. Dasari, M. S. Feld, and K. Badizadegan, “Imaging red blood cell dynamics by quantitative phase microscopy,” Blood Cells. Mol. Dis. 41, 10–16 (2008).
[Crossref]

IEEE Trans. Image Process. (1)

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004).
[Crossref]

J. Opt. (1)

J. Min, B. Yao, M. Zhou, R. Guo, M. Lei, Y. Yang, D. Dan, S. Yan, and T. Peng, “Phase retrieval without unwrapping by single-shot dual-wavelength digital holography,” J. Opt. 16, 125409 (2014).
[Crossref]

J. Opt. Soc. Am. A (1)

Lab. Chip (1)

O. Mudanyali, D. Tseng, C. Oh, S. O. Isikman, I. Sencan, W. Bishara, C. Oztoprak, S. Seo, B. Khademhosseini, and A. Ozcan, “Compact, light-weight and cost-effective microscope based on lensless incoherent holography for telemedicine applications,” Lab. Chip 10, 1417–1428 (2010).
[Crossref]

Light Sci. Appl. (1)

Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light Sci. Appl. 7, 17141 (2018).
[Crossref]

Light: Sci. Appl. (3)

W. Luo, A. Greenbaum, Y. Zhang, and A. Ozcan, “Synthetic aperture-based on-chip microscopy,” Light: Sci. Appl. 4, e261 (2015).
[Crossref]

V. Bianco, B. Mandracchia, V. Marchesano, V. Pagliarulo, F. Olivieri, S. Coppola, M. Paturzo, and P. Ferraro, “Endowing a plain fluidic chip with micro-optics: a holographic microscope slide,” Light: Sci. Appl. 6, e17055 (2017).
[Crossref]

Y.-C. Wu, A. Shiledar, Y.-C. Li, J. Wong, S. Feng, X. Chen, C. Chen, K. Jin, S. Janamian, Z. Yang, Z. S. Ballard, Z. Göröcs, A. Feizi, and A. Ozcan, “Air quality monitoring using mobile microscopy and machine learning,” Light: Sci. Appl. 6, e17046 (2017).
[Crossref]

Methods (1)

Y. Wu and A. Ozcan, “Lensless digital holographic microscopy and its applications in biomedicine and environmental monitoring,” Methods 136, 4–16 (2017).
[Crossref]

Nat. Methods (1)

A. Greenbaum, W. Luo, T.-W. Su, Z. Göröcs, L. Xue, S. O. Isikman, A. F. Coskun, O. Mudanyali, and A. Ozcan, “Imaging without lenses: achievements and remaining challenges of wide-field on-chip microscopy,” Nat. Methods 9, 889–895 (2012).
[Crossref]

Opt. Commun. (1)

L. J. Allen and M. P. Oxley, “Phase retrieval from series of images obtained by defocus variation,” Opt. Commun. 199, 65–75 (2001).
[Crossref]

Opt. Express (2)

Opt. Lett. (5)

Optica (1)

Proc. Natl. Acad. Sci. USA (3)

W. Xu, M. H. Jericho, I. A. Meinertzhagen, and H. J. Kreuzer, “Digital in-line holography for biological applications,” Proc. Natl. Acad. Sci. USA 98, 11301–11305 (2001).
[Crossref]

T.-W. Su, L. Xue, and A. Ozcan, “High-throughput lensfree 3D tracking of human sperms reveals rare statistics of helical trajectories,” Proc. Natl. Acad. Sci. USA 109, 16018–16022 (2012).
[Crossref]

S. O. Isikman, W. Bishara, S. Mavandadi, W. Y. Frank, S. Feng, R. Lau, and A. Ozcan, “Lens-free optical tomographic microscope with a large imaging volume on a chip,” Proc. Natl. Acad. Sci. USA 108, 7296–7301 (2011).
[Crossref]

Rep. Prog. Phys. (1)

E. McLeod and A. Ozcan, “Unconventional methods of imaging: computational microscopy and compact implementations,” Rep. Prog. Phys. 79, 076001 (2016).
[Crossref]

Sci. Rep. (3)

Y. Wu, Y. Zhang, W. Luo, and A. Ozcan, “Demosaiced pixel super-resolution for multiplexed holographic color imaging,” Sci. Rep. 6, 28601 (2016).
[Crossref]

Y. Rivenson, Y. Wu, H. Wang, Y. Zhang, A. Feizi, and A. Ozcan, “Sparsity-based multi-height phase recovery in holographic microscopy,” Sci. Rep. 6, srep37862 (2016).
[Crossref]

W. Luo, Y. Zhang, Z. Göröcs, A. Feizi, and A. Ozcan, “Propagation phasor approach for holographic image reconstruction,” Sci. Rep. 6, 22738 (2016).
[Crossref]

Sci. Transl. Med. (1)

A. Greenbaum, Y. Zhang, A. Feizi, P.-L. Chung, W. Luo, S. R. Kandukuri, and A. Ozcan, “Wide-field computational imaging of pathology slides using lens-free on-chip microscopy,” Sci. Transl. Med. 6, 267ra175 (2014).
[Crossref]

Other (10)

J. W. Goodman, Introduction to Fourier Optics, 3rd ed. (Roberts and Company, 2005).

M. Tamamitsu, Y. Zhang, H. Wang, Y. Wu, and A. Ozcan, “Comparison of Gini index and Tamura coefficient for holographic autofocusing based on the edge sparsity of the complex optical wavefront,” ArXiv1708.08055 Phys. (2017).

Z. Ren, Z. Xu, and E. Y. Lam, “Autofocusing in digital holography using deep learning,” in Three-Dimensional and Multidimensional Microscopy: Image Acquisition and Processing XXV (International Society for Optics and Photonics, 2018), Vol. 10499, p. 104991V.

T. Shimobaba, T. Kakue, and T. Ito, “Convolutional neural network-based regression for depth prediction in digital holography,” ArXiv1802.00664 Cs Eess (2018).

Y. Rivenson, Y. Zhang, H. Gunaydin, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” ArXiv1705.04286 Phys. (2017).

A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” ArXiv1702.08516 Phys. (2017).

O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation,” ArXiv1505.04597 Cs (2015).

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 770–778.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, and M. Isard, “TensorFlow: a system for large-scale machine learning,” in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (2016), Vol. 16, pp. 265–283.

D. P. Kingma and J. Ba, “Adam: A Method for Stochastic Optimization,” ArXiv E-Prints 1412, arXiv:1412.6980 (2014).

Supplementary Material (5)

Supplement 1: Supplemental document
Visualization 1: Comparison of particle images as a function of the axial defocus distance
Visualization 2: Comparison of particle images as a function of the axial defocus distance, also containing the range that the HIDEF network was not trained for
Visualization 3: Comparison of breast tissue images as a function of the axial defocus distance
Visualization 4: Comparison of breast tissue images as a function of the axial defocus distance, also containing the range that the HIDEF network was not trained for

Figures (4)

Fig. 1. The HIDEF CNN, after its training, simultaneously achieves phase recovery and autofocusing, significantly extending the DOF of holographic image reconstruction. The network has a down-sampling decomposition path (green arrows) and a symmetric up-sampling expansion path (red arrows). The blue arrows mark the paths that skip through the convolutional layers (defining the residual connections). The italic numbers represent the numbers of input and output channels of these blocks at different levels. The orange arrows represent the connections between the down-sampling and up-sampling paths, where the output channels of a down-sampling block are concatenated with the output of the corresponding up-sampling block, doubling the channel count (see Supplement 1 for further details). ASP, angular spectrum propagation. (An illustrative code sketch of this encoder-decoder layout follows these figure captions.)
Fig. 2. Extended-DOF reconstruction of aerosols at different depths using HIDEF. (a) After its training, the HIDEF CNN brings all the particles within the FOV into focus while also performing phase recovery. Each particle’s depth is color-coded with respect to the back-propagation distance (1 mm), as shown with the color bar on the right. (b) As a comparison, MH-PR images of the same FOV show that some of the particles come into focus at different depths and become invisible or distorted at other depths. For each particle’s arrow, the same color-coding is used as in (a). (c) The enhanced DOF of HIDEF is illustrated by tracking a particle’s amplitude full width at half-maximum (FWHM) as a function of the axial defocus distance (see Supplement 1 for details). HIDEF preserves the particle’s FWHM diameter and its correct image across a large DOF of >0.2 mm, which is expected since it was trained for this defocus range (±0.1 mm). In contrast, the MH-PR results show a much more limited DOF, as also confirmed by the same particle’s amplitude images at different defocus distances, reported at the bottom. Also see Visualization 1 and Visualization 2 for a detailed comparison. (An illustrative FWHM measurement sketch follows these figure captions.)
Fig. 3. Comparison of HIDEF results against free-space back-propagation (CNN Input) and MH-PR (MH Phase Recovered) results, as a function of the axial defocus distance (dz). The test sample is a thin section of a human breast tissue sample. The first two columns use a single intensity hologram, whereas the third column (MH-PR) uses eight in-line holograms of the same sample, acquired at different heights. These results clearly demonstrate that the HIDEF network simultaneously performs phase recovery and autofocusing over the axial defocus range that it was trained for (i.e., |dz| ≤ 100 μm in this case). Outside this training range (marked with red dz values), the network output is not reliable. See Visualization 3 and Visualization 4 for a detailed comparison. Scale bar: 20 µm.
Fig. 4. SSIM values as a function of the axial defocus distance. Each SSIM curve is averaged over 180 test FOVs (512 × 512 pixels each) corresponding to thin sections of a human breast tissue sample. The results confirm the extended DOF of the HIDEF network output images, up to the axial defocus range that it was trained for.
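To make the encoder-decoder structure described in the Fig. 1 caption more concrete, the following is a minimal, illustrative sketch of a U-Net-style network with residual convolutional blocks and channel-concatenating skip connections, written with the TensorFlow Keras API. The depth, channel counts, and the two-channel input and output (e.g., real and imaginary parts of the back-propagated field) are assumptions made for illustration, not the exact HIDEF configuration (see Supplement 1 for the actual architecture).

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def residual_conv_block(x, filters):
    """Two 3x3 convolutions with a shortcut (residual) connection around them."""
    shortcut = layers.Conv2D(filters, 1, padding="same")(x)  # match the channel count
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    return layers.Activation("relu")(layers.Add()([shortcut, y]))

def tiny_unet(input_channels=2, base_filters=32):
    """Down-sampling path, up-sampling path, and concatenation skips (illustrative only)."""
    inp = layers.Input(shape=(None, None, input_channels))
    # down-sampling (decomposition) path
    d1 = residual_conv_block(inp, base_filters)
    p1 = layers.MaxPooling2D(2)(d1)
    d2 = residual_conv_block(p1, base_filters * 2)
    p2 = layers.MaxPooling2D(2)(d2)
    bottom = residual_conv_block(p2, base_filters * 4)
    # up-sampling (expansion) path; concatenation doubles the channel count at each level
    u2 = layers.UpSampling2D(2)(bottom)
    u2 = residual_conv_block(layers.Concatenate()([u2, d2]), base_filters * 2)
    u1 = layers.UpSampling2D(2)(u2)
    u1 = residual_conv_block(layers.Concatenate()([u1, d1]), base_filters)
    out = layers.Conv2D(2, 1, padding="same")(u1)  # e.g., real and imaginary output channels
    return Model(inp, out)

model = tiny_unet()
model.summary()
```

The concatenation of down-sampling and up-sampling outputs in this sketch plays the role of the orange arrows in Fig. 1, which is why the channel count doubles at the input of each up-sampling block.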
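The DOF comparison in Fig. 2(c) relies on the full width at half-maximum (FWHM) of a particle's amplitude profile. Below is a minimal NumPy sketch of how such an FWHM can be measured from a 1D cross-section; the sub-pixel linear interpolation at the half level and the pixel-size argument are illustrative assumptions rather than the paper's exact procedure (see Supplement 1).

```python
import numpy as np

def amplitude_fwhm(profile, pixel_size_um=1.0):
    """FWHM of a 1D amplitude cross-section through a particle (illustrative sketch)."""
    p = np.asarray(profile, dtype=float)
    half = p.min() + 0.5 * (p.max() - p.min())  # half level above the local background
    above = np.flatnonzero(p >= half)
    if above.size < 2:
        return 0.0
    left, right = above[0], above[-1]
    # linear interpolation on both edges for sub-pixel accuracy
    x_left = left if left == 0 else (left - 1) + (half - p[left - 1]) / (p[left] - p[left - 1])
    x_right = right if right == p.size - 1 else right + (p[right] - half) / (p[right] - p[right + 1])
    return (x_right - x_left) * pixel_size_um

# Example: a Gaussian-like spot with sigma = 4 px should give FWHM ≈ 2.355 * 4 ≈ 9.4 px
x = np.arange(-50, 51)
print(amplitude_fwhm(np.exp(-x**2 / (2 * 4.0**2)), pixel_size_um=1.0))
```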

Equations (1)

$$\mathrm{SSIM}(U_1, U_2) = \frac{(2\mu_1\mu_2 + C_1)\,(2\sigma_{1,2} + C_2)}{(\mu_1^2 + \mu_2^2 + C_1)\,(\sigma_1^2 + \sigma_2^2 + C_2)},$$
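As a minimal sketch of how the SSIM defined above can be evaluated, the NumPy function below computes a single global SSIM value between two images; the constants C1 and C2 and the use of global (non-windowed) statistics are illustrative choices and may differ from the settings used for Fig. 4.

```python
import numpy as np

def global_ssim(u1, u2, c1=1e-4, c2=9e-4):
    """Global SSIM between two images, following the expression above (illustrative constants)."""
    u1 = np.asarray(u1, dtype=float)
    u2 = np.asarray(u2, dtype=float)
    mu1, mu2 = u1.mean(), u2.mean()            # means (mu_1, mu_2)
    var1, var2 = u1.var(), u2.var()            # variances (sigma_1^2, sigma_2^2)
    cov12 = np.mean((u1 - mu1) * (u2 - mu2))   # cross-covariance (sigma_{1,2})
    return ((2 * mu1 * mu2 + c1) * (2 * cov12 + c2)) / \
           ((mu1**2 + mu2**2 + c1) * (var1 + var2 + c2))

# Identical images give SSIM = 1
img = np.random.rand(512, 512)
print(global_ssim(img, img))
```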
