Lens-free holographic microscopy (LFHM) provides a cost-effective tool for large field-of-view imaging in various biomedical applications. However, due to the unit optical magnification, its spatial resolution is limited by the pixel size of the imager. Pixel super-resolution (PSR) techniques tackle this problem by using a series of sub-pixel shifted low-resolution (LR) lens-free holograms to form a high-resolution (HR) hologram. Conventional iterative PSR methods require a large number of measurements and a time-consuming reconstruction process, limiting the throughput of LFHM in practice. Here we report a deep learning-based PSR approach to enhance the resolution of LFHM. Compared with existing PSR methods, our neural network-based approach outputs the HR hologram in an end-to-end fashion and maintains consistent resolution improvement with a reduced number of LR holograms. Moreover, by exploiting the resolution degradation model of the imaging process, the network can be trained with a data set synthesized from the LR hologram itself without resorting to an HR ground truth. We validated the effectiveness and robustness of our method by imaging various types of samples using a single network trained on an entirely different data set. This deep learning-based PSR approach can significantly accelerate both the data acquisition and the HR hologram reconstruction processes, therefore providing a practical solution to fast, lens-free, super-resolution imaging.
© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
Lens-free holographic microscopy (LFHM) is a rapidly developing technology based on the principle of in-line holography [1–3]. Because of its lensless design and unit optical magnification, LFHM offers a large and scalable field-of-view (FOV) and is more compact and cost-effective than conventional lens-based microscopy. Recently, it has found numerous applications in high-throughput screening, point-of-care diagnosis and lab-on-a-chip technologies [4–10]. Meanwhile, the growing demand for visualizing fine details in biological samples, such as subcellular structures, poses a challenge for LFHM since its spatial resolution is limited by the pixel size of the image sensor used for recording the holograms. According to the Nyquist-Shannon sampling theorem, details of the hologram smaller than the pixel size are lost during image acquisition. To overcome this limitation, various pixel super-resolution (PSR) techniques have been proposed to extract the high-resolution (HR) information of the hologram via multi-frame imaging [11–17]. For example, a series of low-resolution (LR) holograms with sub-pixel shifts in the lateral directions can be combined to reconstruct the HR hologram. Despite their success in achieving sub-pixel resolution, these approaches often require a large number of measurements or a time-consuming iterative algorithm for HR image synthesis. As such, PSR techniques reduce the achievable speed or throughput, and hence the appeal, of LFHM for many applications. To address this challenge, Song et al. proposed a sparsity-based PSR method in which the spatial resolution was improved with a single-frame measurement by using compressed sensing. However, such a method is only applicable to sparse samples, which prevents it from being used in digital pathology and dense cell culture imaging.
On the other hand, machine learning, in particular deep learning, has achieved considerable success in the image super-resolution task in computer vision [20–25]. Compared with conventional approaches, deep learning-based image super-resolution uses a convolutional neural network (CNN) to learn an end-to-end mapping between the LR image and the HR image without any handcrafted model, making it a promising approach for super-resolution optical microscopy. Recently, deep learning techniques have been successfully applied to enhance the resolution and accelerate the acquisition speed for different imaging modalities [27–32]. Nevertheless, the following challenges remain unaddressed. First, the training of the CNN described in these works usually demands a data set comprising pairs of LR images and the corresponding HR ground truth. In practice, such ground truth is either unavailable due to hardware limitations or too cumbersome and expensive to produce. For example, a high numerical aperture (NA) objective may be required to generate the HR images. Moreover, an image registration algorithm needs to be implemented to align each LR-HR image pair. Second, the performance of the trained network can deteriorate when it is tested with images significantly different from the training data set, limiting its applicability in real-world problems since the network needs to be re-trained for each type of sample. To improve the generality of the deep learning approach, the authors of [28,31] suggested training a universal network based on a data set comprising various types of samples or training a set of networks to deal with different scenarios. However, such methods can result in a more complicated neural network structure and increased training and inference time.
In this paper, we propose a robust deep learning-based PSR approach that achieves sub-pixel resolution in LFHM in a non-iterative fashion and without the need for an HR ground truth. Specifically, we train a CNN to take a stack of sub-pixel shifted LR holograms as the input and produce the corresponding HR hologram as the output, from which the HR image of the object is reconstructed (Fig. 1). The training data set is generated from a single-frame hologram captured by the LFHM, which itself serves as the “HR ground truth”. The corresponding LR image sequence is synthesized by digitally shifting and down-sampling this hologram, i.e., by mimicking the acquisition process of sub-pixel shifted LR holograms. Our assumption is that the CNN can learn the inverse mapping of the resolution degradation model of the imaging process instead of inferring the HR details from the LR image. Since the resolution degradation model is independent of the acquired images, a neural network trained in this way can achieve robust super-resolution even if the HR ground truth is not available or the testing image differs significantly from the images in the training data set. To demonstrate this, we applied a CNN trained on a neural cell hologram to predict the HR holograms of different types of samples, including a USAF resolution test chart, polystyrene beads and a lily anther. By comparing the results with those obtained by an iterative PSR method, we validated the effectiveness and robustness of our approach. Furthermore, we observed similar resolution enhancement from the network when the number of LR frames was reduced from 25 to 9, whereas the iterative method suffered a noticeable drop in resolution.
Our work demonstrates a promising deep learning-based PSR approach for super-resolution LFHM. Compared with conventional iterative PSR methods, it can significantly accelerate both the HR hologram reconstruction and the data acquisition processes by directly mapping the LR holograms to an HR hologram while reducing the required number of frame acquisitions. In contrast to previously reported deep learning approaches, the proposed method needs no HR ground truth for network training and is sample-agnostic in the sense that the same network is applicable to widely different samples. Our approach can provide a flexible and practical solution for fast super-resolution lens-free imaging.
2.1 Experimental setup for super-resolution LFHM
In the super-resolution LFHM setup (Fig. 2(a)), the sample is illuminated by a coherent beam from a fiber-pigtailed laser diode (Thorlabs LP642-SF20). The interference between the incident and the scattered light generates a hologram, which is captured by a CMOS image sensor (TOSHIBA TC358743XBG, monochrome, 13-megapixel, pixel size: 1.12 μm) behind the sample. To achieve a sub-pixel shift of the hologram relative to the imager, we move the fiber tip in the x and y directions with a translational stage (Thorlabs PT3A). The corresponding hologram shift on the sensor plane can be approximated by geometrical optics (Fig. 2(b) and Eq. (1)). As shown in Fig. 2(c), each hologram in the sequence is shifted with respect to the others by fractions of the LR pixel grid. An HR hologram with an effective pixel size of 224 nm can be reconstructed by combining these measurements with a PSR algorithm. It is worth noting that although a zigzag itinerary of the sub-pixel shift is used here, other shift patterns for multiple LR hologram acquisition are possible.
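With z1 denoting the source-to-sample distance and z2 the sample-to-sensor distance in Fig. 2(b), a lateral shift Δs of the fiber tip yields a hologram shift Δh on the sensor plane of approximately (a hedged reconstruction of the geometric relation; symbol names are ours)

```latex
\Delta h \approx \frac{z_2}{z_1}\,\Delta s
```

i.e., the source motion is demagnified by the factor z1/z2, which is typically larger than 100 in this geometry.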
2.2 Data set generation and neural network training
To train a CNN to predict the HR hologram from a series of LR holograms, we created a data set based on a single hologram of a neural cell culture (see Appendix A for details of sample preparation). During the training, this hologram serves as our “HR ground truth” from which a series of 25 LR holograms is digitally synthesized by mimicking the image acquisition process described above (Fig. 3). In the digital synthesis process, the “HR ground truth” hologram was shifted with a step size of 1 physical pixel following the same itinerary described in Fig. 2(c). The LR frames were generated by convolving each shifted frame with a 5 × 5 normalized box filter and down-sampling the resulting image by a factor of 5. The LR image stack and the original hologram form an LR-HR pair. To enlarge the data set, we applied image augmentation: each pair was rotated to 8 random angles between 0 and 180 degrees, each rotated image pair was cropped into 200 randomly distributed 128 × 128 sub-regions, and random flips in the vertical and horizontal directions were applied, yielding a final data set of 1600 LR-HR pairs. In addition to the training data set, we created a test data set of 400 LR-HR pairs using the same procedure based on the hologram of a different cell culture sample. This data set is used, along with the training data set, to evaluate the accuracy of the learning model during training. The accuracy is quantified by the structural similarity (SSIM) index metric, which is defined as follows:
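Following [36], with μx, μy the local means, σx², σy² the variances and σxy the covariance of image patches x and y, and constants C1, C2 stabilizing the division:

```latex
\mathrm{SSIM}(x, y) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}
```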
The architecture of the neural network reported here is inspired by U-net. The input LR image stack is first up-sampled by a factor of 5 to match the size of the target HR image. The rest of the network consists of 5 levels of successive convolutional layers, with adjacent levels connected by a down-sampling path and a symmetric up-sampling path to process image features at different scales (see Appendix B for a detailed description of the neural network). The CNN is implemented in Keras with a TensorFlow backend [38,39]. We trained the neural network for 24000 iterations at a learning rate of 10−4 with a mini-batch size of 4. During the training, the adaptive moment (Adam) optimizer is used to reduce the mean squared error between the CNN prediction and the target image by iteratively updating the network weights. The training of the neural network only needs to be performed once. Afterwards, the CNN is ready to reconstruct the HR hologram from a stack of LR holograms in an end-to-end fashion.
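The Adam update used during training can be sketched in a few lines of NumPy (a minimal single-parameter illustration of the standard algorithm, not the actual Keras internals; the toy cost function is ours):

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-4, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: moment estimates, bias correction, parameter step."""
    m = b1 * m + (1 - b1) * grad          # first moment (running mean of gradients)
    v = b2 * v + (1 - b2) * grad ** 2     # second moment (uncentered variance)
    m_hat = m / (1 - b1 ** t)             # bias correction for the warm-up phase
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Toy example: minimize f(w) = w^2 (gradient 2w) starting from w = 1.0
w, m, v = 1.0, 0.0, 0.0
for t in range(1, 5001):
    w, m, v = adam_step(w, 2 * w, m, v, t, lr=1e-2)
```

After a few thousand iterations w settles near the minimum at 0, mirroring how the network weights converge during training.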
Since the number of input images can be easily modified, our CNN structure has the flexibility to achieve PSR using fewer LR images, thereby reducing the complexity of data acquisition. To demonstrate this, we created a new training data set from the previously generated one: for each LR-HR pair, 9 out of the 25 LR images were selected by skipping every other row and column of the HR grid shown in Fig. 2(c). The resulting data set was used to train a second CNN following the same training strategy. After the training, this CNN needs only 9 LR holograms to produce the HR hologram with the same up-sampling factor. The performance of both CNNs is compared in the following sections.
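The LR-stack synthesis and the 9-of-25 frame selection can be sketched with NumPy (array sizes are illustrative; the block-mean here stands in for the box filter followed by stride-5 down-sampling):

```python
import numpy as np

def synthesize_lr_stack(hr, factor=5):
    """Shift the 'HR ground truth' by 1 physical pixel per step over a
    factor x factor grid, blur, and down-sample by `factor`."""
    h, w = hr.shape
    stack = []
    for dy in range(factor):
        for dx in range(factor):
            shifted = np.roll(np.roll(hr, -dy, axis=0), -dx, axis=1)
            # 5x5 box filter + down-sampling by 5, implemented as a block mean
            lr = shifted.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
            stack.append(lr)
    return np.stack(stack)  # shape: (factor**2, h//factor, w//factor)

hr = np.random.rand(640, 640)                        # one 640 x 640 HR crop
lr25 = synthesize_lr_stack(hr)                       # 25 frames of 128 x 128
idx = [5 * r + c for r in range(0, 5, 2) for c in range(0, 5, 2)]
lr9 = lr25[idx]                                      # skip every other row/column
```

Shifting by 1 HR pixel before down-sampling by 5 yields the 1/5-LR-pixel sub-pixel shifts described above.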
3.1 Evaluation of resolution improvement with USAF resolution test chart
The accuracy of the CNN during training is shown in Figs. 4(a) and 4(b) for both neural networks, with 25 and 9 holograms as input. The learning model reaches an accuracy > 0.9 after a few thousand iterations for both the training data and the test data. To evaluate the resolution improvement of the deep learning-based PSR approach, we captured 25 sub-pixel shifted LR holograms of a custom-made 1951 USAF resolution test chart at a sample-to-sensor distance of ~300 μm. As a baseline method, the first LR hologram in the sequence was up-sampled by a factor of 5 using bicubic interpolation. The square root of the intensity of the interpolated hologram (Fig. 5(a)) was back-propagated to the focal plane by the angular spectrum method. Figure 5(c) shows the reconstructed amplitude of the test chart. While the three valleys of group 8, element 5 of the resolution chart (1.23 μm line width) are well-differentiated, the 6th element in the same group (1.10 μm line width) is barely resolved, which is consistent with the resolution limit imposed by the Nyquist-Shannon sampling theorem. On the other hand, as shown in Fig. 5(d), lost details of the interference pattern are recovered in the HR hologram predicted by the CNN. The amplitude image reconstructed from this hologram is shown in Fig. 5(e), where group 9, element 3 is clearly resolved along with 2 valleys of group 9, element 4, indicating a half-pitch resolution between 690 nm and 780 nm. To further evaluate the resolution improvement of the proposed method, we performed a modulation transfer function (MTF) analysis by calculating the contrast of each element of the resolution test chart [27,42]. The MTFs obtained by the baseline method, the deep learning-based method and a 40X 0.5NA objective are shown in Fig. 5(k), demonstrating the resolution enhancement of the proposed method with a significantly higher modulation contrast.
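The angular spectrum back-propagation can be sketched as a minimal monochromatic NumPy implementation (grid size, pixel pitch and wavelength below are illustrative, chosen to match the 642 nm laser, the 224 nm effective pixel and the ~300 μm sample-to-sensor distance mentioned in the text):

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex field over distance z via the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                      # spatial frequencies [1/m]
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
    mask = arg > 0                                    # keep propagating waves only
    kz = 2 * np.pi * np.sqrt(np.where(mask, arg, 0.0))
    H = np.where(mask, np.exp(1j * kz * z), 0.0)      # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Back-propagation uses a negative z: propagating forward then backward
# recovers a band-limited field exactly.
wavelength, dx, z = 642e-9, 224e-9, 300e-6
n = 64
spec = np.zeros((n, n), dtype=complex)
spec[:4, :4] = np.random.rand(4, 4)                   # smooth, band-limited field
field = np.fft.ifft2(spec)
holo_plane = angular_spectrum(field, wavelength, dx, z)
recovered = angular_spectrum(holo_plane, wavelength, dx, -z)
```

In the paper's pipeline, `field` would be the square root of the up-sampled hologram intensity and `-z` the sample-to-sensor distance.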
For comparison, we implemented the iterative PSR algorithm proposed by Farsiu et al. (see Appendix C for details of the implementation). The same LR image sequence was used to generate an HR hologram with 20 iterations. Despite the same ability to resolve group 9, element 3, the cross-section profile of the reconstructed image (Fig. 6(b)) reveals a lower contrast compared with the result of the CNN-based approach. Moreover, it is worth mentioning that the latter performs PSR in an end-to-end fashion, which is an important advantage over the time-consuming iterative algorithm for large-FOV lens-free imaging. Next, we selected 9 out of the 25 LR images in the sequence to explore the possibility of achieving the same level of resolution improvement using a reduced amount of data. Unsurprisingly, the image reconstructed by the iterative method exhibits noticeable blurriness with the smaller number of measurements. In contrast, using 9 LR holograms, the CNN-based method succeeded in maintaining the sharpness of the image, as shown in Fig. 6(c).
3.2 Full width at half maximum (FWHM) measurement of polystyrene beads
In our second experiment, the same CNN was used to recover the HR hologram of polystyrene beads with a diameter of 1 μm (see Appendix A for details of sample preparation). Figure 7(a) shows the reconstructed amplitude image of the bead sample based on the prediction of the network. Note that two beads which are unresolved in the reconstruction of the baseline hologram are now separated, as demonstrated in Figs. 7(b) and 7(d). The FWHM of the beads was measured to quantitatively evaluate the resolution performance of the previously mentioned PSR approaches. To calculate the FWHM, we averaged the reconstructed amplitude of 108 isolated beads and fitted a 2D Gaussian function to the resulting image (Fig. 7(f)). The standard deviations of the Gaussian distribution in the x and y directions were averaged and converted to the corresponding FWHM. By comparing the FWHM values obtained with the different methods (Fig. 7(g)), we validated the conclusions from the USAF resolution test chart experiment. In line with our previous observations, the CNN-based method yields results similar to the iterative method using 25 LR frames. However, the proposed method outperforms the latter when only 9 LR frames are available, approaching the result produced by 25 holograms.
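The conversion from the fitted Gaussian standard deviation to the FWHM is the standard relation:

```python
import math

def fwhm_from_sigma(sigma):
    """FWHM of a Gaussian: the half-maximum points sit at x = ±sigma*sqrt(2 ln 2)."""
    return 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma

# e.g., for the sigma averaged over the x and y directions of the 2D fit
print(round(fwhm_from_sigma(1.0), 4))  # ≈ 2.3548
```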
3.3 Super-resolution lens-free imaging of lily anther
To demonstrate its capability of improving the resolution of LFHM for dense biological samples, we applied the deep learning-based approach to a lily anther (Euromex SB 2210). Since a simple back-propagation of the intensity-only hologram to the sample plane can result in twin-image artifacts for dense biological samples, we implemented a multi-wavelength phase retrieval algorithm as described by Stahl et al. to obtain a clear reconstructed image of the object. For each sub-pixel shift, three holograms were captured by illuminating the sample with coherent beams of different wavelengths. The sequence of LR holograms under each illumination wavelength was used by the CNN to predict the corresponding HR hologram. The phase retrieval algorithm took the three HR holograms as input and generated the reconstructed amplitude and phase of the sample after 10 iterations. Figure 8(a) shows the reconstructed phase image of the lily anther over a FOV of ~8 mm2, which occupies only ~50% of the full FOV of our system (the whole FOV of a 10X microscope objective is shown in Fig. 8(c) for comparison). As a baseline measurement, we applied the same phase retrieval algorithm to the LR holograms without using any super-resolution technique. Figures 8(d)-8(g) visually demonstrate the enhanced resolution of the deep learning-based PSR method over the baseline measurement and also suggest that the resolution is on a par with that of a 40X microscope objective, with an order-of-magnitude larger FOV. Additionally, the SSIMs of the images reconstructed by the baseline approach and the deep learning approach with respect to the result from the iterative method were calculated and are listed in Table 1. The considerably improved structural similarity demonstrates the potential of the proposed method to enhance the resolution of dense biological samples.
3.4 Influence of the training data set generated with different types of samples
Note that the CNN used to generate the HR image is trained only on a hologram from the neural cell culture. A comparison of the training hologram and the testing hologram is shown in Fig. 9(a). Although these holograms varied significantly in terms of features, brightness and sample-to-sensor distance, the super-resolution performance of the neural network stayed comparable to the results of the iterative algorithm described in the previous sections, which demonstrates its potential to perform PSR for different types of samples without re-training. To further test the robustness of the deep learning-based approach, we trained another CNN using a hologram from the MCF7 cell culture (see Appendix A for details of sample preparation). The new network achieved a consistent resolution improvement, as indicated by the cross-section profiles of the USAF resolution chart (Figs. 9(b) and 9(c)) and the FWHM of the polystyrene beads (Fig. 9(d)), demonstrating the robustness and the universality of our approach.
3.5 Run-time profiling
On a laptop with a Core i7-4720HQ CPU and an Nvidia GTX 970M GPU, we profiled the run-time of the previously mentioned techniques. The LR holograms used for the inference have a size of 512 × 512, which corresponds to an HR hologram of size 2460 × 2460. The results are summarized in Table 2. Although the deep learning-based method takes a longer time to train the neural network, its inference time is two orders of magnitude shorter. Moreover, benefiting from the generalizability demonstrated in the previous sections, the training of the neural network needs to be performed only once.
To overcome the resolution limit of LFHM, we developed a deep learning-based PSR approach to reconstruct the HR hologram from a sequence of LR holograms with sub-pixel shifts. One of the critical practical factors impacting the PSR performance is the motion error originating from errors in the sub-pixel shifts. To minimize such errors in our measurements, we scanned the light source to create the sub-pixel shifts between LR hologram recordings. Due to the geometry of the imaging system shown in Fig. 2(b), the step size between LR hologram acquisitions is demagnified by a factor of z1/z2, which is typically larger than 100. Therefore, the effect of mechanical scanning errors on the PSR performance is significantly reduced in this hologram recording methodology. In other approaches, such as shifting the sample or the image sensor, it would be interesting to implement a sub-pixel shift estimation in the data processing step to improve the robustness of the PSR performance. Moreover, the motion estimation could be performed by a neural network and be unified with the HR hologram reconstruction procedure under a single deep learning-based framework.
Another important consideration is the resolution limit of our approach. One would expect that up-sampling the hologram by a factor of 5 brings a 5-fold resolution improvement. However, the experimental results in the previous sections suggest that the proposed method achieves a resolution between 690 nm and 720 nm, which is larger than the Nyquist limit of the effective pixel size, i.e., 448 nm. To explain this, we should note that besides the limit imposed by the pixel size, the resolution of LFHM is also limited by diffraction, i.e., by the NA of the imaging system in our case. While the deep learning-based PSR method can alleviate the first limitation by combining multiple LR holograms with sub-pixel shifts, the second challenge remains unaddressed. Factors such as the sample-to-sensor distance, the signal-to-noise ratio of the imaging system and the angular response of the image sensor can affect the effective NA of the LFHM. It is possible to further improve the resolution of the LFHM by denoising the hologram or by using synthetic aperture imaging based on multi-angle illumination. We believe that deep learning techniques could also help tackle these challenges. For example, Zhang et al. recently proposed a deep CNN model which is highly effective in several general image denoising tasks. Furthermore, the same data set generation and training strategy of our deep learning-based PSR approach could also be used for synthetic aperture super-resolution imaging and could potentially reduce the amount of time required for measurements and data processing.
In this paper, a deep learning-based PSR approach has been proposed to improve the resolution of LFHM in a non-iterative fashion using multiple sub-pixel shifted LR holograms. Unlike conventional iterative PSR approaches, the resolution performance of the proposed method remains consistent with a reduced number of LR measurements, which can significantly speed up the data acquisition process and enable fast HR lens-free imaging. Moreover, we showed that the network can be trained with a data set synthesized from the LR hologram itself, making it applicable to real-world applications where the HR ground truth is unavailable or very difficult to acquire. Finally, the robustness of this approach was validated by performing super-resolution lens-free reconstruction on various types of samples using a single network trained on significantly different holograms. We envision that this approach, together with recently proposed deep learning-based holographic image reconstruction methods and the rapid growth in computing power of modern graphics processing units (GPUs), can pave the way for ultra-fast, ultra-large-FOV super-resolution lens-free imaging for high-throughput screening, digital pathology and lab-on-a-chip applications.
Appendix A Sample preparation
The polystyrene beads (Thermo Fisher Scientific F8816) were diluted by a factor of 10^4 with deionized water to provide an appropriate sample density for the full width at half maximum measurement of the beads. Afterwards, 10 μl of the diluted solution was pipetted onto the microscope slide and sealed with a cover slip.
Primary rat hippocampal neurons were prepared in house from E19 Wistar rat embryos.
Neurons were seeded on PDLO (Poly-DL-ornithine hydrobromide, Sigma) pre-coated 35 mm tissue treated culture dishes. Cells were kept in culture for 2-3 days to allow neural outgrowth formation and were fixed with 4% formaldehyde.
MCF7 cells were cultured according to the ATCC protocol. For imaging, cells were plated at different densities on the 35 mm tissue treated culture dishes and let to attach and spread. At the desired density, cells were fixed with 4% formaldehyde (Pierce 16% Formaldehyde (w/v), Methanol-free, diluted in PBS to 4%).
Appendix B Neural network architecture
As shown in Fig. 10, the input image stack is first up-sampled by a factor of 5 through a 5 × 5 transposed convolution (up-conv) to match the lateral size of the target HR image. The rest of the network is adapted from the U-net architecture, which consists of a down-sampling path and an up-sampling path to extract and process image features at different scales. The down-sampling path is constructed by repeatedly applying two 3 × 3 convolutions and a 2 × 2 max-pooling operation. Each convolution layer is activated by a rectified linear unit (ReLU) function, with the number of feature maps doubled after each down-sampling. The up-sampling path starts at the minimum scale, where a transposed convolution with stride 2 is used to up-sample the output of the previous convolutional layer by a factor of 2. Next, the result is concatenated with the corresponding extracted feature maps from the down-sampling path. Afterwards, two 3 × 3 convolutions are applied with a reduced number of feature maps, as described in Fig. 10. This process is repeated until the maximum scale is reached. Finally, the resulting feature maps are mapped to a 2D image through a 1 × 1 convolution.
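The spatial sizes and feature counts through the 5 levels can be traced with a short script (the base feature count of 32 is an illustrative assumption; the exact numbers are given in Fig. 10):

```python
def unet_shapes(input_size=128, upsample=5, levels=5, base_features=32):
    """Trace (spatial size, feature count) down the 5-level U-net and back up."""
    size = input_size * upsample                 # 5x transposed-conv up-sampling first
    feats = base_features
    down = [(size, feats)]                       # two 3x3 convs at each scale
    for _ in range(levels - 1):                  # 4 maxpool steps between 5 levels
        size //= 2                               # 2x2 maxpool halves the size
        feats *= 2                               # feature maps double per level
        down.append((size, feats))
    up = list(reversed(down[:-1]))               # symmetric up-conv + concat path
    return down, up

down, up = unet_shapes()
# down: [(640, 32), (320, 64), (160, 128), (80, 256), (40, 512)]
```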
Appendix C Implementation of the iterative PSR method
We formulate the forward model of the PSR problem in LFHM as follows:
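In the standard multi-frame notation of Farsiu et al. (a hedged reconstruction; the symbol choice is ours), with X the HR hologram, F_k the k-th sub-pixel shift operator, H the blur due to pixel integration, D the down-sampling operator and V_k additive noise:

```latex
Y_k = D\,H\,F_k\,X + V_k, \qquad k = 1, \ldots, N
```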
The iterative PSR algorithm proposed by Farsiu et al. is based on L1-norm minimization with bilateral total variation regularization:
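A hedged reconstruction of this cost function (symbols as in the forward model; S_x^l and S_y^m shift X by l and m pixels, and λ, α and P are the regularization weight, decay factor and window size):

```latex
\hat{X} = \arg\min_X \left[ \sum_{k=1}^{N} \left\| D\,H\,F_k\,X - Y_k \right\|_1 + \lambda \sum_{l=-P}^{P} \sum_{m=-P}^{P} \alpha^{|l|+|m|} \left\| X - S_x^l S_y^m X \right\|_1 \right]
```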
We initialized X by zero-filling, shifting and adding the LR measurements. Afterwards, a gradient descent algorithm with backtracking line search was implemented to iteratively update X. The HR hologram was generated after 20 iterations. The reconstruction algorithm described here is implemented in Python (version 3.7).
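The gradient-descent update with backtracking line search can be sketched on a generic smooth cost (a minimal Armijo-condition implementation for illustration, not the hologram-specific code; the toy quadratic is ours):

```python
import numpy as np

def backtracking_gd(cost, grad, x0, iters=20, t0=1.0, beta=0.5, c=1e-4):
    """Gradient descent; each step shrinks the step size until the Armijo
    sufficient-decrease condition is satisfied."""
    x = x0.copy()
    for _ in range(iters):
        g = grad(x)
        t = t0
        while cost(x - t * g) > cost(x) - c * t * np.dot(g, g):
            t *= beta                            # backtrack: halve the step size
        x = x - t * g
    return x

# Toy quadratic with minimum at [1, -2]
target = np.array([1.0, -2.0])
cost = lambda x: np.sum((x - target) ** 2)
grad = lambda x: 2 * (x - target)
x_hat = backtracking_gd(cost, grad, np.zeros(2))
```

In the actual implementation, `cost` and `grad` would come from the L1 data term and the bilateral total variation regularizer, with X as the unknown HR hologram.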
H2020 European Research Council (ERC) under the consolidator grant (617312); Agentschap Innoveren en Ondernemen (VLAIO) (IWT.150031).
The authors thank Ziduo Lin for providing the access to the LFHM. We would also like to show our gratitude to Olga Krylychkina for preparing the cell cultures.
4. G. Stybayeva, O. Mudanyali, S. Seo, J. Silangcruz, M. Macal, E. Ramanculov, S. Dandekar, A. Erlinger, A. Ozcan, and A. Revzin, “Lensfree holographic imaging of antibody microarrays for high-throughput detection of leukocyte numbers and function,” Anal. Chem. 82(9), 3736–3744 (2010). [CrossRef] [PubMed]
5. O. Mudanyali, D. Tseng, C. Oh, S. O. Isikman, I. Sencan, W. Bishara, C. Oztoprak, S. Seo, B. Khademhosseini, and A. Ozcan, “Compact, light-weight and cost-effective microscope based on lensless incoherent holography for telemedicine applications,” Lab Chip 10(11), 1417–1428 (2010). [CrossRef] [PubMed]
8. S. Schumacher, J. Nestler, T. Otto, M. Wegener, E. Ehrentreich-Förster, D. Michel, K. Wunderlich, S. Palzer, K. Sohn, A. Weber, M. Burgard, A. Grzesiak, A. Teichert, A. Brandenburg, B. Koger, J. Albers, E. Nebling, and F. F. Bier, “Highly-integrated lab-on-chip system for point-of-care multiparameter analysis,” Lab Chip 12(3), 464–473 (2012). [CrossRef] [PubMed]
9. L. Lagae, D. Vercruysse, A. Dusa, C. Liu, K. de Wijs, R. Stahl, G. Vanmeerbeeck, B. Majeed, Y. Li, and P. Peumans, “High throughput cell sorter based on lensfree imaging of cells,” in Proceedings of IEEE International Electron Devices Meeting (IEEE, 2015), pp. 333–336. [CrossRef]
10. C. Allier, S. Morel, R. Vincent, L. Ghenim, F. Navarro, M. Menneteau, T. Bordy, L. Hervé, O. Cioni, X. Gidrol, Y. Usson, and J. M. Dinten, “Imaging of dense cell cultures by multiwavelength lens-free video microscopy,” Cytometry A 91(5), 433–442 (2017). [CrossRef] [PubMed]
11. W. Bishara, T.-W. Su, A. F. Coskun, and A. Ozcan, “Lensfree on-chip microscopy over a wide field-of-view using pixel super-resolution,” Opt. Express 18(11), 11181–11191 (2010). [CrossRef] [PubMed]
12. A. Greenbaum, W. Luo, B. Khademhosseinieh, T.-W. Su, A. F. Coskun, and A. Ozcan, “Increased space-bandwidth product in pixel super-resolved lensfree on-chip microscopy,” Sci. Rep. 3(1), 1717 (2013). [CrossRef]
15. J. Zhang, J. Sun, Q. Chen, J. Li, and C. Zuo, “Adaptive pixel-super-resolved lensfree in-line digital holography for wide-field on-chip microscopy,” Sci. Rep. 7(1), 11777 (2017). [CrossRef] [PubMed]
16. C. Fournier, F. Jolivet, L. Denis, N. Verrier, E. Thiebaut, C. Allier, and T. Fournel, “Pixel super-resolution in digital holography by regularized reconstruction,” Appl. Opt. 56(1), 69–77 (2017). [CrossRef]
18. K. Nelson, A. Bhatti, and S. Nahavandi, “Performance Evaluation of Multi-Frame Super-Resolution Algorithms,” in 2012 International Conference on Digital Image Computing Techniques and Applications (DICTA) (IEEE, 2012), pp. 1–8.
19. J. Song, C. Leon Swisher, H. Im, S. Jeong, D. Pathania, Y. Iwamoto, M. Pivovarov, R. Weissleder, and H. Lee, “Sparsity-based pixel super resolution for lens-free digital in-line holography,” Sci. Rep. 6, 24681 (2016). [CrossRef] [PubMed]
20. W. T. Freeman, T. R. Jones, and E. C. Pasztor, “Example-based super-resolution,” IEEE Comput. Graph. Appl. 22(2), 56–65 (2002). [CrossRef]
21. Y. Lu, M. Inamura, and M. del Carmen Valdes, “Super-resolution of the undersampled and subpixel shifted image sequence by a neural network,” Int. J. Imaging Syst. Technol. 14(1), 8–15 (2004). [CrossRef]
23. J. Kim, J. K. Lee, and K. M. Lee, “Accurate image super-resolution using very deep convolutional networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2016), pp. 1646–1654. [CrossRef]
24. B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee, “Enhanced deep residual networks for single image super-resolution,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, vol. 1, p. 4 (2017).
25. C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. P. Aitken, A. Tejani, J. Totz, Z. Wang et al., “Photo-realistic single image super-resolution using a generative adversarial network,” in CVPR, 2(3), p. 4 (2017).
26. W. Yang, X. Zhang, Y. Tian, W. Wang, and J.-H. Xue, “Deep learning for single image super-resolution: A brief review,” arXiv 1808.03344 (2018).
27. Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4(11), 1437–1443 (2017). [CrossRef]
28. Y. Rivenson, H. Ceylan Koydemir, H. Wang, Z. Wei, Z. Ren, H. Günaydın, Y. Zhang, Z. Gorocs, K. Liang, D. Tseng, and A. Ozcan, “Deep learning enhanced mobile-phone microscopy,” ACS Photonics 5(6), 2354–2364 (2018). [CrossRef]
29. H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Gunaydin, L. Bentolila, and A. Ozcan, “Deep learning achieves super-resolution in fluorescence microscopy,” bioRxiv 309641 (2018).
31. E. Nehme, L. E. Weiss, T. Michaeli, and Y. Shechtman, “Deep-STORM: super-resolution single-molecule microscopy by deep learning,” Optica 5(4), 458–464 (2018). [CrossRef]
33. S. Kazeminia, C. Baur, A. Kuijper, B. van Ginneken, N. Navab, S. Albarqouni, and A. Mukhopadhyay, “GANs for medical image analysis,” arXiv 1809.06222 (2018).
34. R. Eldan and O. Shamir, “The power of depth for feedforward neural networks,” in Conference on Learning Theory (2016), pp. 907–940.
36. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004). [CrossRef] [PubMed]
37. O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (2015), pp. 234–241. [CrossRef]
38. F. Chollet, “Keras: Deep learning library for theano and tensorflow,” https://keras.io
39. M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, and M. Isard, “TensorFlow: A System for Large-Scale Machine Learning,” in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI, 2016), pp. 265–283.
40. D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv 1412.6980 (2014).
41. K. Matsushima and T. Shimobaba, “Band-limited angular spectrum method for numerical simulation of free-space propagation in far and near fields,” Opt. Express 17(22), 19662–19673 (2009). [CrossRef] [PubMed]
42. J. Rosen, N. Siegel, and G. Brooker, “Theoretical and experimental demonstration of resolution beyond the Rayleigh limit by FINCH fluorescence microscopic imaging,” Opt. Express 19(27), 26249–26268 (2011). [CrossRef] [PubMed]
44. R. Stahl, G. Vanmeerbeeck, G. Lafruit, R. Huys, V. Reumers, A. Lambrechts, C.-K. Liao, C.-C. Hsiao, M. Yashiro, and M. Takemoto et al., “Lens-free digital in-line holographic imaging for wide field-of-view, high-resolution and real-time monitoring of complex microscopic objects,” Proc. SPIE 8947, 89471F (2014).
46. Y. Huang, W. Wang, and L. Wang, “Bidirectional recurrent convolutional networks for multi-frame super-resolution,” Adv. Neural Inf. Process. Syst. 28, 235–243 (2015).
47. W. Luo, A. Greenbaum, Y. Zhang, and A. Ozcan, “Synthetic aperture-based on-chip microscopy,” Light Sci. Appl. 4(3), e261 (2015). [CrossRef]
48. K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising,” IEEE Trans. Image Process. 26(7), 3142–3155 (2017). [CrossRef] [PubMed]
49. Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light Sci. Appl. 7(2), 17141 (2018). [CrossRef]