Abstract

Lens-free holographic microscopy (LFHM) provides a cost-effective tool for large field-of-view imaging in various biomedical applications. However, due to the unit optical magnification, its spatial resolution is limited by the pixel size of the imager. Pixel super-resolution (PSR) techniques tackle this problem by using a series of sub-pixel shifted low-resolution (LR) lens-free holograms to form the high-resolution (HR) hologram. Conventional iterative PSR methods require a large number of measurements and a time-consuming reconstruction process, limiting the throughput of LFHM in practice. Here we report a deep learning-based PSR approach to enhance the resolution of LFHM. Compared with existing PSR methods, our neural network-based approach outputs the HR hologram in an end-to-end fashion and maintains consistency in resolution improvement with a reduced number of LR holograms. Moreover, by exploiting the resolution degradation model of the imaging process, the network can be trained with a data set synthesized from the LR hologram itself without resorting to the HR ground truth. We validated the effectiveness and the robustness of our method by imaging various types of samples using a single network trained on an entirely different data set. This deep learning-based PSR approach can significantly accelerate both the data acquisition and the HR hologram reconstruction processes, therefore providing a practical solution to fast, lens-free, super-resolution imaging.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Lens-free holographic microscopy (LFHM) is a rapidly developing technology based on the principle of in-line holography [1–3]. Because of its lensless design and unit optical magnification, LFHM offers a large and scalable field-of-view (FOV) and is more compact and cost-effective than conventional lens-based microscopy. Recently, it has found numerous applications in high-throughput screening, point-of-care diagnosis and lab-on-a-chip technologies [4–10]. Meanwhile, the growing demand for visualizing fine details in biological samples, such as subcellular structures, poses a challenge for LFHM since its spatial resolution is limited by the pixel size of the image sensor used for recording the holograms. According to the Nyquist-Shannon sampling theorem, details of the hologram smaller than the pixel size are lost during image acquisition. To overcome this limitation, various pixel super-resolution (PSR) techniques have been proposed to extract the high-resolution (HR) information of the hologram via multi-frame imaging [11–17]. For example, a series of low-resolution (LR) holograms with sub-pixel shifts in the lateral directions can be combined to reconstruct the HR hologram. Despite their success in achieving sub-pixel resolution, these approaches often require a large number of measurements or a time-consuming iterative algorithm for HR image synthesis [18]. As such, PSR techniques reduce the achievable speed or throughput, and hence the appeal, of LFHM for many applications. To address this challenge, Song et al. proposed a sparsity-based PSR method in which the spatial resolution was improved with a single-frame measurement by using compressed sensing [19]. However, such a method is only applicable to sparse samples, which prevents it from being used in digital pathology and dense cell culture imaging.

On the other hand, machine learning, in particular deep learning, has achieved considerable success in the image super-resolution task in computer vision [20–25]. Compared with conventional approaches, deep learning-based image super-resolution uses a convolutional neural network (CNN) to learn an end-to-end mapping between the LR image and the HR image without any handcrafted model [26], making it a promising approach for super-resolution optical microscopy. Recently, deep learning techniques have been successfully applied to enhance the resolution and accelerate the acquisition speed for different imaging modalities [27–32]. Nevertheless, the following challenges remain unaddressed. First, the training of the CNN described in these works usually demands a data set comprising pairs of LR images and the corresponding HR ground truth. In practice, such ground truth is either unavailable due to hardware limitations or too cumbersome and expensive to produce. For example, in [27], a high numerical aperture (NA) objective is required to generate the HR images. Moreover, an image registration algorithm needs to be implemented to align each LR-HR image pair. Second, the performance of the trained network can deteriorate when it is tested with images significantly different from the training data set [33], limiting its applicability in real-world problems since the network needs to be re-trained for each type of sample. To improve the generality of the deep learning approach, the authors of [28,31] suggested training a universal network based on a data set comprising various types of samples or training a set of networks to deal with different scenarios. However, such methods can result in a more complicated neural network structure [34] and increase the training and inference time.

In this paper, we propose a robust deep learning-based PSR approach that can achieve sub-pixel resolution in LFHM in a non-iterative fashion and without the need for an HR ground truth. Specifically, we train a CNN to take a stack of sub-pixel shifted LR holograms as the input and produce the corresponding HR hologram as the output, from which the HR image of the object is reconstructed (Fig. 1). The training data set is generated based on a single-frame hologram captured by the LFHM, which serves as the “HR ground truth” itself. The corresponding LR image sequence is synthesized by digitally shifting and down-sampling this hologram, i.e., by mimicking the image acquisition process of sub-pixel shifted LR holograms. Our assumption is that the CNN can learn the inverse mapping of the resolution degradation model from the imaging process instead of inferring the HR details from the LR image. Since the resolution degradation model is independent of the acquired images, a neural network trained in this way can achieve robust super-resolution even if the HR ground truth is not available or the test image is significantly different from the images of the training data set. To demonstrate this, we applied a CNN trained on a neural cell hologram to predict the HR holograms of different types of samples including a USAF resolution test chart, polystyrene beads and a lily anther. By comparing the results with those obtained by an iterative PSR method [35], we validated the effectiveness and robustness of our approach. Furthermore, we observed similar resolution enhancement from the network when the number of LR frames was reduced from 25 to 9, whereas the iterative method suffered from a noticeable drop in resolution.

Fig. 1 Overview of the deep learning-based pixel super-resolution approach. The CNN is trained to take a stack of LR holograms with subpixel shift as input and produce the corresponding HR hologram as output. The training data set is generated based on the single frame hologram captured by the LFHM which serves as the “HR ground truth” itself. The corresponding LR image sequence is synthesized by digitally shifting and down-sampling this hologram.

Our work demonstrates a promising deep learning-based PSR approach for super-resolution LFHM. Compared with conventional iterative PSR methods, it can significantly accelerate both the HR hologram reconstruction and the data acquisition processes by directly mapping the LR holograms to an HR hologram while reducing the required number of frame acquisitions. In contrast to previously reported deep learning approaches, the proposed method needs no HR ground truth for the network training and is sample agnostic in the sense that the same network is applicable to widely different samples. Our approach can provide a flexible and practical solution for fast super-resolution lens-free imaging.

2. Methods

2.1 Experimental setup for super-resolution LFHM

In the super-resolution LFHM setup (Fig. 2(a)), the sample is illuminated by a coherent beam from a fiber-pigtailed laser diode (Thorlabs LP642-SF20). The interference between the incident and the scattered light generates a hologram which is captured by a CMOS image sensor (TOSHIBA TC358743XBG, monochrome, 13-megapixel, pixel size: 1.12 μm) behind the sample. To achieve a sub-pixel shift of the hologram relative to the imager, we move the fiber tip in the x and y directions with a translation stage (Thorlabs PT3A). The corresponding hologram shift on the sensor plane can be approximated by geometrical optics as follows (Fig. 2(b) and Eq. (1)):

$$\frac{dx_1}{dx_2}=\frac{dy_1}{dy_2}=\frac{z_1}{z_2}\tag{1}$$
where z1 is the distance between the sample and the fiber tip, z2 denotes the sample-to-sensor distance, dx1 and dy1 are the shifts of the fiber tip in the x and y directions, and dx2 and dy2 are the corresponding sub-pixel shifts of the hologram at the sensor plane. In the experiment reported here, 25 different holograms with a physical pixel size of 1.12 μm are acquired to realize a 5 × 5 image up-sampling. As shown in Fig. 2(c), each hologram in the sequence is shifted with respect to the others by fractions of the LR grid. An HR hologram with an effective pixel size of 224 nm can be reconstructed by combining these measurements with a PSR algorithm. It is worth noting that although a zigzag itinerary of the sub-pixel shift is used here, other shift patterns can be implemented for multiple LR hologram acquisition.
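As a concrete illustration, the source displacements that realize the 5 × 5 itinerary can be computed from Eq. (1). The z1 value in the example below is a hypothetical assumption (the source-to-sample distance is not specified here), and z2 is taken as ~300 μm as in the experiments:

```python
import numpy as np

def source_shifts_for_psr(pixel_size_um, factor, z1_um, z2_um):
    """Compute the fiber-tip displacements producing a factor x factor grid
    of sub-pixel hologram shifts. By Eq. (1), a source shift dx1 maps to a
    hologram shift dx2 = dx1 * z2 / z1, so the required source step is the
    desired sub-pixel step magnified by z1 / z2."""
    sub_step = pixel_size_um / factor      # desired hologram step on the sensor
    src_step = sub_step * z1_um / z2_um    # corresponding fiber-tip step
    shifts = []
    for row in range(factor):
        cols = range(factor) if row % 2 == 0 else range(factor - 1, -1, -1)
        for col in cols:                   # zigzag (serpentine) itinerary
            shifts.append((col * src_step, row * src_step))
    return np.array(shifts)

# Hypothetical z1 = 10 cm; z2 ~ 300 um as in the experiments:
shifts = source_shifts_for_psr(pixel_size_um=1.12, factor=5,
                               z1_um=100_000, z2_um=300)
print(shifts.shape)  # 25 source positions for a 5 x 5 up-sampling
```

Because the hologram shift is demagnified by z1/z2, a ~75 μm source step here yields only a 224 nm hologram step, which also relaxes the precision required of the translation stage.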

Fig. 2 Image acquisition of LR holograms with sub-pixel shift. (a) The schematic diagram of the PSR LFHM experiment setup. Shifting the light source in lateral directions leads to sub-pixel shift of the hologram at the sensor plane. (b) Geometric relationship between the light source shifting and the hologram shifting. (c) Itinerary of the hologram sub-pixel shift where LR grid corresponds to the physical pixel size of the imager and HR grid represents the effective pixel size of the HR hologram.

2.2 Data set generation and neural network training

To train a CNN to predict the HR hologram from a series of LR holograms, we created a data set based on a single hologram of a neural cell culture (see Appendix A for details of sample preparation). During the training, this hologram serves as our “HR ground truth” from which a series of 25 LR holograms are digitally synthesized by mimicking the image acquisition process described above (Fig. 3). In the digital synthesis process, the “HR ground truth” hologram was shifted with a step size of 1 physical pixel following the same itinerary described in Fig. 2(c). The LR frames were generated by convolving each shifted frame with a 5 × 5 normalized box filter and down-sampling the resulting image by a factor of 5. The LR image stack and the original hologram form an LR-HR pair, from which we enlarged the data set by rotating each pair by 8 random angles between 0 and 180 degrees and by cropping each rotated image pair into 200 randomly distributed 128 × 128 sub-regions. Lastly, a random flip in the vertical and horizontal directions was applied for further data augmentation, yielding a final data set with 1600 LR-HR pairs. In addition to the training data set, we created a test data set of 400 LR-HR pairs using the same procedure based on the hologram of a different cell culture sample. This data set is used to evaluate the accuracy of the learning model, along with the training data set, during model training. The accuracy is quantified by the structural similarity (SSIM) index metric [36] which is defined as follows:

$$\mathrm{SSIM}(x,y)=\frac{(2\mu_x\mu_y+c_1)(2\sigma_{xy}+c_2)}{(\mu_x^2+\mu_y^2+c_1)(\sigma_x^2+\sigma_y^2+c_2)}\tag{2}$$
where x and y are the images to be compared; µx and µy are the means of x and y; σx and σy are their standard deviations; σxy is the covariance of x and y; and c1 and c2 are constants that stabilize the division when the denominator is small. During training, after every 1600 iterations, the SSIM between the HR hologram predicted by the current neural network and the corresponding HR ground truth is calculated using LR-HR pairs from the training/test data set to represent the training/test accuracy.
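For reference, the SSIM metric defined above can be computed from image statistics as follows. Note that the full SSIM index of Wang et al. [36] averages this quantity over local sliding windows; the global-statistics version below is a simplification for illustration, with the common default constants for 8-bit images:

```python
import numpy as np

def ssim_global(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Global-statistics SSIM following the formula above. The reference
    SSIM index is computed over local windows and averaged; this version
    uses image-wide statistics for simplicity."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return (((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) /
            ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))
```

Identical images yield an SSIM of 1, and the index decreases as the structural agreement between prediction and ground truth degrades.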

Fig. 3 The schematic diagram of the LR sequence synthesis process. A series of LR holograms is generated by repeatedly applying geometric warping, blurring and decimation with different lateral shifts to the single frame hologram captured by the LFHM.
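A minimal sketch of this synthesis pipeline, assuming periodic boundary handling (np.roll) in place of the warping step, which is an implementation choice rather than part of the described method:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def synthesize_lr_stack(hr_hologram, factor=5):
    """Synthesize factor**2 sub-pixel-shifted LR holograms from a single
    captured hologram ("HR ground truth"): shift by whole pixels of the
    ground-truth grid (= 1/factor of an LR pixel), blur with a normalized
    factor x factor box filter modeling pixel integration, and decimate."""
    frames = []
    for row in range(factor):
        for col in range(factor):
            shifted = np.roll(hr_hologram, (row, col), axis=(0, 1))  # shift (periodic)
            blurred = uniform_filter(shifted, size=factor)           # box filter
            frames.append(blurred[::factor, ::factor])               # decimation
    return np.stack(frames)
```

Applied to a 640 × 640 crop, this returns a (25, 128, 128) LR stack paired with the original crop as the training target.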

The architecture of the neural network reported here is inspired by U-net [37]. The input LR image stack is first up-sampled by a factor of 5 to match the size of the target HR image. The rest of the network consists of 5 levels of successive convolutional layers with adjacent levels connected by a down-sampling path and a symmetric up-sampling path to process image features at different scales (see Appendix B for a detailed description of the neural network). The CNN is implemented in Keras with the TensorFlow backend [38,39]. We trained the neural network for 24000 iterations at a learning rate of 10−4 with a mini-batch size of 4. During the training, the adaptive moment optimizer [40] is used to reduce the mean squared error between the CNN prediction and the target image by iteratively changing the network weights. The training of the neural network only needs to be performed once. Afterwards, the CNN is ready to reconstruct the HR hologram from a stack of LR holograms in an end-to-end fashion.

Since the number of images for the network input can be easily modified, our CNN structure has the flexibility to achieve PSR using fewer LR images, therefore reducing the complexity of data acquisition. To demonstrate this, we created a new training data set based on the previously generated one. For each LR-HR pair, 9 of the 25 LR images were selected by skipping every other row and column of the HR grid shown in Fig. 2(c). The resulting data set was used to train a second CNN following the same training strategy. After the training, this CNN only needs 9 LR holograms to produce the HR hologram with the same up-sampling factor. The performance of the two CNNs is compared in the following sections.
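The frame selection described above might be sketched as follows, assuming the LR stack is stored in raster order of the 5 × 5 shift grid (the acquisition itself follows a zigzag itinerary, so a reordering step may be needed in practice):

```python
import numpy as np

def select_sparse_subset(lr_stack, factor=5, step=2):
    """From a factor x factor stack of sub-pixel-shifted LR frames in
    raster order of the shift grid, keep every `step`-th row and column.
    For factor=5, step=2 this selects 9 of the 25 frames."""
    idx = [r * factor + c
           for r in range(0, factor, step)
           for c in range(0, factor, step)]
    return lr_stack[idx]
```

The selected 9 frames still tile the shift grid uniformly, just with a coarser 2-step pitch, which is why a network trained on this subset can retain most of the sub-pixel information.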

3. Results

3.1 Evaluation of resolution improvement with USAF resolution test chart

The accuracy of the CNN during the training is shown in Figs. 4(a) and 4(b) for the neural networks with 25 and 9 holograms as input, respectively. Both models reach an accuracy > 0.9 after a few thousand iterations on both the training data and the test data. To evaluate the resolution improvement of the deep learning-based PSR approach, we captured 25 sub-pixel shifted LR holograms of a custom-made 1951 USAF resolution test chart at a sample-to-sensor distance of ~300 μm. As a baseline method, the first LR hologram in the sequence was up-sampled by a factor of 5 using bicubic interpolation. The square root of the intensity of the interpolated hologram (Fig. 5(a)) was back-propagated to the focal plane by the angular spectrum method [41]. Figure 5(c) shows the reconstructed amplitude of the test chart. While the three valleys of group 8, element 5 of the resolution chart (1.23 μm line width) are well-differentiated, the 6th element in the same group (1.10 μm line width) is barely resolved, which is consistent with the resolution limit imposed by the Nyquist-Shannon sampling theorem. On the other hand, as shown in Fig. 5(d), lost details of the interference pattern are recovered in the HR hologram predicted by the CNN. The reconstructed amplitude image from this hologram is shown in Fig. 5(e), where group 9, element 3 is clearly resolved along with 2 valleys of group 9, element 4, indicating a half-pitch resolution between 690 nm and 780 nm. To further evaluate the resolution improvement of the proposed method, we performed a modulation transfer function (MTF) analysis by calculating the contrast of each element of the resolution test chart [27,42]. The MTFs obtained by the baseline method, the deep learning-based method and a 40X 0.5NA objective are shown in Fig. 5(k), demonstrating the resolution enhancement from the proposed method with a significantly higher modulation contrast.
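The angular spectrum back-propagation used here is a standard free-space propagation; a sketch with an evanescent-wave cut-off is given below. The commented usage line uses this setup's nominal values (642 nm illumination, 224 nm effective HR pixel pitch, ~300 μm sample-to-sensor distance); `hologram` is a hypothetical intensity array:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field over a distance z (negative z
    back-propagates) using the angular spectrum method. `dx` is the pixel
    pitch; all lengths must share the same unit."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * z) * (arg > 0)   # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Back-propagate the square root of the hologram intensity to the focal plane:
# recon = angular_spectrum_propagate(np.sqrt(hologram), 0.642, 0.224, -300.0)  # um
```

Propagating forward and then backward by the same distance returns the original field for all non-evanescent components, which makes the routine easy to sanity-check.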

Fig. 4 Accuracy of the model during the training. (a) training/test accuracy with respect to the number of training iterations of the neural network using 25 holograms as input. (b) training/test accuracy with respect to the number of training iterations of the neural network using 9 holograms as input.

Fig. 5 Deep learning-based PSR of the USAF resolution test chart. (a) The interpolated LR hologram of the USAF resolution test chart. (b) The enlarged image within the yellow dashed box. (c) Reconstructed amplitude image from the interpolated LR hologram. The first 4 elements of group 9 of the chart is shown in the green dashed box. (d) HR hologram predicted by the trained CNN within the yellow dashed box. (e) Reconstructed amplitude image from the HR hologram. (f), (g) Cross-section profiles of group 8, element 5 and element 6 of the LR reconstructed image. (h), (i) Cross-section profiles of group 9, element 3 and element 4 of the HR reconstructed image. (j) Ground truth of the resolution chart provided by a 40X 0.5NA objective. (k) MTFs obtained by the baseline method (LR reconstruction), the deep learning-based method (HR reconstruction) and the 40X 0.5NA objective.

For comparison, we implemented the iterative PSR algorithm proposed by Farsiu et al. [35] (see Appendix C for details of the implementation). The same LR image sequence was utilized to generate an HR hologram with 20 iterations. Despite resolving group 9, element 3 equally well, the cross-section profile of the reconstructed image (Fig. 6(b)) reveals a lower contrast than the result of the CNN-based approach. Moreover, it is worth mentioning that the latter performs PSR in an end-to-end fashion, which is an important advantage over the time-consuming iterative algorithm for large-FOV lens-free imaging. Next, we selected 9 out of 25 LR images in the sequence to explore the possibility of achieving the same level of resolution improvement using a reduced amount of data. Unsurprisingly, the image reconstructed by the iterative method exhibits noticeable blurriness with the smaller number of measurements. In contrast, using 9 LR holograms, the CNN-based method succeeded in maintaining the sharpness of the image as shown in Fig. 6(c).

Fig. 6 Comparison of resolution improvement by different approaches. (a) Comparison between the reconstructed amplitude image by different approaches. The results from the bicubic interpolation, the deep learning-based PSR method using 25 holograms and 9 holograms, the iterative PSR method using 25 holograms and 9 holograms are denoted by Bicubic, CNNPSR25, CNNPSR9, IPSR25 and IPSR9 respectively. (b) – (d) Cross section profiles of group 9, element 3 between CNNPSR25 and IPSR25, CNNPSR25 and CNNPSR9, IPSR25 and IPSR9 respectively.

3.2 Full width at half maximum (FWHM) measurement of polystyrene beads

In our second experiment, the same CNN was used to recover the HR hologram of polystyrene beads with a diameter of 1 μm (see Appendix A for details of sample preparation). Figure 7(a) shows the reconstructed amplitude image of the bead sample based on the prediction of the network. Note that two beads which are unresolved in the reconstruction of the baseline hologram are now separated, as demonstrated in Figs. 7(b) and 7(d). The FWHMs of the beads were measured to quantitatively evaluate the resolution performance of the previously mentioned PSR approaches. To calculate the FWHM, we averaged the reconstructed amplitude of 108 isolated beads and fitted a 2D Gaussian function to the resulting image (Fig. 7(f)). The standard deviations of the Gaussian distribution in the x and y directions were averaged and converted to the corresponding FWHM. By comparing the FWHM values obtained with different methods (Fig. 7(g)), we validated the conclusions from the USAF resolution test chart experiment. In line with our previous observations, the CNN-based method yields similar results to the iterative method using 25 LR frames. However, the proposed method outperforms the latter when only 9 LR frames are available, approaching the result produced by 25 holograms.
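A possible implementation of this Gaussian-fit FWHM measurement is sketched below, assuming the averaged bead appears as a dark spot on a bright background in the amplitude image (an assumption, as the sign of the contrast is not stated explicitly):

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_fwhm(avg_bead, pixel_size):
    """Fit a 2D Gaussian dip to the averaged bead amplitude image and
    return the mean FWHM in physical units (same unit as pixel_size)."""
    ny, nx = avg_bead.shape
    yy, xx = np.mgrid[0:ny, 0:nx]

    def gauss2d(coords, amp, x0, y0, sx, sy, offset):
        x, y = coords
        return (offset - amp * np.exp(-((x - x0) ** 2 / (2 * sx ** 2)
                                        + (y - y0) ** 2 / (2 * sy ** 2)))).ravel()

    p0 = (np.ptp(avg_bead), nx / 2, ny / 2, 2.0, 2.0, avg_bead.max())
    popt, _ = curve_fit(gauss2d, (xx, yy), avg_bead.ravel(), p0=p0)
    sigma = 0.5 * (abs(popt[3]) + abs(popt[4]))        # average sigma_x, sigma_y
    return 2 * np.sqrt(2 * np.log(2)) * sigma * pixel_size  # sigma -> FWHM
```

The conversion uses the standard relation FWHM = 2 sqrt(2 ln 2) σ ≈ 2.355 σ, scaled by the effective pixel size of the reconstruction.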

Fig. 7 FWHM measurement of polystyrene beads. (a) The reconstructed amplitude image of the polystyrene beads based on the network prediction. (b), (c) The enlarged image of two beads from CNNPSR25 and Bicubic. (d), (e) Cross section profiles along the arrows in the blue and red dashed box. (f) The averaged amplitude image of 108 beads from CNNPSR25. (g) Comparison between the FWHMs of various approaches.

3.3 Super-resolution lens-free imaging of lily anther

To demonstrate the capability of the deep learning-based approach to improve the resolution of LFHM for dense biological samples, we applied it to a lily anther sample (Euromex SB 2210). Since a simple back-propagation of the intensity-only hologram to the sample plane can result in twin-image artifacts for dense biological samples [43], we implemented a multi-wavelength-based phase retrieval algorithm as described by Stahl et al. [44] to obtain a clear reconstructed image of the object. For each sub-pixel shift, three holograms were captured by illuminating the sample with coherent beams of different wavelengths. Each sequence of LR holograms acquired under the same illumination wavelength was used by the CNN to predict the corresponding HR hologram. The phase retrieval algorithm took the three HR holograms as input and generated the reconstructed amplitude and phase of the sample after 10 iterations. Figure 8(a) shows the reconstructed phase image of the lily anther over a FOV of ~8 mm2, which occupies only ~50% of the full FOV of our system (the whole FOV of a 10X microscope objective is shown in Fig. 8(c) for comparison). As a baseline measurement, we applied the same phase retrieval algorithm to the LR holograms without using any super-resolution techniques. Figures 8(d)-8(g) visually demonstrate the enhanced resolution of the deep learning-based PSR method over the baseline measurement and also suggest that the resolution is on a par with a 40X microscope objective image with an order of magnitude larger FOV. Additionally, the SSIMs of the images reconstructed by the baseline approach and the deep learning approach with respect to the result from the iterative method were calculated and listed in Table 1. The considerably improved structural similarity demonstrates the potential of the proposed method to enhance the resolution of dense biological samples.

Fig. 8 Super-resolution lens-free imaging of lily anther. (a) The reconstructed phase image of the lily anther using deep learning-based PSR method. The yellow box corresponds to the FOV of a 10X microscope objective. Scale bar: 200 μm. (b), (c) The enlarged image within the yellow dashed box from the HR lens-free reconstruction and 10X objective respectively. Scale bar: 100 μm. (d) - (f) The zoom-in images of the red dashed box area of the baseline LR lens-free image, the HR lens-free image and the ground truth (GT) image from the 40X objective respectively. Scale bar: 15 μm. (g) Cross-section profiles along the arrows in (d) - (f).

Table 1. SSIMs of the images reconstructed by the baseline approach and the deep learning approach with respect to the result from the iterative PSR approach

3.4 Influence of the training data set generated with different types of samples

Note that the CNN used to generate the HR images above is trained only on a hologram from the neural cell culture. A comparison of the training hologram and the testing holograms is shown in Fig. 9(a). Although these holograms varied significantly in terms of features, brightness and sample-to-sensor distance, the super-resolution performance of the neural network stayed comparable to the results of the iterative algorithm, as described in the previous sections. This demonstrates its potential to perform PSR for different types of samples without re-training. To further test the robustness of the deep learning-based approach, we trained another CNN using the hologram from the MCF7 cell culture (see Appendix A for details of sample preparation). The new network achieved a consistent resolution improvement as indicated by the cross-section profiles of the USAF resolution chart (Figs. 9(b) and 9(c)) and the FWHM of the polystyrene beads (Fig. 9(d)), demonstrating the robustness and the universality of our approach.

Fig. 9 Influence of the training data set generated with different types of samples. (a) Holograms of different types of samples and their corresponding sample-to-sensor distances (denoted by dss). Scale bar: 50 μm. (b) Cross-section profiles of group 9, element 3 of the USAF resolution based on the HR hologram obtained by networks trained on different types of samples. 25 holograms were used. (c) Same as (b) but only 9 holograms were used. (d) FWHM of the polystyrene beads based on the HR hologram obtained by networks trained on different types of samples.

3.5 Run-time profiling

On a laptop with a Core i7-4720HQ CPU and an Nvidia GTX 970M GPU, we profiled the run-time of the previously mentioned techniques. The LR holograms used for the inference are 512 × 512 pixels, corresponding to an HR hologram of 2560 × 2560 pixels. The results are summarized in Table 2. Although the deep learning-based method requires a longer time to train the neural network, its inference time is two orders of magnitude shorter than the iterative reconstruction. Moreover, benefiting from the generalizability demonstrated in the previous sections, the training of the neural network needs to be performed only once.

Table 2. Run-time profiling of the deep learning-based PSR methods and the iterative PSR methods

4. Discussion

To overcome the resolution limit of LFHM, we developed a deep learning-based PSR approach to reconstruct the HR hologram from a sequence of LR holograms with sub-pixel shift. One of the critical practical factors impacting the PSR performance is the motion error originating from errors in the sub-pixel shifts. To minimize such errors in our measurements, we scanned the light source to create sub-pixel shifts in the LR hologram recording steps. Due to the geometry of the imaging system shown in Fig. 2(b), the step size between LR hologram acquisitions is demagnified by a factor of z1/z2, which is typically larger than 100. Therefore, the effect of mechanical scanning errors on the PSR performance is significantly reduced in this hologram recording methodology. In other approaches, such as shifting the sample or the image sensor, it would be interesting to implement a sub-pixel shift estimation in the data processing step [45] to improve the robustness of the PSR performance. Moreover, the motion estimation could be performed by a neural network and be unified with the HR hologram reconstruction procedure under a single deep learning-based framework [46].

Another important consideration is the resolution limit of our approach. One would expect that up-sampling the hologram by a factor of 5 can bring a 5-fold resolution improvement. However, the experimental results in the previous sections suggest that the proposed method achieves a resolution between 690 nm and 720 nm, which is larger than the Nyquist limit for the effective pixel size, i.e., 448 nm. To explain this, we should note that besides the limit imposed by the pixel size, the resolution of LFHM is also limited by diffraction, i.e., by the NA of the imaging system in our case. While the deep learning-based PSR method can alleviate the first limitation by combining multiple LR holograms with sub-pixel shift, the second challenge remains unaddressed. Factors such as the sample-to-sensor distance, the signal-to-noise ratio of the imaging system and the angular response of the image sensor can affect the effective NA of the LFHM. It is possible to further improve the resolution of LFHM by denoising the hologram or by using synthetic aperture imaging based on multi-angle illumination [47]. We believe that deep learning techniques could also help tackle these challenges. For example, Zhang et al. recently proposed a deep CNN model which exhibits high effectiveness in several general image denoising tasks [48]. Furthermore, the same data set generation and training strategy in our deep learning-based PSR approach could also be used for synthetic aperture super-resolution imaging and could potentially reduce the amount of time required for measurements and data processing.

5. Conclusion

In this paper, a deep learning-based PSR approach has been proposed to improve the resolution of LFHM in a non-iterative fashion using multiple sub-pixel shifted LR holograms. Unlike conventional iterative PSR approaches, the resolution performance of the proposed method remains consistent with a reduced number of LR measurements, which can significantly speed up the data acquisition process to enable fast HR lens-free imaging. Moreover, we showed that the network can be trained with a data set synthesized from the LR hologram itself, making it applicable for real-world applications where the HR ground truth is not available or very difficult to acquire. Finally, the robustness of this approach was validated by performing super-resolution lens-free reconstruction on various types of samples using a single network trained on significantly different holograms. We envision that this approach, together with the recently proposed deep learning-based holographic image reconstruction method [49] and the rapid growth in computing power of modern graphics processing units (GPUs), can pave the way for ultra-fast, ultra-large FOV super-resolution lens-free imaging for high-throughput screening, digital pathology and lab-on-a-chip applications.

Appendix A sample preparation

The polystyrene beads (Thermo Fisher Scientific F8816) were diluted by a factor of 104 with deionized water to provide an appropriate sample density for the full width at half maximum measurement of the beads. Afterwards, 10 μl of the diluted solution was pipetted onto a microscope slide and sealed with a cover slip.

Primary rat hippocampal neurons were prepared in house from E19 Wistar rat embryos.

Neurons were seeded on PDLO (Poly-DL-ornithine hydrobromide, Sigma) pre-coated 35 mm tissue treated culture dishes. Cells were kept in culture for 2-3 days to allow neural outgrowth formation and were fixed with 4% formaldehyde.

MCF7 cells were cultured according to the ATCC protocol. For imaging, cells were plated at different densities on the 35 mm tissue-treated culture dishes and allowed to attach and spread. At the desired density, cells were fixed with 4% formaldehyde (Pierce 16% Formaldehyde (w/v), Methanol-free, diluted in PBS to 4%).

Appendix B neural network architecture

As shown in Fig. 10, the input image stack is first up-sampled by a factor of 5 through a 5 × 5 transposed convolution (up-conv) to match the size of the target HR image in the lateral directions. The rest of the network is adapted from the U-net architecture [37], which consists of a down-sampling path and an up-sampling path to extract and process image features at different scales. The down-sampling path is constructed by repeatedly applying two 3 × 3 convolutions and a 2 × 2 maxpooling operation. Each convolution layer is activated by a rectified linear unit (ReLU) function, with the number of feature maps doubled after each down-sampling. The up-sampling path starts at the minimum scale, where a transposed convolution with stride 2 is utilized to up-sample the output of the previous convolutional layer by a factor of 2. Next, the result is concatenated with the corresponding extracted feature maps from the down-sampling path. Afterwards, two 3 × 3 convolutions are applied with a reduced number of feature maps as described in Fig. 10. This process is repeated until the maximum scale is reached. Finally, the resulting feature maps are mapped to a 2D image through a 1 × 1 convolution.
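The architecture described above might be sketched in Keras as follows. The base filter count and the LR input size are illustrative assumptions (the exact feature-map numbers are given only in Fig. 10), and `build_psr_unet` is a hypothetical helper name:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_psr_unet(n_frames=25, lr_size=64, base_filters=32, levels=5):
    """Sketch of the PSR network: a 5x transposed-convolution up-sampler
    followed by a 5-level U-Net with skip connections."""
    inp = layers.Input((lr_size, lr_size, n_frames))
    # 5 x 5 transposed convolution with stride 5: LR stack -> HR grid
    x = layers.Conv2DTranspose(base_filters, 5, strides=5, activation='relu')(inp)

    skips, f = [], base_filters
    for _ in range(levels - 1):                       # down-sampling path
        x = layers.Conv2D(f, 3, padding='same', activation='relu')(x)
        x = layers.Conv2D(f, 3, padding='same', activation='relu')(x)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
        f *= 2                                        # double feature maps per level
    x = layers.Conv2D(f, 3, padding='same', activation='relu')(x)
    x = layers.Conv2D(f, 3, padding='same', activation='relu')(x)
    for skip in reversed(skips):                      # up-sampling path
        f //= 2
        x = layers.Conv2DTranspose(f, 2, strides=2)(x)
        x = layers.Concatenate()([x, skip])           # skip connection
        x = layers.Conv2D(f, 3, padding='same', activation='relu')(x)
        x = layers.Conv2D(f, 3, padding='same', activation='relu')(x)
    out = layers.Conv2D(1, 1)(x)                      # 1 x 1 conv to a 2D image
    model = tf.keras.Model(inp, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss='mse')
    return model
```

With a 64 × 64 × 25 input the model outputs a 320 × 320 × 1 HR hologram, matching the 5× up-sampling factor; training would follow the paper's settings (Adam, learning rate 10−4, mean squared error, mini-batch size 4).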

Fig. 10. The schematic diagram of the CNN architecture.

Appendix C: Implementation of the iterative PSR method

We formulate the forward model of the PSR problem in LFHM as follows [35]:

$$Y_k = D H F_k X + V_k, \qquad k = 1, 2, \ldots, N$$
where $Y_k$ is the LR hologram from the $k$th measurement, $X$ is the HR hologram, $V_k$ is the system noise, $F_k$ is the $k$th sub-pixel shift operator, $H$ is the point spread function of the camera, and $D$ is the decimation operator. For the LFHM setup, we assume that $H$ can be approximated by a normalized box filter that maps the intensities of the neighboring pixels (5 × 5 in our case) to the central pixel.
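The forward model can be sketched in NumPy as follows. The integer-valued shifts, reflect padding, and noise level are illustrative assumptions; in practice the sub-pixel shifts are fractional and estimated from the data.

```python
import numpy as np

def simulate_lr_holograms(X, shifts, psf_size=5, decimation=5,
                          noise_sigma=0.0, seed=0):
    """Generate LR holograms Y_k = D H F_k X + V_k from an HR hologram X.

    F_k: lateral shift (here integer, in HR-pixel units), H: normalized
    psf_size x psf_size box filter, D: decimation by `decimation`,
    V_k: Gaussian noise. These parameter choices are illustrative.
    """
    rng = np.random.default_rng(seed)
    box = np.ones((psf_size, psf_size)) / psf_size**2   # normalized box PSF
    lr_stack = []
    for dy, dx in shifts:
        shifted = np.roll(X, (dy, dx), axis=(0, 1))     # F_k: lateral shift
        # H: convolve with the box filter (same-size output, reflect padding)
        pad = psf_size // 2
        padded = np.pad(shifted, pad, mode="reflect")
        blurred = np.zeros_like(shifted)
        for i in range(psf_size):
            for j in range(psf_size):
                blurred += box[i, j] * padded[i:i + shifted.shape[0],
                                              j:j + shifted.shape[1]]
        Y = blurred[::decimation, ::decimation]          # D: decimation
        Y = Y + noise_sigma * rng.standard_normal(Y.shape)  # V_k: noise
        lr_stack.append(Y)
    return np.array(lr_stack)

hr = np.random.default_rng(1).random((100, 100))
lr = simulate_lr_holograms(hr, shifts=[(0, 0), (0, 2), (2, 0), (2, 2)])
print(lr.shape)   # (4, 20, 20)
```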

The iterative PSR algorithm proposed by Farsiu et al. [35] is based on L1-norm minimization with bilateral total variation (BTV) regularization:

$$X = \arg\min_X \left[ \sum_{k=1}^{N} \left\| D H F_k X - Y_k \right\|_1 + \lambda \left\| X \right\|_{\mathrm{BTV}} \right]$$
where $\lambda$ is the weight of the regularization term and $\|X\|_{\mathrm{BTV}}$ denotes the bilateral total variation. The reader is referred to [35] for details of this regularization technique.

We initialized $X$ by zero-filling, shifting, and adding the LR measurements. A gradient descent algorithm with backtracking line search was then used to iteratively update $X$; the HR hologram was obtained after 20 iterations. The reconstruction algorithm described here was implemented in Python (version 3.7).
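A minimal sketch of the shift-and-add initialization and a single L1 data-term update, assuming integer shifts on the HR grid and omitting the PSF $H$, the BTV term, and the backtracking line search for brevity (this is not the authors' implementation):

```python
import numpy as np

def shift_and_add_init(lr_stack, shifts, factor=5):
    """Zero-fill, shift, and add the LR measurements onto the HR grid."""
    n, h, w = lr_stack.shape
    X = np.zeros((h * factor, w * factor))
    count = np.zeros_like(X)
    for Y, (dy, dx) in zip(lr_stack, shifts):
        X[dy::factor, dx::factor] += Y      # place frame at its shift
        count[dy::factor, dx::factor] += 1
    return X / np.maximum(count, 1)         # average where frames overlap

def l1_gradient_step(X, lr_stack, shifts, factor=5, step=0.1):
    """One descent update for sum_k ||D F_k X - Y_k||_1 (PSF omitted)."""
    grad = np.zeros_like(X)
    for Y, (dy, dx) in zip(lr_stack, shifts):
        residual = X[dy::factor, dx::factor] - Y
        grad[dy::factor, dx::factor] += np.sign(residual)  # L1 subgradient
    return X - step * grad

# Toy check: with factor 2 and shifts covering every sub-pixel position,
# shift-and-add recovers the HR image exactly from its decimated views.
hr = np.arange(100.0).reshape(10, 10)
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
lr_stack = np.array([hr[dy::2, dx::2] for dy, dx in shifts])
X0 = shift_and_add_init(lr_stack, shifts, factor=2)
print(np.allclose(X0, hr))  # True
```

In the noiseless, fully sampled toy case the residuals are zero, so a gradient step leaves the estimate unchanged; with fewer frames or noise, the iterative updates fill in and refine the HR grid.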

Funding

H2020 European Research Council (ERC) under the consolidator grant (617312); Agentschap Innoveren en Ondernemen (VLAIO) (IWT.150031).

Acknowledgments

The authors thank Ziduo Lin for providing access to the LFHM. We would also like to thank Olga Krylychkina for preparing the cell cultures.

References

1. D. Gabor, “A new microscopic principle,” Nature 161(4098), 777–778 (1948). [CrossRef]   [PubMed]  

2. J. Garcia-Sucerquia, W. Xu, S. K. Jericho, P. Klages, M. H. Jericho, and H. J. Kreuzer, “Digital in-line holographic microscopy,” Appl. Opt. 45(5), 836–850 (2006). [CrossRef]   [PubMed]  

3. A. Ozcan and U. Demirci, “Ultra wide-field lens-free monitoring of cells on-chip,” Lab Chip 8(1), 98–106 (2008). [CrossRef]   [PubMed]  

4. G. Stybayeva, O. Mudanyali, S. Seo, J. Silangcruz, M. Macal, E. Ramanculov, S. Dandekar, A. Erlinger, A. Ozcan, and A. Revzin, “Lensfree holographic imaging of antibody microarrays for high-throughput detection of leukocyte numbers and function,” Anal. Chem. 82(9), 3736–3744 (2010). [CrossRef]   [PubMed]  

5. O. Mudanyali, D. Tseng, C. Oh, S. O. Isikman, I. Sencan, W. Bishara, C. Oztoprak, S. Seo, B. Khademhosseini, and A. Ozcan, “Compact, light-weight and cost-effective microscope based on lensless incoherent holography for telemedicine applications,” Lab Chip 10(11), 1417–1428 (2010). [CrossRef]   [PubMed]  

6. S. Seo, T.-W. Su, D. K. Tseng, A. Erlinger, and A. Ozcan, “Lensfree holographic imaging for on-chip cytometry and diagnostics,” Lab Chip 9(6), 777–787 (2009). [CrossRef]   [PubMed]  

7. W. Bishara, H. Zhu, and A. Ozcan, “Holographic opto-fluidic microscopy,” Opt. Express 18(26), 27499–27510 (2010). [CrossRef]   [PubMed]  

8. S. Schumacher, J. Nestler, T. Otto, M. Wegener, E. Ehrentreich-Förster, D. Michel, K. Wunderlich, S. Palzer, K. Sohn, A. Weber, M. Burgard, A. Grzesiak, A. Teichert, A. Brandenburg, B. Koger, J. Albers, E. Nebling, and F. F. Bier, “Highly-integrated lab-on-chip system for point-of-care multiparameter analysis,” Lab Chip 12(3), 464–473 (2012). [CrossRef]   [PubMed]  

9. L. Lagae, D. Vercruysse, A. Dusa, C. Liu, K. de Wijs, R. Stahl, G. Vanmeerbeeck, B. Majeed, Y. Li, and P. Peumans, “High throughput cell sorter based on lensfree imaging of cells,” in Proceedings of IEEE International Electron Devices Meeting (IEEE, 2015), pp. 333–336. [CrossRef]  

10. C. Allier, S. Morel, R. Vincent, L. Ghenim, F. Navarro, M. Menneteau, T. Bordy, L. Hervé, O. Cioni, X. Gidrol, Y. Usson, and J. M. Dinten, “Imaging of dense cell cultures by multiwavelength lens-free video microscopy,” Cytometry A 91(5), 433–442 (2017). [CrossRef]   [PubMed]  

11. W. Bishara, T.-W. Su, A. F. Coskun, and A. Ozcan, “Lensfree on-chip microscopy over a wide field-of-view using pixel super-resolution,” Opt. Express 18(11), 11181–11191 (2010). [CrossRef]   [PubMed]  

12. A. Greenbaum, W. Luo, B. Khademhosseinieh, T.-W. Su, A. F. Coskun, and A. Ozcan, “Increased space-bandwidth product in pixel super-resolved lensfree on-chip microscopy,” Sci. Rep. 3(1), 1717 (2013). [CrossRef]  

13. W. Luo, Y. Zhang, Z. Göröcs, A. Feizi, and A. Ozcan, “Propagation phasor approach for holographic image reconstruction,” Sci. Rep. 6(1), 22738 (2016). [CrossRef]   [PubMed]  

14. W. Luo, Y. Zhang, A. Feizi, Z. Göröcs, and A. Ozcan, “Pixel super-resolution using wavelength scanning,” Light Sci. Appl. 5(4), e16060 (2016). [CrossRef]   [PubMed]  

15. J. Zhang, J. Sun, Q. Chen, J. Li, and C. Zuo, “Adaptive pixel-super-resolved lensfree in-line digital holography for wide-field on-chip microscopy,” Sci. Rep. 7(1), 11777 (2017). [CrossRef]   [PubMed]  

16. C. Fournier, F. Jolivet, L. Denis, N. Verrier, E. Thiebaut, C. Allier, and T. Fournel, “Pixel super-resolution in digital holography by regularized reconstruction,” Appl. Opt. 56(1), 69–77 (2017). [CrossRef]  

17. J. Zhang, Q. Chen, J. Li, J. Sun, and C. Zuo, “Lensfree dynamic super-resolved phase imaging based on active micro-scanning,” Opt. Lett. 43(15), 3714–3717 (2018). [CrossRef]   [PubMed]  

18. K. Nelson, A. Bhatti, and S. Nahavandi, “Performance Evaluation of Multi-Frame Super-Resolution Algorithms,” in 2012 International Conference on Digital Image Computing Techniques and Applications (DICTA) (IEEE, 2012), pp. 1–8.

19. J. Song, C. Leon Swisher, H. Im, S. Jeong, D. Pathania, Y. Iwamoto, M. Pivovarov, R. Weissleder, and H. Lee, “Sparsity-based pixel super resolution for lens-free digital in-line holography,” Sci. Rep. 6, 24681 (2016). [CrossRef]   [PubMed]  

20. W. T. Freeman, T. R. Jones, and E. C. Pasztor, “Example-based super-resolution,” IEEE Comput. Graph. Appl. 22(2), 56–65 (2002). [CrossRef]  

21. Y. Lu, M. Inamura, and M. del Carmen Valdes, “Super-resolution of the undersampled and subpixel shifted image sequence by a neural network,” Int. J. Imaging Syst. Technol. 14(1), 8–15 (2004). [CrossRef]  

22. C. Dong, C. C. Loy, K. He, and X. Tang, “Image super-resolution using deep convolutional networks,” IEEE Trans. Pattern Anal. Mach. Intell. 38(2), 295–307 (2016). [CrossRef]   [PubMed]  

23. J. Kim, J.K. Lee, and K.M. Lee, “Accurate image super-resolution using very deep convolutional networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2016), pp. 1646–1654. [CrossRef]  

24. B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee, “Enhanced deep residual networks for single image super-resolution,” in The IEEE conference on computer vision and pattern recognition (CVPR) workshops , vol. 1, p. 4 (2017).

25. C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. P. Aitken, A. Tejani, J. Totz, and Z. Wang et al., “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network,” in CVPR , 2(3), p. 4 (2017).

26. W. Yang, X. Zhang, Y. Tian, W. Wang, and J.-H. Xue, “Deep learning for single image super-resolution: A brief review,” arXiv 1808.03344 (2018).

27. Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4(11), 1437–1443 (2017). [CrossRef]  

28. Y. Rivenson, H. Ceylan Koydemir, H. Wang, Z. Wei, Z. Ren, H. Günaydın, Y. Zhang, Z. Gorocs, K. Liang, D. Tseng, and A. Ozcan, “Deep learning enhanced mobile-phone microscopy,” ACS Photonics 5(6), 2354–2364 (2018). [CrossRef]  

29. H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Gunaydin, L. Bentolila, and A. Ozcan, “Deep learning achieves super-resolution in fluorescence microscopy,” bioRxiv 309641 (2018).

30. T. Nguyen, Y. Xue, Y. Li, L. Tian, and G. Nehmetallah, “Deep learning approach for Fourier ptychography microscopy,” Opt. Express 26(20), 26470–26484 (2018). [CrossRef]   [PubMed]  

31. E. Nehme, L. E. Weiss, T. Michaeli, and Y. Shechtman, “Deep-STORM: super-resolution single-molecule microscopy by deep learning,” Optica 5(4), 458–464 (2018). [CrossRef]  

32. W. Ouyang, A. Aristov, M. Lelek, X. Hao, and C. Zimmer, “Deep learning massively accelerates super-resolution localization microscopy,” Nat. Biotechnol. 36(5), 460–468 (2018). [CrossRef]   [PubMed]  

33. S. Kazeminia, C. Baur, A. Kuijper, B. van Ginneken, N. Navab, S. Albarqouni, and A. Mukhopadhyay, “GANs for medical image analysis,” arXiv 1809.06222 (2018).

34. R. Eldan and O. Shamir, “The power of depth for feedforward neural networks,” in Conference on Learning Theory (2016), pp. 907–940.

35. S. Farsiu, M. D. Robinson, M. Elad, and P. Milanfar, “Fast and robust multiframe super resolution,” IEEE Trans. Image Process. 13(10), 1327–1344 (2004). [CrossRef]   [PubMed]  

36. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004). [CrossRef]   [PubMed]  

37. O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (2015), pp. 234–241. [CrossRef]  

38. F. Chollet, “Keras: Deep learning library for theano and tensorflow,” https://keras.io

39. M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, and M. Isard, “TensorFlow: A System for Large-Scale Machine Learning,” in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI, 2016), pp. 265–283.

40. D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv 1412.6980 (2014).

41. K. Matsushima and T. Shimobaba, “Band-limited angular spectrum method for numerical simulation of free-space propagation in far and near fields,” Opt. Express 17(22), 19662–19673 (2009). [CrossRef]   [PubMed]  

42. J. Rosen, N. Siegel, and G. Brooker, “Theoretical and experimental demonstration of resolution beyond the Rayleigh limit by FINCH fluorescence microscopic imaging,” Opt. Express 19(27), 26249–26268 (2011). [CrossRef]   [PubMed]  

43. E. McLeod and A. Ozcan, “Unconventional methods of imaging: computational microscopy and compact implementations,” Rep. Prog. Phys. 79(7), 076001 (2016). [CrossRef]   [PubMed]  

44. R. Stahl, G. Vanmeerbeeck, G. Lafruit, R. Huys, V. Reumers, A. Lambrechts, C.-K. Liao, C.-C. Hsiao, M. Yashiro, and M. Takemoto et al., “Lens-free digital in-line holographic imaging for wide field-of-view, high-resolution and real-time monitoring of complex microscopic objects,” Proc. SPIE 8947, 89471F (2014).

45. M. Guizar-Sicairos, S. T. Thurman, and J. R. Fienup, “Efficient subpixel image registration algorithms,” Opt. Lett. 33(2), 156–158 (2008). [CrossRef]   [PubMed]  

46. Y. Huang, W. Wang, and L. Wang, “Bidirectional recurrent convolutional networks for multi-frame super-resolution,” Adv. Neural Inf. Process. Syst. 28, 235–243 (2015).

47. W. Luo, A. Greenbaum, Y. Zhang, and A. Ozcan, “Synthetic aperture-based on-chip microscopy,” Light Sci. Appl. 4(3), e261 (2015). [CrossRef]  

48. K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising,” IEEE Trans. Image Process. 26(7), 3142–3155 (2017). [CrossRef]   [PubMed]  

49. Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light Sci. Appl. 7(2), 17141 (2018). [CrossRef]  


Ceylan Koydemir, H.

Y. Rivenson, H. Ceylan Koydemir, H. Wang, Z. Wei, Z. Ren, H. Günaydın, Y. Zhang, Z. Gorocs, K. Liang, D. Tseng, and A. Ozcan, “Deep learning enhanced mobile-phone microscopy,” ACS Photonics 5(6), 2354–2364 (2018).
[Crossref]

Chen, J.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, and M. Isard, “TensorFlow: A System for Large-Scale Machine Learning,” in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI, 2016), pp. 265–283.

Chen, Q.

J. Zhang, Q. Chen, J. Li, J. Sun, and C. Zuo, “Lensfree dynamic super-resolved phase imaging based on active micro-scanning,” Opt. Lett. 43(15), 3714–3717 (2018).
[Crossref] [PubMed]

J. Zhang, J. Sun, Q. Chen, J. Li, and C. Zuo, “Adaptive pixel-super-resolved lensfree in-line digital holography for wide-field on-chip microscopy,” Sci. Rep. 7(1), 11777 (2017).
[Crossref] [PubMed]

Chen, Y.

K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising,” IEEE Trans. Image Process. 26(7), 3142–3155 (2017).
[Crossref] [PubMed]

Chen, Z.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, and M. Isard, “TensorFlow: A System for Large-Scale Machine Learning,” in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI, 2016), pp. 265–283.

Cioni, O.

C. Allier, S. Morel, R. Vincent, L. Ghenim, F. Navarro, M. Menneteau, T. Bordy, L. Hervé, O. Cioni, X. Gidrol, Y. Usson, and J. M. Dinten, “Imaging of dense cell cultures by multiwavelength lens-free video microscopy,” Cytometry A 91(5), 433–442 (2017).
[Crossref] [PubMed]

Coskun, A. F.

A. Greenbaum, W. Luo, B. Khademhosseinieh, T.-W. Su, A. F. Coskun, and A. Ozcan, “Increased space-bandwidth product in pixel super-resolved lensfree on-chip microscopy,” Sci. Rep. 3(1), 1717 (2013).
[Crossref]

W. Bishara, T.-W. Su, A. F. Coskun, and A. Ozcan, “Lensfree on-chip microscopy over a wide field-of-view using pixel super-resolution,” Opt. Express 18(11), 11181–11191 (2010).
[Crossref] [PubMed]

Cunningham, A.

C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. P. Aitken, A. Tejani, J. Totz, Z. Wang, and et al., “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network,” in CVPR,  2(3), p. 4 (2017).

Dandekar, S.

G. Stybayeva, O. Mudanyali, S. Seo, J. Silangcruz, M. Macal, E. Ramanculov, S. Dandekar, A. Erlinger, A. Ozcan, and A. Revzin, “Lensfree holographic imaging of antibody microarrays for high-throughput detection of leukocyte numbers and function,” Anal. Chem. 82(9), 3736–3744 (2010).
[Crossref] [PubMed]

Davis, A.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, and M. Isard, “TensorFlow: A System for Large-Scale Machine Learning,” in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI, 2016), pp. 265–283.

de Wijs, K.

L. Lagae, D. Vercruysse, A. Dusa, C. Liu, K. de Wijs, R. Stahl, G. Vanmeerbeeck, B. Majeed, Y. Li, and P. Peumans, “High throughput cell sorter based on lensfree imaging of cells,” in Proceedings of IEEE International Electron Devices Meeting (IEEE, 2015), pp. 333–336.
[Crossref]

Dean, J.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, and M. Isard, “TensorFlow: A System for Large-Scale Machine Learning,” in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI, 2016), pp. 265–283.

del Carmen Valdes, M.

Y. Lu, M. Inamura, and M. del Carmen Valdes, “Super-resolution of the undersampled and subpixel shifted image sequence by a neural network,” Int. J. Imaging Syst. Technol. 14(1), 8–15 (2004).
[Crossref]

Demirci, U.

A. Ozcan and U. Demirci, “Ultra wide-field lens-free monitoring of cells on-chip,” Lab Chip 8(1), 98–106 (2008).
[Crossref] [PubMed]

Denis, L.

Devin, M.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, and M. Isard, “TensorFlow: A System for Large-Scale Machine Learning,” in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI, 2016), pp. 265–283.

Dinten, J. M.

C. Allier, S. Morel, R. Vincent, L. Ghenim, F. Navarro, M. Menneteau, T. Bordy, L. Hervé, O. Cioni, X. Gidrol, Y. Usson, and J. M. Dinten, “Imaging of dense cell cultures by multiwavelength lens-free video microscopy,” Cytometry A 91(5), 433–442 (2017).
[Crossref] [PubMed]

Dong, C.

C. Dong, C. C. Loy, K. He, and X. Tang, “Image super-resolution using deep convolutional networks,” IEEE Trans. Pattern Anal. Mach. Intell. 38(2), 295–307 (2016).
[Crossref] [PubMed]

Dusa, A.

L. Lagae, D. Vercruysse, A. Dusa, C. Liu, K. de Wijs, R. Stahl, G. Vanmeerbeeck, B. Majeed, Y. Li, and P. Peumans, “High throughput cell sorter based on lensfree imaging of cells,” in Proceedings of IEEE International Electron Devices Meeting (IEEE, 2015), pp. 333–336.
[Crossref]

Ehrentreich-Förster, E.

S. Schumacher, J. Nestler, T. Otto, M. Wegener, E. Ehrentreich-Förster, D. Michel, K. Wunderlich, S. Palzer, K. Sohn, A. Weber, M. Burgard, A. Grzesiak, A. Teichert, A. Brandenburg, B. Koger, J. Albers, E. Nebling, and F. F. Bier, “Highly-integrated lab-on-chip system for point-of-care multiparameter analysis,” Lab Chip 12(3), 464–473 (2012).
[Crossref] [PubMed]

Elad, M.

S. Farsiu, M. D. Robinson, M. Elad, and P. Milanfar, “Fast and robust multiframe super resolution,” IEEE Trans. Image Process. 13(10), 1327–1344 (2004).
[Crossref] [PubMed]

Eldan, R.

R. Eldan and O. Shamir, “The power of depth for feedforward neural networks,” in Conference on Learning Theory (2016), pp. 907–940.

Erlinger, A.

G. Stybayeva, O. Mudanyali, S. Seo, J. Silangcruz, M. Macal, E. Ramanculov, S. Dandekar, A. Erlinger, A. Ozcan, and A. Revzin, “Lensfree holographic imaging of antibody microarrays for high-throughput detection of leukocyte numbers and function,” Anal. Chem. 82(9), 3736–3744 (2010).
[Crossref] [PubMed]

S. Seo, T.-W. Su, D. K. Tseng, A. Erlinger, and A. Ozcan, “Lensfree holographic imaging for on-chip cytometry and diagnostics,” Lab Chip 9(6), 777–787 (2009).
[Crossref] [PubMed]

Farsiu, S.

S. Farsiu, M. D. Robinson, M. Elad, and P. Milanfar, “Fast and robust multiframe super resolution,” IEEE Trans. Image Process. 13(10), 1327–1344 (2004).
[Crossref] [PubMed]

Feizi, A.

W. Luo, Y. Zhang, A. Feizi, Z. Göröcs, and A. Ozcan, “Pixel super-resolution using wavelength scanning,” Light Sci. Appl. 5(4), e16060 (2016).
[Crossref] [PubMed]

W. Luo, Y. Zhang, Z. Göröcs, A. Feizi, and A. Ozcan, “Propagation phasor approach for holographic image reconstruction,” Sci. Rep. 6(1), 22738 (2016).
[Crossref] [PubMed]

Fienup, J. R.

Fischer, P.

O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (2015), pp. 234–241.
[Crossref]

Fournel, T.

Fournier, C.

Freeman, W. T.

W. T. Freeman, T. R. Jones, and E. C. Pasztor, “Example-based super-resolution,” IEEE Comput. Graph. Appl. 22(2), 56–65 (2002).
[Crossref]

Gabor, D.

D. Gabor, “A new microscopic principle,” Nature 161(4098), 777–778 (1948).
[Crossref] [PubMed]

Gao, R.

H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Gunaydin, L. Bentolila, and A. Ozcan, “Deep learning achieves super-resolution in fluorescence microscopy,” bioRxiv309641 (2018).

Garcia-Sucerquia, J.

Ghemawat, S.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, and M. Isard, “TensorFlow: A System for Large-Scale Machine Learning,” in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI, 2016), pp. 265–283.

Ghenim, L.

C. Allier, S. Morel, R. Vincent, L. Ghenim, F. Navarro, M. Menneteau, T. Bordy, L. Hervé, O. Cioni, X. Gidrol, Y. Usson, and J. M. Dinten, “Imaging of dense cell cultures by multiwavelength lens-free video microscopy,” Cytometry A 91(5), 433–442 (2017).
[Crossref] [PubMed]

Gidrol, X.

C. Allier, S. Morel, R. Vincent, L. Ghenim, F. Navarro, M. Menneteau, T. Bordy, L. Hervé, O. Cioni, X. Gidrol, Y. Usson, and J. M. Dinten, “Imaging of dense cell cultures by multiwavelength lens-free video microscopy,” Cytometry A 91(5), 433–442 (2017).
[Crossref] [PubMed]

Gorocs, Z.

Y. Rivenson, H. Ceylan Koydemir, H. Wang, Z. Wei, Z. Ren, H. Günaydın, Y. Zhang, Z. Gorocs, K. Liang, D. Tseng, and A. Ozcan, “Deep learning enhanced mobile-phone microscopy,” ACS Photonics 5(6), 2354–2364 (2018).
[Crossref]

Göröcs, Z.

Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4(11), 1437–1443 (2017).
[Crossref]

W. Luo, Y. Zhang, A. Feizi, Z. Göröcs, and A. Ozcan, “Pixel super-resolution using wavelength scanning,” Light Sci. Appl. 5(4), e16060 (2016).
[Crossref] [PubMed]

W. Luo, Y. Zhang, Z. Göröcs, A. Feizi, and A. Ozcan, “Propagation phasor approach for holographic image reconstruction,” Sci. Rep. 6(1), 22738 (2016).
[Crossref] [PubMed]

Greenbaum, A.

W. Luo, A. Greenbaum, Y. Zhang, and A. Ozcan, “Synthetic aperture-based on-chip microscopy,” Light Sci. Appl. 4(3), e261 (2015).
[Crossref]

A. Greenbaum, W. Luo, B. Khademhosseinieh, T.-W. Su, A. F. Coskun, and A. Ozcan, “Increased space-bandwidth product in pixel super-resolved lensfree on-chip microscopy,” Sci. Rep. 3(1), 1717 (2013).
[Crossref]

Grzesiak, A.

S. Schumacher, J. Nestler, T. Otto, M. Wegener, E. Ehrentreich-Förster, D. Michel, K. Wunderlich, S. Palzer, K. Sohn, A. Weber, M. Burgard, A. Grzesiak, A. Teichert, A. Brandenburg, B. Koger, J. Albers, E. Nebling, and F. F. Bier, “Highly-integrated lab-on-chip system for point-of-care multiparameter analysis,” Lab Chip 12(3), 464–473 (2012).
[Crossref] [PubMed]

Guizar-Sicairos, M.

Gunaydin, H.

H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Gunaydin, L. Bentolila, and A. Ozcan, “Deep learning achieves super-resolution in fluorescence microscopy,” bioRxiv309641 (2018).

Günaydin, H.

Y. Rivenson, H. Ceylan Koydemir, H. Wang, Z. Wei, Z. Ren, H. Günaydın, Y. Zhang, Z. Gorocs, K. Liang, D. Tseng, and A. Ozcan, “Deep learning enhanced mobile-phone microscopy,” ACS Photonics 5(6), 2354–2364 (2018).
[Crossref]

Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light Sci. Appl. 7(2), 17141 (2018).
[Crossref]

Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4(11), 1437–1443 (2017).
[Crossref]

Hao, X.

W. Ouyang, A. Aristov, M. Lelek, X. Hao, and C. Zimmer, “Deep learning massively accelerates super-resolution localization microscopy,” Nat. Biotechnol. 36(5), 460–468 (2018).
[Crossref] [PubMed]

He, K.

C. Dong, C. C. Loy, K. He, and X. Tang, “Image super-resolution using deep convolutional networks,” IEEE Trans. Pattern Anal. Mach. Intell. 38(2), 295–307 (2016).
[Crossref] [PubMed]

Hervé, L.

C. Allier, S. Morel, R. Vincent, L. Ghenim, F. Navarro, M. Menneteau, T. Bordy, L. Hervé, O. Cioni, X. Gidrol, Y. Usson, and J. M. Dinten, “Imaging of dense cell cultures by multiwavelength lens-free video microscopy,” Cytometry A 91(5), 433–442 (2017).
[Crossref] [PubMed]

Hsiao, C.-C.

R. Stahl, G. Vanmeerbeeck, G. Lafruit, R. Huys, V. Reumers, A. Lambrechts, C.-K. Liao, C.-C. Hsiao, M. Yashiro, M. Takemoto, and et al., “Lens-free digital in-line holographic imaging for wide field-of-view, high-resolution and real-time monitoring of complex microscopic objects,” Proc. SPIE 8947, 89471F (2014).

Huang, Y.

Y. Huang, W. Wang, and L. Wang, “Bidirectional recurrent convolutional networks for multi-frame super-resolution,” Adv. Neural Inf. Process. Syst. 28235–243 (2015).

Huszár, F.

C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. P. Aitken, A. Tejani, J. Totz, Z. Wang, and et al., “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network,” in CVPR,  2(3), p. 4 (2017).

Huys, R.

R. Stahl, G. Vanmeerbeeck, G. Lafruit, R. Huys, V. Reumers, A. Lambrechts, C.-K. Liao, C.-C. Hsiao, M. Yashiro, M. Takemoto, and et al., “Lens-free digital in-line holographic imaging for wide field-of-view, high-resolution and real-time monitoring of complex microscopic objects,” Proc. SPIE 8947, 89471F (2014).

Im, H.

J. Song, C. Leon Swisher, H. Im, S. Jeong, D. Pathania, Y. Iwamoto, M. Pivovarov, R. Weissleder, and H. Lee, “Sparsity-based pixel super resolution for lens-free digital in-line holography,” Sci. Rep. 6, 24681 (2016).
[Crossref] [PubMed]

Inamura, M.

Y. Lu, M. Inamura, and M. del Carmen Valdes, “Super-resolution of the undersampled and subpixel shifted image sequence by a neural network,” Int. J. Imaging Syst. Technol. 14(1), 8–15 (2004).
[Crossref]

Irving, G.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, and M. Isard, “TensorFlow: A System for Large-Scale Machine Learning,” in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI, 2016), pp. 265–283.

Isard, M.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, and M. Isard, “TensorFlow: A System for Large-Scale Machine Learning,” in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI, 2016), pp. 265–283.

Isikman, S. O.

O. Mudanyali, D. Tseng, C. Oh, S. O. Isikman, I. Sencan, W. Bishara, C. Oztoprak, S. Seo, B. Khademhosseini, and A. Ozcan, “Compact, light-weight and cost-effective microscope based on lensless incoherent holography for telemedicine applications,” Lab Chip 10(11), 1417–1428 (2010).
[Crossref] [PubMed]

Iwamoto, Y.

J. Song, C. Leon Swisher, H. Im, S. Jeong, D. Pathania, Y. Iwamoto, M. Pivovarov, R. Weissleder, and H. Lee, “Sparsity-based pixel super resolution for lens-free digital in-line holography,” Sci. Rep. 6, 24681 (2016).
[Crossref] [PubMed]

Jeong, S.

J. Song, C. Leon Swisher, H. Im, S. Jeong, D. Pathania, Y. Iwamoto, M. Pivovarov, R. Weissleder, and H. Lee, “Sparsity-based pixel super resolution for lens-free digital in-line holography,” Sci. Rep. 6, 24681 (2016).
[Crossref] [PubMed]

Jericho, M. H.

Jericho, S. K.

Jin, Y.

H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Gunaydin, L. Bentolila, and A. Ozcan, “Deep learning achieves super-resolution in fluorescence microscopy,” bioRxiv309641 (2018).

Jolivet, F.

Jones, T. R.

W. T. Freeman, T. R. Jones, and E. C. Pasztor, “Example-based super-resolution,” IEEE Comput. Graph. Appl. 22(2), 56–65 (2002).
[Crossref]

Khademhosseini, B.

O. Mudanyali, D. Tseng, C. Oh, S. O. Isikman, I. Sencan, W. Bishara, C. Oztoprak, S. Seo, B. Khademhosseini, and A. Ozcan, “Compact, light-weight and cost-effective microscope based on lensless incoherent holography for telemedicine applications,” Lab Chip 10(11), 1417–1428 (2010).
[Crossref] [PubMed]

Khademhosseinieh, B.

A. Greenbaum, W. Luo, B. Khademhosseinieh, T.-W. Su, A. F. Coskun, and A. Ozcan, “Increased space-bandwidth product in pixel super-resolved lensfree on-chip microscopy,” Sci. Rep. 3(1), 1717 (2013).
[Crossref]

Kim, H.

B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee, “Enhanced deep residual networks for single image super-resolution,” in The IEEE conference on computer vision and pattern recognition (CVPR) workshops, vol.  1, p. 4 (2017).

Kim, J.

J. Kim, J.K. Lee, and K.M. Lee, “Accurate image super-resolution using very deep convolutional networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2016), pp. 1646–1654.
[Crossref]

Klages, P.

Koger, B.

S. Schumacher, J. Nestler, T. Otto, M. Wegener, E. Ehrentreich-Förster, D. Michel, K. Wunderlich, S. Palzer, K. Sohn, A. Weber, M. Burgard, A. Grzesiak, A. Teichert, A. Brandenburg, B. Koger, J. Albers, E. Nebling, and F. F. Bier, “Highly-integrated lab-on-chip system for point-of-care multiparameter analysis,” Lab Chip 12(3), 464–473 (2012).
[Crossref] [PubMed]

Kreuzer, H. J.

Lafruit, G.

R. Stahl, G. Vanmeerbeeck, G. Lafruit, R. Huys, V. Reumers, A. Lambrechts, C.-K. Liao, C.-C. Hsiao, M. Yashiro, M. Takemoto, and et al., “Lens-free digital in-line holographic imaging for wide field-of-view, high-resolution and real-time monitoring of complex microscopic objects,” Proc. SPIE 8947, 89471F (2014).

Lagae, L.

L. Lagae, D. Vercruysse, A. Dusa, C. Liu, K. de Wijs, R. Stahl, G. Vanmeerbeeck, B. Majeed, Y. Li, and P. Peumans, “High throughput cell sorter based on lensfree imaging of cells,” in Proceedings of IEEE International Electron Devices Meeting (IEEE, 2015), pp. 333–336.
[Crossref]

Lambrechts, A.

R. Stahl, G. Vanmeerbeeck, G. Lafruit, R. Huys, V. Reumers, A. Lambrechts, C.-K. Liao, C.-C. Hsiao, M. Yashiro, M. Takemoto, and et al., “Lens-free digital in-line holographic imaging for wide field-of-view, high-resolution and real-time monitoring of complex microscopic objects,” Proc. SPIE 8947, 89471F (2014).

Ledig, C.

C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. P. Aitken, A. Tejani, J. Totz, Z. Wang, and et al., “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network,” in CVPR,  2(3), p. 4 (2017).

Lee, H.

J. Song, C. Leon Swisher, H. Im, S. Jeong, D. Pathania, Y. Iwamoto, M. Pivovarov, R. Weissleder, and H. Lee, “Sparsity-based pixel super resolution for lens-free digital in-line holography,” Sci. Rep. 6, 24681 (2016).
[Crossref] [PubMed]

Lee, J.K.

J. Kim, J.K. Lee, and K.M. Lee, “Accurate image super-resolution using very deep convolutional networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2016), pp. 1646–1654.
[Crossref]

Lee, K. M.

B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee, “Enhanced deep residual networks for single image super-resolution,” in The IEEE conference on computer vision and pattern recognition (CVPR) workshops, vol.  1, p. 4 (2017).

Lee, K.M.

J. Kim, J.K. Lee, and K.M. Lee, “Accurate image super-resolution using very deep convolutional networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2016), pp. 1646–1654.
[Crossref]

Lelek, M.

W. Ouyang, A. Aristov, M. Lelek, X. Hao, and C. Zimmer, “Deep learning massively accelerates super-resolution localization microscopy,” Nat. Biotechnol. 36(5), 460–468 (2018).
[Crossref] [PubMed]

Leon Swisher, C.

J. Song, C. Leon Swisher, H. Im, S. Jeong, D. Pathania, Y. Iwamoto, M. Pivovarov, R. Weissleder, and H. Lee, “Sparsity-based pixel super resolution for lens-free digital in-line holography,” Sci. Rep. 6, 24681 (2016).
[Crossref] [PubMed]

Li, J.

J. Zhang, Q. Chen, J. Li, J. Sun, and C. Zuo, “Lensfree dynamic super-resolved phase imaging based on active micro-scanning,” Opt. Lett. 43(15), 3714–3717 (2018).
[Crossref] [PubMed]

J. Zhang, J. Sun, Q. Chen, J. Li, and C. Zuo, “Adaptive pixel-super-resolved lensfree in-line digital holography for wide-field on-chip microscopy,” Sci. Rep. 7(1), 11777 (2017).
[Crossref] [PubMed]

Li, Y.

T. Nguyen, Y. Xue, Y. Li, L. Tian, and G. Nehmetallah, “Deep learning approach for Fourier ptychography microscopy,” Opt. Express 26(20), 26470–26484 (2018).
[Crossref] [PubMed]

L. Lagae, D. Vercruysse, A. Dusa, C. Liu, K. de Wijs, R. Stahl, G. Vanmeerbeeck, B. Majeed, Y. Li, and P. Peumans, “High throughput cell sorter based on lensfree imaging of cells,” in Proceedings of IEEE International Electron Devices Meeting (IEEE, 2015), pp. 333–336.
[Crossref]

Liang, K.

Y. Rivenson, H. Ceylan Koydemir, H. Wang, Z. Wei, Z. Ren, H. Günaydın, Y. Zhang, Z. Gorocs, K. Liang, D. Tseng, and A. Ozcan, “Deep learning enhanced mobile-phone microscopy,” ACS Photonics 5(6), 2354–2364 (2018).
[Crossref]

Liao, C.-K.

R. Stahl, G. Vanmeerbeeck, G. Lafruit, R. Huys, V. Reumers, A. Lambrechts, C.-K. Liao, C.-C. Hsiao, M. Yashiro, M. Takemoto, and et al., “Lens-free digital in-line holographic imaging for wide field-of-view, high-resolution and real-time monitoring of complex microscopic objects,” Proc. SPIE 8947, 89471F (2014).

Lim, B.

B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee, “Enhanced deep residual networks for single image super-resolution,” in The IEEE conference on computer vision and pattern recognition (CVPR) workshops, vol.  1, p. 4 (2017).

Liu, C.

L. Lagae, D. Vercruysse, A. Dusa, C. Liu, K. de Wijs, R. Stahl, G. Vanmeerbeeck, B. Majeed, Y. Li, and P. Peumans, “High throughput cell sorter based on lensfree imaging of cells,” in Proceedings of IEEE International Electron Devices Meeting (IEEE, 2015), pp. 333–336.
[Crossref]

Loy, C. C.

C. Dong, C. C. Loy, K. He, and X. Tang, “Image super-resolution using deep convolutional networks,” IEEE Trans. Pattern Anal. Mach. Intell. 38(2), 295–307 (2016).
[Crossref] [PubMed]

Lu, Y.

Y. Lu, M. Inamura, and M. del Carmen Valdes, “Super-resolution of the undersampled and subpixel shifted image sequence by a neural network,” Int. J. Imaging Syst. Technol. 14(1), 8–15 (2004).
[Crossref]

Luo, W.

W. Luo, Y. Zhang, A. Feizi, Z. Göröcs, and A. Ozcan, “Pixel super-resolution using wavelength scanning,” Light Sci. Appl. 5(4), e16060 (2016).
[Crossref] [PubMed]

W. Luo, Y. Zhang, Z. Göröcs, A. Feizi, and A. Ozcan, “Propagation phasor approach for holographic image reconstruction,” Sci. Rep. 6(1), 22738 (2016).
[Crossref] [PubMed]

W. Luo, A. Greenbaum, Y. Zhang, and A. Ozcan, “Synthetic aperture-based on-chip microscopy,” Light Sci. Appl. 4(3), e261 (2015).
[Crossref]

A. Greenbaum, W. Luo, B. Khademhosseinieh, T.-W. Su, A. F. Coskun, and A. Ozcan, “Increased space-bandwidth product in pixel super-resolved lensfree on-chip microscopy,” Sci. Rep. 3(1), 1717 (2013).
[Crossref]

Macal, M.

G. Stybayeva, O. Mudanyali, S. Seo, J. Silangcruz, M. Macal, E. Ramanculov, S. Dandekar, A. Erlinger, A. Ozcan, and A. Revzin, “Lensfree holographic imaging of antibody microarrays for high-throughput detection of leukocyte numbers and function,” Anal. Chem. 82(9), 3736–3744 (2010).
[Crossref] [PubMed]

Majeed, B.

L. Lagae, D. Vercruysse, A. Dusa, C. Liu, K. de Wijs, R. Stahl, G. Vanmeerbeeck, B. Majeed, Y. Li, and P. Peumans, “High throughput cell sorter based on lensfree imaging of cells,” in Proceedings of IEEE International Electron Devices Meeting (IEEE, 2015), pp. 333–336.
[Crossref]

Matsushima, K.

McLeod, E.

E. McLeod and A. Ozcan, “Unconventional methods of imaging: computational microscopy and compact implementations,” Rep. Prog. Phys. 79(7), 076001 (2016).
[Crossref] [PubMed]

Meng, D.

K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising,” IEEE Trans. Image Process. 26(7), 3142–3155 (2017).
[Crossref] [PubMed]

Menneteau, M.

C. Allier, S. Morel, R. Vincent, L. Ghenim, F. Navarro, M. Menneteau, T. Bordy, L. Hervé, O. Cioni, X. Gidrol, Y. Usson, and J. M. Dinten, “Imaging of dense cell cultures by multiwavelength lens-free video microscopy,” Cytometry A 91(5), 433–442 (2017).
[Crossref] [PubMed]

Michaeli, T.

Michel, D.

S. Schumacher, J. Nestler, T. Otto, M. Wegener, E. Ehrentreich-Förster, D. Michel, K. Wunderlich, S. Palzer, K. Sohn, A. Weber, M. Burgard, A. Grzesiak, A. Teichert, A. Brandenburg, B. Koger, J. Albers, E. Nebling, and F. F. Bier, “Highly-integrated lab-on-chip system for point-of-care multiparameter analysis,” Lab Chip 12(3), 464–473 (2012).
[Crossref] [PubMed]

Milanfar, P.

S. Farsiu, M. D. Robinson, M. Elad, and P. Milanfar, “Fast and robust multiframe super resolution,” IEEE Trans. Image Process. 13(10), 1327–1344 (2004).
[Crossref] [PubMed]

Morel, S.

C. Allier, S. Morel, R. Vincent, L. Ghenim, F. Navarro, M. Menneteau, T. Bordy, L. Hervé, O. Cioni, X. Gidrol, Y. Usson, and J. M. Dinten, “Imaging of dense cell cultures by multiwavelength lens-free video microscopy,” Cytometry A 91(5), 433–442 (2017).
[Crossref] [PubMed]

Mudanyali, O.

G. Stybayeva, O. Mudanyali, S. Seo, J. Silangcruz, M. Macal, E. Ramanculov, S. Dandekar, A. Erlinger, A. Ozcan, and A. Revzin, “Lensfree holographic imaging of antibody microarrays for high-throughput detection of leukocyte numbers and function,” Anal. Chem. 82(9), 3736–3744 (2010).
[Crossref] [PubMed]

O. Mudanyali, D. Tseng, C. Oh, S. O. Isikman, I. Sencan, W. Bishara, C. Oztoprak, S. Seo, B. Khademhosseini, and A. Ozcan, “Compact, light-weight and cost-effective microscope based on lensless incoherent holography for telemedicine applications,” Lab Chip 10(11), 1417–1428 (2010).
[Crossref] [PubMed]

Nah, S.

B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee, “Enhanced deep residual networks for single image super-resolution,” in The IEEE conference on computer vision and pattern recognition (CVPR) workshops, vol.  1, p. 4 (2017).

Nahavandi, S.

K. Nelson, A. Bhatti, and S. Nahavandi, “Performance Evaluation of Multi-Frame Super-Resolution Algorithms,” in 2012 International Conference on Digital Image Computing Techniques and Applications (DICTA) (IEEE, 2012), pp. 1–8.

Navarro, F.

C. Allier, S. Morel, R. Vincent, L. Ghenim, F. Navarro, M. Menneteau, T. Bordy, L. Hervé, O. Cioni, X. Gidrol, Y. Usson, and J. M. Dinten, “Imaging of dense cell cultures by multiwavelength lens-free video microscopy,” Cytometry A 91(5), 433–442 (2017).
[Crossref] [PubMed]

Nebling, E.

S. Schumacher, J. Nestler, T. Otto, M. Wegener, E. Ehrentreich-Förster, D. Michel, K. Wunderlich, S. Palzer, K. Sohn, A. Weber, M. Burgard, A. Grzesiak, A. Teichert, A. Brandenburg, B. Koger, J. Albers, E. Nebling, and F. F. Bier, “Highly-integrated lab-on-chip system for point-of-care multiparameter analysis,” Lab Chip 12(3), 464–473 (2012).
[Crossref] [PubMed]

Nehme, E.

Nehmetallah, G.

Nelson, K.

K. Nelson, A. Bhatti, and S. Nahavandi, “Performance Evaluation of Multi-Frame Super-Resolution Algorithms,” in 2012 International Conference on Digital Image Computing Techniques and Applications (DICTA) (IEEE, 2012), pp. 1–8.

Nestler, J.

S. Schumacher, J. Nestler, T. Otto, M. Wegener, E. Ehrentreich-Förster, D. Michel, K. Wunderlich, S. Palzer, K. Sohn, A. Weber, M. Burgard, A. Grzesiak, A. Teichert, A. Brandenburg, B. Koger, J. Albers, E. Nebling, and F. F. Bier, “Highly-integrated lab-on-chip system for point-of-care multiparameter analysis,” Lab Chip 12(3), 464–473 (2012).
[Crossref] [PubMed]

Nguyen, T.

Oh, C.

O. Mudanyali, D. Tseng, C. Oh, S. O. Isikman, I. Sencan, W. Bishara, C. Oztoprak, S. Seo, B. Khademhosseini, and A. Ozcan, “Compact, light-weight and cost-effective microscope based on lensless incoherent holography for telemedicine applications,” Lab Chip 10(11), 1417–1428 (2010).
[Crossref] [PubMed]

Otto, T.

S. Schumacher, J. Nestler, T. Otto, M. Wegener, E. Ehrentreich-Förster, D. Michel, K. Wunderlich, S. Palzer, K. Sohn, A. Weber, M. Burgard, A. Grzesiak, A. Teichert, A. Brandenburg, B. Koger, J. Albers, E. Nebling, and F. F. Bier, “Highly-integrated lab-on-chip system for point-of-care multiparameter analysis,” Lab Chip 12(3), 464–473 (2012).
[Crossref] [PubMed]

Ouyang, W.

W. Ouyang, A. Aristov, M. Lelek, X. Hao, and C. Zimmer, “Deep learning massively accelerates super-resolution localization microscopy,” Nat. Biotechnol. 36(5), 460–468 (2018).
[Crossref] [PubMed]

Ozcan, A.

Y. Rivenson, H. Ceylan Koydemir, H. Wang, Z. Wei, Z. Ren, H. Günaydın, Y. Zhang, Z. Gorocs, K. Liang, D. Tseng, and A. Ozcan, “Deep learning enhanced mobile-phone microscopy,” ACS Photonics 5(6), 2354–2364 (2018).
[Crossref]

Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light Sci. Appl. 7(2), 17141 (2018).
[Crossref]

Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4(11), 1437–1443 (2017).

E. McLeod and A. Ozcan, “Unconventional methods of imaging: computational microscopy and compact implementations,” Rep. Prog. Phys. 79(7), 076001 (2016).

W. Luo, Y. Zhang, Z. Göröcs, A. Feizi, and A. Ozcan, “Propagation phasor approach for holographic image reconstruction,” Sci. Rep. 6(1), 22738 (2016).

W. Luo, Y. Zhang, A. Feizi, Z. Göröcs, and A. Ozcan, “Pixel super-resolution using wavelength scanning,” Light Sci. Appl. 5(4), e16060 (2016).

W. Luo, A. Greenbaum, Y. Zhang, and A. Ozcan, “Synthetic aperture-based on-chip microscopy,” Light Sci. Appl. 4(3), e261 (2015).

A. Greenbaum, W. Luo, B. Khademhosseinieh, T.-W. Su, A. F. Coskun, and A. Ozcan, “Increased space-bandwidth product in pixel super-resolved lensfree on-chip microscopy,” Sci. Rep. 3(1), 1717 (2013).

W. Bishara, T.-W. Su, A. F. Coskun, and A. Ozcan, “Lensfree on-chip microscopy over a wide field-of-view using pixel super-resolution,” Opt. Express 18(11), 11181–11191 (2010).

O. Mudanyali, D. Tseng, C. Oh, S. O. Isikman, I. Sencan, W. Bishara, C. Oztoprak, S. Seo, B. Khademhosseini, and A. Ozcan, “Compact, light-weight and cost-effective microscope based on lensless incoherent holography for telemedicine applications,” Lab Chip 10(11), 1417–1428 (2010).

W. Bishara, H. Zhu, and A. Ozcan, “Holographic opto-fluidic microscopy,” Opt. Express 18(26), 27499–27510 (2010).

G. Stybayeva, O. Mudanyali, S. Seo, J. Silangcruz, M. Macal, E. Ramanculov, S. Dandekar, A. Erlinger, A. Ozcan, and A. Revzin, “Lensfree holographic imaging of antibody microarrays for high-throughput detection of leukocyte numbers and function,” Anal. Chem. 82(9), 3736–3744 (2010).

S. Seo, T.-W. Su, D. K. Tseng, A. Erlinger, and A. Ozcan, “Lensfree holographic imaging for on-chip cytometry and diagnostics,” Lab Chip 9(6), 777–787 (2009).

A. Ozcan and U. Demirci, “Ultra wide-field lens-free monitoring of cells on-chip,” Lab Chip 8(1), 98–106 (2008).

H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Gunaydin, L. Bentolila, and A. Ozcan, “Deep learning achieves super-resolution in fluorescence microscopy,” bioRxiv 309641 (2018).

S. Schumacher, J. Nestler, T. Otto, M. Wegener, E. Ehrentreich-Förster, D. Michel, K. Wunderlich, S. Palzer, K. Sohn, A. Weber, M. Burgard, A. Grzesiak, A. Teichert, A. Brandenburg, B. Koger, J. Albers, E. Nebling, and F. F. Bier, “Highly-integrated lab-on-chip system for point-of-care multiparameter analysis,” Lab Chip 12(3), 464–473 (2012).

W. T. Freeman, T. R. Jones, and E. C. Pasztor, “Example-based super-resolution,” IEEE Comput. Graph. Appl. 22(2), 56–65 (2002).

J. Song, C. Leon Swisher, H. Im, S. Jeong, D. Pathania, Y. Iwamoto, M. Pivovarov, R. Weissleder, and H. Lee, “Sparsity-based pixel super resolution for lens-free digital in-line holography,” Sci. Rep. 6, 24681 (2016).

L. Lagae, D. Vercruysse, A. Dusa, C. Liu, K. de Wijs, R. Stahl, G. Vanmeerbeeck, B. Majeed, Y. Li, and P. Peumans, “High throughput cell sorter based on lensfree imaging of cells,” in Proceedings of IEEE International Electron Devices Meeting (IEEE, 2015), pp. 333–336.

Y. Rivenson, H. Ceylan Koydemir, H. Wang, Z. Wei, Z. Ren, H. Günaydın, Y. Zhang, Z. Gorocs, K. Liang, D. Tseng, and A. Ozcan, “Deep learning enhanced mobile-phone microscopy,” ACS Photonics 5(6), 2354–2364 (2018).

R. Stahl, G. Vanmeerbeeck, G. Lafruit, R. Huys, V. Reumers, A. Lambrechts, C.-K. Liao, C.-C. Hsiao, M. Yashiro, M. Takemoto, et al., “Lens-free digital in-line holographic imaging for wide field-of-view, high-resolution and real-time monitoring of complex microscopic objects,” Proc. SPIE 8947, 89471F (2014).

Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light Sci. Appl. 7(2), 17141 (2018).

S. Farsiu, M. D. Robinson, M. Elad, and P. Milanfar, “Fast and robust multiframe super resolution,” IEEE Trans. Image Process. 13(10), 1327–1344 (2004).

O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (2015), pp. 234–241.

R. Eldan and O. Shamir, “The power of depth for feedforward neural networks,” in Conference on Learning Theory (2016), pp. 907–940.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004).

B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee, “Enhanced deep residual networks for single image super-resolution,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops (2017).

J. Zhang, Q. Chen, J. Li, J. Sun, and C. Zuo, “Lensfree dynamic super-resolved phase imaging based on active micro-scanning,” Opt. Lett. 43(15), 3714–3717 (2018).

J. Zhang, J. Sun, Q. Chen, J. Li, and C. Zuo, “Adaptive pixel-super-resolved lensfree in-line digital holography for wide-field on-chip microscopy,” Sci. Rep. 7(1), 11777 (2017).

C. Dong, C. C. Loy, K. He, and X. Tang, “Image super-resolution using deep convolutional networks,” IEEE Trans. Pattern Anal. Mach. Intell. 38(2), 295–307 (2016).

C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. P. Aitken, A. Tejani, J. Totz, Z. Wang, et al., “Photo-realistic single image super-resolution using a generative adversarial network,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017).

C. Allier, S. Morel, R. Vincent, L. Ghenim, F. Navarro, M. Menneteau, T. Bordy, L. Hervé, O. Cioni, X. Gidrol, Y. Usson, and J. M. Dinten, “Imaging of dense cell cultures by multiwavelength lens-free video microscopy,” Cytometry A 91(5), 433–442 (2017).

Y. Huang, W. Wang, and L. Wang, “Bidirectional recurrent convolutional networks for multi-frame super-resolution,” Adv. Neural Inf. Process. Syst. 28, 235–243 (2015).

K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising,” IEEE Trans. Image Process. 26(7), 3142–3155 (2017).

W. Ouyang, A. Aristov, M. Lelek, X. Hao, and C. Zimmer, “Deep learning massively accelerates super-resolution localization microscopy,” Nat. Biotechnol. 36(5), 460–468 (2018).

Y. Lu, M. Inamura, and M. del Carmen Valdes, “Super-resolution of the undersampled and subpixel shifted image sequence by a neural network,” Int. J. Imaging Syst. Technol. 14(1), 8–15 (2004).

D. Gabor, “A new microscopic principle,” Nature 161(4098), 777–778 (1948).

K. Nelson, A. Bhatti, and S. Nahavandi, “Performance evaluation of multi-frame super-resolution algorithms,” in 2012 International Conference on Digital Image Computing Techniques and Applications (DICTA) (IEEE, 2012), pp. 1–8.

W. Yang, X. Zhang, Y. Tian, W. Wang, and J.-H. Xue, “Deep learning for single image super-resolution: a brief review,” arXiv 1808.03344 (2018).

J. Kim, J. K. Lee, and K. M. Lee, “Accurate image super-resolution using very deep convolutional networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2016), pp. 1646–1654.

S. Kazeminia, C. Baur, A. Kuijper, B. van Ginneken, N. Navab, S. Albarqouni, and A. Mukhopadhyay, “GANs for medical image analysis,” arXiv 1809.06222 (2018).

F. Chollet, “Keras: Deep learning library for Theano and TensorFlow,” https://keras.io

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, and M. Isard, “TensorFlow: a system for large-scale machine learning,” in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI, 2016), pp. 265–283.

D. P. Kingma and J. Ba, “Adam: a method for stochastic optimization,” arXiv 1412.6980 (2014).



Figures (10)

Fig. 1
Fig. 1 Overview of the deep learning-based pixel super-resolution approach. The CNN is trained to take a stack of sub-pixel-shifted LR holograms as input and produce the corresponding HR hologram as output. The training data set is generated from the single-frame hologram captured by the LFHM, which itself serves as the “HR ground truth”. The corresponding LR image sequence is synthesized by digitally shifting and down-sampling this hologram.
Fig. 2
Fig. 2 Image acquisition of LR holograms with sub-pixel shifts. (a) Schematic diagram of the PSR LFHM experimental setup. Shifting the light source laterally produces a sub-pixel shift of the hologram at the sensor plane. (b) Geometric relationship between the light-source shift and the hologram shift. (c) Itinerary of the hologram's sub-pixel shifts, where the LR grid corresponds to the physical pixel size of the imager and the HR grid represents the effective pixel size of the HR hologram.
Fig. 3
Fig. 3 Schematic diagram of the LR sequence synthesis process. A series of LR holograms is generated by repeatedly applying geometric warping, blurring and decimation, with different lateral shifts, to the single-frame hologram captured by the LFHM.
Fig. 4
Fig. 4 Accuracy of the model during training. (a) Training/test accuracy versus the number of training iterations for the network using 25 holograms as input. (b) Training/test accuracy versus the number of training iterations for the network using 9 holograms as input.
Fig. 5
Fig. 5 Deep learning-based PSR of the USAF resolution test chart. (a) The interpolated LR hologram of the USAF resolution test chart. (b) The enlarged image within the yellow dashed box. (c) Reconstructed amplitude image from the interpolated LR hologram. The first four elements of group 9 of the chart are shown in the green dashed box. (d) HR hologram predicted by the trained CNN within the yellow dashed box. (e) Reconstructed amplitude image from the HR hologram. (f), (g) Cross-section profiles of group 8, elements 5 and 6 of the LR reconstructed image. (h), (i) Cross-section profiles of group 9, elements 3 and 4 of the HR reconstructed image. (j) Ground truth of the resolution chart provided by a 40X, 0.5-NA objective. (k) MTFs obtained by the baseline method (LR reconstruction), the deep learning-based method (HR reconstruction) and the 40X, 0.5-NA objective.
Fig. 6
Fig. 6 Comparison of resolution improvement by different approaches. (a) Comparison between the reconstructed amplitude images from different approaches. The results from bicubic interpolation, the deep learning-based PSR method using 25 and 9 holograms, and the iterative PSR method using 25 and 9 holograms are denoted Bicubic, CNNPSR25, CNNPSR9, IPSR25 and IPSR9, respectively. (b)–(d) Cross-section profiles of group 9, element 3 for CNNPSR25 vs. IPSR25, CNNPSR25 vs. CNNPSR9, and IPSR25 vs. IPSR9, respectively.
Fig. 7
Fig. 7 FWHM measurement of polystyrene beads. (a) The reconstructed amplitude image of the polystyrene beads based on the network prediction. (b), (c) Enlarged images of two beads from CNNPSR25 and Bicubic. (d), (e) Cross-section profiles along the arrows in the blue and red dashed boxes. (f) The averaged amplitude image of 108 beads from CNNPSR25. (g) Comparison of the FWHMs obtained with the various approaches.
Fig. 8
Fig. 8 Super-resolution lens-free imaging of lily anther. (a) The reconstructed phase image of the lily anther using the deep learning-based PSR method. The yellow box corresponds to the FOV of a 10X microscope objective. Scale bar: 200 μm. (b), (c) The enlarged image within the yellow dashed box from the HR lens-free reconstruction and the 10X objective, respectively. Scale bar: 100 μm. (d)–(f) Zoom-in images of the red-dashed-box area from the baseline LR lens-free image, the HR lens-free image and the ground-truth (GT) image from the 40X objective, respectively. Scale bar: 15 μm. (g) Cross-section profiles along the arrows in (d)–(f).
Fig. 9
Fig. 9 Influence of the training data set generated with different types of samples. (a) Holograms of different types of samples and their corresponding sample-to-sensor distances (denoted by d_ss). Scale bar: 50 μm. (b) Cross-section profiles of group 9, element 3 of the USAF resolution chart based on the HR hologram obtained by networks trained on different types of samples; 25 holograms were used. (c) Same as (b) but with only 9 holograms. (d) FWHM of the polystyrene beads based on the HR hologram obtained by networks trained on different types of samples.
Fig. 10
Fig. 10 Schematic diagram of the CNN architecture.

Tables (2)


Table 1 SSIMs of the images reconstructed by the baseline approach and the deep learning approach with respect to the result from the iterative PSR approach


Table 2 Run-time profiling of the deep learning-based PSR methods and the iterative PSR methods

Equations (4)


$$\frac{dx_1}{dx_2} = \frac{dy_1}{dy_2} = \frac{z_1}{z_2}$$
$$\mathrm{SSIM}(x,y) = \frac{(2\mu_x \mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$$
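The SSIM formula above (used for Table 1) can be sketched as a single-window computation; this is a minimal illustration, not the authors' implementation — practical SSIM (e.g., Wang et al. 2004) averages the index over local Gaussian-weighted windows, and the constants c1 and c2 here are illustrative choices:

```python
import numpy as np

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Single-window SSIM: means, variances, and covariance are taken
    over the whole image instead of local windows."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()              # sigma_x^2, sigma_y^2
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()    # sigma_xy
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2))
```

For identical images the index evaluates to exactly 1; it decreases as luminance, contrast, or structure diverge.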
$$Y_k = D H F_k X + V_k, \quad k = 1, 2, \ldots, N$$
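The observation model above is the warp–blur–decimate pipeline of Fig. 3: each LR frame is the HR hologram X warped (F_k), blurred (H), decimated (D), plus noise V_k. A minimal sketch of synthesizing such an LR sequence from one HR frame — the Gaussian stand-in for H, the spline interpolation order, and the down-sampling factor are assumptions for illustration, not the authors' exact parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift as subpixel_shift

def synthesize_lr_sequence(hr_hologram, shifts, blur_sigma=1.0, factor=2):
    """Generate sub-pixel-shifted LR holograms from a single HR frame.

    For each lateral shift (dy, dx), given in HR-pixel units, the frame is
    warped (F_k), blurred (H, approximating sensor pixel integration),
    and decimated (D) by `factor`.
    """
    lr_frames = []
    for dy, dx in shifts:
        warped = subpixel_shift(hr_hologram, (dy, dx), order=3, mode="nearest")
        blurred = gaussian_filter(warped, sigma=blur_sigma)
        lr_frames.append(blurred[::factor, ::factor])  # decimation
    return np.stack(lr_frames)
```

A stack produced this way from the captured single-frame hologram, paired with that hologram as target, is what allows the network to be trained without an HR ground truth.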
$$\hat{X} = \arg\min_{X} \left[ \sum_{k=1}^{N} \left\| D H F_k X - Y_k \right\|_1 + \lambda \left\| X \right\|_{\mathrm{BTV}} \right]$$
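For comparison with the iterative PSR baseline, one sign-based steepest-descent step on the L1 data term of the minimization above can be sketched as follows (in the spirit of Farsiu et al.'s robust multiframe method; the BTV regularizer is omitted for brevity, and the Gaussian stand-in for H, the step size, and the decimation factor are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift as warp

def sr_step(X, lr_frames, shifts, step=0.1, sigma=1.0, r=2):
    """One steepest-descent iteration on sum_k ||D H F_k X - Y_k||_1."""
    grad = np.zeros_like(X)
    for Yk, (dy, dx) in zip(lr_frames, shifts):
        # Forward model: warp (F_k), blur (H), decimate (D).
        sim = gaussian_filter(warp(X, (dy, dx), order=1), sigma)[::r, ::r]
        up = np.zeros_like(X)
        up[::r, ::r] = np.sign(sim - Yk)   # D^T: zero-filling upsampler
        # Adjoint chain H^T F_k^T (Gaussian blur is symmetric; the
        # adjoint of a translation is the opposite translation).
        grad += warp(gaussian_filter(up, sigma), (-dy, -dx), order=1)
    return X - step * grad
```

Iterating such steps over all N frames is what makes the conventional approach slow, which is the motivation for the end-to-end CNN replacement.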
