Abstract

Structured illumination digital holographic microscopy (SI-DHM) is a high-resolution, label-free technique for imaging unstained biological samples. SI-DHM places high demands on the stability of the experimental setup and needs long exposure times. Furthermore, image synthesis and phase correction in the reconstruction process are both challenging tasks. We propose a deep-learning-based method, DL-SI-DHM, that improves the recording and reconstruction efficiency and the accuracy of SI-DHM and provides high-resolution phase imaging. In the training process, high-resolution amplitude and phase images obtained by phase-shifting SI-DHM, together with wide-field amplitudes, are used as inputs of DL-SI-DHM. The well-trained network can reconstruct both high-resolution amplitude and phase images from a single wide-field amplitude image. Compared with traditional SI-DHM, this method significantly shortens the recording time and simplifies the reconstruction process; complex phase correction and frequency synthesis are no longer required. Compared with other learning-based reconstruction schemes, the proposed network has a better response to high frequencies. The possibility of using the proposed method for the investigation of different biological samples has been verified experimentally, and its low-noise characteristics were also demonstrated.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optical microscopy is widely used in biological and material detection [1]. Its resolution is limited by diffraction: according to Abbe’s criterion [2], nerve cells, bacteria or microorganisms whose sizes are less than half of the illumination wavelength cannot be resolved by wide-field (WF) microscopy (uniform wide-field illumination). To break the resolution limit, structured illumination is introduced into optical microscopy, which is known as structured illumination microscopy (SIM) [3]. A sine-type pattern is a typical illumination used to down-modulate high-frequency information beyond the cut-off frequency into the microscopic system. The resolution of SIM can theoretically be doubled by combining low- and high-frequency information to achieve a synthetic aperture effect. However, traditional SIM cannot meet the demands of studying unstained biological samples, since only amplitude information can be obtained. Digital holographic microscopy (DHM) [4] has been widely used to obtain phase information. By combining DHM and structured illumination (SI-DHM), the phase can be retrieved at the expense of a more complex recording system [5–7]. However, this is time-consuming since, for each direction, at least three images with different phase shifts must be recorded to separate low- and high-frequency information. Synthesizing images covering low and high frequencies, unwrapping and correcting the phase are also difficult tasks, while prior knowledge of the phase-shifting amounts, the period of the SI and the carrier frequency of the reference beam directly influences the reconstruction accuracy and resolution. It is thus worth exploring how to improve the recording efficiency while guaranteeing accuracy.

In recent years, deep learning (DL) has achieved outstanding results in the field of optical super-resolution imaging [8–10]. DL is usually used to reduce the cost of high-resolution imaging (or reduce the use of equipment), to improve imaging efficiency, or to reduce the computational cost of traditional algorithms [11]. It is also used in label-free super-resolution SIM, either to improve traditional SIM imaging [12–18] or for quick sample sorting [19–21]. By using a deep learning strategy, a high-resolution image can be reconstructed from a few raw SIM images. Improvements in speed and in performance under low-light conditions have been demonstrated [13]. That work successfully proved the ability of deep learning to improve the time resolution of a SIM system and further expanded the advantage of low phototoxicity, but only the object amplitude information is obtained (no phase information). Another work reported a machine-learning-assisted SIM method (ML-SIM) [14] based on a residual network. It uses simulated SIM images for training, which reduces the experimental effort; however, the reconstruction accuracy of a network trained with simulated data is lower than that of a network trained with real experimental data. In Ref. [15], it has been shown that it is possible to start with simple simulated and experimental images illuminated with structured illumination from a single direction, and to use the network to recover the high-frequency information of the rest of the spectrum in the other directions. This method uses image-migration technology and improves the reconstruction quality of learning-based SIM, but it does not provide phase information.

In this paper, we propose a deep learning network, DL-SI-DHM, to reconstruct high-resolution amplitude and phase from a low-resolution amplitude image. The low-resolution images, called WF images, are directly captured by a WF microscope, and the high-resolution images (SIM images) are reconstructed from the raw structured illumination holograms (SIHs) captured by a SI-DHM system. In Sec. 2, we introduce the data preparation process for network training, and show the designed SI-DHM system and the imaging results obtained in the experiment. The details of the network architecture and the cascade scheme are introduced in Sec. 3. The proposed method is verified in Sec. 4, where DL-SI-DHM is tested on networks trained with real images of polystyrene particle-clusters and biological samples (sarcina and neurons) of different sizes. In addition, we use non-cascaded networks for comparative experiments.

2. Data preparation

The data-driven network requires huge amounts of experimentally recorded images for training, which poses a challenge for the recording system. We built a SI-DHM setup on a vibration-isolated platform, with which we can obtain large field-of-view, high-resolution amplitude and phase images modulated by structured illumination. The proposed SI-DHM imaging system is based on a Mach-Zehnder interferometer (see Fig. 1). A laser beam (wavelength: 632.8 nm) is divided into an object beam and a reference beam by the beam splitter BS1. Pre-designed sinusoidal fringe patterns with modulation $1 + \cos (2\pi x_0 f + n\varphi )$ and known phase shifts are loaded on the SLM (Holoeye Leto, pixel size: 6.4×6.4 µm², number of pixels: 1920×1080). Here $f$ is the carrier frequency, $x_0$ is the coordinate, $n = 1,2,3$ is the serial number of the captured image, and $\varphi = 2\pi /3$ is the phase-shifting step. In SIM, the improvement in resolution is determined by the carrier frequency of the structured illumination (or grating period) and the numerical aperture of the imaging system. A system formed by the lens L and the objective MO1 projects the pattern onto the sample. An aperture F is inserted in the Fourier plane of L to eliminate unwanted spatial frequencies. The modulated object beam is collected and magnified by an imaging system composed of the objective MO2 and the tube lens TL. After modulation by the pattern, the sample's high-frequency information beyond the cut-off frequency is directed into the aperture of the imaging system. The resulting resolution is determined by the spatial frequency of the illumination pattern on the sample and the cut-off frequency of the imaging system; both are chosen in advance according to the required resolution.
The SIHs, formed by the interference between the object beam and the reference beam (whose optical path difference is compensated by the mirror M), are recorded by a CMOS detector (The Imaging Source, DMK-33UX178).
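As an illustration, the phase-shifted fringe patterns described above can be generated with a few lines of Python. This is a minimal sketch: the pattern width and the carrier frequency in cycles per pixel are illustrative values, not the experimental settings.

```python
import math

def fringe_pattern(width, f, n, phi=2 * math.pi / 3):
    """One row of the n-th sinusoidal pattern 1 + cos(2*pi*x0*f + n*phi).

    width -- number of SLM pixels along x0 (illustrative)
    f     -- carrier frequency in cycles per pixel (illustrative)
    n     -- serial number of the captured image (1, 2, 3)
    phi   -- phase-shifting step, 2*pi/3 as in the text
    """
    return [1 + math.cos(2 * math.pi * x0 * f + n * phi) for x0 in range(width)]

# three patterns separated by the 2*pi/3 phase-shifting step
patterns = [fringe_pattern(width=1920, f=0.05, n=n) for n in (1, 2, 3)]
```

Each row would be replicated vertically to fill the SLM; the modulation stays within [0, 2] and would in practice be quantized to the SLM's gray levels.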


Fig. 1. Schematic diagram of SI-DHM. BS: beam splitters; BEC: beam expansion and collimation; P: polarizer; L: lens; F: filter; MO: microscope objective; TL: tube lens; M: mirror.


After collection, the raw SIHs are processed. Figure 2 shows the main steps in preparing the training data. Three terms representing low- and high-frequency information overlap in the frequency domain of the SIHs; they are marked by the yellow squares in Fig. 2. The PCA algorithm [22] is used to compensate the quadratic phase distortion. After decomposing and up-modulating, the low- and high-frequency components are superimposed in the frequency domain, and an inverse fast Fourier transform (IFFT) yields the object reconstruction with improved resolution. Then the dataset for network training, namely the SIM amplitude and phase images obtained by spectrum synthesis and the WF amplitude images covering the low spectrum, is prepared simultaneously (see the green paths in Fig. 2).
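The separation of the overlapping spectral terms relies on the three phase-shifted recordings: with a step of 2π/3, weighted sums of the frames isolate the zeroth and ±1 diffraction orders. The following toy 1-D demonstration (our own sketch with a synthetic object, not the authors' implementation) shows the principle:

```python
import cmath
import math

PHI = 2 * math.pi / 3  # phase-shifting step

def separate_orders(frames, phi=PHI):
    """Separate the m = 0, +1, -1 orders from three phase-shifted frames
    I_n(x) = O(x) * (1 + cos(2*pi*f*x + n*phi)), n = 1, 2, 3.

    Order m is isolated as (1/3) * sum_n I_n(x) * exp(-1j*n*phi*m),
    using the orthogonality of exp(1j*n*phi*m) over the three shifts.
    """
    width = len(frames[0])
    return {
        m: [sum(frames[n - 1][x] * cmath.exp(-1j * n * phi * m)
                for n in (1, 2, 3)) / 3 for x in range(width)]
        for m in (0, 1, -1)
    }

# synthetic object and carrier, for demonstration only
f, width = 0.1, 64
obj = [1.0 + 0.3 * math.sin(0.2 * x) for x in range(width)]
frames = [[obj[x] * (1 + math.cos(2 * math.pi * f * x + n * PHI))
           for x in range(width)] for n in (1, 2, 3)]
orders = separate_orders(frames)
# orders[0] recovers O(x); orders[+1] carries (O(x)/2) * exp(1j*2*pi*f*x),
# i.e. the down-modulated term that is shifted back in frequency before the IFFT
```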


Fig. 2. Flow chart of the data set preparation process for network training. The red path indicates the traditional SI-DHM reconstruction algorithm flow. The green path represents the reconstruction results and data acquisition. The blue path represents the images for network training.


The numerical aperture (NA) of the imaging objective is 0.25, so its resolution $R$ can theoretically reach 1.95 µm according to the formula $R = k\lambda /\textrm{NA}$ [23], where $k = 0.77$ is the standard constant for coherent illumination. However, the theoretical value is often not reached in the experiment. Using a USAF test target, we found that the resolution of the WF system is about 362 Lp/mm (2.76 µm). By illuminating the sample with a grating pattern whose frequency is 287 Lp/mm (period: 3.48 µm), the SIM system resolution can be increased to 649 Lp/mm (1.54 µm). Polystyrene particles (1.97 µm in diameter) are used to verify the imaging ability of the system. The processes for reconstructing the amplitude and phase of the low- and high-resolution images are described by the red and green paths in Fig. 2, and the results are shown in Fig. 3. The parts inside the orange boxes are magnified; the polystyrene particle-cluster becomes resolved (in amplitude and phase) under structured illumination. The WF amplitude images and the SIM images are used as the input and label data for the network. The images obtained with the SI-DHM experimental setup have a large field-of-view and can provide the required data set.
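The quoted resolution figures follow directly from the values in the text:

```python
# Theoretical WF resolution R = k * lambda / NA (coherent case, k = 0.77)
wavelength_um = 0.6328
NA = 0.25
k = 0.77
R_theory_um = k * wavelength_um / NA  # about 1.95 um

# Measured WF cut-off plus the grating frequency gives the SIM cut-off;
# the corresponding line-pair period is the achievable resolution.
wf_cutoff_lp_mm = 362     # measured with the USAF target (2.76 um)
grating_lp_mm = 287       # illumination pattern (period 3.48 um)
sim_cutoff_lp_mm = wf_cutoff_lp_mm + grating_lp_mm   # 649 Lp/mm
sim_resolution_um = 1000 / sim_cutoff_lp_mm          # about 1.54 um
```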


Fig. 3. (a) Amplitude and phase reconstruction of polystyrene particle-cluster by SI-DHM. WF and SIM images are shown. The enlarged parts are included in orange boxes. (b) Three-dimensional visualization of the phase distribution included in the orange box.


In Fig. 2, the blue path represents the training process, which is cascaded and includes two CNNs. One network, called SIM-LHA (LHA stands for amplitude information from low resolution to high resolution), is trained to synthesize the high-frequency information modulated by structured illumination in different directions with the low-frequency information; it has a similar function to the networks reported in Refs. [8,10,24]. SIM-LHA uses WF and SIM amplitude images with the same field-of-view as input-label pairs and finally generates a high-resolution amplitude image comparable to the result obtained by traditional structured illumination. The purpose of the other network, SIM-AP (AP stands for amplitude and phase), is to convert high-resolution amplitude information into phase information. It closely follows SIM-LHA and serves as an extension of it, making full use of the high-resolution amplitude information of the object reconstructed by the previous architecture. At the same time, the high-resolution phase information obtained in the experiment, named the SIM phase, is used for training SIM-AP. Through the training of SIM-AP, the latent connection between amplitude and phase is learned statistically, by which the high-resolution phase information is retrieved from the WF amplitude images. Although the required training data are obtained by SI-DHM, the network does not rely on any prior knowledge of the sample structure. To ensure the learning speed and shorten the reconstruction time, we cut 4000 pairs of 64×64-pixel image patches from 26 pairs of 1920×1080-pixel WF and SIM images. Among these, 3800 pairs of WF and SIM amplitude patches are used for training SIM-LHA, while 3800 SIM phase patches together with the 3800 high-resolution amplitude patches output by SIM-LHA are used for training SIM-AP.
It should also be noted that the SIM amplitude and phase images are reconstructed from the same hologram, and the WF images are obtained only through a change of the system mode (loading a blank phase map on the SLM while blocking the reference path); thus data registration is not required before the training process [25]. The remaining 200 pairs of patches constitute a test set, ensuring that the trained model is given data it has never seen before.
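The patch-cutting step can be sketched as below. This is a minimal illustration: the paper does not state how the 4000 patch positions are sampled, so a simple non-overlapping grid is assumed here.

```python
def cut_patches(image, patch=64, stride=64):
    """Cut patch x patch tiles from a 2-D image (a list of rows).

    A non-overlapping grid (stride = patch) is assumed; the same cut
    positions must be used for a WF image and its SIM counterpart so
    that the input-label pairs stay aligned.
    """
    h, w = len(image), len(image[0])
    return [[row[x:x + patch] for row in image[y:y + patch]]
            for y in range(0, h - patch + 1, stride)
            for x in range(0, w - patch + 1, stride)]

# one 1080 x 1920 dummy frame yields 16 * 30 = 480 patches of 64 x 64 pixels
frame = [[0.0] * 1920 for _ in range(1080)]
patches = cut_patches(frame)
```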

3. Network architecture and training process

Deep learning is a kind of nonlinear numerical processing [26]. Compared with the traditional reconstruction method based on a linear formulation, the learning-based method does not need to establish any equations for parameter solving; it is thus an efficient, data-driven alternative. In our work, we train the intensity and phase in two cascaded CNNs, SIM-LHA and SIM-AP. SIM-LHA is trained to make the sub-network ‘learn’ to add the missing high-frequency information in the frequency domain. The network is based on U-Net [27], which has good transferability and scalability. This versatile architecture has the capacity for feature extraction of the high-frequency information modulated by structured illumination. As shown in Fig. 4(a), the WF and SIM amplitude images reconstructed by the phase-shifting algorithm are input into the network in pairs. The WF amplitude image is encoded in the down-sampling path, which is mainly composed of double convolution layers and pooling layers. In this path, the feature maps of the low-resolution amplitude image are extracted by the convolution layers and enter the bridging path, which combines similar features and supplies them to the up-sampling path. As the network depth increases, the spatial resolution of the feature maps decreases while their number increases. The up-sampling path is mainly composed of deconvolution layers, which decode the feature maps and restore the object features. During the training process, feature maps of different resolutions derived from the input WF image patches are connected by skip connections, which share the extracted information between the two paths. The generated images in the decoding stage are thus constrained by the images in the encoding stage.


Fig. 4. (a) Network architecture. Rectangular blocks of different colors represent different operations. The ‘Conv Double’ unit represented by the gray block contains convolution layers, Batch Normalization (BN) [28] and a Rectified Linear Unit (ReLU) [29]; its structure is shown in the dotted-line box with gray background. The blue arrows represent skip connections. (b) Reconstruction process and example results of SIM-LHA. The SSIM and PSNR values of the sample image are shown in parentheses. (c) Normalized intensity values of the sample image along the white dotted line in (b). (d) Reconstruction process and example results of SIM-AP. The WF phase image (leftmost) is not used in the training process of SIM-AP; it is shown only for comparison with the network output. (e) Phase values of the sample image along the white dotted line in (d).


The proposed network uses the mean square error (MSE) as the loss function:

$$MSELoss({I_i},\widehat{I_i}) = \frac{1}{n}\sum_{i = 1}^{n} \big\Vert {I_i} - \widehat{I_i} \big\Vert^2$$
where ${I_i}$ and $\widehat{I_i}$ represent the ground-truth image and the image output by the network, respectively; $i$ indexes the data in the training set and $n$ is the batch size. The MSE loss measures the difference between the images generated by the network in each epoch and the ground truth, and feeds it back to the network so that the image generated in the next epoch is closer to the ground truth. This process is called back propagation. As the value of the MSE loss decreases over the training epochs, it gradually converges to an acceptable local minimum; the performance of the network then stabilizes, and it can output high-resolution amplitude images. Through the whole training process, an effective mapping between low- and high-resolution amplitude information is established, and the network can provide reasonable predictions for similar cases. Example outputs of SIM-LHA are shown in Fig. 4(b) and (c). A well-trained SIM-LHA does not need any prior knowledge of the object frequencies.
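In plain Python, the loss above reads as follows for a batch of flattened image patches (a sketch of the formula, not the training code):

```python
def mse_loss(batch_truth, batch_output):
    """Sum of squared L2 norms of the per-image differences, divided by
    the batch size n, as in the loss formula."""
    n = len(batch_truth)
    return sum(sum((t - o) ** 2 for t, o in zip(truth, out))
               for truth, out in zip(batch_truth, batch_output)) / n

# two flattened 4-pixel "images": only one pixel of the second differs by 1
truth = [[1.0, 2.0, 3.0, 4.0], [0.0, 0.0, 0.0, 0.0]]
out = [[1.0, 2.0, 3.0, 4.0], [1.0, 0.0, 0.0, 0.0]]
print(mse_loss(truth, out))  # -> 0.5
```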

To obtain high-resolution phase information more quickly, we use the SIM phase images produced by the SI-DHM system to train the SIM-AP network. It is trained to capture the latent connection between amplitude and phase, and its architecture is similar to the previous one. We connect the two networks so that the output of the former can be used by the latter. The specification of the new training set, consisting of the SIM phase images and the output images of SIM-LHA, matches the previous one. The results in Fig. 4(d) and (e) show that the well-trained SIM-AP can perform the amplitude-to-phase conversion for objects it has never seen before. More results of the proposed network for generating high-resolution phase images of polystyrene particle-clusters and biological samples are shown and discussed in detail in Section 4.

4. Results and discussion

The normalized intensity and phase values along the white dashed lines in Figs. 4(b) and (d) are plotted in Figs. 4(c) and 4(e), respectively. The resolution under structured illumination (purple line) is compared with that of WF imaging (black dashed line); the blue line of the network output closely follows the purple line of the ground truth. We introduce two indicators to quantitatively evaluate the results: the Structural Similarity (SSIM) [30], which is closely related to human visual perception, and the Peak Signal-to-Noise Ratio (PSNR), which measures the image distortion or noise level (see Figs. 4(b) and 4(d)). The amplitude results in Fig. 5 show that the well-trained SIM-LHA can supplement the missing high-frequency information in the WF image and generate a high-resolution image similar to that of traditional SIM, while SIM-AP, after processing a large amount of data, can accurately retrieve the phase information of the samples from the amplitude values. The combination of the two networks makes it possible to obtain both the high-resolution amplitude and the phase of the object from WF amplitude images captured without an interferometric experimental setup. A well-trained cascade network can perform high-speed processing: the time to output a single 64×64-pixel patch is 50 ms. It can therefore be regarded as a substitute for high-resolution quantitative phase microscopy, and compared with traditional SI-DHM no complicated installations are required.
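Of the two indicators, PSNR has a simple closed form; below is a minimal implementation, assuming images normalized to [0, 1] (SSIM involves local statistics and is omitted here):

```python
import math

def psnr(reference, test, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two equally sized images
    given as flat pixel lists; higher means less distortion."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

a = [0.0, 0.5, 1.0, 0.5]
b = [0.1, 0.5, 0.9, 0.5]
# mse = 0.005, so psnr(a, b) is about 23.01 dB
```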


Fig. 5. WF images, SIM images and high-resolution output results of the proposed network for different field-of-view images of polystyrene particle-cluster.


To test the robustness of the network and apply it to biological sample imaging, we use sarcina and neuron samples with different inner structures. Different parameters have to be chosen to achieve the resolution enhancement in the experiment, and the sizes of the training data are determined by the object sizes: the patches for sarcina and neurons are 128×128 and 256×256 pixels, respectively. The large images on the left sides of Fig. 6(a) and Fig. 6(b) show the WF and SIM images of the sarcina and neuron samples. The image patches on the right are the input and output of the network (the WF phase patches are for comparison only). Under structured illumination, the structures of the bacterial colonies and the neuron dendrites are clearer. Comparing the enlarged images of the ground truth with the network output, we find that the network is sensitive to high-frequency amplitude and phase information. The high-frequency information added by the structured illumination in the network output is very close to the ground truth. As for the polystyrene particle-cluster, the high-resolution information of the biological samples can be recovered and the conversion from amplitude to phase can be completed by the proposed network.


Fig. 6. WF images, SIM images and high-resolution output result for the biological sample (a) sarcina and (b) neurons. The large-size image is obtained by using the traditional phase-shifting reconstruction method. The image patches on the right are related to the network (WF phase patches are not used as network input). The orange and purple boxes are the enlarged detail of different regions. The network reconstruction results are displayed on the right.


To further analyze the output quality, we averaged the SSIM and PSNR values of all high-resolution amplitude and phase outputs of the three samples on their respective test sets; the results are listed in Table 1. The evaluation indicators for sarcina are better because, for a fixed network trained in the same computing environment without changing factors such as hyperparameters, the complexity of the images in the dataset is inversely related to the quality of the network output. SIM-LHA needs to ‘learn’ to generate high-frequency information of the object during training, and fills the missing high-frequency information into the WF image at test time. SIM-AP outputs high-resolution phase images and only needs to find the latent connection between amplitude and phase information, without expanding the high-frequency content.

We will now discuss and verify this analysis using different data sets for network training and testing, and compare the various schemes to illustrate the necessity of the cascade network.


Table 1. Image quality evaluation index for different objects of proposed method.

We designed two other networks: SIM-D (D stands for direct output) and SIM-LHP (LHP stands for phase information from low resolution to high resolution). Their architectures are the same as those of SIM-LHA and SIM-AP; the difference lies in the data sets given to them. Our purpose is to determine which combination of data set and network is best for generating high-resolution phase information. As shown in Fig. 7, for the cascade network, the experimentally obtained WF amplitude image is used as the input of SIM-LHA, whose high-resolution amplitude output is used as the input of SIM-AP; the cascade finally outputs high-resolution phase images. For SIM-D, only the WF amplitude image is used as input, and it directly outputs high-resolution phase images. For SIM-LHP, the inputs are WF phase images reconstructed by phase-shifting, which were obtained from the experiments and have never been used before. Following the analysis above, we compare the outputs of the three networks to estimate their ability to generate high-frequency information and to convert between information types.


Fig. 7. Schematic diagram of data processed by different networks. (a) Cascade network composed of: SIM-LHA and SIM-AP; (b) SIM-D; (c) SIM-LHP.


We input the corresponding images of the three objects into the three networks. With all environmental variables held constant, each network is trained for the same 100 epochs. The results are shown in Fig. 8. For comparison, the ground-truth SIM phase images obtained by traditional reconstruction are also shown in Fig. 8(a). For the three objects, image patches with different fields of view are selected. There are visible errors in the output of SIM-D: especially for neurons, which have complex structures, the network cannot provide additional high-frequency information, and compared with the other two networks its output for the polystyrene particle-cluster and sarcina contains false information. SIM-D therefore performs worse than the other two networks. The plots in Fig. 8(b), (c) and (d) show the index statistics of the outputs of the three networks for the three objects. For the output of high-resolution phase images, the capabilities of SIM-D, SIM-LHP and SIM-AP improve in that order. The average values of the indices are summarized in Table 2. SIM-D, which performs both the high-frequency filling task and the information-conversion task, has the lowest index, followed by SIM-LHP, which performs only the high-frequency filling task; SIM-AP in the cascade network, performing only information conversion, reaches the highest quality. This validates our expectation that, for the proposed network, filling in high-frequency information is more difficult than transforming amplitude into phase. Therefore, when only a WF amplitude image is available for generating high-resolution phase information, the proposed cascade network structure should be used, since it does not require complex experimental settings.


Fig. 8. (a) Comparison of different networks. (b), (c) and (d) are plots of the SSIM and PSNR of the high-resolution phase image of polystyrene particle-cluster, sarcina and neurons respectively in different networks. The red-brown box, olive green box and cobalt blue box represent the output results of SIM-D, SIM-LHP and SIM-AP, respectively. The median of the two evaluation indicators is given by the red line in the center of the box, outliers are indicated by circles.



Table 2. Image quality evaluation index for SIM-D, SIM-LHP and the cascade network.

Since our work is based on experimental data, it is important to consider the influence of the environment. We add Gaussian noise (low, medium and high levels) to the WF amplitude image to simulate a poor experimental environment, and test the tolerance of the well-trained network to different noise levels (standard deviations). The results obtained with the neuron samples are shown in Fig. 9. To compare the learning-based method with traditional phase-shifting, we also load the corresponding level of Gaussian noise into the raw SIHs and use phase-shifting to reconstruct the high-frequency image modulated by structured illumination. Figure 9(a) shows that when the noise level increases, considerable noise remains in the image reconstructed by the traditional method, because the noise has destroyed the original object information in the raw hologram. For a well-trained network, however, even if the noise floods the input low-resolution amplitude, the network can still extract features; as the down-sampling deepens, the discretized noise distribution is eliminated. The proposed network has anti-noise capabilities similar to those reported in Refs. [31–33]. Figures 9(b) and (c) show this anti-noise ability: as the noise increases, the dashed lines representing the traditional method drop significantly, while the network output quality is only slightly affected and the solid lines drop slowly. The network output is better than the traditional reconstruction when the Gaussian standard deviation is in the range 0.05–0.07.
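The noise test amounts to adding zero-mean Gaussian noise of a chosen standard deviation to every pixel. A minimal sketch follows; the three sigma values are illustrative, as the text only quotes the 0.05-0.07 range.

```python
import random

def add_gaussian_noise(image, sigma, seed=None):
    """Return a copy of a flat image with zero-mean Gaussian noise of
    standard deviation sigma added to every pixel (left unclipped)."""
    rng = random.Random(seed)
    return [v + rng.gauss(0.0, sigma) for v in image]

clean = [0.5] * 10000  # flat dummy "WF amplitude" patch
noisy_sets = {sigma: add_gaussian_noise(clean, sigma, seed=0)
              for sigma in (0.02, 0.05, 0.07)}  # low / medium / high
```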


Fig. 9. Tolerance of the cascade network and of phase-shifting to varying amounts of noise. (a) Comparison of the traditional reconstruction and the network output on high-resolution amplitude and phase images for different noise levels. All images are from neurons. (b), (c) Changes in the evaluation indices of the amplitude and phase reconstructions as a function of the noise level. The blue and green curves represent the SSIM and PSNR values, respectively. The dotted lines represent the phase-shifting results, and the solid lines the network outputs.


5. Conclusion

DL-SI-DHM, using deep learning, was discussed and verified. The proposed network can reconstruct high-frequency amplitude and phase information from low-resolution amplitude information alone. The training data come from traditional phase-shifting reconstruction. Since WF amplitude images are easy to obtain, this learning-based method greatly reduces the requirements on the optical experiment and simplifies the setup. We also quantitatively verified the method on different biological samples. The cascade network scheme ensures optimal output of high-resolution phase information. The reconstruction process of DL-SI-DHM is noise-robust and can distinguish the noise in the low-frequency component from the real object information. Combining the proposed method with more advanced network architectures, such as GANs [34], may further increase its versatility.

Funding

National Natural Science Foundation of China (61775097, 61975081); National Key Research and Development Program of China (2017YFB0503505); Key Laboratory of Virtual Geographic Environment (Nanjing Normal University), Ministry of Education (2017VGE02); Sino-German Center for Research Promotion (GZ 1391).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. M. W. Davidson and M. Abramowitz, “Optical Microscopy,” in Encyclopedia of Imaging Science and Technology (John Wiley & Sons, Inc., 2002).

2. E. Abbe, “Beiträge zur Theorie des Mikroskops und der mikroskopischen Wahrnehmung,” Arch. Mikrosk. Anat. 9(1), 413–468 (1873).

3. M. Saxena, G. Eluru, and S. S. Gorthi, “Structured illumination microscopy,” Adv. Opt. Photonics 7(2), 241 (2015). [CrossRef]  

4. M. K. Kim, “Principles and techniques of digital holographic microscopy,” J. Photonics Energy 1(1), 018005 (2010). [CrossRef]  

5. A. A. Mudassar and A. Hussain, “Super-resolution of active spatial frequency heterodyning using holographic approach,” Appl. Opt. 49(17), 3434–3441 (2010). [CrossRef]  

6. P. Gao, G. Pedrini, and W. Osten, “Structured illumination for resolution enhancement and autofocusing in digital holographic microscopy,” Opt. Lett. 38(8), 1328 (2013). [CrossRef]  

7. S. Li, J. Ma, C. Chang, S. Nie, S. Feng, and C. Yuan, “Phase-shifting-free resolution enhancement in digital holographic microscopy under structured illumination,” Opt. Express 26(18), 23572 (2018). [CrossRef]  

8. Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4(11), 1437 (2017). [CrossRef]  

9. H. Zhang, C. Fang, X. Xie, Y. Yang, W. Mei, D. Jin, and P. Fei, “High-throughput, high-resolution deep learning microscopy based on registration-free generative adversarial network,” (2019).

10. E. Nehme, L. E. Weiss, T. Michaeli, and Y. Shechtman, “Deep-STORM: super-resolution single-molecule microscopy by deep learning,” Optica 5(4), 458 (2018). [CrossRef]  

11. T. Liu, K. de Haan, Y. Rivenson, Z. Wei, X. Zeng, Y. Zhang, and A. Ozcan, “Deep learning-based super-resolution in coherent imaging systems,” Sci. Rep. 9(1), 3926 (2019). [CrossRef]  

12. H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Günaydın, L. A. Bentolila, C. Kural, and A. Ozcan, “Deep learning enables cross-modality super-resolution in fluorescence microscopy,” Nat. Methods 16(1), 103–110 (2019). [CrossRef]  

13. L. Jin, B. Liu, F. Zhao, S. Hahn, B. Dong, R. Song, T. C. Elston, Y. Xu, and K. M. Hahn, “Deep learning enables structured illumination microscopy with low light levels and enhanced speed,” Nat. Commun. 11(1), 1934 (2020). [CrossRef]  

14. C. N. Christensen, E. N. Ward, P. Lio, and C. F. Kaminski, “ML-SIM: A deep neural network for reconstruction of structured illumination microscopy images,” arXiv:2003.11064 (2020).

15. C. Ling, C. Zhang, M. Wang, F. Meng, L. Du, and X. Yuan, “Fast structured illumination microscopy via deep learning,” Photonics Res. 8(8), 1350 (2020). [CrossRef]  

16. Z. Wu, X. Wu, and Y. Zhu, “Structured illumination-based phase retrieval via Generative Adversarial Network,” Proc. SPIE 11249, 112490L (2020). [CrossRef]  

17. Z. Hussain Shah, M. Müller, T.-C. Wang, P. Maurice Scheidig, A. Schneider, M. Schüttpelz, T. Huser, and W. Schenck, “Deep-learning based denoising and reconstruction of super-resolution structured illumination microscopy images,” bioRxiv 2020.10.27.352633 (2020).

18. M. Boland, E. A. K. Cohen, S. Flaxman, and M. A. A. Neil, “Improving axial resolution in SIM using deep learning,” (2020).

19. Y. Lu and R. Lu, “Detection of Surface and Subsurface Defects of Apples Using Structured- Illumination Reflectance Imaging with Machine Learning Algorithms,” Trans. ASABE 61(6), 1831–1842 (2018). [CrossRef]  

20. R. F. Laine, G. Goodfellow, L. J. Young, J. Travers, D. Carroll, O. Dibben, H. Bright, and C. F. Kaminski, “Structured illumination microscopy combined with machine learning enables the high throughput analysis and classification of virus structure,” Elife 7, e40183 (2018). [CrossRef]  

21. Y. He, W. Xu, Y. Zhi, R. Tyagi, Z. Hu, and G. Cao, “Rapid bacteria identification using structured illumination microscopy and machine learning,” J. Innov. Opt. Health Sci. 11(01), 1850007 (2018). [CrossRef]  

22. S. Wold, K. Esbensen, and P. Geladi, “Principal component analysis,” Chemom. Intell. Lab. Syst. 2(1-3), 37–52 (1987). [CrossRef]  

23. W. Osten, A. Faridian, P. Gao, K. Körner, D. Naik, G. Pedrini, A. K. Singh, M. Takeda, and M. Wilke, “Recent advances in digital holography [Invited],” Appl. Opt. 53(27), G44 (2014). [CrossRef]  

24. C. Dong, C. C. Loy, K. He, and X. Tang, “Learning a deep convolutional network for image super-resolution,” in Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Springer Verlag, 2014), Vol. 8692 LNCS, pp. 184–199.

25. Z. Meng, L. Ding, S. Feng, F. Xing, S. Nie, J. Ma, G. Pedrini, and C. Yuan, “Numerical dark-field imaging using deep-learning,” Opt. Express 28(23), 34266 (2020). [CrossRef]  

26. Y. Lecun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521(7553), 436–444 (2015). [CrossRef]  

27. O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Springer Verlag, 2015), Vol. 9351, pp. 234–241.

28. S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in Proc. 32nd Int. Conf. Mach. Learn. (ICML 2015), Vol. 1, pp. 448–456.

29. V. Nair and G. E. Hinton, “Rectified linear units improve restricted Boltzmann machines,” in Proc. 27th Int. Conf. Mach. Learn. (ICML 2010), pp. 807–814.

30. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004). [CrossRef]  

31. K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising,” IEEE Trans. Image Process. 26(7), 3142–3155 (2017). [CrossRef]  

32. S. Li, M. Deng, J. Lee, A. Sinha, and G. Barbastathis, “Imaging through glass diffusers using densely connected convolutional networks,” Optica 5(7), 803–813 (2018). [CrossRef]  

33. M. Lyu, H. Wang, G. Li, S. Zheng, and G. Situ, “Learning-based lensless imaging through optically thick scattering media,” Adv. Photonics 1(3), 036002 (2019). [CrossRef]  

34. I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems 27 (NIPS 2014), Vol. 2, pp. 2672–2680.

[Crossref]

Tyagi, R.

Y. He, W. Xu, Y. Zhi, R. Tyagi, Z. Hu, and G. Cao, “Rapid bacteria identification using structured illumination microscopy and machine learning,” J. Innov. Opt. Health Sci. 11(01), 1850007 (2018).
[Crossref]

Wang, H.

H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Günaydın, L. A. Bentolila, C. Kural, and A. Ozcan, “Deep learning enables cross-modality super-resolution in fluorescence microscopy,” Nat. Methods 16(1), 103–110 (2019).
[Crossref]

M. Lyu, H. Wang, G. Li, S. Zheng, and G. Situ, “Learning-based lensless imaging through optically thick scattering media,” Adv. Photonics 1(3), 036002 (2019).
[Crossref]

Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4(11), 1437 (2017).
[Crossref]

Wang, M.

C. Ling, C. Zhang, M. Wang, F. Meng, L. Du, and X. Yuan, “Fast structured illumination microscopy via deep learning,” Photonics Res. 8(8), 1350 (2020).
[Crossref]

Wang, T.-C.

Z. Hussain Shah, M. Müller, T.-C. Wang, P. Maurice Scheidig, A. Schneider, M. Schüttpelz, T. Huser, and W. Schenck, “Deep-learning based denoising and reconstruction of super-resolution structured illumination microscopy images,” bioRxiv 2020.10.27.352633 (2020).

Wang, Z.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004).
[Crossref]

Ward, E. N.

C. N. Christensen, E. N. Ward, P. Lio, and C. F. Kaminski, “ML-SIM: A deep neural network for reconstruction of structured illumination microscopy images,” 1–9 (2020).

Warde-Farley, D.

I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative Adversarial Nets,” NIPS 14, Vol. 2, 2672–2680 (2014).

Wei, Z.

H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Günaydın, L. A. Bentolila, C. Kural, and A. Ozcan, “Deep learning enables cross-modality super-resolution in fluorescence microscopy,” Nat. Methods 16(1), 103–110 (2019).
[Crossref]

T. Liu, K. de Haan, Y. Rivenson, Z. Wei, X. Zeng, Y. Zhang, and A. Ozcan, “Deep learning-based super-resolution in coherent imaging systems,” Sci. Rep. 9(1), 3926 (2019).
[Crossref]

Weiss, L. E.

Wilke, M.

Wold, S.

S. Wold, K. Esbensen, and P. Geladi, “Principal component analysis,” Chemom. Intell. Lab. Syst. 2(1-3), 37–52 (1987).
[Crossref]

Wu, X.

Z. Wu, X. Wu, and Y. Zhu, “Structured illumination-based phase retrieval via Generative Adversarial Network,” Proc. SPIE 11249, 112490L (2020).
[Crossref]

Wu, Z.

Z. Wu, X. Wu, and Y. Zhu, “Structured illumination-based phase retrieval via Generative Adversarial Network,” Proc. SPIE 11249, 112490L (2020).
[Crossref]

Xie, X.

H. Zhang, C. Fang, X. Xie, Y. Yang, W. Mei, D. I. Jin, and A. P. Fei, “High-throughput, high-resolution deep learning microscopy based on registration-free generative adversarial network,” (2019).

Xing, F.

Xu, B.

I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative Adversarial Nets,” NIPS 14, Vol. 2, 2672–2680 (2014).

Xu, W.

Y. He, W. Xu, Y. Zhi, R. Tyagi, Z. Hu, and G. Cao, “Rapid bacteria identification using structured illumination microscopy and machine learning,” J. Innov. Opt. Health Sci. 11(01), 1850007 (2018).
[Crossref]

Xu, Y.

L. Jin, B. Liu, F. Zhao, S. Hahn, B. Dong, R. Song, T. C. Elston, Y. Xu, and K. M. Hahn, “Deep learning enables structured illumination microscopy with low light levels and enhanced speed,” Nat. Commun. 11(1), 1934 (2020).
[Crossref]

Yang, Y.

H. Zhang, C. Fang, X. Xie, Y. Yang, W. Mei, D. I. Jin, and A. P. Fei, “High-throughput, high-resolution deep learning microscopy based on registration-free generative adversarial network,” (2019).

Young, L. J.

R. F. Laine, G. Goodfellow, L. J. Young, J. Travers, D. Carroll, O. Dibben, H. Bright, and C. F. Kaminski, “Structured illumination microscopy combined with machine learning enables the high throughput analysis and classification of virus structure,” Elife 7, e40183 (2018).
[Crossref]

Yuan, C.

Yuan, X.

C. Ling, C. Zhang, M. Wang, F. Meng, L. Du, and X. Yuan, “Fast structured illumination microscopy via deep learning,” Photonics Res. 8(8), 1350 (2020).
[Crossref]

Zeng, X.

T. Liu, K. de Haan, Y. Rivenson, Z. Wei, X. Zeng, Y. Zhang, and A. Ozcan, “Deep learning-based super-resolution in coherent imaging systems,” Sci. Rep. 9(1), 3926 (2019).
[Crossref]

Zhang, C.

C. Ling, C. Zhang, M. Wang, F. Meng, L. Du, and X. Yuan, “Fast structured illumination microscopy via deep learning,” Photonics Res. 8(8), 1350 (2020).
[Crossref]

Zhang, H.

H. Zhang, C. Fang, X. Xie, Y. Yang, W. Mei, D. I. Jin, and A. P. Fei, “High-throughput, high-resolution deep learning microscopy based on registration-free generative adversarial network,” (2019).

Zhang, K.

K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising,” IEEE Trans. Image Process. 26(7), 3142–3155 (2017).
[Crossref]

Zhang, L.

K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising,” IEEE Trans. Image Process. 26(7), 3142–3155 (2017).
[Crossref]

Zhang, Y.

T. Liu, K. de Haan, Y. Rivenson, Z. Wei, X. Zeng, Y. Zhang, and A. Ozcan, “Deep learning-based super-resolution in coherent imaging systems,” Sci. Rep. 9(1), 3926 (2019).
[Crossref]

Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4(11), 1437 (2017).
[Crossref]

Zhao, F.

L. Jin, B. Liu, F. Zhao, S. Hahn, B. Dong, R. Song, T. C. Elston, Y. Xu, and K. M. Hahn, “Deep learning enables structured illumination microscopy with low light levels and enhanced speed,” Nat. Commun. 11(1), 1934 (2020).
[Crossref]

Zheng, S.

M. Lyu, H. Wang, G. Li, S. Zheng, and G. Situ, “Learning-based lensless imaging through optically thick scattering media,” Adv. Photonics 1(3), 036002 (2019).
[Crossref]

Zhi, Y.

Y. He, W. Xu, Y. Zhi, R. Tyagi, Z. Hu, and G. Cao, “Rapid bacteria identification using structured illumination microscopy and machine learning,” J. Innov. Opt. Health Sci. 11(01), 1850007 (2018).
[Crossref]

Zhu, Y.

Z. Wu, X. Wu, and Y. Zhu, “Structured illumination-based phase retrieval via Generative Adversarial Network,” Proc. SPIE 11249, 112490L (2020).
[Crossref]

Zuo, W.

K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising,” IEEE Trans. Image Process. 26(7), 3142–3155 (2017).
[Crossref]

Adv. Opt. Photonics (1)

M. Saxena, G. Eluru, and S. S. Gorthi, “Structured illumination microscopy,” Adv. Opt. Photonics 7(2), 241 (2015).
[Crossref]

Adv. Photonics (1)

M. Lyu, H. Wang, G. Li, S. Zheng, and G. Situ, “Learning-based lensless imaging through optically thick scattering media,” Adv. Photonics 1(3), 036002 (2019).
[Crossref]

Appl. Opt. (2)

Chemom. Intell. Lab. Syst. (1)

S. Wold, K. Esbensen, and P. Geladi, “Principal component analysis,” Chemom. Intell. Lab. Syst. 2(1-3), 37–52 (1987).
[Crossref]

Elife (1)

R. F. Laine, G. Goodfellow, L. J. Young, J. Travers, D. Carroll, O. Dibben, H. Bright, and C. F. Kaminski, “Structured illumination microscopy combined with machine learning enables the high throughput analysis and classification of virus structure,” Elife 7, e40183 (2018).
[Crossref]

IEEE Trans. Image Process. (2)

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004).
[Crossref]

K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising,” IEEE Trans. Image Process. 26(7), 3142–3155 (2017).
[Crossref]

J. Innov. Opt. Health Sci. (1)

Y. He, W. Xu, Y. Zhi, R. Tyagi, Z. Hu, and G. Cao, “Rapid bacteria identification using structured illumination microscopy and machine learning,” J. Innov. Opt. Health Sci. 11(01), 1850007 (2018).
[Crossref]

J. Photonics Energy (1)

M. K. Kim, “Principles and techniques of digital holographic microscopy,” J. Photonics Energy 1(1), 018005 (2010).
[Crossref]

Nat. Commun. (1)

L. Jin, B. Liu, F. Zhao, S. Hahn, B. Dong, R. Song, T. C. Elston, Y. Xu, and K. M. Hahn, “Deep learning enables structured illumination microscopy with low light levels and enhanced speed,” Nat. Commun. 11(1), 1934 (2020).
[Crossref]

Nat. Methods (1)

H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Günaydın, L. A. Bentolila, C. Kural, and A. Ozcan, “Deep learning enables cross-modality super-resolution in fluorescence microscopy,” Nat. Methods 16(1), 103–110 (2019).
[Crossref]

Nature (1)

Y. Lecun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521(7553), 436–444 (2015).
[Crossref]

Opt. Express (2)

Opt. Lett. (1)

Optica (3)

Photonics Res. (1)

C. Ling, C. Zhang, M. Wang, F. Meng, L. Du, and X. Yuan, “Fast structured illumination microscopy via deep learning,” Photonics Res. 8(8), 1350 (2020).
[Crossref]

Proc. SPIE (1)

Z. Wu, X. Wu, and Y. Zhu, “Structured illumination-based phase retrieval via Generative Adversarial Network,” Proc. SPIE 11249, 112490L (2020).
[Crossref]

Sci. Rep. (1)

T. Liu, K. de Haan, Y. Rivenson, Z. Wei, X. Zeng, Y. Zhang, and A. Ozcan, “Deep learning-based super-resolution in coherent imaging systems,” Sci. Rep. 9(1), 3926 (2019).
[Crossref]

Trans. ASABE (1)

Y. Lu and R. Lu, “Detection of Surface and Subsurface Defects of Apples Using Structured- Illumination Reflectance Imaging with Machine Learning Algorithms,” Trans. ASABE 61(6), 1831–1842 (2018).
[Crossref]

Other (11)

C. Dong, C. C. Loy, K. He, and X. Tang, “Learning a deep convolutional network for image super-resolution,” in Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Springer Verlag, 2014), Vol. 8692 LNCS, pp. 184–199.

I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative Adversarial Nets,” NIPS 14, Vol. 2, 2672–2680 (2014).

O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Springer Verlag, 2015), Vol. 9351, pp. 234–241.

S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” 32nd Int. Conf. Mach. Learn. ICML 20151, 448–456 (2015).

V. Nair and G. E. Hinton, Rectified Linear Units Improve Restricted Boltzmann Machines (2010).

C. N. Christensen, E. N. Ward, P. Lio, and C. F. Kaminski, “ML-SIM: A deep neural network for reconstruction of structured illumination microscopy images,” 1–9 (2020).

Z. Hussain Shah, M. Müller, T.-C. Wang, P. Maurice Scheidig, A. Schneider, M. Schüttpelz, T. Huser, and W. Schenck, “Deep-learning based denoising and reconstruction of super-resolution structured illumination microscopy images,” bioRxiv 2020.10.27.352633 (2020).

M. Boland, E. A. K. Cohen, S. Flaxman, and M. A. A. Neil, “Improving axial resolution in SIM using deep learning,” (2020).

H. Zhang, C. Fang, X. Xie, Y. Yang, W. Mei, D. I. Jin, and A. P. Fei, “High-throughput, high-resolution deep learning microscopy based on registration-free generative adversarial network,” (2019).

M. W. Davidson and M. Abramowitz, “Optical Microscopy,” in Encyclopedia of Imaging Science and Technology (John Wiley & Sons, Inc., 2002).

E. A.-A. für mikroskopische Anatomie and undefined 1873, “Beiträge zur Theorie des Mikroskops und der mikroskopischen Wahrnehmung,” Springer (n.d.).

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.



Figures (9)

Fig. 1.
Fig. 1. Schematic diagram of SI-DHM. BS: beam splitters; BEC: beam expansion and collimation; P: polarizer; L: lens; F: filter; MO: microscope objective; TL: tube lens; M: mirror.
Fig. 2.
Fig. 2. Flow chart of the data set preparation process for network training. The red path indicates the traditional SI-DHM reconstruction algorithm flow. The green path represents the reconstruction results and data acquisition. The blue path represents the images for network training.
Fig. 3.
Fig. 3. (a) Amplitude and phase reconstruction of polystyrene particle-cluster by SI-DHM. WF and SIM images are shown. The enlarged parts are included in orange boxes. (b) Three-dimensional visualization of the phase distribution included in the orange box.
Fig. 4.
Fig. 4. (a) Network architecture. Rectangular blocks of different colors represent different operations. The ‘Conv Double’ unit, represented by the gray block, contains convolution layers, Batch Normalization (BN) [28] and a Rectified Linear Unit (ReLU) [29]; its structure is shown in the dotted-line box with gray background. The blue arrow represents the skip connection. (b) Reconstruction process and example results of SIM-LHA. The SSIM and PSNR values of the sample image are shown in parentheses. (c) Normalized intensity values of the sample image along the white dotted line in (b). (d) Reconstruction process and example results of SIM-AP. The WF phase image (leftmost) is not used in the training process of SIM-AP; it is displayed only for comparison with the network output. (e) Phase values of the sample image along the white dotted line in (d).
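The ‘Conv Double’ building block of Fig. 4(a) can be sketched as follows. This is a minimal single-channel NumPy illustration of the Conv → BN → ReLU → Conv → BN → ReLU sequence, not the authors’ implementation: a real network would use a deep-learning framework with multi-channel convolutions and learned BN scale/shift parameters.

```python
import numpy as np

def conv2d(x, kernel):
    """Naive 'valid' 2D convolution for a single channel (illustration only)."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def batch_norm(x, eps=1e-5):
    """Normalize activations to zero mean / unit variance (BN without learned affine)."""
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def relu(x):
    """Rectified Linear Unit: clamp negative activations to zero."""
    return np.maximum(x, 0.0)

def conv_double(x, k1, k2):
    """Two Conv -> BN -> ReLU stages, mirroring the 'Conv Double' block."""
    x = relu(batch_norm(conv2d(x, k1)))
    x = relu(batch_norm(conv2d(x, k2)))
    return x
```

Each ‘valid’ 3×3 convolution shrinks the feature map by two pixels per side, so a 6×6 input yields a 2×2 output after two stages.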
Fig. 5.
Fig. 5. WF images, SIM images and high-resolution output results of the proposed network for different field-of-view images of polystyrene particle-cluster.
Fig. 6.
Fig. 6. WF images, SIM images and high-resolution output results for the biological samples (a) sarcina and (b) neurons. The large-size image is obtained using the traditional phase-shifting reconstruction method. The image patches on the right are related to the network (WF phase patches are not used as network input). The orange and purple boxes show enlarged details of different regions. The network reconstruction results are displayed on the right.
Fig. 7.
Fig. 7. Schematic diagram of data processed by different networks. (a) Cascade network composed of: SIM-LHA and SIM-AP; (b) SIM-D; (c) SIM-LHP.
Fig. 8.
Fig. 8. (a) Comparison of different networks. (b), (c) and (d) are plots of the SSIM and PSNR of the high-resolution phase images of polystyrene particle-cluster, sarcina and neurons, respectively, for the different networks. The red-brown, olive green and cobalt blue boxes represent the output results of SIM-D, SIM-LHP and SIM-AP, respectively. The median of the two evaluation indicators is given by the red line in the center of each box; outliers are indicated by circles.
Fig. 9.
Fig. 9. The tolerance of the cascade network and phase-shifting reconstruction to varying amounts of noise. (a) Comparison of traditional reconstruction and network output results on high-resolution amplitude and phase images for different noise levels. All images are from neurons. (b), (c) Changes in the evaluation indices of the amplitude and phase reconstructions as a function of the noise level. The blue and green curves represent the SSIM and PSNR values, respectively. The dotted lines represent the phase-shifting results, and the solid lines represent the network output results.

Tables (2)


Table 1. Image quality evaluation indices for different objects with the proposed method.


Table 2. Image quality evaluation indices for SIM-D, SIM-LHP and the cascade network.
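The SSIM and PSNR indices reported in the tables and figures can be computed as sketched below. This is a simplified NumPy illustration (a single-window “global” SSIM rather than the sliding-window version of Wang et al.; hypothetical function names), not the evaluation code used by the authors.

```python
import numpy as np

def psnr(ref, test, max_val=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref - test) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(ref, test, max_val=1.0):
    """Single-window SSIM; standard implementations slide a Gaussian window."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2  # stabilizing constants
    mu_x, mu_y = ref.mean(), test.mean()
    var_x, var_y = ref.var(), test.var()
    cov = ((ref - mu_x) * (test - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

For identical images SSIM is 1 and PSNR diverges; a uniform offset of 0.1 on a unit-range image gives a PSNR of 20 dB.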

Equations (1)


$$\mathrm{MSE\;Loss}\left(I_i,\hat{I}_i\right)=\frac{1}{n}\sum_{i=1}^{n}\left\lVert I_i-\hat{I}_i\right\rVert^{2}$$
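The MSE loss above (the mean over the batch of per-image squared errors between targets $I_i$ and network outputs $\hat{I}_i$) can be sketched in NumPy; this is an illustrative stand-in, not the training code, which would evaluate the loss inside a deep-learning framework.

```python
import numpy as np

def mse_loss(targets, preds):
    """Mean over n image pairs of the squared L2 error ||I_i - I_hat_i||^2."""
    n = len(targets)
    return sum(np.sum((t - p) ** 2) for t, p in zip(targets, preds)) / n
```

For example, a batch of two 2x2 images where one prediction is off by 1 everywhere and the other is exact gives (4 + 0) / 2 = 2.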
