Abstract

This study shows that convolutional neural networks (CNNs) can improve the performance of structured illumination microscopy (SIM), enabling reconstruction of a super-resolution image from three raw frames instead of the standard nine. Owing to the isotropy of the fluorophores, the trained CNNs capture the correlation between the high-frequency information along the different directions of the spectrum. A high-precision super-resolution image can thus be reconstructed from three frames acquired along a single illumination direction. This permits gentler, faster super-resolution imaging and reduces phototoxicity during acquisition.
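As background to the frame counts discussed above: conventional 2D SIM acquires three pattern orientations with three phase shifts each (nine raw frames), whereas the approach here uses only the three phase-shifted frames of a single orientation. The following minimal sketch, based on standard SIM theory rather than any code from this paper, generates the sinusoidal illumination patterns to make that 3 × 3 versus 1 × 3 geometry concrete; the sample, optics, and CNN are omitted.

```python
import numpy as np

def sim_patterns(n=64, k=0.2, orientations=3, phases=3):
    """Return SIM illumination patterns, shape (orientations, phases, n, n).

    Each pattern is 1 + cos(2*pi*(kx*x + ky*y) + phi): a sinusoidal fringe
    with spatial frequency k, rotated to one of `orientations` angles and
    shifted by one of `phases` equally spaced phase offsets.
    """
    y, x = np.mgrid[0:n, 0:n]
    out = np.empty((orientations, phases, n, n))
    for i in range(orientations):
        theta = i * np.pi / orientations            # pattern orientation
        kx, ky = k * np.cos(theta), k * np.sin(theta)
        for j in range(phases):
            phi = j * 2 * np.pi / phases            # phase shift
            out[i, j] = 1 + np.cos(2 * np.pi * (kx * x + ky * y) + phi)
    return out

patterns = sim_patterns()
nine_frame_stack = patterns.reshape(-1, 64, 64)     # standard SIM input: 9 frames
three_frame_stack = patterns[0]                     # reduced input: 3 frames, one direction
```

In conventional reconstruction, each orientation recovers high-frequency content only along its own fringe direction, which is why all three orientations are normally required; the CNN here is trained to infer the missing directions from one.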

© 2020 Chinese Laser Press


References


  1. E. Abbe, “Contributions to the theory of the microscope and of microscopic perception,” Arch. Microsc. Anat. 9, 413–468 (1873).
  2. M. J. Rust, M. Bates, and X. Zhuang, “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM),” Nat. Methods 3, 793–795 (2006).
  3. M. Bates, B. Huang, G. T. Dempsey, and X. Zhuang, “Multicolor super-resolution imaging with photo-switchable fluorescent probes,” Science 317, 1749–1753 (2007).
  4. S. T. Hess, T. P. K. Girirajan, and M. D. Mason, “Ultra-high resolution imaging by fluorescence photoactivation localization microscopy,” Biophys. J. 91, 4258–4272 (2006).
  5. H. Shroff, C. G. Galbraith, J. A. Galbraith, and E. Betzig, “Live-cell photoactivated localization microscopy of nanoscale adhesion dynamics,” Nat. Methods 5, 417–423 (2008).
  6. M. G. L. Gustafsson, D. A. Agard, and J. W. Sedat, “Doubling the lateral resolution of wide-field fluorescence microscopy using structured illumination,” Proc. SPIE 3919, 141–150 (2000).
  7. M. G. L. Gustafsson, L. Shao, P. M. Carlton, C. J. R. Wang, I. N. Golubovskaya, W. Z. Cande, D. A. Agard, and J. W. Sedat, “Three-dimensional resolution doubling in wide-field fluorescence microscopy by structured illumination,” Biophys. J. 94, 4957–4970 (2008).
  8. T. A. Klar, S. Jakobs, M. Dyba, A. Egner, and S. W. Hell, “Fluorescence microscopy with diffraction resolution barrier broken by stimulated emission,” Proc. Natl. Acad. Sci. USA 97, 8206–8210 (2000).
  9. E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313, 1642–1645 (2006).
  10. H. Linnenbank, T. Steinle, F. Morz, M. Floss, C. Han, A. Glidle, and H. Giessen, “Robust and rapidly tunable light source for SRS/CARS microscopy with low-intensity noise,” Adv. Photon. 1, 055001 (2019).
  11. P. Fei, J. Nie, J. Lee, Y. Ding, S. Li, H. Zhang, M. Hagiwara, T. Yu, T. Segura, C.-M. Ho, D. Zhu, and T. K. Hsiai, “Subvoxel light-sheet microscopy for high-resolution high-throughput volumetric imaging of large biomedical specimens,” Adv. Photon. 1, 016002 (2019).
  12. E. Narimanov, “Resolution limit of label-free far-field microscopy,” Adv. Photon. 1, 056003 (2019).
  13. K. Wicker, “Super-resolution fluorescence microscopy using structured illumination,” in Super-Resolution Microscopy Techniques in the Neurosciences, E. F. Fornasiero and S. O. Rizzoli, eds. (Springer, 2014), pp. 133–165.
  14. M. G. L. Gustafsson, “Nonlinear structured-illumination microscopy: wide-field fluorescence imaging with theoretically unlimited resolution,” Proc. Natl. Acad. Sci. USA 102, 13081–13086 (2005).
  15. F. Orieux, E. Sepulveda, V. Loriette, B. Dubertret, and J.-C. Olivo-Marin, “Bayesian estimation for optimized structured illumination microscopy,” IEEE Trans. Image Process. 21, 601–614 (2012).
  16. S. Dong, J. Liao, K. Guo, L. Bian, J. Suo, and G. Zheng, “Resolution doubling with a reduced number of image acquisitions,” Biomed. Opt. Express 6, 2946–2952 (2015).
  17. A. Lal, C. Shan, K. Zhao, W. Liu, X. Huang, W. Zong, L. Chen, and P. Xi, “A frequency domain SIM reconstruction algorithm using reduced number of images,” IEEE Trans. Image Process. 27, 4555–4570 (2018).
  18. F. Strohl and C. F. Kaminski, “Speed limits of structured illumination microscopy,” Opt. Lett. 42, 2511–2514 (2017).
  19. W. H. Richardson, “Bayesian-based iterative method of image restoration,” J. Opt. Soc. Am. 62, 55–59 (1972).
  20. M. Ingaramo, A. G. York, E. Hoogendoorn, M. Postma, H. Shroff, and G. H. Patterson, “Richardson-Lucy deconvolution as a general tool for combining images with complementary strengths,” ChemPhysChem 15, 794–800 (2014).
  21. M. I. Jordan and T. M. Mitchell, “Machine learning: trends, perspectives, and prospects,” Science 349, 255–260 (2015).
  22. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521, 436–444 (2015).
  23. A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Commun. ACM 60, 84–90 (2017).
  24. Y. Rivenson, Z. Gorocs, H. Gunaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4, 1437–1443 (2017).
  25. W. Ouyang, A. Aristov, M. Lelek, X. Hao, and C. Zimmer, “Deep learning massively accelerates super-resolution localization microscopy,” Nat. Biotechnol. 36, 460–468 (2018).
  26. E. Nehme, L. E. Weiss, T. Michaeli, and Y. Shechtman, “Deep-STORM: super-resolution single-molecule microscopy by deep learning,” Optica 5, 458–464 (2018).
  27. N. Thanh, Y. Xue, Y. Li, L. Tian, and G. Nehmetallah, “Deep learning approach to Fourier ptychographic microscopy,” Opt. Express 26, 26470–26484 (2018).
  28. I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Proceedings of the 27th International Conference on Neural Information Processing Systems, Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, eds. (2014), pp. 2672–2680.
  29. J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in IEEE International Conference on Computer Vision (ICCV) (2017), pp. 2242–2251.
  30. M. Mirza and S. Osindero, “Conditional generative adversarial nets,” arXiv:1411.1784 (2014).
  31. L. A. Gatys, A. S. Ecker, and M. Bethge, “Image style transfer using convolutional neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 2414–2423.
  32. P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 5967–5976.
  33. C. Li and M. Wand, “Precomputed real-time texture synthesis with Markovian generative adversarial networks,” in Computer Vision—European Conference on Computer Vision (ECCV), B. Leibe, J. Matas, N. Sebe, and M. Welling, eds. (2016), pp. 702–716.
  34. N. Sundaram, T. Brox, and K. Keutzer, “Dense point trajectories by GPU-accelerated large displacement optical flow,” in Computer Vision—European Conference on Computer Vision (ECCV), K. Daniilidis, P. Maragos, and N. Paragios, eds. (2010), pp. 438–451.
  35. C. Godard, O. Mac Aodha, and G. J. Brostow, “Unsupervised monocular depth estimation with left-right consistency,” in 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 6602–6611.
  36. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 770–778.
  37. A. Lal, C. Shan, and P. Xi, “Structured illumination microscopy image reconstruction algorithm,” IEEE J. Sel. Top. Quantum Electron. 22, 6803414 (2016).
  38. M. Mueller, V. Moenkemoeller, S. Hennig, W. Huebner, and T. Huser, “Open-source image reconstruction of super-resolution structured illumination microscopy data in ImageJ,” Nat. Commun. 7, 10980 (2016).
  39. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004).
  40. Z. Wang, E. P. Simoncelli, and A. C. Bovik, “Multi-scale structural similarity for image quality assessment,” in Conference Record of the 37th Asilomar Conference on Signals, Systems & Computers, M. B. Matthews, ed. (2003), pp. 1398–1402.
  41. H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Gunaydin, L. A. Bentolila, C. Kural, and A. Ozcan, “Deep learning enables cross-modality super-resolution in fluorescence microscopy,” Nat. Methods 16, 103–110 (2019).
  42. L. Jin, B. Liu, F. Zhao, S. Hahn, B. Dong, R. Song, T. C. Elston, Y. Xu, and K. M. Hahn, “Deep learning enables structured illumination microscopy with low light levels and enhanced speed,” Nat. Commun. 11, 1934 (2020).

[Crossref]

Li, Y.

Liao, J.

Lindwasser, O. W.

E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313, 1642–1645 (2006).
[Crossref]

Linnenbank, H.

H. Linnenbank, T. Steinle, F. Morz, M. Floss, C. Han, A. Glidle, and H. Giessen, “Robust and rapidly tunable light source for SRS/CARS microscopy with low-intensity noise,” Adv. Photon. 1, 055001 (2019).
[Crossref]

Lippincott-Schwartz, J.

E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313, 1642–1645 (2006).
[Crossref]

Liu, B.

L. Jin, B. Liu, F. Zhao, S. Hahn, B. Dong, R. Song, T. C. Elston, Y. Xu, and K. M. Hahn, “Deep learning enables structured illumination microscopy with low light levels and enhanced speed,” Nat. Commun. 11, 1934 (2020).
[Crossref]

Liu, W.

A. Lal, C. Shan, K. Zhao, W. Liu, X. Huang, W. Zong, L. Chen, and P. Xi, “A frequency domain SIM reconstruction algorithm using reduced number of images,” IEEE Trans. Image Process. 27, 4555–4570 (2018).
[Crossref]

Loriette, V.

F. Orieux, E. Sepulveda, V. Loriette, B. Dubertret, and J.-C. Olivo-Marin, “Bayesian estimation for optimized structured illumination microscopy,” IEEE Trans. Image Process. 21, 601–614 (2012).
[Crossref]

Mac Aodha, O.

C. Godard, O. Mac Aodha, and G. J. Brostow, “Unsupervised monocular depth estimation with left-right consistency,” in 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 6602–6611.

Mason, M. D.

S. T. Hess, T. P. K. Girirajan, and M. D. Mason, “Ultra-high resolution imaging by fluorescence photoactivation localization microscopy,” Biophys. J. 91, 4258–4272 (2006).
[Crossref]

Michaeli, T.

Mirza, M.

I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Proceedings of the 27th International Conference on Neural Information Processing Systems, Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, eds. (2014), pp. 2672–2680.

M. Mirza and S. Osindero, “Conditional generative adversarial nets,” arXiv:1411.1784 (2014).

Mitchell, T. M.

M. I. Jordan and T. M. Mitchell, “Machine learning: trends, perspectives, and prospects,” Science 349, 255–260 (2015).
[Crossref]

Moenkemoeller, V.

M. Mueller, V. Moenkemoeller, S. Hennig, W. Huebner, and T. Huser, “Open-source image reconstruction of super-resolution structured illumination microscopy data in ImageJ,” Nat. Commun. 7, 10980 (2016).
[Crossref]

Morz, F.

H. Linnenbank, T. Steinle, F. Morz, M. Floss, C. Han, A. Glidle, and H. Giessen, “Robust and rapidly tunable light source for SRS/CARS microscopy with low-intensity noise,” Adv. Photon. 1, 055001 (2019).
[Crossref]

Mueller, M.

M. Mueller, V. Moenkemoeller, S. Hennig, W. Huebner, and T. Huser, “Open-source image reconstruction of super-resolution structured illumination microscopy data in ImageJ,” Nat. Commun. 7, 10980 (2016).
[Crossref]

Narimanov, E.

E. Narimanov, “Resolution limit of label-free far-field microscopy,” Adv. Photon. 1, 056003 (2019).

Nehme, E.

Nehmetallah, G.

Nie, J.

P. Fei, J. Nie, J. Lee, Y. Ding, S. Li, H. Zhang, M. Hagiwara, T. Yu, T. Segura, C.-M. Ho, D. Zhu, and T. K. Hsiai, “Subvoxel light-sheet microscopy for high-resolution high-throughput volumetric imaging of large biomedical specimens,” Adv. Photon. 1, 016002 (2019).
[Crossref]

Olenych, S.

E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313, 1642–1645 (2006).
[Crossref]

Olivo-Marin, J.-C.

F. Orieux, E. Sepulveda, V. Loriette, B. Dubertret, and J.-C. Olivo-Marin, “Bayesian estimation for optimized structured illumination microscopy,” IEEE Trans. Image Process. 21, 601–614 (2012).
[Crossref]

Orieux, F.

F. Orieux, E. Sepulveda, V. Loriette, B. Dubertret, and J.-C. Olivo-Marin, “Bayesian estimation for optimized structured illumination microscopy,” IEEE Trans. Image Process. 21, 601–614 (2012).
[Crossref]

Osindero, S.

M. Mirza and S. Osindero, “Conditional generative adversarial nets,” arXiv:1411.1784 (2014).

Ouyang, W.

W. Ouyang, A. Aristov, M. Lelek, X. Hao, and C. Zimmer, “Deep learning massively accelerates super-resolution localization microscopy,” Nat. Biotechnol. 36, 460–468 (2018).
[Crossref]

Ozair, S.

I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Proceedings of the 27th International Conference on Neural Information Processing Systems, Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, eds. (2014), pp. 2672–2680.

Ozcan, A.

H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Gunaydin, L. A. Bentolila, C. Kural, and A. Ozcan, “Deep learning enables cross-modality super-resolution in fluorescence microscopy,” Nat. Methods 16, 103–110 (2019).
[Crossref]

Y. Rivenson, Z. Gorocs, H. Gunaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4, 1437–1443 (2017).
[Crossref]

Park, T.

J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in IEEE International Conference on Computer Vision (2017), pp. 2242–2251.

Patterson, G. H.

M. Ingaramo, A. G. York, E. Hoogendoorn, M. Postma, H. Shroff, and G. H. Patterson, “Richardson-Lucy deconvolution as a general tool for combining images with complementary strengths,” Chem. Phys. Chem. 15, 794–800 (2014).
[Crossref]

E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313, 1642–1645 (2006).
[Crossref]

Postma, M.

M. Ingaramo, A. G. York, E. Hoogendoorn, M. Postma, H. Shroff, and G. H. Patterson, “Richardson-Lucy deconvolution as a general tool for combining images with complementary strengths,” Chem. Phys. Chem. 15, 794–800 (2014).
[Crossref]

Pouget-Abadie, J.

I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Proceedings of the 27th International Conference on Neural Information Processing Systems, Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, eds. (2014), pp. 2672–2680.

Ren, S.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 770–778.

Richardson, W. H.

Rivenson, Y.

H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Gunaydin, L. A. Bentolila, C. Kural, and A. Ozcan, “Deep learning enables cross-modality super-resolution in fluorescence microscopy,” Nat. Methods 16, 103–110 (2019).
[Crossref]

Y. Rivenson, Z. Gorocs, H. Gunaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4, 1437–1443 (2017).
[Crossref]

Rust, M. J.

M. J. Rust, M. Bates, and X. Zhuang, “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM),” Nat. Methods 3, 793–795 (2006).
[Crossref]

Sedat, J. W.

M. G. L. Gustafsson, L. Shao, P. M. Carlton, C. J. R. Wang, I. N. Golubovskaya, W. Z. Cande, D. A. Agard, and J. W. Sedat, “Three-dimensional resolution doubling in wide-field fluorescence microscopy by structured illumination,” Biophys. J. 94, 4957–4970 (2008).
[Crossref]

M. G. L. Gustafsson, D. A. Agard, and J. W. Sedat, “Doubling the lateral resolution of wide-field fluorescence microscopy using structured illumination,” Proc. SPIE 3919, 141–150 (2000).
[Crossref]

Segura, T.

P. Fei, J. Nie, J. Lee, Y. Ding, S. Li, H. Zhang, M. Hagiwara, T. Yu, T. Segura, C.-M. Ho, D. Zhu, and T. K. Hsiai, “Subvoxel light-sheet microscopy for high-resolution high-throughput volumetric imaging of large biomedical specimens,” Adv. Photon. 1, 016002 (2019).
[Crossref]

Sepulveda, E.

F. Orieux, E. Sepulveda, V. Loriette, B. Dubertret, and J.-C. Olivo-Marin, “Bayesian estimation for optimized structured illumination microscopy,” IEEE Trans. Image Process. 21, 601–614 (2012).
[Crossref]

Shan, C.

A. Lal, C. Shan, K. Zhao, W. Liu, X. Huang, W. Zong, L. Chen, and P. Xi, “A frequency domain SIM reconstruction algorithm using reduced number of images,” IEEE Trans. Image Process. 27, 4555–4570 (2018).
[Crossref]

A. Lal, C. Shan, and P. Xi, “Structured illumination microscopy image reconstruction algorithm,” IEEE J. Sel. Top. Quantum Electron. 22, 6803414 (2016).
[Crossref]

Shao, L.

M. G. L. Gustafsson, L. Shao, P. M. Carlton, C. J. R. Wang, I. N. Golubovskaya, W. Z. Cande, D. A. Agard, and J. W. Sedat, “Three-dimensional resolution doubling in wide-field fluorescence microscopy by structured illumination,” Biophys. J. 94, 4957–4970 (2008).
[Crossref]

Shechtman, Y.

Sheikh, H. R.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004).
[Crossref]

Shroff, H.

M. Ingaramo, A. G. York, E. Hoogendoorn, M. Postma, H. Shroff, and G. H. Patterson, “Richardson-Lucy deconvolution as a general tool for combining images with complementary strengths,” Chem. Phys. Chem. 15, 794–800 (2014).
[Crossref]

H. Shroff, C. G. Galbraith, J. A. Galbraith, and E. Betzig, “Live-cell photoactivated localization microscopy of nanoscale adhesion dynamics,” Nat. Methods 5, 417–423 (2008).
[Crossref]

Simoncelli, E. P.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004).
[Crossref]

Z. Wang, E. P. Simoncelli, and A. C. Bovik, “Multi-scale structural similarity for image quality assessment,” in Conference Record of the 37th Asilomar Conference on Signals, Systems & Computers, M. B. Matthews, ed. (2003), pp. 1398–1402.

Song, R.

L. Jin, B. Liu, F. Zhao, S. Hahn, B. Dong, R. Song, T. C. Elston, Y. Xu, and K. M. Hahn, “Deep learning enables structured illumination microscopy with low light levels and enhanced speed,” Nat. Commun. 11, 1934 (2020).
[Crossref]

Sougrat, R.

E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313, 1642–1645 (2006).
[Crossref]

Steinle, T.

H. Linnenbank, T. Steinle, F. Morz, M. Floss, C. Han, A. Glidle, and H. Giessen, “Robust and rapidly tunable light source for SRS/CARS microscopy with low-intensity noise,” Adv. Photon. 1, 055001 (2019).
[Crossref]

Strohl, F.

Sun, J.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 770–778.

Sundaram, N.

N. Sundaram, T. Brox, and K. Keutzer, “Dense point trajectories by GPU-accelerated large displacement optical flow,” in Computer Vision—European Conference on Computer Vision (ECCV), K. Daniilidis, P. Maragos, and N. Paragios, eds. (2010), pp. 438–451.

Suo, J.

Sutskever, I.

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Commun. ACM 60, 84–90 (2017).
[Crossref]

Thanh, N.

Tian, L.

Wand, M.

C. Li and M. Wand, “Precomputed real-time texture synthesis with Markovian generative adversarial networks,” in Computer Vision—European Conference on Computer Vision (ECCV), B. Leibe, J. Matas, N. Sebe, and M. Welling, eds. (2016), pp. 702–716.

Wang, C. J. R.

M. G. L. Gustafsson, L. Shao, P. M. Carlton, C. J. R. Wang, I. N. Golubovskaya, W. Z. Cande, D. A. Agard, and J. W. Sedat, “Three-dimensional resolution doubling in wide-field fluorescence microscopy by structured illumination,” Biophys. J. 94, 4957–4970 (2008).
[Crossref]

Wang, H.

H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Gunaydin, L. A. Bentolila, C. Kural, and A. Ozcan, “Deep learning enables cross-modality super-resolution in fluorescence microscopy,” Nat. Methods 16, 103–110 (2019).
[Crossref]

Y. Rivenson, Z. Gorocs, H. Gunaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4, 1437–1443 (2017).
[Crossref]

Wang, Z.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004).
[Crossref]

Z. Wang, E. P. Simoncelli, and A. C. Bovik, “Multi-scale structural similarity for image quality assessment,” in Conference Record of the 37th Asilomar Conference on Signals, Systems & Computers, M. B. Matthews, ed. (2003), pp. 1398–1402.

Warde-Farley, D.

I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Proceedings of the 27th International Conference on Neural Information Processing Systems, Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, eds. (2014), pp. 2672–2680.

Wei, Z.

H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Gunaydin, L. A. Bentolila, C. Kural, and A. Ozcan, “Deep learning enables cross-modality super-resolution in fluorescence microscopy,” Nat. Methods 16, 103–110 (2019).
[Crossref]

Weiss, L. E.

Wicker, K.

K. Wicker, “Super-resolution fluorescence microscopy using structured illumination,” in Super-Resolution Microscopy Techniques in the Neurosciences, E. F. Fornasiero and S. O. Rizzoli, eds. (Springer, 2014), pp. 133–165.

Xi, P.

A. Lal, C. Shan, K. Zhao, W. Liu, X. Huang, W. Zong, L. Chen, and P. Xi, “A frequency domain SIM reconstruction algorithm using reduced number of images,” IEEE Trans. Image Process. 27, 4555–4570 (2018).
[Crossref]

A. Lal, C. Shan, and P. Xi, “Structured illumination microscopy image reconstruction algorithm,” IEEE J. Sel. Top. Quantum Electron. 22, 6803414 (2016).
[Crossref]

Xu, B.

I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Proceedings of the 27th International Conference on Neural Information Processing Systems, Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, eds. (2014), pp. 2672–2680.

Xu, Y.

L. Jin, B. Liu, F. Zhao, S. Hahn, B. Dong, R. Song, T. C. Elston, Y. Xu, and K. M. Hahn, “Deep learning enables structured illumination microscopy with low light levels and enhanced speed,” Nat. Commun. 11, 1934 (2020).
[Crossref]

Xue, Y.

York, A. G.

M. Ingaramo, A. G. York, E. Hoogendoorn, M. Postma, H. Shroff, and G. H. Patterson, “Richardson-Lucy deconvolution as a general tool for combining images with complementary strengths,” Chem. Phys. Chem. 15, 794–800 (2014).
[Crossref]

Yu, T.

P. Fei, J. Nie, J. Lee, Y. Ding, S. Li, H. Zhang, M. Hagiwara, T. Yu, T. Segura, C.-M. Ho, D. Zhu, and T. K. Hsiai, “Subvoxel light-sheet microscopy for high-resolution high-throughput volumetric imaging of large biomedical specimens,” Adv. Photon. 1, 016002 (2019).
[Crossref]

Zhang, H.

P. Fei, J. Nie, J. Lee, Y. Ding, S. Li, H. Zhang, M. Hagiwara, T. Yu, T. Segura, C.-M. Ho, D. Zhu, and T. K. Hsiai, “Subvoxel light-sheet microscopy for high-resolution high-throughput volumetric imaging of large biomedical specimens,” Adv. Photon. 1, 016002 (2019).
[Crossref]

Zhang, X.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 770–778.

Zhang, Y.

Zhao, F.

L. Jin, B. Liu, F. Zhao, S. Hahn, B. Dong, R. Song, T. C. Elston, Y. Xu, and K. M. Hahn, “Deep learning enables structured illumination microscopy with low light levels and enhanced speed,” Nat. Commun. 11, 1934 (2020).
[Crossref]

Zhao, K.

A. Lal, C. Shan, K. Zhao, W. Liu, X. Huang, W. Zong, L. Chen, and P. Xi, “A frequency domain SIM reconstruction algorithm using reduced number of images,” IEEE Trans. Image Process. 27, 4555–4570 (2018).
[Crossref]

Zheng, G.

Zhou, T.

P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in 30th IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 5967–5976.

Zhu, D.

P. Fei, J. Nie, J. Lee, Y. Ding, S. Li, H. Zhang, M. Hagiwara, T. Yu, T. Segura, C.-M. Ho, D. Zhu, and T. K. Hsiai, “Subvoxel light-sheet microscopy for high-resolution high-throughput volumetric imaging of large biomedical specimens,” Adv. Photon. 1, 016002 (2019).
[Crossref]

Zhu, J.-Y.

P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in 30th IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 5967–5976.

J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in IEEE International Conference on Computer Vision (2017), pp. 2242–2251.

Zhuang, X.

M. Bates, B. Huang, G. T. Dempsey, and X. Zhuang, “Multicolor super-resolution imaging with photo-switchable fluorescent probes,” Science 317, 1749–1753 (2007).
[Crossref]

M. J. Rust, M. Bates, and X. Zhuang, “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM),” Nat. Methods 3, 793–795 (2006).
[Crossref]

Zimmer, C.

W. Ouyang, A. Aristov, M. Lelek, X. Hao, and C. Zimmer, “Deep learning massively accelerates super-resolution localization microscopy,” Nat. Biotechnol. 36, 460–468 (2018).
[Crossref]

Zong, W.

A. Lal, C. Shan, K. Zhao, W. Liu, X. Huang, W. Zong, L. Chen, and P. Xi, “A frequency domain SIM reconstruction algorithm using reduced number of images,” IEEE Trans. Image Process. 27, 4555–4570 (2018).
[Crossref]

Adv. Photon. (3)

H. Linnenbank, T. Steinle, F. Morz, M. Floss, C. Han, A. Glidle, and H. Giessen, “Robust and rapidly tunable light source for SRS/CARS microscopy with low-intensity noise,” Adv. Photon. 1, 055001 (2019).
[Crossref]

P. Fei, J. Nie, J. Lee, Y. Ding, S. Li, H. Zhang, M. Hagiwara, T. Yu, T. Segura, C.-M. Ho, D. Zhu, and T. K. Hsiai, “Subvoxel light-sheet microscopy for high-resolution high-throughput volumetric imaging of large biomedical specimens,” Adv. Photon. 1, 016002 (2019).
[Crossref]

E. Narimanov, “Resolution limit of label-free far-field microscopy,” Adv. Photon. 1, 056003 (2019).

Biomed. Opt. Express (1)

Biophys. J. (2)

S. T. Hess, T. P. K. Girirajan, and M. D. Mason, “Ultra-high resolution imaging by fluorescence photoactivation localization microscopy,” Biophys. J. 91, 4258–4272 (2006).
[Crossref]

M. G. L. Gustafsson, L. Shao, P. M. Carlton, C. J. R. Wang, I. N. Golubovskaya, W. Z. Cande, D. A. Agard, and J. W. Sedat, “Three-dimensional resolution doubling in wide-field fluorescence microscopy by structured illumination,” Biophys. J. 94, 4957–4970 (2008).
[Crossref]

Chem. Phys. Chem. (1)

M. Ingaramo, A. G. York, E. Hoogendoorn, M. Postma, H. Shroff, and G. H. Patterson, “Richardson-Lucy deconvolution as a general tool for combining images with complementary strengths,” Chem. Phys. Chem. 15, 794–800 (2014).
[Crossref]

Commun. ACM (1)

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Commun. ACM 60, 84–90 (2017).
[Crossref]

IEEE J. Sel. Top. Quantum Electron. (1)

A. Lal, C. Shan, and P. Xi, “Structured illumination microscopy image reconstruction algorithm,” IEEE J. Sel. Top. Quantum Electron. 22, 6803414 (2016).
[Crossref]

IEEE Trans. Image Process. (3)

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004).
[Crossref]

A. Lal, C. Shan, K. Zhao, W. Liu, X. Huang, W. Zong, L. Chen, and P. Xi, “A frequency domain SIM reconstruction algorithm using reduced number of images,” IEEE Trans. Image Process. 27, 4555–4570 (2018).
[Crossref]

F. Orieux, E. Sepulveda, V. Loriette, B. Dubertret, and J.-C. Olivo-Marin, “Bayesian estimation for optimized structured illumination microscopy,” IEEE Trans. Image Process. 21, 601–614 (2012).
[Crossref]

J. Opt. Soc. Am. (1)

Nat. Biotechnol. (1)

W. Ouyang, A. Aristov, M. Lelek, X. Hao, and C. Zimmer, “Deep learning massively accelerates super-resolution localization microscopy,” Nat. Biotechnol. 36, 460–468 (2018).
[Crossref]

Nat. Commun. (2)

M. Mueller, V. Moenkemoeller, S. Hennig, W. Huebner, and T. Huser, “Open-source image reconstruction of super-resolution structured illumination microscopy data in ImageJ,” Nat. Commun. 7, 10980 (2016).
[Crossref]

L. Jin, B. Liu, F. Zhao, S. Hahn, B. Dong, R. Song, T. C. Elston, Y. Xu, and K. M. Hahn, “Deep learning enables structured illumination microscopy with low light levels and enhanced speed,” Nat. Commun. 11, 1934 (2020).
[Crossref]

Nat. Methods (3)

H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Gunaydin, L. A. Bentolila, C. Kural, and A. Ozcan, “Deep learning enables cross-modality super-resolution in fluorescence microscopy,” Nat. Methods 16, 103–110 (2019).
[Crossref]

H. Shroff, C. G. Galbraith, J. A. Galbraith, and E. Betzig, “Live-cell photoactivated localization microscopy of nanoscale adhesion dynamics,” Nat. Methods 5, 417–423 (2008).
[Crossref]

M. J. Rust, M. Bates, and X. Zhuang, “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM),” Nat. Methods 3, 793–795 (2006).
[Crossref]

Nature (1)

Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521, 436–444 (2015).
[Crossref]

Opt. Express (1)

Opt. Lett. (1)

Optica (2)

Proc. Natl. Acad. Sci. USA (2)

T. A. Klar, S. Jakobs, M. Dyba, A. Egner, and S. W. Hell, “Fluorescence microscopy with diffraction resolution barrier broken by stimulated emission,” Proc. Natl. Acad. Sci. USA 97, 8206–8210 (2000).
[Crossref]

M. G. L. Gustafsson, “Nonlinear structured-illumination microscopy: wide-field fluorescence imaging with theoretically unlimited resolution,” Proc. Natl. Acad. Sci. USA 102, 13081–13086 (2005).
[Crossref]

Proc. SPIE (1)

M. G. L. Gustafsson, D. A. Agard, and J. W. Sedat, “Doubling the lateral resolution of wide-field fluorescence microscopy using structured illumination,” Proc. SPIE 3919, 141–150 (2000).
[Crossref]

Science (3)

M. Bates, B. Huang, G. T. Dempsey, and X. Zhuang, “Multicolor super-resolution imaging with photo-switchable fluorescent probes,” Science 317, 1749–1753 (2007).
[Crossref]

E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313, 1642–1645 (2006).
[Crossref]

M. I. Jordan and T. M. Mitchell, “Machine learning: trends, perspectives, and prospects,” Science 349, 255–260 (2015).
[Crossref]

Other (12)

E. Abbe, “Contributions to the theory of the microscope and of microscopic perception,” Arch. Microsc. Anat. 9, 413–468 (1873).

K. Wicker, “Super-resolution fluorescence microscopy using structured illumination,” in Super-Resolution Microscopy Techniques in the Neurosciences, E. F. Fornasiero and S. O. Rizzoli, eds. (Springer, 2014), pp. 133–165.

Z. Wang, E. P. Simoncelli, and A. C. Bovik, “Multi-scale structural similarity for image quality assessment,” in Conference Record of the 37th Asilomar Conference on Signals, Systems & Computers, M. B. Matthews, ed. (2003), pp. 1398–1402.

I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Proceedings of the 27th International Conference on Neural Information Processing Systems, Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, eds. (2014), pp. 2672–2680.

J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in IEEE International Conference on Computer Vision (2017), pp. 2242–2251.

M. Mirza and S. Osindero, “Conditional generative adversarial nets,” arXiv:1411.1784 (2014).

L. A. Gatys, A. S. Ecker, and M. Bethge, “Image style transfer using convolutional neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 2414–2423.

P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in 30th IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 5967–5976.

C. Li and M. Wand, “Precomputed real-time texture synthesis with Markovian generative adversarial networks,” in Computer Vision—European Conference on Computer Vision (ECCV), B. Leibe, J. Matas, N. Sebe, and M. Welling, eds. (2016), pp. 702–716.

N. Sundaram, T. Brox, and K. Keutzer, “Dense point trajectories by GPU-accelerated large displacement optical flow,” in Computer Vision—European Conference on Computer Vision (ECCV), K. Daniilidis, P. Maragos, and N. Paragios, eds. (2010), pp. 438–451.

C. Godard, O. Mac Aodha, and G. J. Brostow, “Unsupervised monocular depth estimation with left-right consistency,” in 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 6602–6611.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 770–778.

Supplementary Material (1)

Data File 1: Correlation coefficient of spectral components along the x axis and y axis.



Figures (8)

Fig. 1. Schematics of the deep neural network trained for SIM imaging. (a) The inputs are 1d_SIM and 9_SIM images generated from nine lower-resolution raw images (using the SIM algorithm) as two training datasets with different training labels. The deep neural network features two generators and two discriminators, which are trained by optimizing various parameters to minimize the adversarial loss between the network’s input and output as well as the cycle-consistency loss between the network’s input image and the corresponding cyclic image. The cyclic 9_SIM in the schematics is the desired final image (3_SIM). (b) Detailed schematics of half of the CycleGAN training phase (generator 1d_SIM and discriminator 9_SIM). The generator consists of three parts: an encoder (which uses convolution layers to extract features from the input image), a converter (which uses residual blocks to combine the extracted features), and a decoder (which uses deconvolution layers to restore the low-level features from the feature vector), realizing the functions of encoding, transformation, and decoding. The discriminator uses a 1D convolution layer to determine whether these features belong to that particular category. The other half of the CycleGAN training phase (generator 9_SIM and discriminator 1d_SIM) is structured identically.
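The converter stage described in the caption is built from residual blocks, whose defining property is that the input is added back to a transformed version of itself. A minimal dense (non-convolutional) sketch in NumPy, with hypothetical weight matrices `w1` and `w2` standing in for the convolutional layers:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Sketch of a residual block: output = input + transform(input).

    A dense stand-in for the convolutional residual blocks of the
    generator's converter; w1 and w2 are hypothetical weight matrices.
    """
    h = relu(x @ w1)   # transformation followed by a nonlinearity
    return x + h @ w2  # skip connection preserves the input features

# With zero weights, the transform vanishes and the block is the identity:
x = np.arange(4.0)
w = np.zeros((4, 4))
out = residual_block(x, w, w)
```

The skip connection is the reason deep converters remain trainable: each block only has to learn a correction to its input rather than a full mapping.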
Fig. 2. Experimental comparison of imaging modes with a database of point images. For all methods, nine raw SI images were used as the basis for processing. (a) The WF image was generated by summing all raw SI images. (b) The 1d_SIM images were generated from three raw SI images in the x direction. (c) The 3_SIM images formed the output of the CNN training. (d) The 9_SIM image was reconstructed from nine raw SI images as the ground truth. The enlarged area shows neighboring beads in the dashed box. In both the 3_SIM and the 9_SIM images, the beads are distinguishable and yield a resolution beyond the diffraction limit, which the 1d_SIM images cannot achieve. (e) The achieved resolution of the point images.
Fig. 3. Using deep learning to transform images in the dataset of lines from 1d_SIM to 9_SIM. (a) WF line image. (b) 1d_SIM line image used as network input. (c) 3_SIM line image produced as network output. (d) 9_SIM line image used for comparison. (e) The resolution achieved by the different approaches on line images.
Fig. 4. Deep learning-enabled transformation of images of curves from 1d_SIM to 9_SIM. (a) WF curve image. (b) 1d_SIM image of curves used as input to the neural network. (c) 3_SIM image produced as the network output, compared with the (d) 9_SIM image.
Fig. 5. Experimental setup for TIRF-SIM. A laser beam with a wavelength of 532 nm was employed as the light source. After expansion, the light was directed onto a digital micromirror device (DMD) to generate structured illumination. A polarizer and a half-wave plate were used to rotate the polarization orientation, and a spatial mask was used to filter out the excess frequency components. The generated structured illumination was tightly focused by a high-numerical-aperture (NA) oil-immersion objective lens (Olympus, NA=1.4, 100×) from the bottom side onto the sample. The sample was fixed on a scanning stage and was prepared as follows: a droplet of a dilute suspension of nanoparticles (100 nm, labeled with R6G molecules) was dropped onto the prepared cover slip and allowed to evaporate naturally. After rinsing with water and air drying, the sample was ready for use.
Fig. 6. Comparison of the experimental results of deep learning [(c) 3_SIM] with (a) WF, (b) 1d_SIM, and (d) 9_SIM. Wide-field images were generated by summing all raw images, 1d_SIM images were reconstructed using three raw SI images in one direction (x), and the 9_SIM images were reconstructed from all nine raw SI images and used as ground truth for comparison with the 3_SIM images. The 1d_SIM image was used as input to the network to generate the 3_SIM images. The dotted frames in the figures show enlarged views of two areas (A and B), where the intensity distribution along the white dotted line is shown in the line chart on the right. In (a), two closely spaced nanobeads could not be resolved by TIRF microscopy, and the 1d_SIM image in (b) is super-resolved in only one direction. The trained neural network took the 1d_SIM image as input and resolved the beads, agreeing well with the 9_SIM images.
Fig. 7. Fourier analysis of the reconstructed images. (a) Comparison of the frequency spectra of images with different numbers of Gaussian points. The frequency spectrum of the Gaussian points is highly symmetrical. (b) The different colors indicate different types of frequency-related information. The yellow area represents the frequency-related information of the original image, and the green area represents the information restored by the network. The grid in (b) represents the relationship between the available frequency-related information and that recovered by the network. (c) The Fourier transforms of the reconstructions in Fig. 2 were used to obtain the spectra. To illustrate the Fourier coverage of each model, three circles are marked in each image: the green–yellow circle corresponds to the support of the WF image, the blue circle to that of the 1d_SIM image, and the yellow circle to that of the 3_SIM and 9_SIM images.
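The Fourier-coverage comparison in Fig. 7 rests on inspecting the centered magnitude spectrum of each reconstruction. A minimal NumPy sketch (the function name `log_spectrum` is illustrative, not from the paper):

```python
import numpy as np

def log_spectrum(img):
    """Centered log-magnitude spectrum of an image.

    Shifting the zero-frequency component to the center of the array
    makes the Fourier support (the circles in Fig. 7) directly visible.
    """
    F = np.fft.fftshift(np.fft.fft2(img))  # 2D FFT, DC moved to center
    return np.log1p(np.abs(F))             # log scale for display

# A uniform image concentrates all energy at zero frequency (the center):
img = np.ones((8, 8))
S = log_spectrum(img)
```

Comparing such spectra of the WF, 1d_SIM, 3_SIM, and 9_SIM reconstructions shows how far beyond the wide-field cutoff each method extends.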
Fig. 8. Comparison of WF-to-9_SIM with 1d_SIM-to-9_SIM training. (a) The 9_SIM image reconstructed from nine raw SI images. (b)–(d) Network outputs when 200, 500, and 900 image pairs (1d_SIM and 9_SIM), respectively, were used to train the network models. (e)–(h) Network outputs when 100, 200, 500, and 900 image pairs (WF and 9_SIM) were used as datasets to train the network models. Each network underwent 10,000 iterations. Some details were not correctly restored in the WF-to-9_SIM training model; the arrows in (a)–(h) point to a missing detail.

Tables (1)


Table 1. Performance Metrics of the Proposed Method on the Testing Data

Equations (4)


\[ \min_G \max_D \; \mathbb{E}_{x \sim p}[\log D(x)] + \mathbb{E}_{z \sim p}\{\log\{1 - D[G(z)]\}\}. \]
\[ L_{\mathrm{GAN},G_A}(G_A, D_A) = \mathbb{E}_{x \sim p(x)}\{\log\{1 - D_A[G_A(x)]\}\} + \mathbb{E}_{y \sim p(y)}[\log D_A(y)]. \]
\[ L_{\mathrm{cycle}}(G_A, G_B) = \mathbb{E}_{x \sim p(x)}\{\| G_B[G_A(x)] - x \|_1\} + \mathbb{E}_{y \sim p(y)}\{\| G_A[G_B(y)] - y \|_1\}. \]
\[ L_{\mathrm{total}}(G_A, D_A, G_B, D_B) = L_{\mathrm{GAN},G_A}(G_A, D_A) + L_{\mathrm{GAN},G_B}(G_B, D_B) + \lambda L_{\mathrm{cycle}}(G_A, G_B). \]
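To make the roles of the three loss terms concrete, here is a minimal NumPy sketch. The function and variable names (`gan_loss`, `d_fake`, `lam`) are illustrative, the discriminator outputs are assumed to be probabilities in (0, 1), and λ = 10 follows the common CycleGAN default rather than a value stated here:

```python
import numpy as np

def gan_loss(d_fake, d_real):
    """Adversarial term: E{log(1 - D_A[G_A(x)])} + E[log D_A(y)]."""
    return np.mean(np.log(1.0 - d_fake)) + np.mean(np.log(d_real))

def cycle_loss(x, x_cyc, y, y_cyc):
    """Cycle-consistency term: mean L1 distance between each input
    and its cyclic reconstruction, summed over both domains."""
    return np.mean(np.abs(x_cyc - x)) + np.mean(np.abs(y_cyc - y))

def total_loss(l_gan_a, l_gan_b, l_cyc, lam=10.0):
    """Total objective: both adversarial terms plus the weighted
    cycle-consistency term (lam is the weight lambda)."""
    return l_gan_a + l_gan_b + lam * l_cyc
```

The cycle term is what lets the 1d_SIM → 9_SIM mapping be learned from the two image sets without a pixel-aligned pairing: a 1d_SIM image pushed through both generators must return close to itself.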