Abstract

We present a machine-learning-based method for retrieving a superresolved object field from a single captured intensity image in diffraction-limited diffractive imaging. In this method, the inverse process of diffractive imaging is regressed from a number of training pairs, each consisting of an object image and the corresponding captured image. The proposed method is experimentally demonstrated with a lensless imaging setup, both with and without scattering media.

© 2017 Optical Society of America
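The learning step described in the abstract, regressing the inverse of the diffractive-imaging process from pairs of object and captured images, can be sketched as follows. This is a minimal illustration only: the use of scikit-learn's SVR with a Gaussian (RBF) kernel, the per-pixel regression, the array shapes, and all variable names are assumptions for exposition, not the authors' implementation.

```python
# Minimal sketch of learning an inverse imaging operator from training pairs.
# Assumptions (not from the paper): scikit-learn SVR with a Gaussian kernel,
# flattened grayscale images, and one regressor per object pixel.
import numpy as np
from sklearn.svm import SVR

def train_inverse_operator(captured, objects, h=1.0):
    """Fit one support vector regressor per object pixel.

    captured : (M, Ny) array of captured diffraction-limited images, flattened
    objects  : (M, Nx) array of the corresponding object images, flattened
    h        : Gaussian kernel width; sklearn's gamma = 1 / (2 * h**2)
    """
    regressors = []
    for k in range(objects.shape[1]):
        svr = SVR(kernel="rbf", gamma=1.0 / (2.0 * h ** 2))
        svr.fit(captured, objects[:, k])  # regress the k-th object pixel
        regressors.append(svr)
    return regressors

def apply_inverse_operator(regressors, y_new):
    """Reconstruct a superresolved object from a single captured image."""
    y_new = np.atleast_2d(y_new)
    return np.array([r.predict(y_new)[0] for r in regressors])
```

A trained operator is then applied to a single newly captured intensity image to estimate the superresolved object field.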

Figures (11)

Fig. 1. Schematic diagram of learning-based superresolution in diffraction-limited diffractive imaging.
Fig. 2. Experimental setup.
Fig. 3. Five examples of (a) the test images and their simulated diffraction-limited images at SRs of (b) 8.8, (c) 17.6, (d) 26.4, and (e) 35.2.
Fig. 4. Experimental results with the amplitude objects and without the diffuser. The captured images of Fig. 3(a) at SRs of (a) 8.8, (b) 17.6, (c) 26.4, and (d) 35.2. The reconstructions from the captured images of (e) part (a), (f) part (b), (g) part (c), and (h) part (d).
Fig. 5. Experimental results with the phase objects and without the diffuser. The captured images of Fig. 3(a) at SRs of (a) 8.8, (b) 17.6, (c) 26.4, and (d) 35.2. The reconstructions from the captured images of (e) part (a), (f) part (b), (g) part (c), and (h) part (d).
Fig. 6. Experimental results with the amplitude objects and the diffuser. The captured images of Fig. 3(a) at SRs of (a) 8.8, (b) 17.6, (c) 26.4, and (d) 35.2. The reconstructions from the captured images of (e) part (a), (f) part (b), (g) part (c), and (h) part (d).
Fig. 7. Experimental results with the phase objects and the diffuser. The captured images of Fig. 3(a) at SRs of (a) 8.8, (b) 17.6, (c) 26.4, and (d) 35.2. The reconstructions from the captured images of (e) part (a), (f) part (b), (g) part (c), and (h) part (d).
Fig. 8. Plots of the SSIMs at different SRs.
Fig. 9. Plots of the SSIMs with the phase objects, the diffuser, and the SR of 17.6 for different numbers of sampled pixels ($N_y$) and training datasets ($M$).
Fig. 10. Plots of the SSIMs with the phase objects, the diffuser, and the SR of 17.6 at different measurement signal-to-noise ratios (SNRs).
Fig. 11. Experimental results with the phase objects of non-face images, the diffuser, the SR of 17.6, and the inverse operator trained for the face images. Five examples of (a) the test images and (b) the reconstructed images.

Equations (3)

$$y = F[x], \tag{1}$$
$$\hat{x}^{(k)} = F_k^{-1}[y], \tag{2}$$
$$F_k^{-1}[\hat{y}] = \sum_{m=1}^{M} \left( \alpha_m^{(k)} - \alpha_m^{*(k)} \right) \exp\left( -\frac{\left\| \hat{y} - y_m \right\|_2^2}{2h^2} \right) + b^{(k)}, \tag{3}$$
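As a check on Eq. (3), the sketch below evaluates the trained inverse operator for one object pixel directly from its kernel expansion. The NumPy implementation and the variable names (the dual coefficients alpha and alpha_star, the bias b, and the stored training images y_support) are illustrative assumptions.

```python
import numpy as np

def inverse_operator_pixel(y_hat, y_support, alpha, alpha_star, b, h):
    """Evaluate F_k^{-1}[y_hat] for one object pixel k, following Eq. (3).

    y_hat      : (Ny,) captured image to be inverted, flattened
    y_support  : (M, Ny) training captured images y_m
    alpha      : (M,) dual coefficients alpha_m^(k)
    alpha_star : (M,) dual coefficients alpha_m^*(k)
    b          : scalar bias b^(k)
    h          : Gaussian kernel width
    """
    # Gaussian kernel between y_hat and every training image y_m
    sq_dist = np.sum((y_support - y_hat) ** 2, axis=1)
    kernel = np.exp(-sq_dist / (2.0 * h ** 2))
    # Weighted sum of kernel values plus bias gives the k-th object pixel
    return np.dot(alpha - alpha_star, kernel) + b
```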
