Abstract

We develop and explore a deep-learning-based single-shot ptychography reconstruction method. We show that a deep neural network, trained using only experimental data and without any model of the system, yields reconstructions of natural real-valued images with higher spatial resolution and better resistance to systematic noise than common iterative algorithms.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
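To make the idea of model-free, supervised training concrete, the following is a minimal sketch (not the authors' published code): a convolutional network is fit, with an MSE loss and the Adam optimizer, to map experimentally measured diffraction-pattern intensities to the ground-truth images displayed on the object. The tensor shapes, network layout, and hyperparameters below are illustrative assumptions.

```python
# Minimal sketch of model-free supervised training: measured diffraction
# intensities -> displayed ground-truth images (all shapes/values illustrative).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

measurements = torch.rand(1000, 1, 128, 128)  # placeholder for experimental raw frames
ground_truth = torch.rand(1000, 1, 32, 32)    # placeholder for displayed object images

net = nn.Sequential(                          # stand-in network; see the Fig. 2 sketch
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d((32, 32)),
    nn.Conv2d(32, 1, 3, padding=1),
)

loader = DataLoader(TensorDataset(measurements, ground_truth), batch_size=32, shuffle=True)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):                        # small illustrative number of epochs
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(net(x), y)
        loss.backward()
        optimizer.step()
```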

Figures (5)

Fig. 1. (a) Conceptual diagram of the 4fSSP microscope with ray tracing. An array of pinholes is located at the input plane of a 4f system. Lens L1 focuses the beams that diffract from the array onto the object, which is located at a distance $d$ before the back focal plane of L1. Lens L2 focuses the light diffracted from the object onto the CCD, located at the output plane of the 4f system, resulting in blocks of diffraction patterns, where each block corresponds to a region of the object illuminated by a beam originating from one of the pinholes. (b) Schematic of the 4fSSP in our experiment. The pinhole array is replaced by a phase SLM displaying a micro-lens array (MLA) pattern, producing an array of focal spots. The object is an amplitude SLM, which is dynamic and therefore allows the thousands of object images required for training the DNN to be changed easily. Lenses L1 and L2 are replaced by lens OL. (c) The phase structure induced by SLMP, which mimics an MLA. (d) An example of experimentally measured raw data (intensity).
Fig. 2. Schematic of the proposed SspNet architecture. SspNet comprises an encoder network and a decoder network (a convolutional encoder-decoder), represented in this figure as trapezoids: the width of a trapezoid (parallel to its bases) indicates the spatial size of the tensors (not to scale), and the fill color indicates the number of channels, with darker colors corresponding to more channels.
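The caption above describes SspNet only at the block-diagram level. As a rough illustration of what a convolutional encoder-decoder of this kind can look like, here is a hedged sketch in PyTorch; the layer counts, channel widths, and tensor sizes are assumptions for illustration, not the published SspNet architecture.

```python
import torch
import torch.nn as nn

class EncoderDecoderSketch(nn.Module):
    """Illustrative convolutional encoder-decoder; not the actual SspNet layout."""
    def __init__(self):
        super().__init__()
        # Encoder: spatial size shrinks while the channel count grows (narrowing trapezoid).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: spatial size grows back while the channel count shrinks (widening trapezoid).
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Example: a 1x128x128 input is encoded to 128x16x16 and decoded back to 1x128x128.
out = EncoderDecoderSketch()(torch.rand(1, 1, 128, 128))
```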
Fig. 3. Experimental demonstration of reconstruction using SspNet. Histograms of the PSNR values (a) and SSIM indices (b) of 100 test samples reconstructed using SspNet and, for comparison, using the iterative methods ePIE and sDR. SspNet has significantly higher mean values (27.1 dB, 0.9) than ePIE (18.6 dB, 0.58) and sDR (20 dB, 0.63). (c) Three examples of test images that demonstrate the visual differences. For each example, the following images are shown (left to right): the ground-truth image (the original image from CIFAR10), the SspNet reconstruction, the ePIE reconstruction, and the sDR reconstruction.
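For reference, PSNR and SSIM values like those quoted above can be computed with standard image-quality tools; below is a minimal sketch using scikit-image, with hypothetical arrays standing in for a ground-truth image and its reconstruction.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Hypothetical ground-truth image and reconstruction, both real-valued in [0, 1].
rng = np.random.default_rng(0)
truth = rng.random((32, 32))
recon = np.clip(truth + 0.05 * rng.standard_normal((32, 32)), 0.0, 1.0)

psnr = peak_signal_noise_ratio(truth, recon, data_range=1.0)   # in dB
ssim = structural_similarity(truth, recon, data_range=1.0)     # in [-1, 1]
print(f"PSNR = {psnr:.1f} dB, SSIM = {ssim:.2f}")
```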
Fig. 4. Spatial spectra. (a) 2D spatial spectra (logarithmic scale) of the image shown in the first row of Fig. 3(c). The red dashed rectangle marks the cutoff frequency. (b) Mean 1D slices of the 2D spatial spectra in (a). The SspNet reconstruction spectrum closely follows the ground-truth spectrum up to $1.25 \nu _{\mathit {cutoff}}$ (defined in Eq. (1)), whereas the ePIE and sDR spectra are cut off much earlier, at $0.65 \nu _{\mathit {cutoff}}$. (c) Mean 2D spatial spectra of 100 test samples and their reconstructions. (d) Mean 1D slices of the 2D spatial spectra in (c). The SspNet reconstruction spectrum approximately follows the ground-truth spectrum up to $1.25 \nu _{\mathit {cutoff}}$, whereas ePIE and sDR are cut off much earlier, at $0.75 \nu _{\mathit {cutoff}}$ (ePIE) and $0.65 \nu _{\mathit {cutoff}}$ (sDR).
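A spectral comparison in the spirit of Fig. 4 can be built from 2D Fourier transforms of the ground truth and the reconstructions. The sketch below shows one possible way to compute a log-scaled 2D spectrum and a mean 1D slice; the image, pixel pitch, and choice of averaging axis are assumptions, not the paper's exact procedure.

```python
import numpy as np

def spatial_spectrum(img, pixel_pitch):
    """Return the centered 2D amplitude spectrum and the frequency axis (cycles per meter)."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    freqs = np.fft.fftshift(np.fft.fftfreq(img.shape[0], d=pixel_pitch))
    return spec, freqs

# Hypothetical 32x32 image with an assumed (illustrative) pixel pitch of 52 um.
img = np.random.default_rng(1).random((32, 32))
spec, freqs = spatial_spectrum(img, pixel_pitch=52e-6)

log_spec = np.log10(spec + 1e-12)   # logarithmic scale, as in Fig. 4(a)
mean_slice = spec.mean(axis=0)      # one possible 1D reduction, in the spirit of Fig. 4(b)
```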
Fig. 5. Robustness to noise. (a) Measured raw LN data (amplitude) corresponding to the first row of Fig. 3(c). (b) Measured raw HN data (amplitude) of the same sample used in (a). (c), (d) Histograms of the PSNR values and SSIM indices, respectively, of 100 test samples reconstructed using an SspNet trained on HN data and, for comparison, the nominal SspNet (trained on LN data), ePIE, and sDR. The SspNet trained specifically on HN data has significantly higher mean values (27.3 dB, 0.9) than the nominal SspNet (17.1 dB, 0.61), ePIE (12.3 dB, 0.21), and sDR (12.8 dB, 0.21). (e) Three reconstruction examples of HN test images that demonstrate the differences visually. For each example, the following images are shown (left to right): the ground-truth image (the original image from CIFAR10) and reconstructions using SspNet-HN, the nominal SspNet, ePIE, and sDR.

Equations (2)

$$\nu_{\mathit{cutoff}} = \frac{b}{2 \lambda f_{OL}}. \tag{1}$$
$$\Delta r = \frac{1}{2 \nu_{\mathit{cutoff}}} = \frac{\lambda f_{OL}}{b} = 52~[\mathrm{\mu m}]. \tag{2}$$
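As a quick numerical illustration of Eqs. (1) and (2), the snippet below evaluates $\nu_{\mathit{cutoff}}$ and $\Delta r$ for assumed parameter values; $\lambda$, $f_{OL}$, and $b$ here are hypothetical choices made only to land near the quoted 52 μm, not the experimental parameters.

```python
# Illustrative evaluation of Eqs. (1)-(2); the parameter values are assumptions.
wavelength = 532e-9   # lambda [m], hypothetical
f_OL = 0.1            # focal length of lens OL [m], hypothetical
b = 1.0e-3            # geometric system parameter b [m] from Eq. (1), hypothetical

nu_cutoff = b / (2 * wavelength * f_OL)   # Eq. (1), cycles per meter
delta_r = 1 / (2 * nu_cutoff)             # Eq. (2), equals wavelength * f_OL / b

print(f"nu_cutoff = {nu_cutoff:.3e} cycles/m")
print(f"delta_r   = {delta_r * 1e6:.1f} um")   # ~53 um with these illustrative values
```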
