Abstract
Reflection ptychography is a lens-free microscopy technique that is particularly promising in regions of the electromagnetic spectrum where imaging optics are inefficient or unavailable. This is the case in tabletop extreme-ultraviolet microscopy and grazing-incidence small-angle x-ray scattering experiments. Combining such experimental configurations with ptychography requires accurate knowledge of the relative tilt between the sample and the detector in non-coplanar scattering geometries. Here, we describe an algorithm for tilt estimation in reflection ptychography. The method is verified experimentally, enabling sample tilt determination within a fraction of a degree. Furthermore, the angle-estimation uncertainty and reconstruction quality are studied for both smooth and highly structured beams.
© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement
Introduction. Ptychography is a diffractive imaging technique that enables simultaneous quantitative phase microscopy and wavefront sensing [1]. Instead of producing a direct image of a sample of interest on a detector, a series of diffraction intensities is recorded while the sample is laterally scanned through a focused beam. The recorded data are inverted via iterative phase retrieval algorithms, resulting in a deconvolution of sample and illumination contributions in the observed signal [2,3]. Ptychography has become a popular technique for extreme-ultraviolet, x-ray, and electron microscopy, where the lensless experimental geometry dispenses with the need for high-resolution imaging optics [4–6]. Moreover, it has been used for visible-light label-free quantitative phase microscopy [7,8], near-infrared wavefront sensing [9], and terahertz imaging [10]. Throughout the past decade, the experimental robustness of ptychography has been improved by means of various self-calibration techniques. These include algorithms for the correction of lateral [11,12] as well as axial [13,14] position errors, wavefront instability [15], and partial coherence [16,17]. An additional complication arises in reflection-mode ptychography [18], where the sample and camera are situated in a non-coplanar geometry. Tilting the sample introduces a nonlinear coordinate warping in the observed diffraction data, parameterized by the relative angle between the specimen and the detector [19]. Inaccurate knowledge of this angle results in model mismatch, which degrades imaging performance. Here, we report an angle self-calibration algorithm for reflection-mode ptychography. We demonstrate the method on experimental near-infrared data. In addition, we investigate the influence of the illumination wavefront shape on the uncertainty of the retrieved angle.
Far-field diffraction between two mutually tilted planes is given by [19–21]
Backward mapping versus forward mapping. We consider two approaches for numerically transforming a function from one coordinate system to another (see Fig. S1 in Supplement 1). The first method, referred to here as forward mapping, applies a coordinate transformation to the input (= detector) coordinate grid $(x,y)\rightarrow (u,v)=\boldsymbol {T}(x,y)$ to find the associated grid points in output (= spatial frequency) coordinates. Due to the nonlinearity of the transformation, the output grid exhibits irregular spacings. As most commonly used fast Fourier transform (FFT) methods require uniform grids, the intensity on this warped grid must be interpolated onto a regular grid. This method was previously suggested for tilted-plane coordinate correction by Gardner et al. [22]. However, the approach has a drawback in terms of interpolation: the data points are not on a rectilinear grid aligned with the coordinate axes, which excludes the use of fast bivariate interpolation schemes, such as bilinear or bicubic [23,24]. Alternatives to these bivariate interpolation methods tend to either compromise accuracy or be much slower in determining the interpolation weights for neighboring pixels. Such interpolation schemes are not an option for our angle correction method (see below), which needs to embed the interpolation step into each iteration of the algorithm. Thus, a more performant approach is needed.
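The cost of forward mapping can be illustrated with a short sketch. The warp below is a placeholder nonlinearity, not the tilted-plane transform of Eq. (2); the point is only that the transformed sample positions are scattered, forcing a slow scattered-data interpolation:

```python
import numpy as np
from scipy.interpolate import griddata

# Synthetic intensity on a regular detector grid (x, y)
n = 64
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
intensity = np.exp(-(x**2 + y**2) / 0.1)

# Forward map T: a placeholder nonlinear warp; the resulting (u, v)
# points are irregularly spaced
u = x + 0.1 * x**2
v = y + 0.1 * x * y

# Scattered-data interpolation onto a regular (u, v) grid -- this is the
# slow step that rules out fast rectilinear schemes such as bilinear
u_reg, v_reg = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
points = np.column_stack([u.ravel(), v.ravel()])
warped = griddata(points, intensity.ravel(), (u_reg, v_reg), method='linear')
```

Here `griddata` must triangulate the scattered points before it can interpolate, which is far more expensive than index arithmetic on a regular grid and would dominate the runtime if repeated every iteration.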
An alternative approach to transform the intensities is to substitute $x(u,v)$ and $y(u,v)$ into the measured intensity $I(x,y)$ using the inverse mapping $\boldsymbol {{T}^{-1}}$. The reverse transformation is applied to an evenly spaced spatial frequency output grid to find the associated observation coordinates. Next, the intensity at those detector points is found by means of interpolation $I(x_\textrm {warped},y_\textrm {warped})=I(\boldsymbol {{T}^{-1}}(u_\textrm {regular},v_\textrm {regular}))$. Since the data points for this step are located on a regular detector pixel grid, it is compatible with bilinear interpolation, which is straightforward and fast [25]. As the angle calibration procedure reported in this work requires repeated transformation and interpolation of the diffraction pattern, we adopt backward mapping with bilinear interpolation throughout. Starting from the forward transform [see Eq. (2)], the following expression for the inverse transformation ${\boldsymbol {T}}^{-1}$ was derived (see Supplement 1 for more details):
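The backward-mapping step can be sketched as follows. Again the inverse map used here is a hypothetical warp standing in for ${\boldsymbol {T}}^{-1}$; the essential point is that the sampling locations land on the regular detector grid, so fast bilinear interpolation (`order=1`) applies:

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Measured intensity on a regular detector grid (x, y)
n = 64
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
I = np.exp(-(x**2 + y**2) / 0.1)

# Evenly spaced output (spatial-frequency) grid
u_reg, v_reg = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))

# Backward map: for each regular output pixel, find the detector
# coordinates it originates from (placeholder inverse warp)
x_w = u_reg - 0.1 * u_reg**2
y_w = v_reg - 0.1 * u_reg * v_reg

# Convert physical coordinates to fractional (row, col) pixel indices
rows = (y_w + 1) / 2 * (n - 1)
cols = (x_w + 1) / 2 * (n - 1)

# Bilinear sampling of I at the warped positions
I_warped = map_coordinates(I, [rows, cols], order=1, mode='nearest')
```

Because every lookup is a weighted average of four neighboring pixels with trivially computed weights, this step is cheap enough to embed in every iteration of the calibration loop.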
Next, inspired by the approach of mPIE [28], a momentum acceleration term $v_{j}$ is added to the angle to speed up the rate of convergence. This momentum term is initialized at zero and is updated at the end of every iteration: $v_{j}=(\theta _{update}-\theta )+\eta \cdot v_{j-1}$, where $\eta =0.7$ is a friction term. At the end of each loop, the angle estimate is updated with the momentum step $\theta =\theta _{update}+v_{j}$.
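The momentum update above can be written as a few lines of code. The inner angle search producing $\theta_{update}$ is stood in for by a hypothetical relaxation toward a target angle; only the momentum bookkeeping follows the text:

```python
def momentum_step(theta, theta_update, v_prev, eta=0.7):
    """One momentum-accelerated angle update.

    v_j = (theta_update - theta) + eta * v_{j-1}
    theta_new = theta_update + v_j
    """
    v = (theta_update - theta) + eta * v_prev
    theta_new = theta_update + v
    return theta_new, v

# Example: consistent update directions accumulate in v, so the estimate
# moves faster than the raw per-iteration updates alone.
theta, v = 40.0, 0.0
for _ in range(5):
    theta_update = theta + 0.5 * (43.0 - theta)  # hypothetical inner angle search
    theta, v = momentum_step(theta, theta_update, v)
```

The friction term $\eta$ damps the accumulated velocity so that the estimate settles once the per-iteration updates change sign, rather than oscillating indefinitely.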
To test our angle calibration method in experiment, a series of ptychographic measurements was recorded in a tilted-plane reflection geometry using a USAF (Thorlabs R3L1S4P) resolution test target. The experimental setup is shown in Fig. 2. Illumination around a wavelength of 708.8 nm was generated by spectrally limiting a supercontinuum source by means of short-pass (SP1000) and long-pass (LP700) filters, and finally by selecting a narrow wavelength band with an acousto-optic tunable filter (AOTF) ($\Delta \lambda =0.6$ nm). The sample and detector were mounted on two concentric rotation stages, enabling control of the tilt angle $\theta$ between the incident beam and the specimen’s surface normal. Using this setup, 20 data sets were recorded at a tilt angle of $43\pm 1 ^\circ$, which was triangulated from the setup geometry. In half of these measurements, a focused top-hat beam was used, while a structured beam was used in the other half. The beam structuring was achieved by means of a piece of Scotch tape. Each data set consisted of 152 diffraction patterns recorded on a CCD camera (AVT GT3400, 14 bit, $3384\times 2704$ pixels) at a sample–detector distance of $71.4$ mm. The linear overlap ratio in these scans was $87\%$.
Reconstructions were executed on an NVIDIA Titan RTX GPU. Each reconstruction in this paper was initialized with 200 iterations of ePIE before applying 400 iterations of aPIE. Representative reconstructions of the object and the probe are depicted in Figs. 3(d) and 3(f) for the case of smooth illumination, and in Figs. 3(c), 3(e), and 3(g) for the case of structured illumination. Upon starting angle optimization, the error [Eq. (6)] rapidly improves, as illustrated in Fig. S2 (Supplement 1). The robustness of the angle calibration with the smooth beam was compared with that of the structured beam through an estimation of the standard deviation of the recovered values of the tilt angle $\theta$.
The results of this comparison are shown in Fig. 3(a), where the solid line and shaded areas indicate the average and standard deviation of the current estimate of $\theta$. The solid curves were calculated by averaging reconstructions of ten different data sets. The standard deviation of the angle estimate is much smaller for the structured beam, indicating more precise parameter estimation. This is also reflected in the improved object reconstruction quality in Fig. 3(e) (structured) as compared with Fig. 3(f) (smooth). Next, we tested the robustness of our method against inaccurate initial tilt angle estimates. A series of reconstructions was carried out with varying starting values for $\theta$. The recovered tilt angle $\theta$ for these reconstructions as a function of the number of iterations is shown in Fig. 3(b). Our angle calibration method retrieved the angle within the aforementioned uncertainty given by the respective beam profile for initial deviations as large as $10^{\circ }$, with a more rapid convergence rate observed for the structured illumination. Finally, the feasibility of a combined calibration of the detector–sample distance $z$ and the tilt angle $\theta$ was investigated. For this purpose, a series of reconstructions was executed with varying starting $\theta$–$z$ estimates on the structured beam data. These reconstructions alternated between 200 iterations of zPIE [14] and 50 iterations of aPIE for a total of 2500 iterations. The trajectories of these combined reconstructions through the joint $\theta$–$z$ plane are shown in Fig. 4, where each color indicates a single reconstruction with a different initial guess. These reconstructions converged to a value for $\theta$ of $43.37\pm 0.06^{\circ }$ and a value for $z$ of $71.16\pm 0.04$ mm, where the uncertainty is a single standard deviation in the final parameter estimates.
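The alternating $\theta$–$z$ schedule is, in effect, coordinate descent on a joint error landscape. The toy sketch below illustrates this structure on a synthetic quadratic loss; the loss, step sizes, and iteration counts are illustrative stand-ins, not the zPIE/aPIE error metrics:

```python
# Toy coordinate-descent illustration of the alternating theta-z schedule.
# theta_true and z_true mimic the converged values reported in the text;
# everything else is a synthetic stand-in.
theta_true, z_true = 43.37, 71.16
loss = lambda th, z: (th - theta_true) ** 2 + 4 * (z - z_true) ** 2

def descend(f, x0, lr, steps, eps=1e-4):
    """Simple 1D gradient descent with a central finite-difference gradient."""
    x = x0
    for _ in range(steps):
        g = (f(x + eps) - f(x - eps)) / (2 * eps)
        x -= lr * g
    return x

theta, z = 40.0, 73.0  # deliberately inaccurate initial estimates
for _ in range(10):
    z = descend(lambda zz: loss(theta, zz), z, lr=0.05, steps=20)      # "zPIE" stage
    theta = descend(lambda th: loss(th, z), theta, lr=0.05, steps=20)  # "aPIE" stage
```

Even though each stage optimizes only one parameter while the other is held at a still-inaccurate value, the alternation contracts both errors jointly, mirroring the convergence of the combined calibration observed in Fig. 4.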
Discussion and conclusion. In this Letter, we proposed a self-calibration algorithm for estimating the tilt angle in non-coplanar reflection ptychography. The method was tested experimentally, where it showed robust performance for initial estimates deviating by up to $10^{\circ }$ from the true angle. We observed empirically in these tests that a structured illumination helps to reduce the uncertainty in the angle estimate and to improve the convergence rate of our proposed algorithm. Additionally, we demonstrated that despite the explicit $z$-dependency of the underlying coordinate transformation, an alternating descent optimization of the tilt angle and detector–sample distance is feasible, even when neither parameter is known precisely. In summary, aPIE improves the robustness of reflection-mode ptychography and enables tilt angle self-calibration.
Funding
European Research Council (ERC-CoG 864016); Nederlandse Organisatie voor Wetenschappelijk Onderzoek (HTSM 13934).
Disclosures
The authors declare no conflicts of interest.
Data availability
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
Supplemental document
See Supplement 1 for supporting content.
REFERENCES
1. J. Rodenburg and A. Maiden, Springer Handbook of Microscopy (Springer, 2019), p. 2.
2. P. Thibault, M. Dierolf, A. Menzel, O. Bunk, C. David, and F. Pfeiffer, Science 321, 379 (2008). [CrossRef]
3. A. M. Maiden and J. M. Rodenburg, Ultramicroscopy 109, 1256 (2009). [CrossRef]
4. L. Loetgering, X. Liu, A. C. C. De Beurs, M. Du, G. Kuijper, K. S. E. Eikema, and S. Witte, Optica 8, 130 (2021). [CrossRef]
5. F. Pfeiffer, Nat. Photonics 12, 9 (2018). [CrossRef]
6. F. Hüe, J. M. Rodenburg, A. M. Maiden, F. Sweeney, and P. A. Midgley, Phys. Rev. B: Condens. Matter Mater. Phys. 82, 121415 (2010). [CrossRef]
7. J. Marrison, L. Räty, P. Marriott, and P. O’Toole, Sci. Rep. 3, 2369 (2013). [CrossRef]
8. N. Anthony, C. Darmanin, M. R. Bleackley, K. Parisi, G. Cadenazzi, S. Holmes, M. A. Anderson, K. A. Nugent, and B. Abbey, Biomed. Opt. Express 10, 4964 (2019). [CrossRef]
9. M. Du, L. Loetgering, K. Eikema, and S. Witte, Opt. Express 28, 5022 (2020). [CrossRef]
10. L. Valzania, T. Feurer, P. Zolliker, and E. Hack, Opt. Lett. 43, 543 (2018). [CrossRef]
11. M. Guizar-Sicairos and J. R. Fienup, Opt. Express 16, 7264 (2008). [CrossRef]
12. A. M. Maiden, M. J. Humphry, M. C. Sarahan, B. Kraus, and J. M. Rodenburg, Ultramicroscopy 120, 64 (2012). [CrossRef]
13. E. H. R. Tsai, I. Usov, A. Diaz, A. Menzel, and M. Guizar-Sicairos, Opt. Express 24, 29089 (2016). [CrossRef]
14. L. Loetgering, M. Du, K. S. E. Eikema, and S. Witte, Opt. Lett. 45, 2030 (2020). [CrossRef]
15. M. Odstrcil, P. Baksh, S. A. Boden, R. Card, J. E. Chad, J. G. Frey, and W. S. Brocklesby, Opt. Express 24, 8360 (2016). [CrossRef]
16. P. Thibault and A. Menzel, Nature 494, 68 (2013). [CrossRef]
17. D. J. Batey, D. Claus, and J. M. Rodenburg, Ultramicroscopy 138, 13 (2014). [CrossRef]
18. M. D. Seaberg, B. Zhang, D. F. Gardner, E. R. Shanblatt, M. M. Murnane, H. C. Kapteyn, and D. E. Adams, Optica 1, 39 (2014). [CrossRef]
19. K. Patorski, Opt. Acta 30, 673 (1983). [CrossRef]
20. H. J. Rabal, X. Bolognini, and E. E. Sicre, Opt. Acta 32, 1309 (1985). [CrossRef]
21. J. W. Goodman, Introduction to Fourier Optics, 4th ed. (W. H. Freeman, 2017).
22. D. F. Gardner, B. Zhang, M. D. Seaberg, L. S. Martin, D. E. Adams, F. Salmassi, E. Gullikson, H. Kapteyn, and M. Murnane, Opt. Express 20, 19050 (2012). [CrossRef]
23. I. Skorokhodov, “Interpolating points on a non-uniform grid using a mixture of Gaussians,” arXiv:2012.13257 [cs] (2020).
24. D. W. Zingg and M. Yarrow, SIAM J. Sci. and Stat. Comput. 13, 687 (1992). [CrossRef]
25. R. E. Woods, S. L. Eddins, and R. C. Gonzalez, Digital Image Processing Using MATLAB, 2nd ed. (Gatesmark, 2009).
26. K. Matsushima, H. Schimmel, and F. Wyrowski, J. Opt. Soc. Am. A 20, 1755 (2003). [CrossRef]
27. R. Luus and T. H. I. Jaakola, AIChE J. 19, 760 (1973). [CrossRef]
28. A. Maiden, D. Johnson, and P. Li, Optica 4, 736 (2017). [CrossRef]