Optica Publishing Group

aPIE: an angle calibration algorithm for reflection ptychography

Open Access

Abstract

Reflection ptychography is a lensless microscopy technique particularly promising in regions of the electromagnetic spectrum where imaging optics are inefficient or not available. This is the case in tabletop extreme ultraviolet microscopy and grazing-incidence small-angle x-ray scattering experiments. Combining such experimental configurations with ptychography requires accurate knowledge of the relative tilt between the sample and the detector in non-coplanar scattering geometries. Here, we describe an algorithm for tilt estimation in reflection ptychography. The method is verified experimentally, enabling sample tilt determination within a fraction of a degree. Furthermore, the angle-estimation uncertainty and reconstruction quality are studied for both smooth and highly structured beams.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

Introduction. Ptychography is a diffractive imaging technique that enables simultaneous quantitative phase microscopy and wavefront sensing [1]. Instead of producing a direct image of a sample of interest on a detector, a series of diffraction intensities is recorded while a sample is laterally scanned through a focused beam. The recorded data are inverted via iterative phase retrieval algorithms, resulting in a deconvolution of sample and illumination contributions in the observed signal [2,3]. Ptychography has become a popular technique for extreme ultraviolet, x-ray, and electron microscopy, where the lensless experimental geometry dispenses with the need for high-resolution imaging optics [4–6]. Moreover, it has been used for visible-light label-free quantitative phase microscopy [7,8], near-infrared wavefront sensing [9], and terahertz imaging [10]. Throughout the past decade, the experimental robustness of ptychography has been improved by means of various self-calibration techniques. These include algorithms for the correction of lateral [11,12] as well as axial [13,14] position errors, wavefront instability [15], and partial coherence [16,17]. An additional complication arises in reflection-mode ptychography [18], where the sample and camera are situated in a non-coplanar geometry. Tilting the sample introduces a nonlinear coordinate warping in the observed diffraction data, parameterized by the relative angle between the specimen and the detector [19]. Inaccurate knowledge of this angle results in model mismatch, with the effect of degraded imaging performance. Here, we report an angle self-calibration algorithm for reflection-mode ptychography. We demonstrate the method on experimental near-infrared data. In addition, we investigate the influence of the illumination wavefront shape on the uncertainty of the retrieved angle.

Far-field diffraction between two mutually tilted planes is given by [19–21]

$$\tilde{\psi}\left(u,v\right)=\iint\psi\left(x',y'\right)\exp\left[{-}i2\pi\left(ux'+vy'\right)\right]\textrm{d}x'\,\textrm{d}y',$$
where $x',y'$ denote the sample (= source) coordinates. The relation between spatial frequencies $u,v$ and observation coordinates $x,y$ is described by the mapping
$$\boldsymbol{T}:\,u=\frac{x}{\lambda r_{0}}\cos\theta+\frac{\sin\theta}{\lambda}\left[\left(1-\frac{x^{2}+y^{2}}{r_{0}^{2}}\right)^{1/2}-1\right],\quad v=\frac{y}{\lambda r_{0}},$$
where $r_{0}=\sqrt {x^{2}+y^{2}+z^{2}}$ denotes the distance from the sample plane origin to a point $x,y$ in the observation plane. Here, $z$ is the distance from the sample plane origin to the observation plane origin, and $\theta$ is the angle between the sample surface normal and the optical axis (cf. Figure 1). Equations (1) and (2) assume a small detection numerical aperture, i.e., $x,y\ll z$. For $\theta \neq 0$, the coordinate transformation distorts the diffraction lobes with increasing distance from the center coordinate. This is illustrated in Fig. 1, where the observed diffraction pattern under oblique incidence is equivalent to the diffraction pattern observed under perpendicular incidence when subjected to the mapping $\boldsymbol {T}$.
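For concreteness, the forward mapping $\boldsymbol{T}$ of Eq. (2) can be evaluated numerically as in the sketch below (illustrative code, not the authors' published implementation; function and variable names are our own):

```python
import numpy as np

def forward_map(x, y, z, wavelength, theta):
    """Map detector coordinates (x, y) to spatial frequencies (u, v)
    for sample tilt theta [Eq. (2)]. Lengths in meters, theta in radians."""
    r0 = np.sqrt(x**2 + y**2 + z**2)  # distance sample origin -> (x, y)
    u = (x / (wavelength * r0)) * np.cos(theta) + (np.sin(theta) / wavelength) * (
        np.sqrt(1.0 - (x**2 + y**2) / r0**2) - 1.0
    )
    v = y / (wavelength * r0)
    return u, v

# For theta = 0 the mapping reduces to the familiar Fraunhofer
# frequencies u = x / (lambda * r0), v = y / (lambda * r0).
u0, v0 = forward_map(1e-3, 2e-3, 71.4e-3, 708.8e-9, 0.0)
```

For $\theta\neq 0$, the $\sin\theta$ term introduces the nonlinear warping that grows with distance from the detector center.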

Backward mapping versus forward mapping. We consider two approaches for numerically transforming a function from one coordinate system to another (see Fig. S1 in Supplement 1). The first method, referred to here as forward mapping, applies a coordinate transformation to the input (= detector) coordinate grid, $(x,y)\rightarrow (u,v)=\boldsymbol {T}(x,y)$, to find the associated grid points in output (= spatial frequency) coordinates. Due to the nonlinearity of the transformation, the output grid exhibits irregular spacings. As most commonly used fast Fourier transform (FFT) methods require uniform grids, the intensity on this warped grid must be interpolated onto a regular grid. This method was previously suggested for tilted-plane coordinate correction by Gardner et al. [22]. However, this approach has a downside in terms of interpolation: the data points are not on a rectilinear grid aligned with the coordinate axes, which excludes the use of fast bivariate interpolation schemes, such as bilinear or bicubic interpolation [23,24]. Alternatives to these bivariate interpolation methods tend to either compromise accuracy or be much slower in determining the interpolation weights for neighboring pixels. Such interpolation schemes are not an option for our angle correction method (see below), which must embed an interpolation step into each iteration of the algorithm. Thus, a more performant approach is needed.
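The distinction between the two interpolation situations can be made concrete with a toy comparison (our own illustrative sketch; `griddata` stands in for a scattered-data interpolator, `map_coordinates` for fast bilinear interpolation on a pixel grid):

```python
import numpy as np
from scipy.interpolate import griddata
from scipy.ndimage import map_coordinates

ny, nx = 64, 64
yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
img = np.cos(0.2 * xx) * np.sin(0.1 * yy)  # stand-in for measured intensity

# Backward mapping queries the regular pixel grid: bilinear interpolation
# (order=1) applies, and the interpolation weights are trivial to compute.
fast = map_coordinates(img, [yy.ravel(), xx.ravel()], order=1).reshape(ny, nx)

# Forward mapping leaves samples scattered off-grid: a Delaunay-based
# interpolant is needed, which is substantially more expensive to set up.
pts = np.column_stack([xx.ravel(), yy.ravel()])
slow = griddata(pts, img.ravel(), (xx, yy), method="linear")
```

Both calls reproduce the intensity here because the query points coincide with the sample points; the difference lies in the cost of computing the interpolation weights, which the backward approach must pay at every iteration.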


Fig. 1. Effect of sample tilt on diffraction. (a) Sample and detector in coplanar detection geometry and perpendicular incidence illumination. (b) Non-coplanar geometry with oblique illumination. The diffraction pattern in (b) is obtained via nonlinear transformation $\boldsymbol {T}$ of the diffraction pattern in (a) and vice versa.


An alternative approach to transform the intensities is to substitute $x(u,v)$ and $y(u,v)$ into the measured intensity $I(x,y)$ using the inverse mapping $\boldsymbol {{T}^{-1}}$. The reverse transformation is applied to an evenly spaced spatial frequency output grid to find the associated observation coordinates. Next, the intensity function at those detector points is found by means of interpolation $I(x_\textrm {warped},y_\textrm {warped})=I(\boldsymbol {{T}^{-1}}(u_\textrm {regular},v_\textrm {regular}))$. Since the data points are located on the regular detector pixel grid, this step is compatible with bilinear interpolation, which is straightforward and fast [25]. As the angle calibration procedure reported in this work requires repeated transformation and interpolation of the diffraction patterns, a backward mapping approach with bilinear interpolation is used throughout this paper. Starting from the forward transform [see Eq. (2)], the following expression for the inverse transformation ${\boldsymbol {T}}^{-1}$ was derived (see Supplement 1 for more details):

$$\boldsymbol{T}^{-1}:\ x=\frac{y}{v}\frac{\lambda u+\sin\theta}{\lambda\cos\theta}-z\tan\theta,\quad y=\frac{-2v z^{2}}{b_{0}-\left[b_{0}^{2}-4 a z^{2}\right]^{1/2}},$$
where
$$a=v^{2}\cos^{2}\theta-\frac{\cos 2\theta}{\lambda^{2}}+u^{2}+\frac{2u\sin\theta}{\lambda},$$
and
$$b_{0}=-2z\sin\theta\left(u+\frac{\sin\theta}{\lambda}\right).$$
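Equations (3)–(5) can be implemented directly; the sketch below (our own illustrative code) also verifies numerically that $\boldsymbol{T}^{-1}$ inverts the forward mapping of Eq. (2):

```python
import numpy as np

def inverse_map(u, v, z, wavelength, theta):
    """Backward mapping T^{-1} of Eqs. (3)-(5): spatial frequencies
    (u, v) -> detector coordinates (x, y). Requires v != 0."""
    a = np.cos(theta) ** 2 * v**2 - np.cos(2 * theta) / wavelength**2 \
        + u**2 + 2 * np.sin(theta) * u / wavelength
    b0 = -2 * z * np.sin(theta) * (u + np.sin(theta) / wavelength)
    y = -2 * v * z**2 / (b0 - np.sqrt(b0**2 - 4 * a * z**2))
    x = (y / v) * (wavelength * u + np.sin(theta)) / (wavelength * np.cos(theta)) \
        - z * np.tan(theta)
    return x, y

# Round trip: forward-map a detector point via Eq. (2), then invert.
# Note sqrt(1 - (x^2 + y^2)/r0^2) = z/r0 for z > 0.
x0, y0, z0 = 1e-3, 2e-3, 71.4e-3
lam, theta = 708.8e-9, np.deg2rad(43.0)
r0 = np.sqrt(x0**2 + y0**2 + z0**2)
u = x0 * np.cos(theta) / (lam * r0) + np.sin(theta) / lam * (z0 / r0 - 1.0)
v = y0 / (lam * r0)
x1, y1 = inverse_map(u, v, z0, lam, theta)
```

The negative root chosen for $y$ selects the physical branch with $r_{0}>0$ and avoids catastrophic cancellation, since $b_{0}<0$ for positive $z$ and $\theta$.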
In its simplest form, ptychography models the wave diffracted by a sample as the product of an illumination and a sample transmissivity or reflectivity, depending on the operation mode. The resulting wave exiting the sample plane is propagated into the observation plane by application of a suitable diffraction model. This results in an estimated wave in the detector plane, which can be updated in such a way that it complies with the experimental observation [2,3]. Here, we add an extra step that minimizes the mismatch between the forward model and the experimental observation with respect to the a priori unknown specimen tilt angle $\theta$. To this end, we measure model mismatch by the error metric
$$e=\sum_{u,v}\sum_{j}\left|I_{j,m}\left(u(x,y,\theta),v(x,y,\theta)\right)-\left|\mathcal{F}\left[\psi_j\left(x',y'\right)\right]\right|^{2}\right|,$$
where the summation is over all measured spatial frequencies ($u,v$) and scan positions ($j$), and $\mathcal {F}$ denotes the two-dimensional Fourier transform. For energy conservation upon coordinate transformation, the data are normalized to the measured total energy. Note that due to the nonlinearity of the transformation, a Jacobian determinant correction will be required when operating closer to grazing incidence or at higher NA. Such a correction is described for tilted-plane propagation with the angular spectrum method in [26]. Our angle estimation method, summarized in Algorithm 1, combines a randomized search inspired by the Luus–Jaakola (LJ) algorithm [27] with the extended ptychographic iterative engine (ePIE) [3]. At each iteration, the measured diffraction intensities are transformed with ${\boldsymbol {T}}^{-1}$ for a test angle $\theta _t$ drawn from a uniform probability distribution ($\mathcal {U}$) of width $2\Delta \theta$ centered on the current estimate $\theta$. As the candidate solution approaches the true tilt angle, the model mismatch in Eq. (6) decreases. Therefore, if the error $e_t$ for the test angle $\theta _t$ is lower than $c\cdot e$, where $e$ is the error for the previous angle estimate $\theta$, the latter is replaced by the former. The additional factor $c=0.999$ makes the comparison between the test angle and the previously estimated angle more robust. At every iteration of the algorithm, $\Delta \theta$ is linearly contracted to narrow down the search space.
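The randomized outer loop can be sketched as follows. This is a simplified stand-in, with a generic `error_fn` replacing the full ptychographic error metric of Eq. (6); in the actual aPIE iteration, ePIE object and probe updates also run inside this loop:

```python
import numpy as np

def lj_angle_search(error_fn, theta0, delta0, n_iter=300, c=0.999, rng=None):
    """Luus-Jaakola-style randomized search over the tilt angle.
    Accepts a test angle only if it lowers the error below c * e,
    and linearly contracts the search width delta each iteration."""
    rng = np.random.default_rng(0) if rng is None else rng
    theta, e = theta0, error_fn(theta0)
    for k in range(n_iter):
        delta = delta0 * (1.0 - k / n_iter)           # linear contraction
        theta_t = theta + rng.uniform(-delta, delta)  # draw from U
        e_t = error_fn(theta_t)
        if e_t < c * e:                               # robust acceptance test
            theta, e = theta_t, e_t
    return theta

# Toy check: recover an angle of 43 deg from an initial guess of 40 deg.
theta_hat = lj_angle_search(lambda t: (t - 43.0) ** 2, theta0=40.0, delta0=5.0)
```

The contracting search width lets the algorithm explore widely at first and refine finely near convergence, mirroring the behavior reported for aPIE.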


Algorithm 1. Angle calibration ptychographic iterative engine (aPIE) based on the Luus–Jaakola algorithm

Next, inspired by the approach of mPIE [28], a momentum term $v_{j}$ is added to the angle update to speed up the rate of convergence. The momentum is initialized at zero and updated at the end of every iteration as $v_{j}=(\theta_{\text{update}}-\theta)+\eta\cdot v_{j-1}$, where $\eta =0.7$ is a friction term. At the end of each loop, the angle estimate is updated with the momentum step $\theta =\theta_{\text{update}}+v_{j}$.
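The momentum step amounts to a two-line update (illustrative sketch; variable names are ours):

```python
def momentum_step(theta, theta_update, v_prev, eta=0.7):
    """mPIE-inspired momentum on the angle estimate:
    v_j = (theta_update - theta) + eta * v_{j-1},
    theta <- theta_update + v_j."""
    v = (theta_update - theta) + eta * v_prev
    return theta_update + v, v

# Example: a 0.5 deg update with zero initial momentum lands at 44.0 deg.
new_theta, v = momentum_step(43.0, 43.5, 0.0)
```

Successive updates in a consistent direction thus accumulate, accelerating convergence, while the friction term $\eta$ damps oscillations.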

To test our angle calibration method experimentally, a series of ptychographic measurements was recorded in a tilted-plane reflection geometry using a USAF resolution test target (Thorlabs R3L1S4P). The experimental setup is shown in Fig. 2. Illumination around a wavelength of 708.8 nm was generated by spectrally limiting a supercontinuum source by means of short-pass (SP1000) and long-pass (LP700) filters, and finally by selecting a narrow wavelength band ($\Delta \lambda =0.6$ nm) with an acousto-optic tunable filter (AOTF). The sample and detector were mounted on two concentric rotation stages, enabling control of the tilt angle $\theta$ between the incident beam and the specimen’s surface normal. Using this setup, 20 data sets were recorded at a tilt angle of $43\pm 1 ^\circ$, which was triangulated from the setup geometry. In half of these measurements a focused top-hat beam was used, while a structured beam was used in the other half. The beam structuring was achieved by means of a piece of Scotch tape. Each data set consisted of 152 diffraction patterns recorded on a CCD camera (AVT GT3400, 14 bit, $3384\times 2704$ pixels) at a sample–detector distance of $71.4$ mm. The linear overlap ratio in these scans was $87\%$.


Fig. 2. Experimental setup. A supercontinuum source is spectrally limited via short-pass (SP1000) and long-pass (LP700) filters to a wavelength range of 700 nm to 1000 nm. The beam is linearly polarized using polarization beam splitters (PBS). A narrow spectral band ($\Delta \lambda =0.6$ nm) is selected by means of an acousto-optic tunable filter (AOTF). The beam is expanded through lenses L1 ($f_1=25$ mm) and L2 ($f_2=300$ mm), and modulated through pinholes PH1 (empty pinhole) or PH2 (pinhole with a Scotch tape diffuser). Finally, the pinhole is imaged by L3 ($f_3=500$ mm) onto the sample. The sample and detector are mounted on concentric rotation stages (dashed lines), permitting flexible control of the tilt angle $\theta$ between the sample normal and the optical axis.


Reconstructions were executed on an NVIDIA Titan RTX GPU. All reconstructions in this paper were initialized with 200 iterations of ePIE before 400 iterations of aPIE were applied. Representative reconstructions of the object and the probe are depicted in Figs. 3(d) and 3(f) for the case of smooth illumination, and in Figs. 3(c), 3(e), and 3(g) for the case of structured illumination. Upon starting the angle optimization, the error [Eq. (6)] rapidly improves, as illustrated in Fig. S2 (Supplement 1). The robustness of the angle calibration with the smooth beam was compared with that of the structured beam through an estimation of the standard deviation of the recovered values of the tilt angle $\theta$.


Fig. 3. (a) Comparison of the standard deviation of the estimated angle for smooth (green) and structured (blue) illumination. The solid lines indicate the average tilt angle estimate, while the shaded areas indicate the region within $\pm 1$ standard deviation (averaged over 10 measurements) from the mean. (b) Convergence behavior for varying initial tilt angle $\theta$ guesses. The green lines (with round markers) indicate smooth illumination and the blue lines indicate structured illumination. The results shown in panels (a) and (b) are preprocessed by 200 iterations of ePIE at the original angle estimates before aPIE is started. (c1)–(c3) Image reconstructions (c1) before, (c2) during, and (c3) after convergence of the angle correction method using structured illumination. Note that ePIE convergence was already reached in panel (c1) before the angle correction was initiated. (d),(f) Reconstructions of object and probe, respectively, obtained with a smooth beam. (e),(g) Reconstructions of object and probe, respectively, obtained with a structured beam.


The results of this comparison are shown in Fig. 3(a), where the solid line and shaded areas indicate the average and standard deviation of the current estimate of $\theta$. The solid curves were calculated by averaging reconstructions of ten different data sets. The standard deviation of the angle estimate is much smaller for the structured beam, indicating more precise parameter estimation performance. This is also reflected in the improved object reconstruction quality in Fig. 3(e) (structured) as compared with Fig. 3(f) (smooth). Next, we tested the robustness of our method against inaccurate initial tilt angle estimates. A series of reconstructions was carried out with varying starting values for $\theta$. The recovered tilt angle $\theta$ for these reconstructions as a function of the number of iterations is shown in Fig. 3(b). Our angle calibration method retrieved the angle within the aforementioned uncertainty given by the respective beam profile for initial deviations as large as $10^{\circ }$, with a more rapid convergence rate observed for the structured illumination. Finally, the feasibility of a combined calibration of the detector–sample distance $z$ and the tilt angle $\theta$ was investigated. For this purpose, a series of reconstructions was executed on the structured-beam data with varying initial $\theta$–$z$ estimates. These reconstructions alternated between 200 iterations of zPIE [14] and 50 iterations of aPIE for a total of 2500 iterations. The trajectories of these combined reconstructions through the joint $\theta$–$z$ plane are shown in Fig. 4, where each color indicates a single reconstruction with a different initial guess. These reconstructions converged to $\theta=43.37\pm 0.06^{\circ }$ and $z=71.16\pm 0.04$ mm, where the uncertainty is a single standard deviation in the final parameter estimates.
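The alternating zPIE/aPIE strategy amounts to coordinate descent on the joint error surface. A toy sketch of this idea is given below (our own simplified stand-in: simple randomized 1-D refinements replace the actual zPIE and aPIE engines, and a generic `error_fn` replaces the ptychographic error metric):

```python
import numpy as np

def alternating_calibration(error_fn, theta0, z0, n_outer=20, n_inner=50,
                            rng=None):
    """Coordinate-descent sketch of the joint (theta, z) calibration:
    each outer cycle refines z at fixed theta, then theta at fixed z,
    with search widths that contract over the outer cycles."""
    rng = np.random.default_rng(1) if rng is None else rng
    theta, z = theta0, z0
    for k in range(n_outer):
        d_theta, d_z = 2.0 / (k + 1), 1.0 / (k + 1)  # contracting widths
        for _ in range(n_inner):                      # stand-in for zPIE
            z_t = z + rng.uniform(-d_z, d_z)
            if error_fn(theta, z_t) < error_fn(theta, z):
                z = z_t
        for _ in range(n_inner):                      # stand-in for aPIE
            th_t = theta + rng.uniform(-d_theta, d_theta)
            if error_fn(th_t, z) < error_fn(theta, z):
                theta = th_t
    return theta, z

# Toy surface with minimum at theta = 43.37 deg, z = 71.16 mm.
err = lambda th, z: (th - 43.37) ** 2 + (z - 71.16) ** 2
theta_hat, z_hat = alternating_calibration(err, theta0=40.0, z0=73.0)
```

On a well-behaved error surface, such alternating 1-D refinements approach the joint minimum even when both parameters start off, mirroring the experimental trajectories in Fig. 4.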

Discussion and conclusion. In this Letter, we proposed a self-calibration algorithm for estimating the tilt angle in non-coplanar reflection ptychography. The method was tested experimentally, where it showed robust performance for initial estimates deviating by up to $10^{\circ }$ from the true angle. We observed empirically in these tests that structured illumination helps to reduce the uncertainty in the angle estimate and to improve the convergence rate of our proposed algorithm. Additionally, we demonstrated that despite the explicit $z$-dependence of the underlying coordinate transformation, an alternating descent optimization of the tilt angle and detector–sample distance is feasible, even when neither parameter is known precisely. In summary, aPIE improves the robustness of reflection-mode ptychography and enables tilt angle self-calibration.


Fig. 4. Convergence diagram of a combined calibration of both the tilt angle $\theta$ and the sample–detector distance $z$. The calibration alternates between 50 iterations of aPIE and 200 iterations of zPIE on experimental data with a structured beam illumination [cf. Figure 3(f)]. Each colored trajectory represents the convergence behavior for a different initial estimate starting on the dashed circles. The reconstructions converge to $z=71.16\pm 0.04$ mm and $\theta =43.37\pm 0.06^{\circ }$.


Funding

European Research Council (ERC-CoG 864016); Nederlandse Organisatie voor Wetenschappelijk Onderzoek (HTSM 13934).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

REFERENCES

1. J. Rodenburg and A. Maiden, Springer Handbook of Microscopy (Springer, 2019), p. 2.

2. P. Thibault, M. Dierolf, A. Menzel, O. Bunk, C. David, and F. Pfeiffer, Science 321, 379 (2008). [CrossRef]  

3. A. M. Maiden and J. M. Rodenburg, Ultramicroscopy 109, 1256 (2009). [CrossRef]  

4. L. Loetgering, X. Liu, A. C. C. De Beurs, M. Du, G. Kuijper, K. S. E. Eikema, and S. Witte, Optica 8, 130 (2021). [CrossRef]  

5. F. Pfeiffer, Nat. Photonics 12, 9 (2018). [CrossRef]  

6. F. Hüe, J. M. Rodenburg, A. M. Maiden, F. Sweeney, and P. A. Midgley, Phys. Rev. B: Condens. Matter Mater. Phys. 82, 121415 (2010). [CrossRef]  

7. J. Marrison, L. Räty, P. Marriott, and P. O’Toole, Sci. Rep. 3, 2369 (2013). [CrossRef]  

8. N. Anthony, C. Darmanin, M. R. Bleackley, K. Parisi, G. Cadenazzi, S. Holmes, M. A. Anderson, K. A. Nugent, and B. Abbey, Biomed. Opt. Express 10, 4964 (2019). [CrossRef]  

9. M. Du, L. Loetgering, K. Eikema, and S. Witte, Opt. Express 28, 5022 (2020). [CrossRef]  

10. L. Valzania, T. Feurer, P. Zolliker, and E. Hack, Opt. Lett. 43, 543 (2018). [CrossRef]  

11. M. Guizar-Sicairos and J. R. Fienup, Opt. Express 16, 7264 (2008). [CrossRef]  

12. A. M. Maiden, M. J. Humphry, M. C. Sarahan, B. Kraus, and J. M. Rodenburg, Ultramicroscopy 120, 64 (2012). [CrossRef]  

13. E. H. R. Tsai, I. Usov, A. Diaz, A. Menzel, and M. Guizar-Sicairos, Opt. Express 24, 29089 (2016). [CrossRef]  

14. L. Loetgering, M. Du, K. S. E. Eikema, and S. Witte, Opt. Lett. 45, 2030 (2020). [CrossRef]  

15. M. Odstrcil, P. Baksh, S. A. Boden, R. Card, J. E. Chad, J. G. Frey, and W. S. Brocklesby, Opt. Express 24, 8360 (2016). [CrossRef]  

16. P. Thibault and A. Menzel, Nature 494, 68 (2013). [CrossRef]  

17. D. J. Batey, D. Claus, and J. M. Rodenburg, Ultramicroscopy 138, 13 (2014). [CrossRef]  

18. M. D. Seaberg, B. Zhang, D. F. Gardner, E. R. Shanblatt, M. M. Murnane, H. C. Kapteyn, and D. E. Adams, Optica 1, 39 (2014). [CrossRef]  

19. K. Patorski, Opt. Acta 30, 673 (1983). [CrossRef]  

20. H. J. Rabal, X. Bolognini, and E. E. Sicre, Opt. Acta 32, 1309 (1985). [CrossRef]  

21. J. W. Goodman, Introduction to Fourier Optics (WH Freeman, 2017), 4th ed.

22. D. F. Gardner, B. Zhang, M. D. Seaberg, L. S. Martin, D. E. Adams, F. Salmassi, E. Gullikson, H. Kapteyn, and M. Murnane, Opt. Express 20, 19050 (2012). [CrossRef]  

23. I. Skorokhodov, “Interpolating points on a non-uniform grid using a mixture of Gaussians,” arXiv:2012.13257 [cs] (2020).

24. D. W. Zingg and M. Yarrow, SIAM J. Sci. and Stat. Comput. 13, 687 (1992). [CrossRef]  

25. R. E. Woods, S. L. Eddins, and R. C. Gonzalez, Digital Image Processing Using MATLAB (Gatesmark, 2009), 2nd ed.

26. K. Matsushima, H. Schimmel, and F. Wyrowski, J. Opt. Soc. Am. A 20, 1755 (2003). [CrossRef]  

27. R. Luus and T. H. I. Jaakola, AIChE J. 19, 760 (1973). [CrossRef]  

28. A. Maiden, D. Johnson, and P. Li, Optica 4, 736 (2017). [CrossRef]  

Supplementary Material (1)

Supplement 1: Supplementary information
