
Photon-limited ptychography of 3D objects via Bayesian reconstruction

Open Access

Abstract

We introduce a Bayesian framework for the ptychotomographic imaging of 3D objects under photon-limited conditions. By incorporating prior information on the probabilities of the object features and the measurement process into the reconstruction problem, this approach is significantly more robust to measurement noise, and it can improve both the temporal and spatial resolution of current and future ptychography instruments. We use the alternating direction method of multipliers to solve the proposed optimization problem, where the ptychography and tomography subproblems are both solved using the conjugate-gradient method. The effectiveness of the framework is demonstrated by reconstructing a synthetic 3D integrated chip with a significantly reduced photon budget for the simulated experiment.

1. Introduction

The increasing availability of coherent light sources at short wavelengths, from the extreme ultraviolet to the hard x-ray regime, has paved the way for the widespread use of coherent diffraction imaging (CDI) techniques in the past decade [1–4]. In contrast to classical microscopy, CDI does not require an objective lens between the sample and detector; therefore, it can provide spatial resolution without any lens-imposed limitations [4–9]. While CDI is traditionally used for imaging isolated or small objects, imaging wide objects can be achieved by performing CDI in scanning mode. This approach is commonly known as “ptychography” following the work of Hoppe on electron microscopy in 1969 [10]. In ptychography, a coherent beam spot is used to scan the sample, while diffraction patterns are recorded with a pixelated detector. The scanning process can be repeated for multiple rotations of the object around a common axis to acquire a tomographic dataset [11]; see Fig. 1. An image of the 3D object can then be reconstructed numerically by solving the phase retrieval and tomography problems sequentially. The technique is becoming the standard method for sub-${20}\,{\textrm{nm}}$ 3D imaging [12], and current implementations at synchrotrons enable a wide variety of applications from biology to materials science [13–26].


Fig. 1. Schematic of a generic ptychotomography setup for imaging a 3D sample. A coherent x-ray beam spot propagates through the sample, and the diffraction pattern of the transmitted radiation is measured by a pixelated photon counting detector. In order to acquire ptychography data $d$, the sample $u$ is raster-scanned at different positions and views $\theta$ by using the rotation and translation stages. Detected photons at large scattering angles contain high-frequency information about the object, and they often follow a Poisson process at low doses of radiation exposure.


While 3D ptychography with current and next-generation coherent sources is highly versatile, its main advantage, high spatial resolution, imposes a fundamental limitation for 3D imaging because the number of photons needed to resolve a feature scales inversely with the resolution of the imaging setup. A high photon-budget requirement means a high radiation dose that is lethal to many biological specimens and can disrupt the structure of material samples. In addition, long data acquisition times preclude imaging of many dynamic phenomena in samples. Preservation methods such as chemical fixation, dehydration, or cryogenic cooling are often applied to improve the tolerance of the sample to radiation damage, but they also alter the underlying morphology and chemical composition of the sample [27–29]. Using higher-energy radiation can be an alternative because the ionization probability by the photoelectric effect is significantly reduced at high energies. For many materials, however, variations in absorption yield better imaging contrast; therefore, high-energy measurements may not always provide sufficient imaging contrast. For example, most biologically relevant elements absorb significantly more than their surroundings at lower energies. Even under the most favorable conditions, the absorbed radiation for resolving a ${10}\,{\textrm{nm}}$ feature is estimated to be about $10^{10}$ gray [30], which is at the borderline of inducing irreversible damage to biological specimens.

Computational solutions provide an effective alternative for imaging at photon-limited conditions. Denoising and deblurring while preserving the object features have been active research areas in both ptychography and tomography applications. However, these solutions are applied independently to either the ptychography or the tomography problem, without considering 3D object reconstruction as two jointly connected problems. For example, in many implementations [31,32], a series of 2D ptychography problems, one for each angular position, is solved first to yield a stack of projection images. The tomography problem is solved afterward as a follow-up step, without exploiting the inherent coupling between these two problems. This disconnect implies that the resulting computational reconstruction is generally not optimal. Furthermore, the success of existing algorithms for ptychography depends heavily on data redundancy in the form of probe overlap [33]. Recently, these constraints have been shown to be redundant when using an alternating coordinate descent method that exploits the correlations between projections that are inherent to tomography [34]. In follow-up work [35], we proposed using the alternating direction method of multipliers (ADMM), a popular technique in phase retrieval [36], as a generic optimization framework for the ptychotomography problem and demonstrated significant improvement of reconstruction quality compared with the commonly used sequential reconstruction approach.

In this paper, we introduce a general statistical inverse model for the object, as well as for the measurement process, for 3D imaging under photon-limited conditions. This approach enables us to incorporate prior information about the object and the data into the reconstruction process as probability distributions through Bayes’ theorem. As a result, our approach is significantly more robust to measurement noise because it reduces the ill-conditioning of the problem. A Bayesian framework also allows for performing an uncertainty quantification analysis using the posterior distributions of reconstruction parameters. While Gaussian process models are commonly used in many applications for describing the likelihood of a noisy measurement, we use the Poisson regression model, which captures photon-counting statistics in diffraction data more accurately. To further reduce the sampling requirements, we use a Gibbs prior with a sparsity-enforcing penalty function for calculating the marginal probability of the object model [37]. The proposed optimization problem is solved by using the ADMM scheme, where the ptychography, tomography, and regularization subproblems are jointly connected.

This paper is organized as follows. In Section 2, we formulate the ptychotomography problem in terms of operators and describe the Bayesian approach that produces an optimization problem for reconstruction. Section 3 shows how the optimization problem can be solved with the ADMM by splitting the whole problem into subproblems. Details of operator implementations and high-performance computing results are given in Section 4. In Section 5, we validate our approach on synthetic data and compare reconstruction results for different inversion models. Our conclusions and outlook are given in Section 6.

2. Problem formulation

In this section, we formulate the forward problem of modeling the measurement process based on the Bayesian approach, which will set the stage for object reconstruction. For brevity, we use compound operators in the formulation to avoid excessive indices associated with the parameters such as the rotation angle, detector pixel, or scan position.

2.1 Forward model

The forward model for 3D ptychography can be constructed by combining the ptychography operator $\mathcal {G}$ and the tomography operator $\mathcal {H}$,

$$|\mathcal{G}\mathcal{H} u|^2 = d,$$
where the object is represented by its complex refractive index $u=\delta +i\beta$ and the measured data $d$ is given by photon counts or the intensity on the detector; see Fig. 1. The operator $\mathcal {H}$ is defined via the Radon transform $\mathcal {R}$ [38] as
$$\mathcal{H} u = \exp\left( \frac{2\pi i}{\nu}\mathcal{R} u \right),$$
where $\nu$ is the wavelength of the illumination beam, so that $2\pi/\nu$ is the wavenumber. The operator $\mathcal {G}$, in turn, is represented by a set of 2D discrete Fourier transforms $\mathcal {F}$ applied to the exit waves formed by the operator $\mathcal {Q}$ for all scan positions,
$$\mathcal{G} \psi = \mathcal{F} \mathcal{Q}\psi,$$
where $\psi = \mathcal {H} u$ and the diagonal operator $\mathcal {Q}$ acts as an elementwise multiplication of the finite-sized illumination probe function with the transmitted wavefield $\psi$ at the regions of the corresponding scan positions (yellow circles on the object in Fig. 1). These operators can be written in a matrix-vector form as follows:

$$\begin{pmatrix} d_1 \\ d_2 \\ \vdots \end{pmatrix} = \left| \begin{pmatrix} \mathsf{F} & & \\ & \mathsf{F} & \\ & & \ddots \end{pmatrix} \begin{pmatrix} \mathsf{Q}_1 \\ \mathsf{Q}_2 \\ \vdots \end{pmatrix} \mathcal{H} u \,\right|^2.$$

The object $u$ and data $d$ are considered as vectors, the operator $\mathcal {F}$ consists of 2D discrete Fourier transforms $\mathsf {F}$, and the illumination operator $\mathcal {Q}$ is used to sample exit waves using probe matrices $\mathsf {Q}_1,\mathsf {Q}_2,\dots$ defined with respect to scan positions.
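For concreteness, the forward model in Eqs. (1) and (2), together with the scanning and Fourier-transform operators above, can be sketched in a few lines of NumPy. The single-view line-integral Radon transform and all names below are illustrative simplifications for one projection angle, not the implementation used in this work:

import numpy as np

def radon_single_view(u):
    # Toy Radon transform: line integrals along the beam axis for one view;
    # a full implementation rotates u and repeats this for every angle.
    return u.sum(axis=0)

def tomo_op(u, nu):
    # H u = exp(2*pi*i/nu * R u), Eq. (2); nu is the wavelength.
    return np.exp(2j * np.pi / nu * radon_single_view(u))

def ptycho_op(psi, probe, positions):
    # G psi = F Q psi: cut probe-sized patches of the wavefield at each scan
    # position, multiply by the probe, and propagate to the far field (FFT).
    n = probe.shape[0]
    exit_waves = np.stack([probe * psi[y:y + n, x:x + n]
                           for (y, x) in positions])
    return np.fft.fft2(exit_waves)

def forward(u, nu, probe, positions):
    # Noise-free intensities d = |G H u|^2 of Eq. (1).
    return np.abs(ptycho_op(tomo_op(u, nu), probe, positions)) ** 2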

2.2 Bayesian approach for reconstruction

Photon detection by a detector is described as a Poisson process [39]. For 3D ptychography (1), the probability of measuring data $d_j$, where the compound index $j$ is associated with the detector pixel, rotation angle, and scan position, is given by the likelihood function. Assuming that the measurements are independent of each other, the reconstruction turns into maximizing the total probability of measuring all data points $d_1,\dots , d_n$,

$$p(d|u) = \prod_{j=1}^{n}\frac{e^{-|\mathcal{G}\mathcal{H} u|_j^2}|\mathcal{G}\mathcal{H} u|_j^{2d_j}}{d_j!}.$$
The corresponding maximum likelihood (ML) estimate of $u$ can be computed by minimizing the negative logarithm of the total probability (3),
$$\begin{aligned} u_\textrm{ML} = \mathop {\textrm{arg min}}_u \sum_{j=1}^{n} \left\{|\mathcal{G}\mathcal{H} u|_j^2-2d_j\log |\mathcal{G}\mathcal{H} u|_j \right\}. \end{aligned}$$
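In code, the negative log-likelihood of Eq. (4) is a one-liner; the sketch below drops the constant $\log d_j!$ term and adds a small constant to guard against $\log 0$ at dark pixels (both standard numerical choices, not prescribed by the text):

import numpy as np

def poisson_nll(amp, d, eps=1e-12):
    # amp = |G H u|_j, d = measured photon counts; returns the functional
    # minimized by the ML estimate in Eq. (4), up to a constant in u.
    return np.sum(amp ** 2 - 2.0 * d * np.log(amp + eps))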
Now assume that we have some knowledge (prior information) about the object $u$ before the measured data $d$ is examined. In contrast to the ML model, we will maximize the probability of having the object $u$ given the data $d$, that is, the posterior probability $p(u|d)$. The likelihood function and the posterior probability are connected via Bayes’ theorem: $p(u|d)={p(d|u)p(u)}/{p(d)}$. The prior probability $p(u)$ is selected based on specifics of the problem. In this work, we consider priors of the Gibbs form $p(u) = \exp [-\alpha q(\mathcal {J}\!u)]$, where $\mathcal {J}$ is a linear constraint operator and $q$ is a functional weighted by a constant $\alpha$. The maximum a posteriori probability (MAP) estimate for the ptychotomography can be computed by minimizing the negative logarithm of $p(u|d)$,
$$\begin{aligned} u_\textrm{MAP} = \mathop {\textrm{arg min}}_u \sum_{j=1}^{n} \left\{ |\mathcal{G}\mathcal{H} u|_j^2-2d_j\log |\mathcal{G}\mathcal{H} u|_j \right\}+\alpha q(\mathcal{J}\!u). \end{aligned}$$
The term $q(\mathcal {J}\!u)$ can be treated as a regularization term with $\alpha$ as the regularization parameter. This term introduces a penalty to the minimization function to yield a solution consistent with the one suggested by the prior information.

The likelihood function in Eq. (3) is commonly approximated by a Gaussian because, by doing so, Eq. (4) reduces to a least-squares (LS) problem [40],

$$u_\textrm{LS} = \mathop {\textrm{arg min}}_u \sum_{j=1}^{n}(|\mathcal{G}\mathcal{H} u|_j-\sqrt{d_j})^2,$$
which can be solved efficiently by existing $L_2$ solvers with rapid convergence rates [41,42]. Note that we use the amplitude-based model in Eq. (6) because it provides more robust solutions than other LS-based models [43]. However, the Gaussian approximation often fails and introduces reconstruction errors in the form of image blurring, because high-frequency information is in most cases corrupted by noise in ptychography measurements [43]. Therefore, in this work we compare our proposed $u_\textrm {MAP}$ model with the following two models: $u_\textrm {ML}$ and $u_\textrm {LS}$. In the next section, we describe the numerical solver for the joint ptychotomography problem of finding $u_\textrm {MAP}$. Derivations for $u_\textrm {ML}$ and $u_\textrm {LS}$ can be directly adapted from this formulation.

3. Joint reconstruction with ADMM

In this section, we focus on solving the problem of finding $u_\textrm {MAP}$ in Eq. (5) by using ADMM [44] as the solver. The proposed scheme is generic and can be adapted or extended for any other type of reconstruction problems with different object priors.

By introducing auxiliary variables $\psi$ and $\varphi$, the problem in Eq. (5) can be rewritten as

$$\begin{array} {ll} \min \limits_{u,\psi, \varphi} &\sum \limits_{j=1}^{n}\left\{|\mathcal{G}\psi|_j^2-2d_j\log |\mathcal{G}\psi|_j\right\}+\alpha q(\varphi) \\ \textrm{subject to} &\mathcal{H} u = \psi, \\ &\mathcal{J} u = \varphi, \end{array}$$
where, as in the preceding section, we associate the compound index $j=1,\dots ,n$ with the detector pixel, rotation angle, and scan position. The objective function in Eq. (7) is a real-valued function, whereas the variables $\psi$ and $\varphi$ are complex; hence the augmented Lagrangian for the ADMM scheme has to be treated as a complex function, employing the Wirtinger calculus for computing the gradients [45,46]. An elegant technique for the ADMM with complex variables was proposed in [47], where the optimization problem is recast into an equivalent problem with complex augmented variables. The authors presented the scheme and convergence results for linear constraint operators. In our problem in Eq. (7), the operator $\mathcal {H}$ as defined in Eq. (2) is nonlinear; however, because it is formulated as an analytic function of the linear operator $\mathcal {R}$, the whole scheme and the convergence results are readily applicable to our problem. For a convergence study of a similar problem we refer to our previous work [35].

For solving Eq. (7), we follow [47, Eq. (33)] and work with the augmented Lagrangian written as follows:

$$\begin{aligned}\mathcal{L}_{\rho, \tau} (\psi, u, \varphi, \lambda, \mu) &=\sum_{j=1}^{n} \left\{|\mathcal{G}\psi|_j^2-2d_j\log|\mathcal{G}\psi|_j \right\}+ \alpha q(\varphi)+ 2\textrm{Re}\{\lambda^H(\mathcal{H} u-\psi)\}\\ &\quad +\rho\left\lVert\mathcal{H} u-\psi\right\rVert_2^2 + 2\textrm{Re}\{\mu^H(\mathcal{J} u-\varphi)\}+ \tau\left\lVert\mathcal{J} u-\varphi\right\rVert_2^2,\end{aligned}$$
where $\rho >0$, $\tau >0$ are penalty parameters and $\lambda , \mu$ are dual variables. Note that the augmented Lagrangian in Eq. (8) is a real-valued function in which the terms containing $\lambda ^H$ and $\mu ^H$ can be combined with the terms representing the $L_2$ norms, which is different from the augmented Lagrangian written directly for the problem in Eq. (7).

The ADMM breaks the minimization problem of the augmented Lagrangian into local subproblems with respect to $\psi$, $\varphi$, and $u$. The subproblems are then coordinated through the dual variables $\lambda$ and $\mu$ to find a solution for the original problem. Specifically, the algorithm performs the following steps at every iteration $k$:

$$\psi^{k+1} = \mathop {\textrm{arg min}}_{\psi} \mathcal{L}_{\rho, \tau} \left(\psi, u^{k}, \varphi^{k}, \lambda^{k}, \mu^k \right),$$
$$ u^{k+1} = \mathop {\textrm{arg min}}_u \mathcal{L}_{\rho, \tau} \left(\psi^{k+1}, u, \varphi^{k}, \lambda^{k}, \mu^k \right),$$
$$\varphi^{k+1} = \mathop {\textrm{arg min}}_\varphi \mathcal{L}_{\rho, \tau} \left(\psi^{k+1}, u^{k+1}, \varphi, \lambda^{k}, \mu^k\right),$$
$$\lambda^{k+1} = \lambda^k + \rho \left(\mathcal{H} u^{k+1}-\psi^{k+1}\right),$$
$$\mu^{k+1} = \mu^k + \tau \left(\mathcal{J} u^{k+1}-\varphi^{k+1}\right).$$
This leads to three local minimization problems and two dual variable updates. On each iteration, $\mathcal {L}_{\rho , \tau }$ is minimized over each variable independently by using the most recent updates of other variables. The dual variables are then computed with respect to the scaled sum of the consensus errors. In what follows, we show our solution approach to these subproblems in Eqs. (9), (10), and (11). We refer to these as the ptychography, tomography, and regularizer subproblems, respectively.
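The structure of one outer iteration is compact enough to sketch directly; in the Python fragment below, the operator applications H and J and the three subproblem solvers are placeholders (hypothetical names) for the routines derived in the following subsections:

def admm_ptychotomo(d, u, psi, phi, lam, mu, rho, tau, n_outer,
                    H, J, solve_ptycho, solve_tomo, solve_reg):
    # One ADMM pass per loop: the three local minimizations of Eqs. (9)-(11)
    # followed by dual ascent on the consensus errors.
    for k in range(n_outer):
        psi = solve_ptycho(d, psi, H(u) + lam / rho, rho)       # Eq. (9)
        u = solve_tomo(u, psi - lam / rho, phi - mu / tau,      # Eq. (10)
                       rho, tau)
        phi = solve_reg(J(u) + mu / tau)                        # Eq. (11)
        lam = lam + rho * (H(u) - psi)                          # dual update
        mu = mu + tau * (J(u) - phi)                            # dual update
    return u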

3.1 Ptychography subproblem

We explicitly write the minimization functional for the ptychography problem in Eq. (9) as

$$F(\psi) = \sum_{j=1}^{n} \left\{ |\mathcal{G}\psi|_j^2-2d_j\log|\mathcal{G}\psi|_j \right\} + \rho\left\lVert\mathcal{H} u^k-\psi+\lambda^k/\rho\right\rVert_2^2.$$
Because $F(\psi )$ is a real-valued function, the steepest ascent direction of $F(\psi )$ is given as $\nabla _\psi F={\partial {F}} / {\partial \psi ^*} = \left ({\partial {F}} / {\partial \psi }\right )^*$; see [48] for details. Using the Wirtinger derivative, one can readily verify that
$$\nabla_\psi F(\psi) =\mathcal{G}^H\left( \mathcal{G}\psi-\frac{d}{(\mathcal{G}\psi)^*}\right) - \rho (\mathcal{H} u^k-\psi+\lambda^k/\rho).$$
With the steepest ascent direction, we can construct iterative schemes for minimizing $F(\psi)$ by using methods with different convergence rates. In contrast to our previous work [35], where the conventional gradient descent was considered, here we employ a conjugate-gradient (CG) method for its faster convergence rate [49]. CG iterations are given as $\psi _{m+1} = \psi _m + \gamma _m \eta _m$, where $\gamma _m$ is a step length computed by a line-search procedure and $\eta _m$ is the search direction. Different recursive formulas for $\eta _m$ have been proposed, including the Fletcher-Reeves [50], the Polak-Ribière-Polyak [51,52], and the Dai-Yuan [53] formulas. According to our tests, the Dai-Yuan formula demonstrates a significantly faster convergence rate for both the ptychography and tomography problems. The Dai-Yuan formula is given by
$$\eta_{m+1}={-}\nabla_\psi F(\psi_{m+1})+\frac{\left\lVert\nabla_\psi F(\psi_{m+1})\right\rVert_2^2}{(\nabla_\psi F(\psi_{m+1})-\nabla_\psi F(\psi_{m}))^H\eta_m}\eta_m$$
with $\eta _0=-\nabla _\psi F(\psi _0)$.
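A generic nonlinear CG routine with the Dai-Yuan update can be written as follows; the simple backtracking line search and the real parts taken of the complex inner products are our simplifications, not necessarily the line-search procedure used for the reported results:

import numpy as np

def cg_dai_yuan(f, grad, x0, n_iter=4, step0=1.0):
    # Minimize a real-valued functional f of a complex array x, given its
    # Wirtinger gradient, using Dai-Yuan conjugate directions, Eq. (14).
    x, g = x0, grad(x0)
    eta = -g
    for m in range(n_iter):
        gamma, fx = step0, f(x)               # backtracking line search
        while f(x + gamma * eta) > fx and gamma > 1e-8:
            gamma *= 0.5
        x = x + gamma * eta
        g_new = grad(x)
        denom = np.real(np.vdot(g_new - g, eta))      # (dg)^H eta
        beta = np.real(np.vdot(g_new, g_new)) / (denom + 1e-30)
        eta = -g_new + beta * eta
        g = g_new
    return x

The same routine serves the ptychography update with $F(\psi)$ and $\nabla_\psi F$ and, after the linearization of Section 3.2, the tomography update with $F(u)$ and $\nabla_u F$.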

Recall that in the case of high photon counts on the detector, the ML estimate $u_\textrm {ML}$ can be approximated by the LS estimate $u_\textrm {LS}$ in Eq. (6). For the case of $u_\textrm {LS}$, the ptychography problem has the following minimization functional and its gradient:

$$\tilde{F}(\psi) = \sum_{j=1}^{n} (|\mathcal{G}\psi|_j-\sqrt{d_j})^2 + \rho\left\lVert\mathcal{H} u^k-\psi+\lambda^k/\rho\right\rVert_2^2,$$
$$\nabla_\psi \tilde{F}(\psi) =\mathcal{G}^H\left( \mathcal{G}\psi-\frac{\sqrt{d}\mathcal{G}\psi}{|\mathcal{G}\psi|}\right) - \rho (\mathcal{H} u^k-\psi+\lambda^k/\rho).$$
We point out that for the joint ptychotomography problem, this update step is the only difference between procedures for the ML estimation $u_\textrm {ML}$ and the LS estimation $u_\textrm {LS}$.

3.2 Tomography subproblem

We express the minimization of the tomography problem in Eq. (10) as

$$F(u)= \rho\left\lVert\mathcal{H} u-\psi^{k+1}+\lambda^k/\rho\right\rVert_2^2+ \tau\left\lVert\mathcal{J} u-\varphi^k+\mu^{k}/\tau\right\rVert_2^2$$
with a linear operator $\mathcal {J}$ for constraining the solution and the tomography operator $\mathcal {H}$ defined in Eq. (2) via the Radon transform $\mathcal {R}$. Because the operator $\mathcal {H}$ is nonlinear, the problem is difficult to solve. Therefore, we approximate the first term in (15) by using a Taylor expansion around a particular point. A similar strategy has been applied in [54, Appendix A]. Knowing that the Radon transform $\mathcal {R}$ is linear and invertible, we can derive a Taylor expansion of $\mathcal {H} u = \exp \left(\frac{2\pi i}{\nu} \mathcal {R} u\right)$ around the point $\mathcal {R} \tilde {u} = -({i\nu }/{2\pi })\log (\psi ^{k+1}-\lambda ^k/\rho )$, which leads to
$$\begin{aligned} &\left\Vert\mathcal{H} u-\psi^{k+1}+\lambda^k/\rho\right\Vert_2^2 = \left\Vert e^{\frac{2\pi i}{\nu} \mathcal{R} u}-\psi^{k+1}+\lambda^k/\rho\right\Vert_2^2\\ &\approx \left\Vert\frac{2\pi i}{\nu}\left(\psi^{k+1}-\lambda^k/\rho\right)\left(\mathcal{R} u +\frac{i\nu}{2\pi}\log(\psi^{k+1}-\lambda^k/\rho) \right)\right\Vert_2^2, \end{aligned}$$
where the expression inside the norm is linear in $u$ with constant coefficients. Hence, a solution of the problem in Eq. (15) can be found approximately by minimizing the following:
$$F(u) \approx \rho\left\lVert\mathcal{K}\mathcal{R} u - \xi_1\right\rVert_2^2+\tau\left\lVert\mathcal{J} u-\xi_2\right\rVert_2^2,$$
where the linear diagonal operator $\mathcal {K}$ is defined as
$$\mathcal{K} \mathcal{R} u = \frac{2\pi i }{\nu}(\psi^{k+1}-\lambda^k/\rho) \mathcal{R} u$$
and the right-hand sides are given by
$$\begin{aligned} & \xi_1 = (\psi^{k+1}-\lambda^k/\rho)\log(\psi^{k+1}-\lambda^k/\rho), \\ & \xi_2 = \varphi^k-\mu^k/\tau. \end{aligned}$$
Similar to the ptychography problem, the steepest ascent direction $\nabla _u F(u)$ can be obtained based on the Wirtinger calculus:
$$\nabla_u F(u) = \rho\mathcal{R}^T\mathcal{K}^H(\mathcal{K}\mathcal{R} u -\xi_1)+\tau\mathcal{J}^T(\mathcal{J} u -\xi_2).$$
The tomography problem in Eq. (15) can thus be solved by considering the rapid convergence of CG methods that utilize the direction $\nabla _u F(u)$. For a consistent approach with the ptychography problem, we employ the same Dai-Yuan formula given in Eq. (14).
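One linearized tomography update then amounts to forming the diagonal weights of $\mathcal{K}$ and the right-hand sides $\xi_1$, $\xi_2$ and calling the CG routine sketched above; R, RT, J, JT stand for the Radon transform, the gradient operator, and their adjoints (illustrative names, with the overall structure being our reading of the equations):

import numpy as np

def tomo_update(u0, psi_hat, xi2, rho, tau, nu, R, RT, J, JT, cg):
    # psi_hat = psi^{k+1} - lam^k/rho and xi2 = phi^k - mu^k/tau.
    kappa = 2j * np.pi / nu * psi_hat        # diagonal weights of K
    xi1 = psi_hat * np.log(psi_hat)          # right-hand side xi_1
    def f(u):                                # linearized F(u)
        return (rho * np.linalg.norm(kappa * R(u) - xi1) ** 2
                + tau * np.linalg.norm(J(u) - xi2) ** 2)
    def grad(u):                             # rho R^T K^H (.) + tau J^T (.)
        return (rho * RT(np.conj(kappa) * (kappa * R(u) - xi1))
                + tau * JT(J(u) - xi2))
    return cg(f, grad, u0)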

3.3 Regularization subproblem

The minimization functional for the regularization problem (11) is given by

$$F(\varphi) = \alpha q(\varphi) + \tau\left\lVert\mathcal{J} u^{k+1}-\varphi + \mu^{k}/\tau\right\rVert_2^2 .$$
The prior information represented by the function $q$ and the operator $\mathcal {J}$ depends on the specifics of the problem. In this work, we consider a sparse prior defined in terms of the total variation (TV) [55], which is a common method of choice for reconstructing data from noisy and incomplete measurements. In this case, the initial problem in Eq. (7) contains the regularization term given as the $L_1$-norm of the Euclidean length of the gradient of $u$,
$$q(\mathcal{J}\!u)=\left\lVert|\nabla u|\right\rVert_1,$$
where $|\cdot|$ denotes the pointwise Euclidean length of the gradient vector and $\mathcal {J}=\nabla$ is the gradient operator. Although we use TV for regularization, the proposed framework allows the use of any other regularization approach based on the experimental conditions. For instance, one can use $q(\mathcal {J}\!u)=\left \lVert |\nabla u|\right \rVert _2$ for decreasing the gradient sparsity level.

For the TV regularization term in Eq. (18), we use the soft-thresholding operator [56] for explicitly solving the problem in Eq. (17),

$$\tilde{\varphi} = \frac{\nabla u^{k+1}+\mu^k/\tau}{|\nabla u^{k+1}+\mu^k/\tau|}\max(0,|\nabla u^{k+1}+\mu^k/\tau|-\alpha/\tau),$$
where, by abuse of notation, we assume element-wise operations for dividing, taking the absolute value, and comparing with 0.
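The thresholding step is a direct element-wise operation; a minimal sketch, where the small constant guarding the division at zero magnitude is our addition:

import numpy as np

def soft_threshold(z, kappa, eps=1e-12):
    # Complex soft-thresholding: keep the phase z/|z| and shrink the
    # magnitude by kappa = alpha/tau, element-wise.
    mag = np.abs(z)
    return z / (mag + eps) * np.maximum(0.0, mag - kappa)

# Regularizer update, Eq. (11): phi = soft_threshold(J(u) + mu/tau, alpha/tau)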

Algorithm 1. ADMM-based joint ptychotomography reconstruction for solving Eq. (5) (pseudocode).

4. Implementation

In this section, we present some of the implementation aspects and address the challenges of implementing the proposed ADMM-based framework for solving Eq. (5); see Algorithm 1 for the corresponding pseudocode.

First, we work with normalized versions of the operators defined in the preceding section when constructing the ADMM framework. Normalization facilitates the initialization of the ADMM weighting factors $\rho$ and $\tau$ in the augmented Lagrangian (8) and simplifies line-search computations in the CG method for the ptychography and tomography problems. Specifically, we use normalized versions of the Fourier transform $\mathcal {F}$, the gradient operator $\mathcal {J}=\nabla$, the Radon transform $\mathcal {R}$, and the diagonal operators $\mathcal {Q}$ and $\mathcal {K}$. We also follow the strategy in [44] for varying the penalties $\rho$ and $\tau$ after each iteration to improve the convergence rate.
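The standard residual-balancing rule of [44, Sec. 3.4.1] is one way to realize such penalty updates; the constants below are the values suggested there, and whether the present implementation uses exactly these values is left open:

import numpy as np

def update_penalty(rho, r_primal, s_dual, factor=2.0, ratio=10.0):
    # Increase rho when the primal residual dominates the dual residual,
    # decrease it in the opposite case, and leave it unchanged otherwise.
    r, s = np.linalg.norm(r_primal), np.linalg.norm(s_dual)
    if r > ratio * s:
        return rho * factor
    if s > ratio * r:
        return rho / factor
    return rho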

Second, we optimize evaluation of the Radon transform, since in the proposed framework this operator requires a relatively large amount of computational resources. Fast methods include Fourier-based methods [57,58], hierarchical decomposition [59], or the log-polar-based method [60]. By using one of these methods, we end up with a tractable computational complexity for ptychography and tomography subproblems, as will be shown later in this section.

The third implementation aspect concerns the linear phase ambiguity in reconstructions. Indeed, one can easily see from the definition of the operator $\mathcal {H}$ in (2) that both $u$ and $u+c$ satisfy the measured data $d=|\mathcal {G}\mathcal {H} u|^2=|\mathcal {G}\mathcal {H} (u+c)|^2$ whenever $\mathcal {R} c/\nu \in \mathbb {Z}$. Thereby, the reconstruction of $u$ is corrected by subtracting the value of a region treated as a zero background.
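In code this correction is a one-line post-processing step; the boolean mask marking the empty region is an assumed input:

import numpy as np

def remove_offset(u, background_mask):
    # Resolve the phase ambiguity by forcing a region known to be empty
    # (background_mask) to zero mean.
    return u - u[background_mask].mean()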

The last important aspect involves the use of high-performance computing resources. Both ptychography and tomography problems are data intensive and can be parallelized with different data-partitioning methods. Specifically, the ptychography dataset can be partitioned based on the rotation angles, and then each angle/partition can independently be processed by using the operator $\mathcal {G}$. Additional parallelization can be achieved by using parallel Fourier-based implementations within each partition [6163]. The tomography problem shows similar properties, where parallelization can easily be accomplished by slicing the object $u$ across the illumination direction [60,6466]. Both the ptychography and tomography operators can be implemented on multicore (CPUs) and many-core (Knights Landing, GPUs) architectures, in which GPUs typically show superior performance when the problem size is within memory limits [67].

In this work, we use GPUs with NVIDIA CUDA technology for accelerating computations. Table 1 lists computational times for processing data of different sizes, as well as how these times are distributed between the ptychography and tomography problems. While for demonstration we consider the case with 25% probe overlap and with the total number of projection angles equal to 3/8 of the object size in one dimension, computational times for other data sizes can be estimated by direct scaling. For the tests, we use an NVIDIA Tesla P100 with a fast NVLink connection to the CPU, 3,584 cores, and 16 GB of on-board memory; all computations were done in single precision. The current implementation assumes that the whole ptychography dataset fits into host memory but may considerably exceed GPU memory.


Table 1. Computational times on NVidia Tesla P100 for recovering objects of the size $N\!\times \! N\!\times \!N$ from the ptychography data generated for $N_\theta =3N/8$ angles and with 25% probe overlap. Each of 300 outer ADMM iterations involves 4 inner ptychography and 4 inner tomography CG iterations.

Table 1 confirms the computational complexity of the proposed algorithm for solving the ptychography and tomography problems. Let $(N\!\times \! N\!\times \!N)$ be the object size, and let $(M\!\times \!M)$ be the detector size. The ptychography problem has a complexity of $\mathcal {O}(N_\theta N_s M^2\log M)$, where the number of angles $N_\theta$ is on the order of $N$ and the number of scan positions $N_s$ is on the order of $N^2$ to maintain the necessary probe overlap. This gives a complexity of $\mathcal {O}(N^3M^2\log M)$, where the factor $\log M$ is negligible. Therefore, by dividing the computational times for the ptychography problem by $N^3M^2$, we obtain similar values for varying object and detector sizes. The same holds for the tomography problem, whose computational complexity is $\mathcal {O}(N^3\log N)$, since we implement the tomographic reconstruction using the Fourier-based method [57]. This approach results in a tractable computational complexity for the ptychography and tomography subproblems as the object size $N$ increases while the detector size $M$ is kept constant.

5. Numerical simulations

In this section, we validate our approach through simulations. We compare reconstructions by the maximum likelihood $u_\textrm {ML}$ and least-squares $u_\textrm {LS}$ estimates and then demonstrate how the quality of reconstruction is improved by using the maximum a posteriori estimate $u_\textrm {MAP}$. We also show the robustness of $u_\textrm {MAP}$ to the incomplete number of measurements. In this paper we omit comparisons to the commonly used sequential reconstruction approach where the ptychography and tomography problems are independently solved. For such comparisons we refer the reader to our previous work [35] where the ADMM approach was applied to the unregularized least-squares model and demonstrated significantly better reconstruction quality than the sequential approach.

We use a synthetic integrated chip (IC) for the numerical simulations. We use ${8.8}\,{\textrm{keV}}$ for the illumination energy, where the phase contrast of the copper (Cu) signal is maximal. The object consists of $512\!\times\!512\!\times\!512$ voxels of ${10}\,{\textrm{nm}}$ with a total thickness of ${4.394}\,{\mu\textrm{m}}$. The 3D volume rendering of the chip allows the identification of various structures; see Fig. 2. As shown in the figure, the size of the features increases for the layers farther away from the substrate. On the bottom is a silicon substrate with ${100}\,{\textrm{nm}}$ thickness. The active transistor layers, which contain the smallest features of the chip, are on top of the substrate, while the top layers consist mainly of metal interconnects. The interconnect layers are manufactured from Cu, and they are vertically connected to each other by square vias made of the same material as the interconnects. In the active layers, the smallest structures are silicon fin field-effect transistors (FinFETs) with ${10}\,{\textrm{nm}}$ feature size, ${50}\,{\textrm{nm}}$ period, and ${16}\,{\textrm{nm}}$ thickness. We can also identify other periodic structures in the 3D image, for example, gates manufactured from aluminum, source and drain connections manufactured from tungsten, and the gate vias, also from tungsten, for connecting the gates to the interconnect layers.


Fig. 2. Synthetic model of the chip with features representing different resolution levels. Pixel size is ${10}\,{\textrm{nm}}$; number of pixels $(512\!\times\!512\!\times\!512)$. The right panel shows the amplitude difference between the real and imaginary parts of the complex refractive index.


The illumination probe function of ${160}\,{\textrm{nm}}$ diameter is represented by a flat-top Gaussian and is assumed to be constant during the whole scanning procedure; see Fig. 3. The far-field diffraction patterns are recorded by a $128\!\times \!128$ pixelated detector while rotating the object at regular intervals from 0 to $\pi$. Figure 3 shows the effect of applying Poisson noise to the measurements acquired for different probe intensities measured in ${{\textrm {J}}/({\textrm {m}}^2 {\textrm {s}})}$. The data are represented by photon counts in the range $\{0,\ldots ,2^{16}\!-\!1\}$, which corresponds to TIFF16-format detector measurements. To model real experimental conditions, we perform random shifts of the scan positions after each rotation to mimic imprecise rotation stages. The shifts have a root-mean-square amplitude equal to 40% of the probe size, and they are applied to the entire dataset at each angle while keeping a regular raster scan grid. In particular, for the 8-pixel probe step (50% probe overlap) used in the simulations, the actual scan positions are shifted by a value from the interval $[-3.2,3.2]$ pixels; see Fig. 3, left. In two-dimensional ptychography, regular raster grids lead to artifacts in reconstructions due to the raster grid pathology, which can be addressed by using a different choice of probe positions to break the symmetry; see [68] for details. However, because of the averaging property of randomized shifts in tomographic reconstruction, the joint three-dimensional ptychotomography solver avoids these artifacts. Indeed, all presented reconstruction results are obtained with regular raster grids for each angle. The proposed implementation can also work with irregular scanning without affecting algorithm performance.
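The noise and stage-jitter model of this paragraph is simple to reproduce; a sketch with schematic intensity scaling and illustrative names:

import numpy as np

rng = np.random.default_rng(seed=0)

def noisy_counts(intensity, probe_intensity, max_count=2 ** 16 - 1):
    # Poisson photon counts drawn from the noise-free intensities scaled by
    # the probe intensity, clipped to the 16-bit (TIFF16) detector range.
    return np.minimum(rng.poisson(intensity * probe_intensity), max_count)

def shifted_grid(grid, half_range=3.2):
    # Shift the whole regular raster grid of one projection by a random
    # offset in [-half_range, half_range] pixels (one draw per angle),
    # mimicking imprecise rotation stages while keeping the grid regular.
    return grid + rng.uniform(-half_range, half_range, size=2)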


Fig. 3. Illumination probe function and randomly shifted scan patterns used in simulations (left); photon counts on the detector when scanning the chip model for different probe intensity levels $I$ (middle, right).


5.1 Comparison of the ML and LS models

Figure 4 demonstrates reconstruction results obtained with the maximum likelihood $u_\textrm {ML}$ (4) and least-squares $u_\textrm {LS}$ (6) estimates for ptychography measurements generated by using different illumination probe intensities and distorted by Poisson noise. The real part of the complex refractive index $u$ is approximately 10 times higher than the imaginary part (see Fig. 2); therefore, one can observe a significant loss of reconstruction quality when comparing the $\delta$ and $\beta$ reconstructions. Indeed, for such a high energy (8.8 keV) and the materials used for chip manufacturing, there is typically no interest in recovering absorption since the object is semi-transparent, whereas the phase information gives a higher contrast level and reveals significantly more structural detail.


Fig. 4. Comparison of reconstructions of a slice of the 3D synthetic chip with the maximum likelihood $u_\textrm {ML}=\delta _\textrm {ML}+i\beta _\textrm {ML}$ and least-squares $u_\textrm {LS}=\delta _\textrm {LS}+i\beta _\textrm {LS}$ estimates by using different probe intensity levels $I$. Parameters: object size $(512\!\times \!512\!\times \!512)$, detector size $(128\!\times \!128)$, probe overlap 50%, number of projection angles 768. The ADMM scheme was run for 300 outer iterations with 4 inner CG iterations for each ptychography and tomography subproblem.


For analyzing reconstruction quality, Fig. 4 has the structural similarity (SSIM) index inscribed, computed according to formula (6) in [69]. SSIM is a perception-based model that considers image degradation as a perceived change in structural information while also incorporating important perceptual phenomena, including both luminance-masking and contrast-masking terms; for the problem considered here, it is therefore more informative than techniques estimating absolute errors, such as the root mean square (RMS) error or the signal-to-noise ratio (SNR). The reconstruction quality degrades for the least-squares model whenever the probe intensities become small. We point out that low illumination intensities yield not only vanishing high-frequency components on the detector but also a small number of photons at low frequencies. The photon distribution in this case cannot be approximated by a Gaussian; therefore, the least-squares model is generally not applicable. Visually, the problem manifests as a blurring effect where small features cannot be separated from the background. In the case of the maximum likelihood estimator, the features remain sharp and separable even for low probe intensities, as is also confirmed by higher values of the SSIM index.
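For reference, the SSIM index of [69] is available off the shelf; one way to evaluate it on a reconstructed slice with scikit-image, assuming real-valued 2D arrays:

from skimage.metrics import structural_similarity

def ssim_index(truth, rec):
    # SSIM between a ground-truth phantom slice and its reconstruction;
    # data_range must be given explicitly for floating-point images.
    return structural_similarity(truth, rec,
                                 data_range=truth.max() - truth.min())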

5.2 MAP reconstruction

The maximum a posteriori estimate $u_\textrm {MAP}$ extends the maximum likelihood estimate $u_\textrm {ML}$ by adding prior knowledge about the reconstructed object and, in this way, improves the reconstruction quality. The prior knowledge defined in terms of the total variation penalty factor $\alpha \||\nabla u|\|_1$ suppresses small oscillations in the solution by forcing the gradient to be sparse. This preserves the sharp edges of the object's inner structure and minimizes high-frequency noise components.

We note that the TV regularization also improves reconstructions by the least-squares $u_\textrm {LS}$ estimate for the cases with high illumination probe intensities. However, the TV approach is not effective for low probe intensities because of the blurring effect in the reconstruction; see Fig. 4, right. We have confirmed this fact by performing tests with the regularized $u_\textrm {LS}$ estimate, which we do not show in this paper.

Different automated techniques exist for choosing the constant $\alpha$, which controls the trade-off between data fidelity and the regularization term. The most popular techniques include the discrepancy principle [70], the L-curve criterion [71], and generalized cross-validation [72]. A detailed comparison of these techniques can be found, for instance, in [73, Chapter 5], where the author analyzes the behavior of the listed methods. Based on this comparison, we decided to use the L-curve criterion since, for several problems considered in [73], it demonstrated robustness to noise and yielded sharp solutions.

The L-curve criterion for choosing the penalty constant $\alpha$ in the $u_\textrm {MAP}$ estimate (5) seeks to balance the error components $\sum _{j=1}^{n}\{|\mathcal {G}\mathcal {H} u|_j^2-2d_j\log |\mathcal {G}\mathcal {H} u|_j\}$ and $\|\nabla u\|_1$ via inspection of the L-curve constructed by using the logarithms of these two errors. Figure 5 shows the constructed L-curves for reconstructions with different probe intensities. The L-curve has two distinctly different parts: a rather flat one, where the regularization error dominates, and a more vertical one, where the fidelity error dominates. The idea is then to choose the $\alpha$ that corresponds to the L-curve's corner, considered as the balance point between the regularization and fidelity errors.
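When the corner is well defined, it can also be located automatically as the point of maximal discrete curvature of the log-log curve; a sketch of that common heuristic follows (the present paper instead inspects the curve manually, since for the chip data the corner is not clearly found, as discussed below):

import numpy as np

def lcurve_corner(alphas, fidelity_err, reg_err):
    # Pick the alpha at the point of maximal curvature of the L-curve
    # built from the logarithms of the fidelity and regularization errors.
    x, y = np.log(fidelity_err), np.log(reg_err)
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    curvature = np.abs(dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5
    return alphas[np.argmax(curvature)]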


Fig. 5. L-curve analysis in logarithmic scale for choosing optimal regularization parameter $\alpha$. The panels have inscribed small reconstructed regions for $\alpha \!=\!1\mathrm {e}\!-\!09,1\mathrm {e}\!-\!08,$ and $1\mathrm {e}\!-\!07$, as well as horizontal profiles marked with different colors.


In the ptychotomography reconstruction of the chip data, the L-curve's corner is not clearly identifiable; therefore, we analyze reconstructions for values $\alpha \in [1\mathrm {e}\!-\!09,1\mathrm {e}\!-\!07]$, which produce the region of maximal curvature. Each panel of Fig. 5 has small reconstructed regions inscribed, together with the corresponding horizontal profiles marked with different colors. The profile for $\alpha \!=\!1\mathrm {e}\!-\!07$ has relatively lower amplitudes compared with the other profiles. At the same time, the profile for $\alpha \!=\!1\mathrm {e}\!-\!09$ is distorted by noise, especially for the low probe intensity cases. The profile for $\alpha \!=\!1\mathrm {e}\!-\!08$, in turn, demonstrates favorable amplitudes and noise suppression. Using this strategy, we have found optimal regularization parameters for each reconstruction case:

$$\begin{array}{ll} I\!=\!3 {{\textrm{J}}/({\textrm{m}}^2 {\textrm{s}})}: \alpha\!=\!8\mathrm{e}\!-\!09, &I\!=\!0.3 {{\textrm{J}}/({\textrm{m}}^2 {\textrm{s}})}: \alpha\!=\!1\mathrm{e}\!-\!08, \\ I\!=\!0.03 {{\textrm{J}}/({\textrm{m}}^2 {\textrm{s}})}: \alpha\!=\!2\mathrm{e}\!-\!08, &I\!=\!0.003 {{\textrm{J}}/({\textrm{m}}^2 {\textrm{s}})}: \alpha\!=\!4\mathrm{e}\!-\!08. \end{array}$$
In Fig. 6, we demonstrate reconstruction results for the $u_\textrm {MAP}$ estimate with the TV regularization. Comparing these results with the ones for $u_\textrm {ML}$ in Fig. 4, we observe a significant increase of the SSIM index for all reconstruction cases. One can visually identify that high-frequency noise is suppressed and edges become sharper. Moreover, the chip structure becomes visible even in the reconstructions of $\beta$ for the low probe intensity cases.


Fig. 6. MAP reconstructions ($u_\textrm {MAP}=\delta _\textrm {MAP}+i\beta _\textrm {MAP}$) of a slice of the 3D synthetic chip by using different probe intensity levels $I$. The TV penalty coefficient $\alpha$ is chosen with respect to the L-curve criterion.


So far the chip structure was recovered by using a sufficient number of measurements. The total number of projection angles $N_\theta =3N/2$ satisfies the Nyquist sampling criterion, and the probe overlap of 50% guarantees complete coverage of the projection data. In what follows, we analyze reconstructions from an incomplete number of measurements.

5.3 Incomplete number of measurements

Two cases exist for the incomplete number of measurements in the ptychotomography problem: first, a limited number of projection angles; second, an insufficient number of scanning probe positions, that is, a low level of probe overlap. The TV regularization approach has shown itself to be a powerful tool when dealing with artifacts from a limited number of projection angles in conventional tomography; see [74–76]. It has also been shown [34,35] that the joint approach relaxes the existing requirements for probe overlap in conventional ptychography. Therefore, in the following tests we analyze results with the maximum a posteriori estimate $u_\textrm {MAP}$ for a sufficiently low number of projection angles and low probe overlap. By gradually decreasing the number of measurements until the reconstruction quality becomes unacceptable for segmentation, we end up with the total number of angles decreased by a factor of 4, namely $N_\theta =3N/8$, and the probe overlap decreased to 25%. Figure 7 demonstrates the corresponding reconstruction results from the incomplete number of measurements. The real part $\delta$ of the refractive index is acceptable for further data analysis and segmentation, with an SSIM index greater than 0.9. Some features are also recovered for the noisiest case $I\!=\!{0.003}{{\textrm {J}}/({\textrm {m}}^2{\textrm {s}})}$; however, the 3D volume visualization at the bottom of the figure shows that the vertical connections between layers are not recovered.


Fig. 7. Reconstructions of a slice of the 3D synthetic chip from the incomplete number of measurements with $u_\textrm {MAP}$ estimate and TV regularization by using different probe intensity levels $I$. Parameters: object size $(512\!\times \!512\!\times \!512)$, detector size $(128\!\times \!128)$, probe overlap 25%, number of projection angles 192. The ADMM scheme was run for 300 outer iterations with 4 inner CG iterations for each ptychography and tomography subproblem.


6. Conclusions and outlook

In this paper, we derived and validated a generic reconstruction framework for ptychotomography data based on the ADMM scheme. The framework is easily extendable with new subproblems and additional data processing procedures. Procedures that are common in experimental data processing workflows include, for instance, geometrical alignment of data, phase unwrapping, probe retrieval, and phase-offset and ramp corrections. Technically, the implementation of new subproblems is straightforward, by adding new auxiliary variables as in Eq. (7); additional processing procedures, in turn, can be performed between solving the ptychography (Eq. (9)) and tomography (Eq. (10)) subproblems.

To demonstrate the capabilities of the framework, we considered a Bayesian approach with a prior model based on the TV norm as an additional subproblem. We showed with numerical simulations that the proposed Poisson-based statistical noise model provides high-resolution reconstructions because the measurements at the detector pixels with low photon counts are accurately modeled. At the same time, the commonly used LS approximation model often fails to achieve high resolution at high noise levels. Even when employing sparsity constraints such as TV, the LS model yields blurred, low-resolution reconstructions because of the inaccurate noise model for the measurements at high frequencies.

The MAP estimate for the ptychotomography problem demonstrates favorable results for an incomplete number of measurements. Thus, it is promising for reducing the dose absorbed by the sample. We were able to satisfactorily recover all materials and connections of the chip used in the simulations from a significantly reduced number of measurements, where we reduced the total number of angles by 75% and the probe overlap by 50%. Combining these observations with the fact that the Poisson-based model allows further decreasing the probe intensity by at least one order of magnitude, we estimate more than a 100-fold reduction of the radiation dose in real experiments. This means that with the proposed computational framework the data collection time can be reduced by a factor of 100, assuming that the flux is kept constant.

The proposed implementation on a modern GPU demonstrates acceptable computation times for processing big datasets while performing a sufficiently large number of iterations for convergence. Reconstruction from 3D ptychography measurements of size 20.7 GB takes approximately 11 hours for 300 outer ADMM iterations with 4 ptychography and 4 tomography CG iterations (https://github.com/math-vrn/ptychotomo). As a follow-up step, we plan to develop a multi-GPU implementation to further reduce the computation times below the data collection times and extend the computational efficiency of the framework.

While the new framework is capable of reconstructing synthetic data, we need to extend it with a number of processing tasks in order to work with real ptychography datasets. The first challenge is the implementation of pre- and postprocessing procedures such as alignment correction and phase unwrapping routines. These problems are studied for conventional tomography and ptychography reconstruction (see [77,78] and references therein) and need to be adapted for the proposed joint ptychotomography approach. In particular, the alignment problem in [77] is solved with an iterative scheme involving a reconstruction method followed by a re-projection procedure to find the corresponding translation vectors. In the ADMM framework, the alignment procedure can be placed right after solving the ptychography subproblem. As a second step, we consider simultaneous reconstruction of the probe function, because in this paper we assume that the illumination probe is known and represented as a fixed flat-top Gaussian. Modeling of the probe is widely implemented by using an orthogonal decomposition of the mutual coherence function for continuous scanning [79], or by employing a similar ADMM approach [80,81] for any type of scanning. Real datasets are larger than the available working memory, so the third step concerns data management between GPU memory, host memory, and hard drives. Load balancing can also be studied for effective management and use of computational resources.

The performance and quality of the proposed reconstruction method can be improved by utilizing new technologies provided by the NVIDIA CUDA library. Moreover, the algorithm and data structures are suitable for computations on several GPU nodes, giving rise to follow-up research with large high-performance computing facilities. Reconstruction quality may also be improved by including learned priors based on relatively new machine learning techniques instead of (or together with) generic priors such as the TV regularization used in this study. These techniques have already been used for ptychography and tomography problems [82–85], and the ADMM framework makes it straightforward to incorporate them for improved joint ptychotomography reconstruction.

Funding

Office of Science (DE-AC02-06CH11357); Vetenskapsrådet (2017-00583); Intelligence Advanced Research Projects Activity.

Acknowledgments

We gratefully acknowledge Daniel J. Ching for fruitful discussions during the paper preparation.

References

1. H. N. Chapman, A. Barty, M. J. Bogan, S. Boutet, M. Frank, S. P. Hau-Riege, S. Marchesini, B. W. Woods, S. Bajt, W. H. Benner, R. A. London, E. Plönjes, M. Kuhlmann, R. Treusch, S. Düsterer, T. Tschentscher, J. R. Schneider, E. Spiller, T. Möller, C. Bostedt, M. Hoener, D. A. Shapiro, K. O. Hodgson, D. van der Spoel, F. Burmeister, M. Bergh, C. Caleman, G. Huldt, M. M. Seibert, F. R. N. C. Maia, R. W. Lee, A. Szöke, N. Timneanu, and J. Hajdu, “Femtosecond diffractive imaging with a soft-x-ray free-electron laser,” Nat. Phys. 2, 839–843 (2006). [CrossRef]  

2. P. Emma, R. Akre, J. Arthur, R. Bionta, C. Bostedt, J. Bozek, and A. Brachmann, “First lasing and operation of an angstrom-wavelength free-electron laser,” Nat. Photonics 4(9), 641–647 (2010). [CrossRef]  

3. T. Ishikawa, H. Aoyagi, T. Asaka, Y. Asano, N. Azumi, T. Bizen, and H. Ego, “A compact x-ray free-electron laser emitting in the sub-angstrom region,” Nat. Photonics 6(8), 540–544 (2012). [CrossRef]  

4. J. Miao, T. Ishikawa, I. K. Robinson, and M. M. Murnane, “Beyond crystallography: Diffractive imaging using coherent x-ray light sources,” Science 348(6234), 530–535 (2015). [CrossRef]  

5. B. Abbey, K. A. Nugent, G. J. Williams, J. N. Clark, A. G. Peele, M. A. Pfeifer, M. D. Jonge, and I. McNulty, “Keyhole coherent diffractive imaging,” Nat. Phys. 4(5), 394–398 (2008). [CrossRef]  

6. M. Dierolf, O. Bunk, S. Kynde, P. Thibault, I. Johnson, A. Menzel, K. Jefimovs, C. David, O. Marti, and F. Pfeiffer, “Ptychography & lensless x-ray imaging,” Europhys. News 39(1), 22–24 (2008). [CrossRef]  

7. H. N. Chapman and K. A. Nugent, “Coherent lensless x-ray imaging,” Nat. Photonics 4(12), 833–839 (2010). [CrossRef]  

8. K. A. Nugent, “Coherent methods in the x-ray sciences,” Adv. Phys. 59(1), 1–99 (2010). [CrossRef]  

9. J. Miao, R. L. Sandberg, and C. Song, “Coherent x-ray diffraction imaging,” IEEE J. Sel. Top. Quantum Electron. 18(1), 399–410 (2012). [CrossRef]  

10. W. Hoppe, “Beugung im inhomogenen Primärstrahlwellenfeld, I: Prinzip einer Phasenmessung,” Acta Crystallogr., Sect. A: Cryst. Phys., Diffr., Theor. Gen. Crystallogr. 25(4), 495–501 (1969). [CrossRef]  

11. M. Dierolf, A. Menzel, P. Thibault, P. Schneider, C. M. Kewish, R. Wepf, O. Bunk, and F. Pfeiffer, “Ptychographic x-ray computed tomography at the nanoscale,” Nature 467(7314), 436–439 (2010). [CrossRef]  

12. F. Pfeiffer, “X-ray ptychography,” Nat. Photonics 12(1), 9–17 (2018). [CrossRef]  

13. K. Giewekemeyer, P. Thibault, S. Kalbfleisch, A. Beerlink, C. M. Kewish, M. Dierolf, F. Pfeiffer, and T. Salditt, “Quantitative biological imaging by ptychographic x-ray diffraction microscopy,” Proc. Natl. Acad. Sci. 107(2), 529–534 (2010). [CrossRef]  

14. R. N. Wilke, M. Priebe, M. Bartels, K. Giewekemeyer, A. Diaz, P. Karvinen, and T. Salditt, “Hard x-ray imaging of bacterial cells: nano-diffraction and ptychographic reconstruction,” Opt. Express 20(17), 19232–19254 (2012). [CrossRef]  

15. E. Lima, A. Diaz, M. Guizar-Sicairos, S. Gorelick, P. Pernot, T. Schleirer, and A. Menzel, “Cryo-scanning x-ray diffraction microscopy of frozen-hydrated yeast,” J. Microsc. 249(1), 1–7 (2013). [CrossRef]  

16. A. Diaz, B. Malkova, M. Holler, M. Guizar-Sicairos, E. Lima, V. Panneels, G. Pigino, A. G. Bittermann, L. Wettstein, T. Tomizaki, O. Bunk, G. Schertler, T. Ishikawa, R. Wepf, and A. Menzel, “Three-dimensional mass density mapping of cellular ultrastructure by ptychographic x-ray nanotomography,” J. Struct. Biol. 192(3), 461–469 (2015). [CrossRef]  

17. J. Deng, Y. S. G. Nashed, S. Chen, N. W. Phillips, T. Peterka, R. Ross, S. Vogt, C. Jacobsen, and D. J. Vine, “Continuous motion scan ptychography: characterization for increased speed in coherent x-ray imaging,” Opt. Express 23(5), 5438 (2015). [CrossRef]  

18. M. Esmaeili, J. B. Fløystad, A. Diaz, K. Høydalsvik, M. Guizar-Sicairos, J. W. Andreasen, and D. W. Breiby, “Ptychographic x-ray tomography of silk fiber hydration,” Macromolecules 46(2), 434–439 (2013). [CrossRef]  

19. A. Schropp, P. Boye, A. Goldschmidt, S. Honig, R. Hoppe, J. Patommel, C. Rakete, D. Samberg, S. Stephan, S. Schoder, M. Burghammer, and C. G. Schroer, “Non-destructive and quantitative imaging of a nano-structured microchip by ptychographic hard x-ray scanning microscopy,” J. Microsc. 241(1), 9–12 (2011). [CrossRef]  

20. M. Beckers, T. Senkbeil, T. Gorniak, M. Reese, K. Giewekemeyer, S.-C. Gleber, T. Salditt, and A. Rosenhahn, “Chemical contrast in soft x-ray ptychography,” Phys. Rev. Lett. 107(20), 208101 (2011). [CrossRef]  

21. P. Trtik, A. Diaz, M. Guizar-Sicairos, A. Menzel, and O. Bunk, “Density mapping of hardened cement paste using ptychographic x-ray computed tomography,” Cem. Concr. Compos. 36, 71–77 (2013). [CrossRef]  

22. B. Chen, M. Guizar-Sicairos, G. Xiong, L. Shemilt, A. Diaz, J. Nutter, N. Burdet, S. Huo, J. Mancuso, A. Monteith, F. Vergeer, A. Burgess, and I. Robinson, “Three-dimensional structure analysis and percolation properties of a barrier marine coating,” Sci. Rep. 3(1), 1177 (2013). [CrossRef]  

23. K. Høydalsvik, J. B. Fløystad, T. Zhao, M. Esmaeili, A. Diaz, J. W. Andreasen, R. H. Mathiesen, M. Rønning, and D. W. Breiby, “In situ x-ray ptychography imaging of high-temperature CO$_2$ acceptor particle agglomerates,” Appl. Phys. Lett. 104(24), 241909 (2014). [CrossRef]  

24. J. N. Weker and M. F. Toney, “Emerging in situ and operando nanoscale x-ray imaging techniques for energy storage materials,” Adv. Funct. Mater. 25(11), 1622–1637 (2015). [CrossRef]  

25. J. Deng, D. J. Vine, S. Chen, Y. S. G. Nashed, Q. Jin, N. W. Phillips, T. Peterka, R. Ross, S. Vogt, and C. J. Jacobsen, “Simultaneous cryo x-ray ptychographic and fluorescence microscopy of green algae,” Proc. Natl. Acad. Sci. 112(8), 2314–2319 (2015). [CrossRef]  

26. M. Holler, M. Guizar-Sicairos, E. H. R. Tsai, R. Dinapoli, E. Müller, O. Bunk, J. Raabe, and G. Aeppli, “High-resolution non-destructive three-dimensional imaging of integrated circuits,” Nature 543(7645), 402–406 (2017). [CrossRef]  

27. A. J. Saubermann and P. Echlin, “The preparation, examination and analysis of frozen hydrated tissue sections by scanning transmission electron microscopy and x-ray microanalysis,” J. Microsc. 105(2), 155–191 (1975). [CrossRef]  

28. J. Miao, P. Charalambous, J. Kirz, and D. Sayre, “Extending the methodology of x-ray crystallography to allow imaging of micrometre-sized non-crystalline specimens,” Nature 400(6742), 342–344 (1999). [CrossRef]  

29. R. J. Southworth-Davies, M. A. Medina, I. Carmichael, and E. F. Garman, “Observation of decreased radiation damage at higher dose rates in room temperature protein crystallography,” Structure 15(12), 1531–1541 (2007). [CrossRef]  

30. M. R. Howells, T. Beetz, H. N. Chapman, C. Cui, J. M. Holton, C. J. Jacobsen, J. Kirz, E. Lima, S. Marchesini, H. Miao, D. Sayre, D. A. Shapiro, J. C. H. Spence, and D. Starodub, “An assessment of the resolution limitation due to radiation-damage in X-ray diffraction microscopy,” J. Electron Spectrosc. Relat. Phenom. 170(1-3), 4–12 (2009). [CrossRef]  

31. M. Guizar-Sicairos, A. Diaz, M. Holler, M. S. Lucas, A. Menzel, R. A. Wepf, and O. Bunk, “Phase tomography from x-ray coherent diffractive imaging projections,” Opt. Express 19(22), 21345–21357 (2011). [CrossRef]  

32. M. Holler, A. Diaz, M. Guizar-Sicairos, P. Karvinen, E. Färm, E. Härkönen, M. Ritala, A. Menzel, J. Raabe, and O. Bunk, “X-ray ptychographic computed tomography at 16 nm isotropic 3D resolution,” Sci. Rep. 4(1), 3857 (2015). [CrossRef]  

33. O. Bunk, M. Dierolf, S. Kynde, I. Johnson, O. Marti, and F. Pfeiffer, “Influence of the overlap parameter on the convergence of the ptychographical iterative engine,” Ultramicroscopy 108(5), 481–487 (2008). [CrossRef]  

34. D. Gürsoy, “Direct coupling of tomography and ptychography,” Opt. Lett. 42(16), 3169–3172 (2017). [CrossRef]  

35. S. Aslan, V. Nikitin, D. J. Ching, T. Bicer, S. Leyffer, and D. Gürsoy, “Joint ptycho-tomography reconstruction through alternating direction method of multipliers,” Opt. Express 27(6), 9128–9143 (2019). [CrossRef]  

36. Z. Wen, C. Yang, X. Liu, and S. Marchesini, “Alternating direction methods for classical and ptychographic phase retrieval,” Inverse Probl. 28(11), 115010 (2012). [CrossRef]  

37. S. Geman and D. Geman, “Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images,” Readings in Computer Vision, (Elsevier, 1987), pp. 564–584.

38. S. Helgason, The Radon transform, vol. 2 (Springer, 1999).

39. B. Reiffen and H. Sherman, “An optimum demodulator for poisson processes: Photon source detectors,” Proc. IEEE 51(10), 1316–1320 (1963). [CrossRef]  

40. J. Qian, C. Yang, A. Schirotzek, F. Maia, and S. Marchesini, “Efficient algorithms for ptychographic phase retrieval,” Ser. Contemp. Appl. Math. 615, 261–279 (2014). [CrossRef]  

41. R. Xu, M. Soltanolkotabi, J. P. Haldar, W. Unglaub, J. Zusman, A. F. Levi, and R. M. Leahy, “Accelerated Wirtinger flow: A fast algorithm for ptychography,” arXiv preprint arXiv:1806.05546 (2018).

42. C. Yang, J. Qian, A. Schirotzek, F. Maia, and S. Marchesini, “Iterative algorithms for ptychographic phase retrieval,” arXiv preprint arXiv:1105.5628 (2011).

43. L.-H. Yeh, J. Dong, J. Zhong, L. Tian, M. Chen, G. Tang, M. Soltanolkotabi, and L. Waller, “Experimental robustness of Fourier ptychography phase retrieval algorithms,” Opt. Express 23(26), 33214–33240 (2015). [CrossRef]  

44. S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Found. Trends Mach. Learn. 3(1), 1–122 (2010). [CrossRef]

45. W. Wirtinger, “Zur formalen Theorie der Funktionen von mehr komplexen Veränderlichen,” Math. Ann. 97(1), 357–375 (1927). [CrossRef]

46. E. M. Stein and R. Shakarchi, Complex analysis, vol. 2 (Princeton University Press, 2010).

47. L. Li, X. Wang, and G. Wang, “Alternating direction method of multipliers for separable convex optimization of real functions in complex variables,” Math. Probl. Eng. 2015, 1–14 (2015). [CrossRef]  

48. R. Hunger, An introduction to complex differentials and complex differentiability (Munich University of Technology, Inst. for Circuit Theory and Signal Processing, 2007).

49. Y. Dai, J. Han, G. Liu, D. Sun, H. Yin, and Y.-X. Yuan, “Convergence properties of nonlinear conjugate gradient methods,” SIAM J. Control 10(2), 345–358 (2000). [CrossRef]  

50. R. Fletcher and C. M. Reeves, “Function minimization by conjugate gradients,” Comput. J. 7(2), 149–154 (1964). [CrossRef]

51. E. Polak and G. Ribiere, “Note sur la convergence de méthodes de directions conjuguées,” ESAIM: Math. Model. Numer. Anal. 3(16), 35–43 (1969). [CrossRef]

52. B. T. Polyak, “The conjugate gradient method in extremal problems,” USSR Comput. Math. Math. Phys. 9(4), 94–112 (1969). [CrossRef]  

53. Y. H. Dai and Y. Yuan, “A nonlinear conjugate gradient method with a strong global convergence property,” SIAM J. Optim. 10(1), 177–182 (1999). [CrossRef]

54. K. Sauer and C. Bouman, “A local update strategy for iterative reconstruction from projections,” IEEE Trans. Signal Process. 41(2), 534–548 (1993). [CrossRef]

55. A. Chambolle and T. Pock, “An introduction to continuous optimization for imaging,” Acta Numer. 25, 161–319 (2016). [CrossRef]  

56. D. L. Donoho, “De-noising by soft-thresholding,” IEEE Trans. Inf. Theory 41(3), 613–627 (1995). [CrossRef]  

57. G. Beylkin, “On the fast Fourier transform of functions with singularities,” Appl. Comput. Harmon. Anal. 2(4), 363–381 (1995). [CrossRef]

58. J. A. Fessler and B. P. Sutton, “Nonuniform fast Fourier transforms using min-max interpolation,” IEEE Trans. Signal Process. 51(2), 560–574 (2003). [CrossRef]

59. A. George and Y. Bresler, “Fast tomographic reconstruction via rotation-based hierarchical backprojection,” SIAM J. Appl. Math. 68(2), 574–597 (2007). [CrossRef]  

60. F. Andersson, M. Carlsson, and V. V. Nikitin, “Fast algorithms and efficient GPU implementations for the Radon transform and the back-projection operator represented as convolution operators,” SIAM J. Imaging Sci. 9(2), 637–664 (2016). [CrossRef]

61. Y. S. G. Nashed, D. J. Vine, T. Peterka, J. Deng, R. Ross, and C. Jacobsen, “Parallel ptychographic reconstruction,” Opt. Express 22(26), 32082–32097 (2014). [CrossRef]  

62. S. Marchesini, H. Krishnan, B. J. Daurer, D. A. Shapiro, T. Perciano, J. A. Sethian, and F. R. N. C. Maia, “SHARP: a distributed GPU-based ptychographic solver,” J. Appl. Crystallogr. 49(4), 1245–1252 (2016). [CrossRef]  

63. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013). [CrossRef]  

64. F. Knoll, A. Schwarzl, C. Diwoky, and D. K. Sodickson, “gpuNUFFT - an open source GPU library for 3D regridding with direct Matlab interface,” in International Society for Magnetic Resonance in Medicine: Scientific Meeting & Exhibition (2014).

65. F. Andersson, M. Carlsson, and V. V. Nikitin, “Fast Laplace transforms for the exponential Radon transform,” J. Fourier Analysis Appl. 24(2), 431–450 (2018). [CrossRef]  

66. T. Bicer, D. Gürsoy, V. De Andrade, R. Kettimuthu, W. Scullin, F. De Carlo, and I. T. Foster, “Trace: a high-throughput tomographic reconstruction engine for large-scale datasets,” Adv. Struct. Chem. Imaging 3(1), 6 (2017). [CrossRef]

67. D. M. Pelt, D. Gürsoy, W. J. Palenstijn, J. Sijbers, F. De Carlo, and K. J. Batenburg, “Integration of TomoPy and the ASTRA toolbox for advanced processing and reconstruction of tomographic synchrotron data,” J. Synchrotron Radiat. 23(3), 842–849 (2016). [CrossRef]  

68. P. Thibault, M. Dierolf, O. Bunk, A. Menzel, and F. Pfeiffer, “Probe retrieval in ptychographic coherent diffractive imaging,” Ultramicroscopy 109(4), 338–343 (2009). [CrossRef]  

69. Z. Wang, E. P. Simoncelli, and A. C. Bovik, “Multiscale structural similarity for image quality assessment,” in The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003, vol. 2 (IEEE, 2003), pp. 1398–1402.

70. Y.-W. Wen and R. H. Chan, “Parameter selection for total-variation-based image restoration using discrepancy principle,” IEEE Trans. Image Process. 21(4), 1770–1781 (2012). [CrossRef]

71. V. Agarwal, “Total variation regularization and L-curve method for the selection of regularization parameter,” ECE599 21, 1–31 (2003).

72. G. H. Golub, M. Heath, and G. Wahba, “Generalized cross-validation as a method for choosing a good ridge parameter,” Technometrics 21(2), 215–223 (1979). [CrossRef]  

73. P. C. Hansen, Discrete Inverse Problems: Insight and Algorithms, vol. 7 (SIAM, 2010).

74. E. Y. Sidky and X. Pan, “Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization,” Phys. Med. Biol. 53(17), 4777–4807 (2008). [CrossRef]  

75. M. Persson, D. Bone, and H. Elmqvist, “Total variation norm for three-dimensional iterative reconstruction in limited view angle tomography,” Phys. Med. Biol. 46(3), 853–866 (2001). [CrossRef]  

76. V. Nikitin, M. Carlsson, F. Andersson, and R. Mokso, “Four-dimensional tomographic reconstruction by time domain decomposition,” IEEE Trans. Comput. Imaging 5(3), 409–419 (2019). [CrossRef]

77. S. Sala, D. J. Batey, A. Prakash, S. Ahmed, C. Rau, and P. Thibault, “Ptychographic x-ray computed tomography at a high-brilliance x-ray source,” Opt. Express 27(2), 533–542 (2019). [CrossRef]  

78. D. Gürsoy, Y. P. Hong, K. He, K. Hujsak, S. Yoo, S. Chen, Y. Li, et al., “Rapid alignment of nanotomography data using joint iterative reconstruction and reprojection,” Sci. Rep. 7(1), 11818 (2017). [CrossRef]

79. E. Wolf, “New theory of partial coherence in the space-frequency domain. Part I: spectra and cross spectra of steady-state sources,” J. Opt. Soc. Am. 72(3), 343–351 (1982). [CrossRef]  

80. H. Chang, P. Enfedaque, Y. Lou, and S. Marchesini, “Partially coherent ptychography by gradient decomposition of the probe,” Acta Crystallogr., Sect. A: Found. Adv. 74(3), 157–169 (2018). [CrossRef]  

81. H. Chang, P. Enfedaque, and S. Marchesini, “Blind ptychographic phase retrieval via convergent alternating direction method of multipliers,” SIAM J. Imaging Sci. 12(1), 153–185 (2019). [CrossRef]

82. D. M. Pelt and K. J. Batenburg, “Fast tomographic reconstruction from limited data using artificial neural networks,” IEEE Trans. Image Process. 22(12), 5238–5251 (2013). [CrossRef]

83. K.-L. Hua, C.-H. Hsu, S. C. Hidayati, W.-H. Cheng, and Y.-J. Chen, “Computer-aided classification of lung nodules on computed tomography images via deep learning technique,” OncoTargets Ther. 8, 2015–2022 (2015). [CrossRef]  

84. S. Jiang, K. Guo, J. Liao, and G. Zheng, “Solving Fourier ptychographic imaging problems via neural network modeling and tensorflow,” Biomed. Opt. Express 9(7), 3306–3319 (2018). [CrossRef]  

85. T. Nguyen, Y. Xue, Y. Li, L. Tian, and G. Nehmetallah, “Deep learning approach for Fourier ptychography microscopy,” Opt. Express 26(20), 26470–26484 (2018). [CrossRef]  

Figures (7)

Fig. 1. Schematic of a generic ptychotomography setup for imaging a 3D sample. A coherent x-ray beam spot propagates through the sample, and the diffraction pattern of the transmitted radiation is measured by a pixelated photon counting detector. In order to acquire ptychography data $d$, the sample $u$ is raster-scanned at different positions and views $\theta$ by using the rotation and translation stages. Detected photons at large scattering angles contain high-frequency information about the object, and they often follow a Poisson process at low doses of radiation exposure.
Fig. 2. Synthetic model of the chip with features representing different resolution levels. Pixel size is ${10}\,{\textrm{nm}}$; number of pixels $(512\times 512\times 512)$. The right panel shows the amplitude difference between the real and imaginary parts of the complex refractive index.
Fig. 3. Illumination probe function and randomly shifted scan patterns used in simulations (left); photon counts on the detector when scanning the chip model for different probe intensity levels $I$ (middle and right).
Fig. 4. Comparison of reconstructions of a slice of the 3D synthetic chip with the maximum-likelihood $u_\textrm{ML}=\delta_\textrm{ML}+i\beta_\textrm{ML}$ and least-squares $u_\textrm{LS}=\delta_\textrm{LS}+i\beta_\textrm{LS}$ estimates for different probe intensity levels $I$. Parameters: object size $(512\times 512\times 512)$, detector size $(128\times 128)$, probe overlap 50%, number of projection angles 768. The ADMM scheme was run for 300 outer iterations with 4 inner CG iterations for each of the ptychography and tomography subproblems.
Fig. 5. L-curve analysis in logarithmic scale for choosing the optimal regularization parameter $\alpha$. The panels contain insets of small reconstructed regions for $\alpha=1\mathrm{e}{-09}$, $1\mathrm{e}{-08}$, and $1\mathrm{e}{-07}$, as well as horizontal profiles marked with different colors.
Fig. 6. MAP reconstructions ($u_\textrm{MAP}=\delta_\textrm{MAP}+i\beta_\textrm{MAP}$) of a slice of the 3D synthetic chip for different probe intensity levels $I$. The TV penalty coefficient $\alpha$ is chosen according to the L-curve criterion.
Fig. 7. Reconstructions of a slice of the 3D synthetic chip from an incomplete set of measurements with the $u_\textrm{MAP}$ estimate and TV regularization for different probe intensity levels $I$. Parameters: object size $(512\times 512\times 512)$, detector size $(128\times 128)$, probe overlap 25%, number of projection angles 192. The ADMM scheme was run for 300 outer iterations with 4 inner CG iterations for each of the ptychography and tomography subproblems.

Tables (1)

Table 1. Computational times on an NVIDIA Tesla P100 for recovering objects of size $N\times N\times N$ from ptychography data generated for $N_\theta = 3N/8$ projection angles and with 25% probe overlap. Each of the 300 outer ADMM iterations involves 4 inner ptychography and 4 inner tomography CG iterations.

Equations (29)


$$|GHu|^2 = d,$$
$$Hu = \exp(i\nu Ru),$$
$$G\psi = FQ\psi,$$
$$p(d\,|\,u) = \prod_{j=1}^{n} \frac{e^{-|GHu|_j^2}\,|GHu|_j^{2d_j}}{d_j!}.$$
$$u_\textrm{ML} = \arg\min_u \sum_{j=1}^{n}\left\{|GHu|_j^2 - 2d_j\log|GHu|_j\right\}.$$
$$u_\textrm{MAP} = \arg\min_u \sum_{j=1}^{n}\left\{|GHu|_j^2 - 2d_j\log|GHu|_j\right\} + \alpha\, q(Ju).$$
$$u_\textrm{LS} = \arg\min_u \sum_{j=1}^{n}\left(|GHu|_j - \sqrt{d_j}\right)^2,$$
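For concreteness, the two data-fidelity terms above can be evaluated as in the following minimal Python sketch. The names `Gpsi` (the simulated far-field amplitudes $G\psi$) and `d` (the measured photon counts) are illustrative placeholders, not identifiers from the authors' implementation.

```python
import numpy as np

def ml_objective(Gpsi, d, eps=1e-12):
    """Poisson maximum-likelihood data fidelity:
    sum_j |Gpsi_j|^2 - 2 d_j log|Gpsi_j| (the constant d_j! terms are dropped)."""
    absG = np.abs(Gpsi)
    return np.sum(absG**2 - 2.0 * d * np.log(absG + eps))

def ls_objective(Gpsi, d):
    """Amplitude-based least-squares data fidelity:
    sum_j (|Gpsi_j| - sqrt(d_j))^2."""
    return np.sum((np.abs(Gpsi) - np.sqrt(d))**2)
```

The small `eps` guards the logarithm at zero-amplitude pixels; at high counts the two objectives behave similarly, while at low photon counts the Poisson term models the detection statistics more faithfully.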
$$\min_{\psi,\varphi}\ \sum_{j=1}^{n}\left\{|G\psi|_j^2 - 2d_j\log|G\psi|_j\right\} + \alpha\, q(\varphi)\quad \text{subject to}\quad Hu = \psi,\ Ju = \varphi,$$
$$L_{\rho,\tau}(\psi,u,\varphi,\lambda,\mu) = \sum_{j=1}^{n}\left\{|G\psi|_j^2 - 2d_j\log|G\psi|_j\right\} + \alpha\, q(\varphi) + 2\,\mathrm{Re}\left\{\lambda^H(Hu-\psi)\right\} + \rho\|Hu-\psi\|_2^2 + 2\,\mathrm{Re}\left\{\mu^H(Ju-\varphi)\right\} + \tau\|Ju-\varphi\|_2^2,$$
$$\psi^{k+1} = \arg\min_\psi L_{\rho,\tau}(\psi,u^k,\varphi^k,\lambda^k,\mu^k),$$
$$u^{k+1} = \arg\min_u L_{\rho,\tau}(\psi^{k+1},u,\varphi^k,\lambda^k,\mu^k),$$
$$\varphi^{k+1} = \arg\min_\varphi L_{\rho,\tau}(\psi^{k+1},u^{k+1},\varphi,\lambda^k,\mu^k),$$
$$\lambda^{k+1} = \lambda^k + \rho\left(Hu^{k+1}-\psi^{k+1}\right),$$
$$\mu^{k+1} = \mu^k + \tau\left(Ju^{k+1}-\varphi^{k+1}\right).$$
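The five update steps above form one outer ADMM iteration. Below is a minimal sketch of that loop, assuming the caller supplies the operators $H$ and $J$ and approximate subproblem solvers (e.g., a few CG iterations each, as in the paper); all function names and signatures here are hypothetical, not the authors' code.

```python
import numpy as np

def admm(d, H, J, solve_psi, solve_u, solve_phi, u0, rho, tau, n_iter=300):
    """Generic ADMM loop for min F(psi) + alpha*q(phi)
    subject to H(u) = psi and J(u) = phi."""
    u = u0
    psi, phi = H(u), J(u)
    lam = np.zeros_like(psi)  # multiplier for the constraint H(u) = psi
    mu = np.zeros_like(phi)   # multiplier for the constraint J(u) = phi
    for k in range(n_iter):
        psi = solve_psi(d, H(u) + lam / rho, rho)                # ptychography step
        u = solve_u(psi - lam / rho, phi - mu / tau, rho, tau)   # tomography step
        phi = solve_phi(J(u) + mu / tau, tau)                    # regularizer step
        lam = lam + rho * (H(u) - psi)                           # dual updates
        mu = mu + tau * (J(u) - phi)
    return u
```

Each inner solver only needs to decrease its own subproblem objective; the dual updates then drive the two constraints toward feasibility over the outer iterations.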
$$F(\psi) = \sum_{j=1}^{n}\left\{|G\psi|_j^2 - 2d_j\log|G\psi|_j\right\} + \rho\left\|Hu^k - \psi + \lambda^k/\rho\right\|_2^2.$$
$$\nabla_\psi F(\psi) = G^H\!\left(G\psi - \frac{d}{\overline{G\psi}}\right) - \rho\left(Hu^k - \psi + \lambda^k/\rho\right).$$
$$\eta^{m+1} = -\nabla_\psi F(\psi^{m+1}) + \frac{\left\|\nabla_\psi F(\psi^{m+1})\right\|_2^2}{\left(\nabla_\psi F(\psi^{m+1}) - \nabla_\psi F(\psi^{m})\right)^H \eta^{m}}\,\eta^{m},$$
$$\tilde F(\psi) = \sum_{j=1}^{n}\left(|G\psi|_j - \sqrt{d_j}\right)^2 + \rho\left\|Hu^k - \psi + \lambda^k/\rho\right\|_2^2,$$
$$\nabla_\psi \tilde F(\psi) = G^H\!\left(G\psi - \sqrt{d}\,\frac{G\psi}{|G\psi|}\right) - \rho\left(Hu^k - \psi + \lambda^k/\rho\right).$$
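The search-direction recurrence above is the Dai-Yuan variant of nonlinear CG. Below is a sketch of such an inner solver, assuming `grad` returns the (Wirtinger) gradient of the real-valued subproblem objective at a complex-valued iterate; the fixed step length stands in for a line search and is an illustrative simplification, not the paper's step-size rule.

```python
import numpy as np

def cg_dai_yuan(grad, x, n_iter=4, step=0.5):
    """Nonlinear conjugate gradient with the Dai-Yuan coefficient
    beta = ||g_{m+1}||^2 / Re{(g_{m+1} - g_m)^H eta_m}."""
    g = grad(x)
    eta = -g  # initial direction: steepest descent
    for m in range(n_iter):
        x = x + step * eta
        g_new = grad(x)
        denom = np.real(np.vdot(g_new - g, eta)) + 1e-30
        beta = np.real(np.vdot(g_new, g_new)) / denom
        eta = -g_new + beta * eta
        g = g_new
    return x
```

Here `np.vdot` conjugates its first argument, matching the $(\cdot)^H$ products in the recurrence.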
$$F(u) = \rho\left\|Hu - \psi^{k+1} + \lambda^k/\rho\right\|_2^2 + \tau\left\|Ju - \varphi^k + \mu^k/\tau\right\|_2^2.$$
$$\left\|Hu - \psi^{k+1} + \frac{\lambda^k}{\rho}\right\|_2^2 = \left\|e^{\frac{2\pi i}{\nu}Ru} - \psi^{k+1} + \frac{\lambda^k}{\rho}\right\|_2^2 \approx \left\|\frac{2\pi i}{\nu}\left(\psi^{k+1} - \frac{\lambda^k}{\rho}\right)\left(Ru + \frac{i\nu}{2\pi}\log\left(\psi^{k+1} - \frac{\lambda^k}{\rho}\right)\right)\right\|_2^2,$$
$$F(u) \approx \rho\left\|KRu - \xi_1\right\|_2^2 + \tau\left\|Ju - \xi_2\right\|_2^2,$$
$$KRu = \frac{2\pi i}{\nu}\left(\psi^{k+1} - \lambda^k/\rho\right)Ru,$$
$$\xi_1 = \left(\psi^{k+1} - \lambda^k/\rho\right)\log\left(\psi^{k+1} - \lambda^k/\rho\right),\qquad \xi_2 = \varphi^k - \mu^k/\tau.$$
$$\nabla_u F(u) = \rho\, R^T K^H\left(KRu - \xi_1\right) + \tau\, J^T\left(Ju - \xi_2\right).$$
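Because the linearized $u$-subproblem is quadratic, its gradient is cheap to form from the Radon and gradient operators and their adjoints. A sketch follows, in which `R`/`RT` and `J`/`JT` are caller-supplied forward/adjoint operators and `K` holds the diagonal weight $\frac{2\pi i}{\nu}(\psi^{k+1}-\lambda^k/\rho)$ as an array; all names are illustrative assumptions.

```python
import numpy as np

def grad_u(u, R, RT, J, JT, K, xi1, xi2, rho, tau):
    """Gradient of the linearized u-subproblem:
    rho * R^T K^H (K R u - xi1) + tau * J^T (J u - xi2)."""
    Ru = R(u)
    data_term = RT(np.conj(K) * (K * Ru - xi1))  # K^H is elementwise conjugation for a diagonal K
    reg_term = JT(J(u) - xi2)
    return rho * data_term + tau * reg_term
```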
$$F(\varphi) = \alpha\, q(\varphi) + \tau\left\|Ju^{k+1} - \varphi + \mu^k/\tau\right\|_2^2.$$
$$q(\varphi) = \|\varphi\|_1,$$
$$\tilde\varphi = \frac{u^{k+1} + \mu^k/\tau}{\left|u^{k+1} + \mu^k/\tau\right|}\,\max\left(0,\ \left|u^{k+1} + \mu^k/\tau\right| - \alpha/\tau\right),$$
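This closed-form minimizer is the soft-thresholding (shrinkage) operator applied to complex values: the magnitude is shrunk by $\alpha/\tau$ while the phase is preserved. A minimal sketch:

```python
import numpy as np

def soft_threshold(z, thresh):
    """Shrink the magnitude of complex z by `thresh`, keeping its phase."""
    mag = np.abs(z)
    return z * np.maximum(0.0, mag - thresh) / (mag + 1e-30)
```

In the notation above, the update reads `phi = soft_threshold(u_new + mu / tau, alpha / tau)`, with `u_new` standing for $u^{k+1}$.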
$$I = 3\ \mathrm{J/(m^2\,s)}:\ \alpha = 8\mathrm{e}{-09},\qquad I = 0.3\ \mathrm{J/(m^2\,s)}:\ \alpha = 1\mathrm{e}{-08},\qquad I = 0.03\ \mathrm{J/(m^2\,s)}:\ \alpha = 2\mathrm{e}{-08},\qquad I = 0.003\ \mathrm{J/(m^2\,s)}:\ \alpha = 4\mathrm{e}{-08}.$$