Abstract
A new algorithmic framework is developed for holographic coherent diffraction imaging (HCDI) based on maximum likelihood estimation (MLE). This method provides superior image reconstruction results for various practical HCDI settings, such as when data is highly corrupted by Poisson shot noise and when low-frequency data is missing due to occlusion from a beamstop apparatus. This method is also highly robust in that it can be implemented using a variety of standard numerical optimization algorithms, and requires fewer constraints on the physical HCDI setup compared to current algorithms. The mathematical framework developed using MLE is also applicable beyond HCDI to any holographic imaging setup where data is corrupted by Poisson shot noise.
© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement
1. Introduction
1.1 Holographic CDI and phase retrieval
Coherent Diffraction Imaging, or CDI, is a scientific imaging technique used for resolving nanoscale scientific specimens, such as macroviruses, proteins, and crystals [1]. In CDI, a coherent radiation source, often an X-ray, is incident upon a specimen, whereupon diffraction occurs. The resulting diffracted wave is then incident upon a far-field detector which measures the resulting photon flux. This photon flux is approximately proportional to the squared magnitude values of the Fourier transform of the electric field within the diffraction area. Given this data, the specimen’s structure (e.g., its electron density) can then, in principle, be determined by solving the mathematical inverse problem of recovering a signal from squared magnitude measurements of its (oversampled) Fourier transform, known as the phase retrieval problem. Phase retrieval is a highly challenging inverse problem, which in general does not admit unique or closed-form solutions and can at best be approximately solved via iterative algorithms [2,3].
To improve this situation, a popular variant on CDI known as holographic coherent diffraction imaging or HCDI has been developed, in which additional information is inputted and can be leveraged towards better solving the phase retrieval problem. Specifically, in HCDI a known “reference” object is placed adjacent to the imaging specimen. A schematic of such a setup is shown in Fig. 1. With this additional known information (e.g. the electron density of the reference), the resulting inverse problem, known as the holographic phase retrieval problem, can be written abstractly as
1.2 Prior art
As briefly discussed in 1.1, knowledge of the reference values $\mathbf {R}$ allows $\mathbf {X}$ to be solved for via a (linear) deconvolution problem, which provides exact reconstruction in the noiseless setting. We briefly sketch this deconvolution procedure and the leading algorithms to date for its computation. Consider a specimen $\mathbf {X} \in \mathbb {C}^{n \times n}$ and reference $\mathbf {R} \in \mathbb {C}^{n \times n}$ which are separated by an $n \times n$ zero block to altogether form the hybrid object $\mathbf {S} \in \mathbb {C}^{n \times (3n)}$ given by:
Let $\mathbf {Y} = \left | {\mathcal {F}(\mathbf {S})} \right |^{2}$, i.e. $\mathbf {Y}$ is a noiseless version of the measured HCDI data. By the well-known Wiener-Khinchine theorem, $\mathbf {A}=\mathcal {F}^{-1} \left ( \mathbf {Y} \right )$ is equal to the autocorrelation of $\mathbf {S}$. Moreover, this autocorrelation contains a submatrix which is equal to the cross-correlation of $\mathbf {X}$ and $\mathbf {R}$ [4], which we shall denote by $\mathbf {Z} = \mathbf {X} \star \mathbf {R}$. Then, given the noisy data $\widetilde {\mathbf {Y}}$, the corresponding submatrix $\widetilde {\mathbf {Z}}$ can be thought of as a noise-corrupted version of this cross-correlation. An estimate $\widetilde {\mathbf {X}}$ for the specimen image can thus be given by a deconvolution formula, known as inverse filtering, and is given by:

In the absence of noise, $\widetilde {\mathbf {X}}$ gives an exact reconstruction of $\mathbf {X}$, i.e. $\widetilde {\mathbf {X}}=\mathbf {X}$. Note as well that the inverse filtering formula of Eq. (3) requires the full zero separation given in Eq. (2). More recently proposed deconvolution algorithms, notably HERALDO [6] and Referenced Deconvolution [4], do not require this full separation. However, this comes at the expense of more complicated reconstruction formulas, which are not computationally efficient for complicated reference objects $\mathbf {R}$ such as the uniformly redundant array (URA) [7].
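To make the deconvolution route from data to image concrete, the following NumPy sketch carries it out for the simplest case of a pinhole (delta-function) reference, for which the cross-correlation submatrix of the autocorrelation is just a shifted copy of $\mathbf {X}$ and no explicit division step is needed. The array sizes, the placement of the pinhole, and the index offsets used to read off the recovered block are illustrative assumptions of this sketch, not the exact conventions of Eqs. (2)–(3).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# Specimen X and a pinhole reference R (a single delta at the reference
# block's top-left corner), separated by an n x n zero block as in Eq. (2).
X = rng.random((n, n))
S = np.zeros((n, 3 * n))
S[:, :n] = X          # specimen block
S[0, 2 * n] = 1.0     # pinhole reference block

# Two-times oversampled noiseless HCDI data: Y = |F(S)|^2.
M, N = 2 * n, 6 * n
Y = np.abs(np.fft.fft2(S, s=(M, N))) ** 2

# Wiener-Khinchine: the inverse FFT of Y is the (circular) autocorrelation
# of the zero-padded S.  For a pinhole reference, the cross-correlation
# Z = X * R is a shifted copy of X, so X can be read off directly.
A = np.fft.ifft2(Y).real
X_rec = A[:n, 4 * n:5 * n]

assert np.allclose(X_rec, X)  # exact recovery in the noiseless setting
```

For a general reference, the same autocorrelation extraction is followed by a division by the reference’s Fourier transform, which is where noise amplification enters.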
A variant of the inverse filtering formula of Eq. (3), which also requires the full specimen-reference separation but incorporates a denoising method, is known as Wiener filtering [8], and is given by
Another major drawback of these deconvolution methods is that they cannot directly account for missing data due to beamstop occlusion. In practice, the missing data thus must be estimated before these methods can be applied [7,8]. (This is often done in practice by interpolating with a Gaussian or error function.) A recent deterministic algorithm [10,11] is able to recover images directly with missing beamstop data via the construction and solution of a system of equations. This method, however, cannot be directly modified to make use of the denoising approach of Wiener filtering, and unlike direct deconvolution can give rise to systems of equations which are numerically unstable [4,12].
This interpolation-based approach, moreover, lacks a systematic framework, and does not produce high-quality results (especially given data in the low-photon regime).
More recent papers [13–15] have considered an approach to classical (i.e. non-holographic) phase retrieval involving maximum likelihood estimation (MLE). Our experiments demonstrate that these approaches alone are insufficient for quality image reconstruction given low-photon CDI data, and that the additional usage of a holographic reference object is crucial. As well, these recent works have focused on the theoretical and algorithmic aspects of this problem, and on particular optimization algorithms, namely ADMM [13,15] and truncated Wirtinger flow (TWF) [14]. Only [15] considers an instance of beamstop occlusion.
Other works on deep learning methods for low-photon imaging have recently been published, notably [16,17]. A recent preprint considers a hybrid Poisson-Gaussian noise model in an idealized problem setting [18].
1.3 Our contributions
In this work, we propose and study the application of maximum likelihood estimation to holographic CDI. This method provides several practical advantages over classical algorithms. Specifically, it can accommodate low-photon data that is highly noise-corrupted, as well as missing data that is occluded by a beamstop. It also does not require the classical physical constraints of having at least two-times oversampled data, a zero separation between the specimen and the reference, and a rectangular geometry for the reference. In contrast to current HCDI methods, each of which requires a specific reconstruction algorithm, the MLE optimization framework can be robustly optimized via a variety of standard numerical optimization methods. We also provide extensive and thorough testing across various practical CDI problem settings, such as the effects of variable photon flux values, the presence of a beamstop apparatus, and the usage of various popular reference objects.
2. Practical considerations and mathematical modeling
We summarize several practical HCDI considerations, and their mathematical modeling, which in practice further complicate the image reconstruction problem beyond that of the idealized holographic phase retrieval problem.
2.1 Poisson shot noise and the low-photon regime
The well-known Poisson distribution is a discrete probability distribution which depends on a parameter $\lambda >0$. A discrete random variable with this distribution — denoted by $Z \sim \text {Pois}(\lambda )$ — has probability mass function given by
As a consequence of the quantum dynamics of photon emission in a radiation source, the photon flux measured at a CDI detector follows a Poisson shot noise model. Specifically, let $\overline {\mathbf {Y}}$ denote the average value of $\mathbf {Y}$ (averaged over the number of detector pixels), and $N_p$ be the average photon flux per pixel. The data measured at the detector (i.e. the number of photons recorded), at each pixel location $(i,j) \in \mathcal {M}$, is modeled as [4]:
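As a minimal illustration, the shot-noise model of Eq. (6) amounts to drawing each pixel independently from a Poisson distribution whose rate is the noiseless intensity rescaled so that the mean count per pixel equals $N_p$; the array `Y` below is merely a stand-in for the noiseless data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noiseless squared Fourier magnitudes Y (any positive array serves here).
Y = rng.random((64, 64)) ** 2

N_p = 1.0            # average photon flux per pixel
Y_bar = Y.mean()     # mean of Y over all detector pixels

# Poisson shot noise model of Eq. (6): each pixel count is an independent
# Poisson draw whose rate is Y scaled so the mean count per pixel is N_p.
Y_noisy = rng.poisson(N_p * Y / Y_bar).astype(float)

# The empirical mean photon count per pixel concentrates near N_p.
assert abs(Y_noisy.mean() - N_p) < 0.1
```

At $N_p=1$, most pixels record only a handful of photons, which is the low-photon regime discussed below.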
For many biophysical applications of HCDI, such as imaging of proteins, cells, and tissues, the incoming photon flux $N_p$ must be limited so as to not cause damage to the specimen from overexposure to radiation [19,20]. In this setting, HCDI must operate in the low-photon regime, e.g. given data measurements for which $N_p < 10$ [21]. Another setting in which low-photon HCDI arises is when X-ray energy resources may be limited, or sought to be minimized [21]. Since the level of noise corruption given by the Poisson shot model is inversely proportional to $N_p$, this amounts to HCDI imaging given highly noisy measurements.
In this noisy setting, the classical denoising methods for deconvolution (e.g. see 1.2) break down, since they rely on the assumption that shot noise can be well approximated as Gaussian noise. This is based on the well-known behavior that the Poisson distribution approaches a normal (i.e. Gaussian) distribution as the value of the parameter $\lambda$ increases — an assumption that breaks down as $N_p$ decreases.
2.2 Beamstop occlusion
In HCDI experiments, the central portion of diffracted radiation is often blocked from reaching the detector array by a beamstop apparatus (see Fig. 2). This is because the low-frequency content (i.e. the Fourier transform magnitudes) of the measured data is typically much larger in magnitude than that of the higher frequencies [4] (e.g. see Fig. 3). Thus, the low-frequency data must typically be excluded so that the range of measured values does not exceed the dynamic range of the detector sensors [22]. In this case, the data acquired is more realistically modelled as
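One simple way to model this occlusion in code is as a binary mask $\mathcal {B}$ that zeroes a centered block of low frequencies; the $25 \times 25$ block size matches the experiments in 4.2, while the exact mask shape and centering conventions here are illustrative assumptions.

```python
import numpy as np

# Beamstop model: a binary mask B zeroes out a central block of
# low-frequency data.  In unshifted FFT ordering the lowest frequencies
# sit at the array corners, so the mask is built in fftshifted (centered)
# coordinates and shifted back.
def beamstop_mask(shape, stop=25):
    B = np.ones(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    h = stop // 2
    B[cy - h:cy - h + stop, cx - h:cx - h + stop] = 0.0
    return np.fft.ifftshift(B)

B = beamstop_mask((512, 512))
Y_occluded = B * np.abs(np.fft.fft2(np.random.default_rng(0).random((256, 256)),
                                    s=(512, 512))) ** 2

assert B[0, 0] == 0.0                    # the DC (zero-frequency) bin is occluded
assert B.sum() == 512 * 512 - 25 * 25    # exactly the 25 x 25 block is removed
```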
2.3 Reference design
In the ideal (i.e. noiseless) setting, any known reference object satisfying mild constraints gives rise to exact image recovery via the solution of a linear deconvolution problem [4]. Practically speaking, however, given noisy measurements the choice of reference objects significantly impacts the quality of image reconstruction.
The first reference object to be implemented for holographic imaging was the pinhole reference [23]. The pinhole reference (shown in Fig. 4), which is mathematically represented by a delta function, gives rise to the simplest system of equations for image reconstruction. Due to this simplicity as well as its historical familiarity, the pinhole reference has remained a popular reference choice in holographic imaging, including for HCDI.
Recent research analyzing the behavior of various reference geometries [4] has shown that amongst simple reference geometries and given mid-to-high photon data, a block reference (i.e. a square-shaped region of empty space adjacent to the imaging specimen, as shown in Fig. 4) produces the best image reconstruction quality.
Further improved image reconstruction (in the mid-to-high photon regime) has been shown to be achievable by a reference known as a uniformly redundant array (URA) [7], which consists of a highly structured binary pattern, as shown in Fig. 4. The fabrication and implementation of this reference, however, is more challenging and expensive than that of a simple reference geometry.
2.4 Physical constraints
The established, deconvolution-based algorithms for holographic phase retrieval require two physical constraints for the experimental setup and acquired data. Firstly, the acquired Fourier transform magnitude data must be oversampled by at least two-times in both the x- and y- directions. (More precisely, for an object of length $n$ in the x- or y- direction, there must be at least $m \geq 2n-1$ collected frequency samples in the same direction.) Secondly, a minimum separation condition must be satisfied between the specimen and reference object. This condition, known as the classical holographic separation condition, states that for a specimen and reference each of size $n \times n$ pixels, there must be a zero-region of at least $n \times n$ pixels separating them. Physically, this zero region is realized as a portion of space where the transmitted electric field is equal to zero, i.e. where no radiation can be diffracted. An example specimen-reference setup satisfying this condition is shown in Fig. 5. (Note that this condition can be relaxed for deconvolution in the noiseless setting (e.g. see [4,6]), but is required to simplify the mathematical expression for recovering $\mathbf {X}$ and to incorporate a denoising method, e.g. the Wiener filtering method discussed in 1.2.)
As well, these algorithms are easily implemented only for reference objects that lie within a rectangular region that is physically separated from the imaging specimen. When this condition is not met, a novel deconvolution scheme is needed, which is typically unwieldy and does not allow for efficient computation. An example of such an irregular setup is given in [10], which considers an annulus-shaped reference. This setup is introduced for a specific application and results in a highly complicated and inefficient deconvolution algorithm. Thus, the condition of a rectangular and separated reference object can effectively be considered an additional constraint for deconvolution-based algorithms.
3. Maximum likelihood estimation
3.1 HoloML objective function
To account for the practical considerations given in 2, from Eq. (5) we consider the measured data $\widetilde {\mathbf {Y}}$ as being of the form
Taking the approach of maximum likelihood estimation, we seek to determine the image $\mathbf {X}$ which maximizes the probability of obtaining the measured data $\widetilde {\mathbf {Y}}$. Let $\mathcal {M}'$ denote the subset of data points $\mathcal {M}$ that are not zeroed out by the beamstop $\mathcal {B}$. Given the Poisson probability distribution of Eq. (5) and the Poisson shot noise model of Eq. (6), and using the standard assumption that measured pixel values are independent (e.g. see [24]), it follows that the probability of obtaining the set of measured data $\widetilde {\mathbf {Y}}$ as a function of $\mathbf {X}$ is given by
Since the function $\text {log}(\cdot )$ is monotonically increasing, the global maximizer of $g(\mathbf {X})$ is equal to the global minimizer of the corresponding negative log-likelihood function, i.e. of the function
The value of $\overline {\mathbf {Y}}$, while strictly speaking a function of $\mathbf {X}$, is assumed to be constant, since it does not vary significantly around the global minimizer $\mathbf {X}$ (see [4]). Thus, after factoring out the constant terms we arrive at the following objective function:
To avoid dealing with the possibly irregular geometry of the set of non-occluded points $\mathcal {M}'$, we can reformulate this equivalently as
(Note that the product with $\mathcal {B}_{ij}$ is omitted in the rightmost terms here, since it leads to taking the logarithms of zero, which are undefined. It is valid to omit $\mathcal {B}_{ij}$ here since for these terms $\widetilde {\mathbf {Y}}_{ij}=0$.) We shall term this the HoloML objective function, and seek the phase retrieval solution which is its global minimizer. Note that in contrast to the currently used algorithms discussed in 1.2, we have made no assumptions on $\mathbf {R}$ and do not require a minimum oversampling ratio for $\widetilde {\mathbf {Y}}$.
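A compact numerical rendering of this objective is given below, for a real-valued specimen with no beamstop (so $\mathcal {B}$ is all ones) and with a small intensity floor `delta` added purely as a numerical guard — all simplifying assumptions of this sketch, not part of Eq. (10) itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
M, N = 2 * n, 6 * n
N_p, delta = 10.0, 1e-2   # photon flux; small intensity floor (numerical guard)

# Known reference R and ground-truth specimen, separated by a zero block
# as in Eq. (2).  The binary pattern is merely URA-like, for illustration.
R = rng.integers(0, 2, (n, n)).astype(float)
X_true = rng.random((n, n))

def intensities(X):
    S = np.zeros((n, 3 * n))
    S[:, :n] = X
    S[:, 2 * n:] = R
    return np.abs(np.fft.fft2(S, s=(M, N))) ** 2

c = N_p / intensities(X_true).mean()   # Y_bar treated as a constant
B = np.ones((M, N), dtype=bool)        # no beamstop in this sketch
Y_data = c * intensities(X_true) + delta   # noiseless data under the floored model

# HoloML-style objective: Poisson negative log-likelihood over the
# non-occluded pixels, with X-independent terms dropped.
def holo_nll(X):
    lam = c * intensities(X) + delta
    return np.sum(lam[B] - Y_data[B] * np.log(lam[B]))

# With noiseless data the ground-truth specimen is a global minimizer.
assert holo_nll(X_true) <= holo_nll(X_true + 0.1 * rng.random((n, n)))
```

The final assertion reflects the defining MLE property: for noiseless data, the ground truth globally minimizes the negative log-likelihood.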
3.2 Optimization methods
For a general, complex-valued $\mathbf {X}$, the optimization problem of Eq. (10) can be addressed via the Wirtinger derivative, a popular generalization of the real-valued derivative to complex functions [14]. The Wirtinger gradient with respect to $\mathbf {X}$ is given by
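For a real-valued specimen, the Wirtinger machinery reduces to an ordinary gradient obtained by pulling the pointwise derivative back through the (zero-padded) Fourier transform. The sketch below implements this and checks it against central finite differences; the reference, array sizes, and intensity floor `delta` are illustrative assumptions, and the complex-valued case of Eq. (11) is not treated here.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
M, N = 2 * n, 6 * n
N_p, delta = 10.0, 1e-2   # photon flux; intensity floor (numerical guard)

R = rng.random((n, n))
X_true = rng.random((n, n))

def hybrid(X):
    S = np.zeros((n, 3 * n))
    S[:, :n] = X
    S[:, 2 * n:] = R
    return S

lam_true = np.abs(np.fft.fft2(hybrid(X_true), s=(M, N))) ** 2
c = N_p / lam_true.mean()
Y_data = rng.poisson(c * lam_true).astype(float)   # shot-noise data, Eq. (6)

def nll(X):
    lam = c * np.abs(np.fft.fft2(hybrid(X), s=(M, N))) ** 2 + delta
    return np.sum(lam - Y_data * np.log(lam))

def grad(X):
    G = np.fft.fft2(hybrid(X), s=(M, N))
    lam = c * np.abs(G) ** 2 + delta
    W = (1.0 - Y_data / lam) * c * G      # d f / d conj(G)
    # Pull back through the zero-padded FFT; factor 2 Re(.) for real X.
    return 2 * M * N * np.real(np.fft.ifft2(W))[:n, :n]

# Central-difference check of the analytic gradient at a random point.
X0 = rng.random((n, n))
g = grad(X0)
eps = 1e-5
for (i, j) in [(0, 0), (3, 5)]:
    Xp, Xm = X0.copy(), X0.copy()
    Xp[i, j] += eps
    Xm[i, j] -= eps
    num = (nll(Xp) - nll(Xm)) / (2 * eps)
    assert abs(num - g[i, j]) < 1e-4 * (1 + abs(g[i, j]))
```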
Given these expressions for the gradient, the HoloML objective function can then be optimized via a variety of numerical solvers. In 4, optimization is performed using both the conjugate gradient and trust-region methods, which are representative examples of first- and second-order methods, respectively. The conjugate gradient method is a popular first-order method for unconstrained nonlinear optimization problems, and often converges more rapidly than the standard gradient descent (i.e. steepest descent) method. The trust-region algorithm, a second-order method, often provides faster convergence than first-order methods. (For the trust-region method, an approximate Hessian is typically constructed by numerical solver packages given the analytic expressions for the function and its gradient.)
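To illustrate this solver flexibility on a small instance, the objective and gradient can be handed directly to an off-the-shelf conjugate gradient routine; the sketch below uses SciPy’s `minimize` (in place of the Manopt/Matlab implementation used for the experiments in this paper) on a real-valued toy problem with an illustrative intensity floor `delta`.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 8
M, N = 2 * n, 6 * n
N_p, delta = 10.0, 1e-2   # photon flux; intensity floor (numerical guard)

R = rng.random((n, n))

def hybrid(x):
    S = np.zeros((n, 3 * n))
    S[:, :n] = x.reshape(n, n)
    S[:, 2 * n:] = R
    return S

X_true = rng.random((n, n))
lam_true = np.abs(np.fft.fft2(hybrid(X_true.ravel()), s=(M, N))) ** 2
c = N_p / lam_true.mean()
Y_data = rng.poisson(c * lam_true).astype(float)   # Poisson shot-noise data

def fun_and_grad(x):
    G = np.fft.fft2(hybrid(x), s=(M, N))
    lam = c * np.abs(G) ** 2 + delta
    f = np.sum(lam - Y_data * np.log(lam))
    W = (1.0 - Y_data / lam) * c * G       # d f / d conj(G)
    g = 2 * M * N * np.real(np.fft.ifft2(W))[:n, :n]
    return f, g.ravel()

x0 = rng.random(n * n)
f0, _ = fun_and_grad(x0)
res = minimize(fun_and_grad, x0, jac=True, method="CG",
               options={"maxiter": 100})

assert res.fun < f0   # CG makes clear progress on the objective
```

Swapping `method="CG"` for a second-order SciPy method such as `trust-ncg` (with a supplied Hessian approximation) would mirror the first-/second-order comparison described above.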
It is observed that both of these methods produce almost entirely identical solutions, both of which are of high quality. This behavior is quite remarkable in light of the fact that Eq. (10) is a nonconvex objective function, and thus different methods could conceivably produce entirely different solutions. (For example, for the classical, i.e. non-holographic, phase retrieval problem, which is also nonconvex, alternating projection type methods may produce high-quality solutions while first- and second-order methods typically fail [25].) The observed behavior that different methods produce similar high-quality solutions is indeed a surprising and attractive feature of the MLE method. In turn, this allows for flexible implementations that can be tailored to best suit particular HCDI applications.
4. Numerical experiments
Numerical experiments were conducted comparing the results of optimization algorithms applied to minimizing the HoloML objective function versus the current leading holographic phase retrieval algorithms. These experiments were conducted using size $256\times 256$ pixel test images of biophysical specimens — the mimivirus [26], embryo [27], oocytes [27], S. pistillata [28], salmonella [29], and sifA protein [29]. For the experiments in the following three subsections, the setup used is shown in Fig. 5, where the test image, zero region, and reference object (being the URA reference) are each of size $256\times 256$ pixels, altogether forming a hybrid object of size $256 \times 768$. The setup here with the zero separation between specimen and reference is to allow for comparison with the classical algorithms which require this, as discussed in 1.2.
Two-times oversampling was implemented when generating the corresponding Fourier transform magnitudes, which thus produced a data array of size $512 \times 1536$. This data was then subject to Poisson shot noise, whose average photon flux value is subsequently denoted as $N_p$.
Two algorithms were applied towards optimizing the HoloML objective function, using the Manopt optimization package for Matlab. The first such method is the conjugate gradient algorithm (a standard first-order optimization algorithm), which is implemented given the HoloML objective function of Eq. (10) and its corresponding gradient given by Eq. (11). We term this algorithm HoloML-CG. The second method is an implementation of the trust-region method discussed in 3.2, and is termed HoloML-TR. For experiments using both HoloML-CG and HoloML-TR, 50 iterations were run per trial. The computation times per iteration for HoloML-CG and HoloML-TR in the subsequent experiments were approximately 0.1 seconds and 0.2 seconds, respectively, with experiments being run on a Dell XPS 13 9380 with an Intel Core i7-8665U processor. We compare these new methods to the leading holographic phase retrieval algorithms in use to date, namely the inverse filtering [4] and Wiener filtering [8] methods. For experiments involving a beamstop, the missing data was replaced by an approximating Gaussian function before the inverse filtering and Wiener filtering methods were applied, as per a well-known technique [7,8] which improves image reconstruction (see 1.2). Each implementation of the Wiener filtering method encapsulates several trials, whereby the constant term $C$ (see Eq. (4)) is first set to the reciprocal of the estimated SNR value, and then scaled using a logarithmic search on the set of values $[10^{-10}, 10^{10}]$. The output is then selected which minimizes the corresponding relative error given by Eq. (13).
The relative error used for these experiments is given by
where $\mathbf {Y}_0$ denotes the experimental data, and $\mathbf {Y} = \left | {\mathcal {F}(\mathbf {X}) + \mathbf {B}} \right |^{2}$ is the data corresponding to the reconstructed image $\mathbf {X}$. For experiments involving a beamstop, this is replaced with $\mathbf {Y} = \mathcal {B} \odot \left | {\mathcal {F}(\mathbf {X}) + \mathbf {B}} \right |^{2}$, as in Eq. (7). This error metric is standard and naturally applicable for HCDI data, for which only the measured data $\mathbf {Y}_0$ is known (e.g. see [30,31]).
4.1 Low-photon imaging experiments
As discussed in 2.1, imaging given low-photon data is necessary for HCDI applications to biological specimens that are sensitive to radiation damage, and also requires fewer energy resources. At the same time, low-photon data is corrupted by high levels of Poisson shot noise. This provides the main motivation for developing a maximum likelihood framework for holographic phase retrieval. To validate the performance of this method, we consider the comparative performance of the HoloML algorithms on various biophysical specimen test images that are corrupted with Poisson shot noise at an average photon flux value of $N_p=1$. The images reconstructed from these simulations are shown in Fig. 6, and the corresponding relative error values are shown in Fig. 7. In these experiments, the HoloML algorithms clearly produce the best image reconstruction, as well as the smallest relative error. This behavior is expected, since only the HoloML method accounts for high levels of Poisson shot noise.
4.1.1 Varying the photon flux
The comparative performance of these algorithms at varying average photon flux values, specifically for $N_p = 1000, 100, 10, 1, 0.1$, is studied. The resulting reconstructed images and corresponding relative errors are shown in Figs. 8 and 9, respectively. It is observed that as the value of $N_p$ decreases, the image reconstruction quality is dramatically improved using the HoloML algorithms.
4.2 Beamstop occlusion
We repeat the experiments of the previous subsections given data that is occluded by a beamstop apparatus. Specifically, a size $25 \times 25$ region consisting of the lowest-frequency data is zeroed out, as is typical in HCDI experiments. When comparing with the inverse and Wiener filtering algorithms, a Gaussian function is fit to replace the missing frequency data, as is commonly done in practice [8]. We observe in all experiments that the HoloML methods significantly outperform the classical algorithms, throughout all photon flux levels and especially within the low-photon regime. This is shown in Figs. 10–13.
4.3 Reference object performance
The choice of reference object implemented in holographic CDI can significantly affect the quality of the reconstructed image [4], and is a major design consideration. Thus, we seek to evaluate the performance of the HoloML method for each of the leading reference choices. To this end, experiments were performed in which the reference object was varied (while otherwise using the same simulation parameters). The reference choices implemented are the pinhole reference, block reference, and uniformly redundant array (URA) reference (see 2.3), as shown in Fig. 4, as well as no reference (i.e. a region consisting of all zero values). In these simulations, given various biophysical test images and for each of these reference choices, experiments were conducted in which data was subject to Poisson shot noise at $N_p=1$ and the HoloML-CG algorithm was applied. Figures 14 and 15 show the results of these simulations, for data with and without beamstop occlusion, respectively. It is observed that the URA reference consistently produces the best image reconstruction, while the block reference performs the best from amongst references with simpler geometries. This behavior is also observed in prior theoretical and experimental works (which use different algorithms) comparing reference performance, as discussed in 2.3.
5. Breaking classical algorithm barriers
In contrast to the current methods discussed in 2.4, there is no minimal oversampling ratio necessary for implementing the HoloML methods. Numerical simulations demonstrate that these methods are capable of image recovery given data at a far smaller oversampling ratio, without loss of reconstruction quality, as shown in Figs. 16 and 17. This allows for higher resolution imaging for a given experimental setup [22].
As well, the HoloML methods do not require any minimum separation between the specimen and reference objects. Numerical simulations demonstrate that for the HoloML methods, the quality of image reconstruction is not significantly impacted by the distance between the specimen and reference objects, including when this distance is zero. This is illustrated in Figs. 18 and 19, which show the reconstruction of the mimivirus image using the HoloML-CG algorithm given a specimen and reference input which are separated by a distance $d$, and given data at various photon flux levels $N_p$. (The case where $d=n$ coincides with the previous experimental setups, and is shown in Fig. 5.) By allowing for a smaller separation distance between the specimen and reference objects, or none at all, the HoloML methods thus allow for more robust and flexible design of HCDI experiments.
The HoloML methods also do not require a reference geometry which lies within a rectangle that is separate from the imaging specimen, as is essentially required for deconvolution-based methods (see 2.4). In Fig. 20 an example of a specimen-reference setup violating these classical requirements is shown, where the reference has an annular shape and surrounds the specimen. Alongside this setup is shown the recovery of the specimen (the mimivirus) via the HoloML-CG algorithm given noiseless data and data with $N_p=1$, respectively. In the noiseless setting, the recovery is essentially exact. Given the low-photon data, the recovery is of high quality, and comparable to that achieved using standard reference shapes (see 4.3). Note that the HoloML algorithm and its numerical implementations easily accommodate this irregular geometry, in contrast with the classical algorithms.
6. Tabletop prototype experiment
The performance of the HoloML methods on real image data was tested using data from the following tabletop prototype experiment. An approximately $2mm \times 2mm$ photograph of the well-known Cameraman test image was situated on a microscope slide and a triangular-shaped region was cut from the slide to form a reference object, as shown in Fig. 21. Approximately two-times oversampled Fourier transform magnitude measurements were collected via illumination from a He-Ne laser, which serves as a low-to-mid range photon source [32]. Image reconstruction was performed from these measurements using the inverse filtering, Wiener filtering, HoloML-CG, and HoloML-TR algorithms. The results of these reconstructions are shown in Fig. 22. It is evident that the HoloML methods produce superior image reconstruction compared to the other methods.
7. Conclusions and future work
A new algorithmic framework for holographic CDI via maximum likelihood estimation, termed HoloML, is introduced and developed. This method provides superior image reconstruction given data that is highly corrupted by Poisson shot noise, as occurs in low-photon HCDI imaging. It also gives far improved imaging given HCDI data that is occluded by a beamstop apparatus, as is typical in practice. This optimization approach is also highly robust in that it does not require the physical constraints needed for current algorithms (specifically, at least two-times oversampled data, a minimum reference-specimen separation distance, and a reference geometry that is contained in a rectangle that does not overlap with the specimen). Moreover, the lack of these conditions does not negatively impact the image quality. It is also robust algorithmically in that the HoloML objective function can be effectively optimized using a number of standard numerical algorithms, including both first- and second-order methods. This behavior is indeed novel, since the objective function is nonconvex, and is unprecedented when compared with the behavior of other optimization methods for phase retrieval [25].
Based on these successful results on simulated data as well as the tabletop prototype experiment, it would be very interesting to implement the HoloML method on data collected from nanoscale low-photon HCDI experiments. This framework could as well be applied to any holography data that is subject to Poisson shot noise. It would also be interesting to pursue a theoretical study of the function landscape of the HoloML objective function given by Eq. (10), similarly to other recent theoretical works on phase retrieval and optimization [14,33,34]. Considering the successful application of various numerical methods towards optimizing this function, despite its nonconvexity, a reasonable conjecture to investigate is that all of the function’s local minima are within a small distance of the global minimizer. Another direction for future work would be to consider the effects of readout noise in CCD detectors. This readout noise adds a small Gaussian term to the Poisson shot noise model. While readout noise is largely negligible in comparison to Poisson noise and thus usually not analytically modeled [13,15,35], further refinements could conceivably be achieved via its explicit modeling.
Acknowledgments
We are very grateful for many helpful discussions with numerous collaborators and colleagues. In particular we wish to thank Emmanuel Candès for insightful suggestions and guidance which spurred the exploration of statistical methods for holographic phase retrieval. D.B. is also very grateful to many colleagues at the Flatiron Institute working on phase retrieval for much helpful discussion, namely Charles Epstein, Leslie Greengard, Alex Barnett, Michael Eickenberg, Marylou Gabrié, Hannah Lawrence, Jeremy Magland, and Manas Rachh. We are also very grateful for data from Manuel Guizar-Sicairos and James Fienup, and for guidance and insight through discussions with Stefano Marchesini regarding practical CDI and holography.
Disclosures
The authors declare no conflicts of interest.
Data availability
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
References
1. J. Miao, P. Charalambous, J. Kirz, and D. Sayre, “Extending the methodology of X-ray crystallography to allow imaging of micrometre-sized non-crystalline specimens,” Nature 400(6742), 342–344 (1999). [CrossRef]
2. Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev, “Phase retrieval with application to optical imaging: a contemporary overview,” IEEE Signal Process. Mag. 32(3), 87–109 (2015). [CrossRef]
3. A. H. Barnett, C. L. Epstein, L. F. Greengard, and J. F. Magland, “Geometry of the phase retrieval problem,” Inverse Probl. 36(9), 094003 (2020). [CrossRef]
4. D. A. Barmherzig, J. Sun, P.-N. Li, T. J. Lane, and E. J. Candès, “Holographic phase retrieval and reference design,” Inverse Probl. 35(9), 094001 (2019). [CrossRef]
5. M. Saliba, T. Latychevskaia, J. Longchamp, and H. Fink, “Fourier Transform Holography: A Lensless Non-Destructive Imaging Technique,” Microsc. Microanal. 18(S2), 564–565 (2012). [CrossRef]
6. M. Guizar-Sicairos and J. R. Fienup, “Holography with extended reference by autocorrelation linear differential operation,” Opt. Express 15(26), 17592–17612 (2007). [CrossRef]
7. S. Marchesini, S. Boutet, A. Sakdinawat, M. Bogan, S. Bajt, A. Barty, H. Chapman, M. Frank, S. Hau-Riege, A. Szöke, C. Cui, D. Shapiro, M. Howells, J. Spence, J. Shaevitz, J. Lee, J. Hajdu, and M. Seibert, “Massively parallel x-ray holography,” Nat. Photonics 2(9), 560–563 (2008). [CrossRef]
8. H. He, U. Weierstall, J. C. H. Spence, M. Howells, H. A. Padmore, S. Marchesini, and H. N. Chapman, “Use of extended and prepared reference objects in experimental Fourier transform x-ray holography,” Appl. Phys. Lett. 85(13), 2454–2456 (2004). [CrossRef]
9. T. Gorkhover, A. Ulmer, K. Ferguson, M. Bucher, F. R. N. C. Maia, J. Bielecki, T. Ekeberg, M. F. Hantke, B. J. Daurer, C. Nettelblad, J. Andreasson, A. Barty, P. Bruza, S. Carron, D. Hasse, J. Krzywinski, D. S. D. Larsson, A. Morgan, K. Mühlig, M. Müller, K. Okamoto, A. Pietrini, D. Rupp, M. Sauppe, G. van der Schot, M. Seibert, J. A. Sellberg, M. Svenda, M. Swiggers, N. Timneanu, D. Westphal, G. Williams, A. Zani, H. N. Chapman, G. Faigel, T. Möller, J. Hajdu, and C. Bostedt, “Femtosecond x-ray Fourier holography imaging of free-flying nanoparticles,” Nat. Photonics 12(3), 150–153 (2018). [CrossRef]
10. A. V. Martin, A. J. D’Alfonso, F. Wang, R. Bean, F. Capotondi, R. A. Kirian, E. Pedersoli, L. Raimondi, F. Stellato, C. H. Yoon, and H. N. Chapman, “X-ray holography with a customizable reference,” Nat. Commun. 5(1), 4661 (2014). [CrossRef]
11. A. J. D’Alfonso, A. V. Martin, A. J. Morgan, P. Wang, H. Sawada, A. I. Kirkland, and L. J. Allen, “Generalized Fourier holography meets coherent diffractive imaging,” Microsc. Today 23(1), 28–33 (2015). [CrossRef]
12. D. A. Barmherzig, A. H. Barnett, C. L. Epstein, L. F. Greengard, J. F. Magland, and M. Rachh, “Recovering missing data in coherent diffraction imaging,” (2020).
13. H. Chang, Y. Lou, Y. Duan, and S. Marchesini, “Total variation–based phase retrieval for Poisson noise removal,” SIAM J. Imaging Sci. 11(1), 24–55 (2018). [CrossRef]
14. Y. Chen and E. J. Candès, “Solving random quadratic systems of equations is nearly as easy as solving linear systems,” Comm. Pure Appl. Math. 70(5), 822–883 (2017). [CrossRef]
15. L. Shi, G. Wetzstein, and T. Lane, “A flexible phase retrieval framework for flux-limited coherent x-ray imaging,” arXiv: Biological Physics (2016).
16. I. Kang, F. Zhang, and G. Barbastathis, “Phase extraction neural network (PhENN) with coherent modulation imaging (CMI) for phase retrieval at low photon counts,” Opt. Express 28(15), 21578–21600 (2020). [CrossRef]
17. M. Deng, S. Li, A. Goy, I. Kang, and G. Barbastathis, “Learning to synthesize: robust phase retrieval at low photon counts,” Light: Sci. Appl. 9(1), 36 (2020). [CrossRef]
18. Z. Li, K. Lange, and J. A. Fessler, “Poisson phase retrieval with Wirtinger flow,” in 2021 IEEE International Conference on Image Processing (ICIP), (2021), pp. 2828–2832.
19. M. Nakasako, A. Kobayashi, Y. Takayama, K. Asakura, M. Oide, K. Okajima, T. Oroguchi, and M. Yamamoto, “Methods and application of coherent x-ray diffraction imaging of noncrystalline particles,” Biophys. Rev. 12(2), 541–567 (2020). [CrossRef]
20. C. Nave, “The achievable resolution for x-ray imaging of cells and other soft biological material,” IUCrJ 7(3), 393–403 (2020). [CrossRef]
21. L. Shi, G. Wetzstein, and T. J. Lane, “A flexible phase retrieval framework for flux-limited coherent x-ray imaging,” arXiv preprint arXiv:1606.01195 (2016).
22. K. He, M. K. Sharma, and O. Cossairt, “High dynamic range coherent imaging using compressed sensing,” Opt. Express 23(24), 30904–30916 (2015). [CrossRef]
23. E. N. Leith and J. Upatnieks, “Reconstructed wavefronts and communication theory*,” J. Opt. Soc. Am. 52(10), 1123–1130 (1962). [CrossRef]
24. I. S. Wahyutama, G. K. Tadesse, A. Tünnermann, J. Limpert, and J. Rothhardt, “Influence of detector noise in holographic imaging with limited photon flux,” Opt. Express 24(19), 22013–22027 (2016). [CrossRef]
25. E. Osherovich, “Numerical methods for phase retrieval,” Ph.D. thesis, Technion - Israel Institute of Technology (2012).
26. E. Ghigo, J. Kartenbeck, P. Lien, L. Pelkmans, C. Capo, J.-L. Mege, and D. Raoult, “Ameobal Pathogen Mimivirus Infects Macrophages through Phagocytosis,” PLoS Pathog. 4(6), e1000087 (2008). [CrossRef]
27. M. Eitel, L. Guidi, H. Hadrys, M. Balsamo, and B. Schierwater, “New insights into placozoan sexual reproduction and development,” PLoS One 6(5), e19639 (2011). [CrossRef]
28. A. Venn, E. Tambutte, M. Holcomb, D. Allemand, and S. Tambutte, “Live tissue imaging shows reef corals elevate ph under their calcifying tissue relative to seawater,” PLoS One 6(5), e20013 (2011). [CrossRef]
29. R. Rajashekar, D. Liebl, D. Chikkaballi, V. Liss, and M. Hensel, “Live cell imaging reveals novel functions of Salmonella enterica SPI2-T3SS effector proteins in remodeling of the host cell endosomal system,” PLoS One 9(12), e115423 (2014). [CrossRef]
30. J. R. Fienup, “Phase retrieval algorithms: a personal tour,” Appl. Opt. 52(1), 45–56 (2013). [CrossRef]
31. H. N. Chapman, A. Barty, S. Marchesini, A. Noy, S. P. Hau-Riege, C. Cui, M. R. Howells, R. Rosen, H. He, J. C. H. Spence, U. Weierstall, T. Beetz, C. Jacobsen, and D. Shapiro, “High-resolution ab initio three-dimensional x-ray diffraction microscopy,” J. Opt. Soc. Am. A 23(5), 1179–1200 (2006). [CrossRef]
32. M. Guizar-Sicairos and J. R. Fienup, “Direct image reconstruction from a Fourier intensity pattern using HERALDO,” Opt. Lett. 33(22), 2668–2670 (2008). [CrossRef]
33. E. J. Candès, X. Li, and M. Soltanolkotabi, “Phase retrieval via Wirtinger flow: theory and algorithms,” IEEE Trans. Inf. Theory 61(4), 1985–2007 (2015). [CrossRef]
34. J. Sun, Q. Qu, and J. Wright, “A geometric analysis of phase retrieval,” Found. Comput. Math. 18(5), 1131–1198 (2018). [CrossRef]
35. G. Sluder, Digital Microscopy (Academic Press, San Diego, CA, 2013).