We analyze the spatial resolution of edge illumination X-ray phase-contrast imaging and its dependence upon various experimental parameters such as source size, source-to-sample and sample-to-detector distances, X-ray energy and size of the beam-shaping aperture. Different propagation regimes, as well as the beam divergence and polychromaticity encountered with laboratory sources, are also considered. We show that spatial resolution in edge illumination phase-contrast imaging presents peculiar features compared to other X-ray phase-contrast techniques. In particular, in the direction orthogonal to the slit or mask lines used to shape the beam, the resolution can be better than both the pixel dimension and the projected source size. Numerical simulations based on Fresnel diffraction integrals are presented, which confirm the analytical predictions. The obtained results allow a simple estimation of the spatial resolution for edge-illumination phase imaging in both synchrotron and laboratory setups.
© 2014 Optical Society of America
1. Introduction
Edge illumination (EI) is a promising X-ray phase-contrast imaging (XPCi) technique, currently under development at University College London (UCL). Although it was initially developed as a synchrotron radiation (SR) method [1–3], it has more recently been adapted for use with spatially and/or temporally incoherent radiation, such as that produced by extended [4–8] or microfocal laboratory sources.
Like other XPCi techniques [10–16], EI takes advantage of the phase/refraction effects caused by the sample (in addition to beam attenuation) to produce image contrast. This can lead to an increased contrast-to-noise ratio (CNR) compared to conventional X-ray imaging methods, and therefore to an enhanced visualization of the sample structures, especially for low-density materials and at the high X-ray energies usually required in radiology. These are the main reasons underpinning the strong research effort dedicated over the last two decades to XPCi and, more recently, to its implementation with laboratory sources. The latter, in particular, would expand the impact of the method in several areas of investigation, and possibly lead to clinical applications.
The EI technique and the corresponding image formation principles have been previously modelled using both geometrical [18, 19] and wave [20, 21] optics. In addition, the angular sensitivity of an EI setup, i.e. the smallest detectable refraction angle, has been studied as a function of the experimental parameters and the photon flux for both synchrotron [3, 22] and laboratory setups.
Another key parameter to define the quality of an image is spatial resolution. While minimum angle detectability is related to contrast and CNR, spatial resolution determines the minimum dimensions of the details that can be effectively visualized. The spatial resolution achievable with EI, and its dependence upon the parameters of the experimental setup, however, have not yet been investigated in detail. This is of great importance in order to fully characterize the performance of an EI imaging system, particularly because, as shown in the following, spatial resolution in EI presents peculiar features compared to other XPCi methods.
In the next section, we will briefly review the main concepts at the basis of the EI-XPCi technique. The spatial resolutions in the directions parallel and perpendicular to the apertures of the masks are investigated in sections 3 and 4, respectively: in particular, section 4 discusses edge and area signals separately. Numerical simulations for some key examples are also reported. Conclusions are given in section 5.
2. Image formation process in the EI technique
In the typical implementation of the EI technique, the beam is collimated in one direction by a slit (to a size varying from a few to several tens of µm) before hitting the sample, and is aligned with an absorbing edge placed in front of the detector (Fig. 1(a)). The edge is positioned so as to cover part of the beam, while the other part is allowed through and hits the detector. By simply shifting the position of the edge (in practice, a slit) along y, the fraction of transmitted beam can be adjusted to the desired level (the latter is usually referred to as “illumination level” or “pixel illuminated fraction”).
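The relation between the edge position and the illumination level can be sketched with a toy geometric model. This is an idealized uniform-beam picture, and the function name and values below are illustrative assumptions, not taken from the text:

```python
def illuminated_fraction(edge_pos, beam_width):
    """Fraction of a uniform beam spanning (-w/2, +w/2) that reaches the
    detector when an absorbing edge blocks everything below y = edge_pos.
    Idealized geometry for illustration only."""
    top, bottom = beam_width / 2.0, -beam_width / 2.0
    covered = min(max(edge_pos, bottom), top) - bottom
    return 1.0 - covered / beam_width

# Shifting the edge along y tunes the "pixel illuminated fraction":
print(illuminated_fraction(0.0, 20e-6))    # edge at the beam centre -> 0.5
```

Moving the edge towards the bottom of the beam increases the transmitted fraction towards 1; moving it upwards decreases it towards 0.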
The introduction of a sample has a double effect on the X-ray beam, as it produces both attenuation and phase shift. The beam transmission is expressed by T(x, y) = exp[−(4π/λ) ∫ β dz] and the phase shift by Φ(x, y) = −(2π/λ) ∫ δ dz, where λ is the X-ray wavelength, n = 1 − δ + iβ is the complex refractive index of the object, and the integration is performed over the whole extent of the object along the optical axis z.
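These two projection formulas can be evaluated directly; a minimal numerical sketch follows, in which the values of δ and β are illustrative order-of-magnitude assumptions rather than data from the text:

```python
import numpy as np

def transmission_and_phase(delta, beta, thickness, wavelength):
    """Beam transmission T = exp(-4*pi*beta*t/lambda) and phase shift
    Phi = -2*pi*delta*t/lambda for a homogeneous slab of thickness t."""
    T = np.exp(-4 * np.pi * beta * thickness / wavelength)
    Phi = -2 * np.pi * delta * thickness / wavelength
    return T, Phi

# Illustrative values (hypothetical material at ~20 keV, lambda ~ 0.62 Angstrom)
wavelength = 6.2e-11       # m
delta, beta = 1e-6, 1e-9   # refractive index decrement and absorption index
T, Phi = transmission_and_phase(delta, beta, 10e-6, wavelength)
```

For these assumed values a 10 µm slab attenuates the beam by only a fraction of a percent while imparting a phase shift of about one radian, which illustrates why phase effects can dominate over attenuation for weakly absorbing samples.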
From a geometrical optics perspective, the EI principle can be easily understood by considering that a gradient in the phase shift produces a local refraction of the beam, with a component in the y direction equal to Δθy = (λ/2π) ∂Φ/∂y. Therefore, the beam at the plane of the detector edge is displaced by the quantity z2Δθy along y, where z2 is the sample-to-detector edge distance. This can result in either an increase or a decrease in the detected intensity, according to the direction of refraction. If the sample is scanned along y through the collimated beam (see Fig. 1(a)), an image is obtained, which is characterized by dark and bright fringes running along sample interfaces, where the refraction is highest.
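This geometrical relation can be sketched numerically. The refractive index decrement and the wedge geometry below are illustrative assumptions, not values from the text:

```python
import numpy as np

# Refraction angle from the transverse phase gradient:
# theta_y = (lambda/2pi) * dPhi/dy = -delta * dt/dy for a homogeneous object.
wavelength = 6.2e-11          # m (~20 keV)
delta = 1e-6                  # hypothetical refractive index decrement
y = np.linspace(0, 100e-6, 1001)
t = y.copy()                  # wedge: thickness grows with unit slope dt/dy = 1
Phi = -2 * np.pi * delta * t / wavelength
theta_y = (wavelength / (2 * np.pi)) * np.gradient(Phi, y)
z2 = 0.4                      # m, sample-to-detector edge distance
displacement = z2 * theta_y   # lateral beam displacement at the detector edge
```

For this 45° wedge the refraction angle is a constant −δ = −1 µrad, giving a displacement of −0.4 µm at the detector edge: small, but comparable to the aperture misalignments used to set the illumination level.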
In the case of extended beams like those provided by conventional X-ray tubes, the EI principle can be replicated by using appropriate sample and detector masks made of multiple alternating apertures and absorbing septa (Fig. 1(b)). In this case, the object scanning is not required, as in a single exposure the object is effectively sampled with a step equal to the sample mask period. However, the spatial resolution along y can be improved when sample dithering is performed, i.e. when the sample is displaced to multiple positions in sub-period steps. Note that, as long as no spillover is present between adjacent aperture pairs (and no cross-talk between corresponding detector pixels), the two configurations described above are perfectly equivalent. For simplicity, we will therefore consider in the following the case where the sample is scanned through only one pair of sample and detector apertures.
From a wave optics perspective, the wave field is modulated by the presence of both the object and the sample slit. Their complex transmission functions are, respectively:

q(x, y) = T^1/2(x, y) exp[iΦ(x, y)]    (1)

m(y) = rect(y/a)    (2)

where a is the width of the sample aperture. We neglected, in Eq. (2), the scattering from the aperture walls (as in general the mask thickness is relatively small), and we modelled the mask as a planar structure: its aspect ratio (ratio between the mask thickness and the aperture width), in fact, is typically not large enough to introduce any additional angular collimation on the beam. In the case of a unit plane wave illuminating the object, the complex amplitude of the wave field at the exit surface of the object can be written as:

u0(x, y) = q(x, y) m(y)    (3)
The propagation of the wave field through the distance z2 can be described by Fresnel diffraction, and the complex amplitude of the wave incident onto the detector edge can be written as [24]:

uz2(x, y) = u0(x, y) ∗ hz2(x, y)    (4)

hz2(x, y) = [exp(ikz2)/(iλz2)] exp[ik(x² + y²)/(2z2)]    (5)

where ∗ denotes a two-dimensional convolution and k = 2π/λ.
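As a sketch of this propagation step, the Fresnel diffraction of a smoothed phase edge can be computed in one dimension with the angular-spectrum form of the propagator. All parameter values below are illustrative, not taken from the text:

```python
import numpy as np

# 1-D Fresnel propagation of a smoothed phase edge (angular-spectrum method).
N, dy = 4096, 0.1e-6                    # number of samples, sampling step (m)
y = (np.arange(N) - N // 2) * dy
wavelength, z2 = 6.2e-11, 0.4           # ~20 keV, 0.4 m propagation distance
phi0, edge_width = 0.5, 1e-6            # phase step (rad) and edge width (m)
phi = phi0 * 0.5 * (1 + np.tanh(y / edge_width))
u0 = np.exp(1j * phi)                   # unit-amplitude wave behind the object
f = np.fft.fftfreq(N, d=dy)             # spatial frequencies (1/m)
kernel = np.exp(-1j * np.pi * wavelength * z2 * f ** 2)
u_z2 = np.fft.ifft(np.fft.fft(u0) * kernel)
I = np.abs(u_z2) ** 2                   # bright/dark fringes appear at the edge
```

Although the incident intensity is uniform, the propagated intensity shows the characteristic positive and negative fringes around the phase edge, while far from the edge it returns to unity.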
The beam intensity on the detector edge produced by an extended, spatially incoherent source can then be obtained by taking the squared modulus of the wave field and convolving it with the projected source intensity distribution. As we are interested in determining the spatial resolution referred to the object, it is more practical to rescale this intensity to the coordinates of the object plane (i.e. to consider the intensity rescaled for unity magnification):

I(x, y) = [S ∗ |uz2(Mx, My)|²](x, y)    (6)

where M = (z1 + z2)/z1 is the geometrical magnification and S(x, y) the projected source distribution rescaled to the object plane.
We assume, for simplicity, that the source intensity distribution can be described as the product of two normalized Gaussian functions in the x and y directions, with standard deviations σsrc,x and σsrc,y respectively. The projected source intensity distribution rescaled back to the object plane can then be expressed as S(x, y) = Gσx(x) Gσy(y), where σx = σsrc,x z2/(z1 + z2) and σy = σsrc,y z2/(z1 + z2).
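The rescaling factor z2/(z1 + z2) follows from projecting the source through the object plane and dividing by the total magnification; a one-line helper makes this explicit, with numerical examples reusing distances quoted later in the text:

```python
# Projected source FWHM rescaled to the object plane: the source is projected
# with scaling z2/z1 at the detector, and dividing by the total magnification
# M = (z1 + z2)/z1 gives the factor z2/(z1 + z2).
def rescaled_source_fwhm(src_fwhm, z1, z2):
    return src_fwhm * z2 / (z1 + z2)

# Examples matching setups discussed later in the text
# (FWHM in micrometres, distances in metres):
print(rescaled_source_fwhm(100, 1.6, 0.4))  # laboratory setup -> 20.0 um
print(rescaled_source_fwhm(25, 140, 1))     # synchrotron setup -> ~0.18 um
```

These values reproduce the rescaled projected source sizes quoted in the examples of sections 4.1 and 4.2 (20 µm for the laboratory geometry, about 0.2 µm for the synchrotron one).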
The signal on the detector, at the pth step of the sample scan and at the nth pixel along x, can then be expressed by [24]:

Sn,p = ∫Xn dx ∫ dy D(y) Ip(x, y)    (7)

where Xn indicates the nth pixel along x, D(y) the transmission of the detector aperture and Ip the intensity for the pth object position. We have assumed in Eq. (7) that the effect of the detector is simply represented by an integration of the beam intensity over each pixel. Additional blurring deriving from cross-talk between pixels (typical, for example, of CCD-based detectors) can be accounted for in the x direction by replacing Ip with Ip ∗ Lx, where Lx is the line spread function of the detector along x. It can be shown that, when EI is used in scanning mode (Fig. 1(a)), the blurring along y does not affect the detected signal, as this acts after the photon selection along y operated by the detector aperture. However, blurring along y is unwanted in the full-field implementation of EI (Fig. 1(b)), as cross-talk between apertures can reduce the signal and/or create artefacts; line-skipping masks, which prevent cross-talk between apertures, are therefore usually employed with this type of detectors.
Equations (1)-(7) represent the starting point for our analysis of the spatial resolution in the EI technique. The spatial resolutions in the x and y directions are in general different and need to be treated separately. We will consider, to this aim, the two significant examples of samples which are homogeneous along y or along x. These two cases may be exemplified by an object featuring an edge parallel to y and x, respectively.
In the first case, where the sample is homogeneous along y, the object transmission function depends only on x while the aperture functions depend only on y; both the intensity on the detector and the recorded signal therefore factorize into x- and y-dependent terms:

Ip(x, y) = Ix(x) Iy,p(y)    (8)

Sn,p = [∫Xn dx Ix(x)] [∫ dy D(y) Iy,p(y)]    (9)

where Ix(x) is the free-space-propagated intensity profile generated by the object edge, Iy,p(y) the profile generated by the sample aperture, Xn the nth pixel along x and D(y) the transmission of the detector aperture.
In the second case, where the sample is homogeneous along x, the beam intensity on the detector slit has no dependency upon x and the image formation can be treated as a one-dimensional problem. In fact, Eqs. (6) and (7) can be rewritten as:

Ip(y) = [Gσy ∗ |uz2,p(My)|²](y)    (10)

Sp = ∫ dy D(y) Ip(y)    (11)

where uz2,p(y) is the one-dimensional wave field at the detector plane for the pth object position, Gσy the rescaled projected source distribution along y and D(y) the transmission of the detector aperture.
3. Spatial resolution in direction parallel to edge/mask lines
As shown by Eqs. (8) and (9), when an object edge and the mask apertures are perpendicular, their contributions to the image contrast can be factored out. In particular, in the direction parallel to the mask apertures the problem reduces to the case of free-space propagation (FSP), i.e. the case when no masks are present. This contribution is represented by the x-dependent factor in Eq. (9).
The spatial resolution in the FSP XPCi technique has been studied in detail in previous publications [24, 26, 27], therefore only the main results will be reported here. We will follow the approach of Gureyev and associates, where the spatial resolution is defined as the minimal distance at which two sample details can be distinguished in a raw, unprocessed image. It has been widely shown that this resolution can be improved when phase retrieval algorithms are employed, in particular in the far-field diffraction regime. In these cases, the definition given by Pogany and associates, where spatial resolution is defined as the maximum object spatial frequency that is effectively transferred to the image, may be more appropriate.
We consider in the following the cases of the near-field and Fresnel (or intermediate) diffraction regimes (two typical intensity profiles in the two regimes are shown in Fig. 2). The two regimes are usually defined in the literature by the validity conditions NF >> 1 and NF ~ 1, respectively, where NF = w²/(λz2) is the so-called Fresnel number and w indicates the width of the object edge (the reader is referred to the literature for a detailed discussion on the validity conditions of the two regimes). For sufficiently small propagation distances and sufficiently slowly varying attenuation and phase shift from the object (near-field diffraction regime), the intensity distribution can be described by the well-known transport-of-intensity equation (TIE). The first factor in Eq. (8) can then be written as (one-dimensional version of the TIE):

Ix(x) ≃ T(x) − (λz2/2π) d/dx [T(x) dΦ/dx] ≃ T(x) [1 − (λz2/2π) d²Φ/dx²]    (12)
where the approximate expression on the right-hand side of Eq. (12) is valid only if the derivative of the object transmission is negligible. Due to the second derivative of the phase in Eq. (12), an isolated phase edge typically gives rise to a negative and a positive intensity fringe on either side of the edge (see Fig. 2). It has been shown that the width of each of the fringes is approximately twice the total blurring determined by the projected source size and the intrinsic edge width. In the case of a sharp edge (as required to measure the spatial resolution of a system), the fringe width then becomes equal to 2σx. Two object features must thus be separated by approximately twice this distance in order to be distinguishable in the image. If the effect of the detector pixel size is also considered, the system spatial resolution in the x direction can then be expressed as:

Resx = max(4σx, Px/M)    (13)

where Px is the detector pixel size along x and M = (z1 + z2)/z1 the geometrical magnification.
Equation (13) shows that, in the near-field region, the spatial resolution depends only on geometrical parameters and is therefore independent of the X-ray energy. Note that, if the pixel size is larger than the intensity fringes, these are no longer resolved and the phase signal rapidly vanishes. This case is therefore usually avoided in FSP while it can be accepted in EI, which primarily relies on the image contrast arising in the direction normal to the apertures.
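The fringe geometry described above can be verified with a minimal numerical sketch of the TIE intensity. The parameters are illustrative; for a step of height phi0 smoothed by a Gaussian of standard deviation sigma, the second derivative of the phase, and hence the pair of fringes, peaks one sigma on either side of the edge:

```python
import numpy as np

# Near-field (TIE) intensity for a pure phase edge: I = 1 - (lambda*z2/2pi)*Phi''.
N, dy = 4001, 0.05e-6
y = (np.arange(N) - N // 2) * dy
wavelength, z2 = 6.2e-11, 0.05          # ~20 keV, short propagation distance (m)
phi0, sigma = 0.5, 1e-6                 # phase step (rad), edge width (m)
# Gaussian-smoothed step: Phi' is a Gaussian, so Phi'' peaks at y = -/+ sigma
dphi = phi0 * np.exp(-y ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)
d2phi = np.gradient(dphi, y)
I = 1 - wavelength * z2 / (2 * np.pi) * d2phi
bright_pos = y[np.argmax(I)]            # bright fringe sits ~one sigma from the edge
```

The bright and dark extrema land at plus and minus one edge width, consistent with the statement that the fringe width scales with the blurring of the edge rather than with the pixel size.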
In the Fresnel region, the expression for the intensity becomes more complicated and secondary fringes arise, as illustrated in Fig. 2. It has been shown that, for a phase object, the position of the first maximum and the width of the first fringe are determined by diffraction as well as by the geometrical blurring. In the near-field regime (NF >> 1), the width of the first fringe reduces to the geometrical value 2σx, while in the opposite case of the far-field regime (NF << 1) it approaches √(λz2). The width of the first fringe can thus be rewritten, with only a small loss of accuracy, as W1 ≃ √(4σx² + λz2). If we assume the secondary fringes to be much smaller than the first, then the spatial resolution in the Fresnel regime is given, in analogy to the near-field case, by [26, 27]:

Resx = max(2W1, Px/M)    (14)
Equation (14) shows that, in the Fresnel region, the spatial resolution is affected not only by the geometrical PSF of the system but also by X-ray diffraction, which is in turn energy dependent (note that the smaller the value of NF, the larger is the contribution from diffraction). Therefore, unlike in the near field, the spatial resolution in the Fresnel regime is also influenced by polychromaticity. In this case, the spatial resolution is determined by the width of the first fringe of the intensity profile resulting from the sum of the various monochromatic components (the secondary fringes, on the contrary, are rapidly washed out).
4. Spatial resolution in direction normal to mask lines
In the absence of dithering (i.e. sample scanning along the y direction with sub-pixel steps), it is evident that the spatial resolution along y cannot be better than the period of the sample mask (see Fig. 1(b)). In fact, two different object structures separated by a distance smaller than the mask period may not be distinguished from each other. Similarly, when dithering is performed, the employed scanning step provides a limit to the achievable resolution. Apart from these trivial dependences, however, other acquisition parameters also play an important role in determining the system spatial resolution. In order to thoroughly investigate their effects independently, we will assume in the following that dithering is performed with a negligibly small step.
We will show in the next sections that different effects arise for the “area” signal, associated with the object attenuation, and the “edge” signal typical of object interfaces, resulting in different spatial resolutions for the two cases. Note that in general both edge and area effects arise at the same time, but for illustration purposes the two signals will be considered separately.
4.1 Attenuation signal
Let us first consider the case of a negligible projected source size; the results will later be generalized to the case where source blurring is present. We will assume that an absorbing object is scanned along y, and that the object and the imaging system configuration are such that the diffraction/refraction signals are negligible (two obvious such cases would be represented, for example, by the use of a short propagation distance and/or very large apertures, see also Eqs. (21) and (22) in section 4.2). Under these conditions, the signal is simply the convolution of the object transmission with a rectangular PSF of width l, corresponding to the overlap between the beam transmitted by the sample aperture and the acceptance of the detector aperture:

Sp = (T ∗ rectl)(yp)    (15)
In order to illustrate the previous concepts, let us consider the following practical example. The edge of a 10 µm thick lead slab is scanned through a monochromatic beam of energy 20 keV, with a very small dithering step of 0.2 µm (Fig. 3(a)). The edge shape is modelled as a step function convolved with a Gaussian function with a standard deviation of 1 µm (which defines the width of the edge). The source intensity distribution has a Gaussian shape with FWHM = 20 µm, and the distances are z1 = 1.6 m and z2 = 0.01 m (which leads to a negligible projected source size, with FWHM of 0.1 µm). The pre-sample aperture is 12 µm and the detector aperture 20 µm, with a +50% misalignment between them (the lower edge of the detector aperture is aligned with the centre of the sample aperture, see Fig. 3(a)). The simulation of the signal was performed using a wave optics calculation. In order to show the effect of the illumination level on the absorption spatial resolution, the simulation was repeated for a pixel illumination of 25%, obtained with a larger misalignment between the two apertures.
The original absorption profile of the edge object is reported in Fig. 3(b), together with the signal profiles obtained in the two different acquisition conditions. In the case of 50% illumination, it is evident that the finite PSF of the system leads to a blurring of the edge (in this case, l = 6 µm). As the lead slab enters the sample aperture, the transmitted intensity progressively decreases, and 50% of the absorption signal is reached when the object edge lies exactly in the middle of the transmitting region. Note that the signal profile is linear at the centre, while the smoothing at the toe and shoulder of the curve is given by the slightly smoothed profile of the edge: a perfectly sharp edge would give a trapezoidal profile with no smoothing. The profile obtained at 25% illumination is much sharper than the previous one (Fig. 3(b)), confirming the analytical predictions. It is only slightly wider than the original object profile, as the degree of blurring due to a rectangular function of width l = 3 µm is comparable to that of the Gaussian function with 1 µm standard deviation used to define the edge profile. Note that the signal profile obtained in this case is slightly asymmetric and presents a small undershoot at the toe: both these effects are due to a residual refraction signal (and in fact they can be eliminated by artificially setting δ to zero in the simulations).
Let us now consider the case of an extended source intensity distribution giving rise to a non-negligible projected source size at the detector plane. This results in a blurring of the ideal intensity pattern that would be obtained with a point source. The signal on the detector can again be expressed as the convolution of the transmission function with a system PSF (cf. Equations (10) and (11)):

Sp = (T ∗ PSF)(yp), with PSF(y) = recta(y) [Sy ∗ A](y)    (16)

where recta(y) is the rectangular function representing the sample aperture, Sy the rescaled projected source distribution and A(y) the acceptance function defined by the detector aperture and edge.
Note that the projected source distribution convolves only the detector acceptance and not the rectangular function representing the sample aperture. This has an important consequence: while in the limit of negligibly small source size the PSF reduces to the rectangular function of width l describing the overlap between the sample aperture and the detector acceptance (like in Eq. (15)), for very large source sizes it tends to the full sample-aperture function. Therefore, the sample aperture defines an upper limit for the PSF width: this means that, unlike in other XPCi methods, spatial resolution better than the projected source size can be obtained when the sample aperture is sufficiently small. In fact, only the portions of the object that are illuminated (i.e. that fall within the sample aperture) can contribute to the signal in the image, which prevents the PSF from being larger than the illuminating beam. A similar result applies also to the case of an edge signal, as we will show in the next section.
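This bounding behaviour can be illustrated with a toy numerical model. The geometry loosely follows the example of section 4.1 (a = 12 µm, d = 20 µm, 50% illumination), but the acceptance window itself is an illustrative assumption:

```python
import numpy as np

# Toy model of the attenuation-signal PSF: the projected source blurs the
# detector acceptance A(y), but NOT the sample-aperture rect function, so
# the PSF support can never exceed the sample aperture width a.
dy = 0.1e-6
y = np.arange(-60e-6, 60e-6, dy)
a, d = 12e-6, 20e-6                            # sample / detector apertures (m)
rect_a = (np.abs(y) < a / 2).astype(float)
accept = ((y > 0) & (y < d)).astype(float)     # detector edge at y = 0 (assumed)
widths = []
for src_fwhm in (0.1e-6, 20e-6):               # small vs large projected source
    sig = src_fwhm / 2.355
    g = np.exp(-y ** 2 / (2 * sig ** 2))
    g /= g.sum()
    psf = rect_a * np.convolve(accept, g, mode="same")
    widths.append(dy * np.count_nonzero(psf > 1e-3 * psf.max()))
print([round(w * 1e6, 1) for w in widths])     # PSF widths in micrometres
```

For the small source the PSF width is close to the overlap l = 6 µm, while for the large source it saturates at the aperture width a = 12 µm instead of growing with the source blur.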
The above result can be easily extended to the case where a constant (within the sample aperture) refraction is present in addition to the attenuation signal. This can represent, for instance, the case of an absorbing object with a sharp edge superimposed on a phase object characterized by a smooth edge, introducing a slowly varying refraction angle. Under these assumptions, the beam incident on the detector aperture is rigidly shifted by refraction of the quantity z2Δθy along y, and Eq. (16) thus needs to be rewritten as:

Sp = (T ∗ PSF)(yp), with PSF(y) = recta(y) [Sy ∗ A](y + z2Δθy)    (19)

For a negligibly small source, the PSF of Eq. (19) reduces to a rectangular function fully contained within the sample aperture, while for a large source it becomes again recta(y). Also in this case, therefore, the sample aperture acts as an upper limit for the width of the system PSF.
4.2 Edge signal
We first take into account the case of the near-field regime, i.e. the case where both the signals from the object and from the sample aperture edges can be described by the TIE (see section 3). Let us consider the following example: a narrow phase edge is scanned through the beam, and the detector slit is arranged so as to stop the lower part of the beam (see Fig. 4(a)). If the sample aperture is sufficiently large compared to the width of the FSP fringes (the width of each fringe being equal to 2σy in this regime), and if the phase edge is far enough from the sides of the sample aperture, the fringes produced on the detector aperture are the same as those obtained in FSP (Fig. 4(a)). In other words, the signal from the edges of the sample aperture does not interfere with the fringes produced by the object. The latter are then analyzed by the detector aperture, which integrates them over the finite width d.
A typical EI signal obtained by scanning such an object through the beam is shown in Fig. 4(b), together with the corresponding FSP signal that would be recorded with no masks and a detector of negligibly small pixel size. As we can see, the peak of the EI signal is obtained when the phase edge is perfectly aligned with the detector edge, as in that situation the positive and negative fringes do not mix up. It can be noted that, in the particular case considered here (large sample aperture), the width of the signal profile is the same as that obtainable in FSP. In fact, the sample boundary produces a detectable contrast when it lies within about one fringe width (2σy) of the detector edge, because in this situation its FSP fringes extend across the detector edge. This fact also has an important general consequence for the signal in EI: it implies, in fact, that only a limited region of space around the detector edge (whose extent is related to the system spatial resolution) effectively contributes to the contrast. It is natural in this case to assume that the spatial resolution is equal to 4σy: the signals provided by two details closer than this distance, in fact, will be mixed up and as a result they may not be distinctly detected.
However, a different situation arises if the sample aperture is made narrower and becomes comparable with the width of the FSP near-field fringes. In particular, it can be shown that this leads to an improved resolution of EI with respect to an equivalent FSP setup featuring the same geometry. Equation (10), in fact, can be rewritten, by assuming no object attenuation and using the full expression of the TIE, as (cf. Equation (12)):

Ip(y) ≃ {recta(y) − (λz2/2π) d/dy [recta(y) dΦp/dy]} ∗ Gσy(y)    (21)
A situation similar to that encountered for the attenuation signal is therefore obtained also for the refraction signal. If the sample aperture is large, the spatial resolution is simply determined by the source blurring, as shown by Eq. (22). If the sample aperture is made smaller, however, the spatial resolution is improved, as the width of the PSF cannot be larger than the aperture itself (see Eq. (21)). Therefore, the spatial resolution can be improved by narrowing down the sample aperture, while keeping all other parameters fixed (although, if the aperture is made too small, diffraction effects may become important and the signal may no longer be described by the TIE). A better resolution than in the corresponding FSP case (with the same focal spot and the same setup distances) can thus be achieved. It should be noted, however, that this comes at the expense of the flux (or of the exposure time, if the number of photons on the detector is kept constant), as more photons are stopped by the sample slit, thus possibly leading to a reduced image CNR compared to a setup with larger apertures. As shown for FSP, in near-field EI the spatial resolution is influenced only by geometrical factors, and is therefore independent of the beam polychromaticity. It is interesting to point out that an analogous result was obtained for the sensitivity, where the angular resolution was found to be independent of the polychromaticity in the geometrical approximation of EI.
The case of the Fresnel regime can be treated in an analogous way. Let us first assume that the sample aperture is much larger than the FSP fringes created by the object, as illustrated in Fig. 5(a). As seen in section 3, the width of the first fringe in the Fresnel regime exceeds the geometrical blurring that determines the fringe width in the near-field regime. A typical FSP signal in the Fresnel regime is reported in Fig. 5(b), together with the corresponding EI signal obtained when a detector slit is used to analyze the FSP fringes. The width of the resulting signal, which determines the achievable spatial resolution, is the same for FSP and EI (as in the near-field regime), being equal in both cases to about twice the width of the first fringe (in the considered case of a large sample aperture).
However, the spatial resolution is modified if the width of the first FSP fringe becomes larger than the distance along y separating the detector edge from either of the sample aperture edges. Let us consider the upper aperture edge first: when the edge object lies beyond it, despite the fact that its distance from the detector edge is less than the fringe width, it does not contribute to the image contrast because it is not illuminated (i.e. it is completely covered by the absorbing parts of the pre-sample mask). Therefore, the EI signal is truncated and its width is reduced (see also the example below). The same happens in the opposite case, when the object edge lies beyond the lower aperture edge. If both conditions are met, the EI signal is truncated from both sides and the spatial resolution is thus equal to the size of the pre-sample aperture a. This leads to the following rule for the spatial resolution in the direction orthogonal to the mask apertures (edge signal in the Fresnel region):

Resy = min(2W1, a)    (23)
When a polychromatic beam is considered, the quantity W1 in Eq. (23) should be replaced by the width of the first fringe of the corresponding polychromatic FSP intensity profile, which results from the sum of the various monochromatic components (the exact value of this width depends in a complex way on both the object and the imaging system).
In the following, numerical simulations based on a rigorous wave optics model are presented, in order to illustrate the above analytical findings. In the first example, we consider the case of a laboratory-based EI setup: the distances are z1 = 1.6 m and z2 = 0.4 m, the energy E = 21 keV, and the sample and detector apertures a = 12 µm and d = 20 µm, respectively (these are the same parameters as one of the laboratory setups installed at UCL). A 50% illumination fraction is chosen, i.e. half of the beam is stopped by the detector mask. A 10 µm thick slab of Lucite, featuring an edge of 1 µm width (as above, the edge is modelled as a step function convolved with a Gaussian), is scanned through the beam with a very small step of 0.4 µm. Various source dimensions (FWHM) are considered: 1 µm, 20 µm, 60 µm and 100 µm. The simulated profiles are reported in Fig. 6. When the source FWHM of 1 µm is considered, due to the high spatial coherence obtained at the object plane, the secondary fringes typical of the Fresnel regime appear in the profile. For larger source sizes (20 µm, 60 µm and 100 µm), however, diffraction effects are washed out and only one peak is visible, as expected in the near-field regime. Importantly, in all cases the width of the profile is limited to the size of the sample aperture (12 µm, in this case), independently of the source blurring. For example, in the case of the source with FWHM = 100 µm, the projected source rescaled back to the object plane is equal to 20 µm; however, the signal width is still equal to 12 µm. Note that this last profile is asymmetric: this is in agreement with the theory, as the system PSF that convolves the refraction angle distribution is not necessarily symmetric (see Eq. (21)).
The second example illustrates the factors determining the achievable spatial resolution in the opposite case of a long synchrotron beamline. The simulations are performed using the following setup parameters: source FWHM = 25 µm, z1 = 140 m, E = 40 keV, sample and detector apertures of 20 µm and 50 µm, respectively. An aluminum sample with a circular cross section of 2 µm radius is scanned using a very small step of 0.25 µm. The following propagation distances z2 are considered: 1 m, 2 m, 3 m and 6 m (Fig. 7).
Due to the high spatial coherence at the object plane, secondary fringes are observable at 1 m and 2 m. It is important to remark that here the spatial resolution is not affected by the projected source size, but is mainly determined by the width of the diffracted signal (cf. Equation (14)). At z2 = 1 m and 40 keV, in fact, the width of the first FSP fringe is about 5.6 µm, while the rescaled projected source is 0.2 µm; the corresponding values at 2 m are 7.8 µm and 0.4 µm, respectively (these values for the width of the first fringe are confirmed by the profiles obtained in Fig. 7). When the propagation distance is increased, the fringes become progressively wider. When they exceed the size of the sample aperture, however, they are truncated by the latter, so that the profile width does not grow beyond the size of the aperture. This explains why, for the largest distances (3 m and 6 m), the secondary fringes have completely disappeared. Note that at these distances the shape of the profile also looks more irregular, as the diffracted signals from the sample and from the aperture edges are mixed up. The degradation in the spatial resolution is therefore limited by the size of the sample aperture, 20 µm in this case. We should note that, in the considered example, the total width of the profile is actually equal to 24 µm, because of the finite dimensions of the sample (the sample diameter is equal to 4 µm).
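The quoted fringe widths can be cross-checked with the far-field estimate W1 ~ √(λz2) for the synchrotron parameters (E = 40 keV; the exact values depend on the precise fringe-width definition and effective propagation distance, so agreement to within a tenth of a micrometre is all that should be expected):

```python
import math

# Far-field estimate of the first-fringe width, W1 ~ sqrt(lambda * z2),
# checked against the synchrotron example in the text (E = 40 keV).
hc = 1.23984e-6                          # h*c in eV*m
wavelength = hc / 40e3                   # ~3.1e-11 m at 40 keV
w1 = {z2: math.sqrt(wavelength * z2) for z2 in (1.0, 2.0)}
print(round(w1[1.0] * 1e6, 1), round(w1[2.0] * 1e6, 1))  # ~5.6 and ~7.9 um
```

These estimates match the 5.6 µm and 7.8 µm figures quoted above, and make explicit why the fringe width, rather than the 0.2-0.4 µm projected source, dominates the resolution in this geometry.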
Finally, we present an example where both attenuation and edge signals are present, in order to further illustrate the difference between the spatial resolutions for the two signals. The case of a long synchrotron beamline is again considered, with the following setup parameters: source FWHM = 25 µm, z1 = 140 m, z2 = 0.1 m, E = 30 keV, sample and detector apertures of 30 µm and 50 µm, respectively, and 50% illumination fraction. A lead sample with a circular cross section of 3 µm radius is scanned with a step equal to 0.25 µm. The obtained profile (Fig. 8) shows the superposition of the transmission and edge signals, the former having a trapezoidal shape and the latter introducing fringes on one side of it. The spatial resolutions of the two signals differ, as expected from the analytical predictions. As already mentioned, this is due to the different nature of the two signals: while the object attenuates the beam at all positions within the region l (in this case equal to 15 µm, see section 4.1), the edge signal originates only when the object position along y is close enough to the detector edge.
5. Conclusions
We have presented a detailed analysis of the spatial resolution in EI XPCi. The results show that the resolution in EI has specific features compared to other XPCi methods.
In particular, its dependence upon the experimental parameters is different in the two directions of the object plane. In the direction parallel to the apertures, it is equivalent to the spatial resolution achievable in corresponding FSP setups. In the near-field diffraction regime (corresponding to large Fresnel numbers), the resolution is determined by the geometrical blurring due to the projected source size and detector response. In the Fresnel diffraction regime (Fresnel number ~1), instead, the achievable resolution is also affected by the broadening of the beam due to diffraction, which becomes progressively more important when the Fresnel number decreases.
In the direction orthogonal to the apertures, however, the size of the sample aperture plays an important role in determining the resolution. In particular, since it limits the region of the sample that can contribute to the signal (a direct consequence of the structured illumination it produces), it prevents the resolution from degrading beyond the aperture size. The resolution can, however, be better than the sample aperture if both the source blurring and the diffraction width are smaller.
We also demonstrated that, in the direction orthogonal to the apertures, the resolution is independent of both the pixel size and the mask period, at least if dithering (i.e. sub-period sample scanning) is performed. The typical limitation of FSP, where the spatial resolution cannot be better than either the source size (for very large magnifications) or the detector pixel (for small magnifications), can therefore be circumvented thanks to the structured illumination used by EI. Likewise, the resolution can be considerably better than the period of the masks, unlike what happens, for example, in grating interferometry (GI) XPCi: in the latter case, in fact, the interference between adjacent apertures is at the basis of the working principle of the technique, with the number of interfering periods being determined by the chosen diffraction order [14,15]. It should also be mentioned that, in laboratory-based implementations of GI, the spatial resolution is in general determined by the projected source size and/or detector PSF rather than by the grating period and diffraction order, the first contribution being typically one order of magnitude larger than the second.
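The FSP limitation mentioned above follows from simple geometry: referred to the object plane, the source contributes a blur s(M−1)/M and the detector pixel a blur p/M, with magnification M = (z1+z2)/z1. The sketch below combines the two contributions in quadrature, which is a common convention for independent Gaussian-like blurs but is an assumption here, not the paper's exact formula.

```python
# Geometric resolution sketch for free-space propagation (FSP), at the object
# plane. Assumption: the two blur contributions add in quadrature.
import math

def fsp_resolution(source_fwhm: float, pixel: float, z1: float, z2: float) -> float:
    """Geometric resolution of an FSP setup, referred to the object plane.

    The source blur s*(M-1)/M tends to s at large magnification M, and the
    pixel blur p/M tends to p as M -> 1, so the resolution can never be
    better than either limit -- the behaviour that EI circumvents.
    """
    M = (z1 + z2) / z1
    source_blur = source_fwhm * (M - 1) / M
    pixel_blur = pixel / M
    return math.hypot(source_blur, pixel_blur)

# Large magnification (M ~ 101): the resolution approaches the 5 um source.
print(fsp_resolution(5e-6, 50e-6, 0.05, 5.0))
# Magnification ~ 1: the resolution approaches the 50 um pixel.
print(fsp_resolution(5e-6, 50e-6, 10.0, 0.01))
```

The two limiting cases make explicit the trade-off that, in FSP, can only be shifted between source and detector by changing the magnification, never removed.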
The area and edge signals, the first typically arising from sample attenuation and the second from sample refraction/diffraction, were shown to have different effects on the obtained spatial resolution. Due to the different nature of the two signals, in fact, the regions of the object effectively contributing to them are also different. Numerical simulations based on Fresnel-diffraction integrals have been presented, which confirm the analytical predictions.
Finally, it should be noted that the results of this analysis can be easily extended to the case in which two-dimensional masks, allowing phase sensitivity in two directions simultaneously, are used. The simple rules presented in this work for the estimation of the spatial resolution in EI will be useful for future implementations of the method in both laboratory and synchrotron setups.
The authors would like to thank Dr. T.E. Gureyev for fruitful discussions. They also acknowledge support from the UK Engineering and Physical Sciences Research Council (Grant No. EP/I021884/1). P.C.D. is supported by a Marie Curie Career Integration Grant (No. PCIG12-GA-2012-333990) within the Seventh Framework Programme of the European Union.
References and links
1. A. Olivo, F. Arfelli, G. Cantatore, R. Longo, R. H. Menk, S. Pani, M. Prest, P. Poropat, L. Rigon, G. Tromba, E. Vallazza, and E. Castelli, “An innovative digital imaging set-up allowing a low-dose approach to phase contrast applications in the medical field,” Med. Phys. 28(8), 1610–1619 (2001). [CrossRef] [PubMed]
3. P. C. Diemoz, M. Endrizzi, C. E. Zapata, Z. D. Pešić, C. Rau, A. Bravin, I. K. Robinson, and A. Olivo, “X-ray phase-contrast imaging with nanoradian angular resolution,” Phys. Rev. Lett. 110(13), 138105 (2013). [CrossRef] [PubMed]
4. A. Olivo and R. D. Speller, “A coded-aperture technique allowing X-ray phase contrast imaging with conventional sources,” Appl. Phys. Lett. 91(7), 074106 (2007). [CrossRef]
5. P. R. T. Munro, K. Ignatyev, R. D. Speller, and A. Olivo, “Phase and absorption retrieval using incoherent X-ray sources,” Proc. Natl. Acad. Sci. U.S.A. 109(35), 13922–13927 (2012). [CrossRef] [PubMed]
6. M. Marenzana, C. K. Hagen, P. Das Neves Borges, M. Endrizzi, M. B. Szafraniec, K. Ignatyev, and A. Olivo, “Visualization of small lesions in rat cartilage by means of laboratory-based X-ray phase contrast imaging,” Phys. Med. Biol. 57(24), 8173–8184 (2012). [CrossRef] [PubMed]
7. A. Olivo, S. Gkoumas, M. Endrizzi, C. K. Hagen, M. B. Szafraniec, P. C. Diemoz, P. R. T. Munro, K. Ignatyev, B. Johnson, J. A. Horrocks, S. J. Vinnicombe, J. L. Jones, and R. D. Speller, “Low-dose phase contrast mammography with conventional X-ray sources,” Med. Phys. 40(9), 090701 (2013). [CrossRef] [PubMed]
8. P. C. Diemoz, C. K. Hagen, M. Endrizzi, and A. Olivo, “Sensitivity of laboratory based implementations of edge illumination X-ray phase-contrast imaging,” Appl. Phys. Lett. 103(24), 244104 (2013). [CrossRef]
9. F. Krejci, J. Jakubek, and M. Kroupa, “Hard X-ray phase contrast imaging using single absorption grating and hybrid semiconductor pixel detector,” Rev. Sci. Instrum. 81(11), 113702 (2010). [CrossRef] [PubMed]
10. S. W. Wilkins, Y. I. Nesterets, T. E. Gureyev, S. C. Mayo, A. Pogany, and A. W. Stevenson, “On the evolution and relative merits of hard X-ray phase-contrast imaging methods,” Phil. Trans. R. Soc. A 372(2010), 20130021 (2014). [CrossRef] [PubMed]
11. T. Davis, D. Gao, T. E. Gureyev, A. W. Stevenson, and S. W. Wilkins, “Phase-contrast imaging of weakly absorbing materials using hard X-rays,” Nature 373(6515), 595–598 (1995). [CrossRef]
12. A. Snigirev, I. Snigireva, V. Kohn, S. Kuznetsov, and I. Schelokov, “On the possibility of X-ray phase contrast microimaging by coherent high-energy synchrotron radiation,” Rev. Sci. Instrum. 66(12), 5486–5492 (1995). [CrossRef]
13. S. W. Wilkins, T. E. Gureyev, D. Gao, A. Pogany, and A. W. Stevenson, “Phase-contrast imaging using polychromatic hard X-rays,” Nature 384(6607), 335–338 (1996). [CrossRef]
14. A. Momose, S. Kawamoto, I. Koyama, Y. Hamaishi, K. Takai, and Y. Suzuki, “Demonstration of X-ray Talbot interferometry,” Jpn. J. Appl. Phys. 42(Part 2, No. 7B), L866–L868 (2003). [CrossRef]
15. F. Pfeiffer, T. Weitkamp, O. Bunk, and C. David, “Phase retrieval and differential phase-contrast imaging with low-brilliance X-ray sources,” Nat. Phys. 2(4), 258–261 (2006). [CrossRef]
16. K. S. Morgan, D. M. Paganin, and K. K. W. Siu, “Quantitative single-exposure X-ray phase contrast imaging using a single attenuation grid,” Opt. Express 19(20), 19781–19789 (2011). [CrossRef] [PubMed]
17. P. C. Diemoz, A. Bravin, and P. Coan, “Theoretical comparison of three X-ray phase-contrast imaging techniques: propagation-based imaging, analyzer-based imaging and grating interferometry,” Opt. Express 20(3), 2789–2805 (2012). [CrossRef] [PubMed]
20. P. R. T. Munro, K. Ignatyev, R. D. Speller, and A. Olivo, “The relationship between wave and geometrical optics models of coded aperture type X-ray phase contrast imaging systems,” Opt. Express 18(5), 4103–4117 (2010). [CrossRef] [PubMed]
21. F. A. Vittoria, P. C. Diemoz, M. Endrizzi, L. Rigon, F. C. Lopez, D. Dreossi, P. R. T. Munro, and A. Olivo, “Strategies for efficient and fast wave optics simulation of coded-aperture and other X-ray phase-contrast imaging methods,” Appl. Opt. 52(28), 6940–6947 (2013). [PubMed]
22. P. C. Diemoz, M. Endrizzi, A. Bravin, I. K. Robinson, and A. Olivo, “Sensitivity of edge illumination X-ray phase-contrast imaging,” Phil. Trans. R. Soc. A 372(2010), 20130128 (2014). [CrossRef] [PubMed]
23. P. R. T. Munro, K. Ignatyev, R. D. Speller, and A. Olivo, “Limitations imposed by specimen phase gradients on the design of grating based X-ray phase contrast imaging systems,” Appl. Opt. 49(20), 3860–3863 (2010). [CrossRef] [PubMed]
24. A. Pogany, D. Gao, and S. Wilkins, “Contrast and resolution in imaging with a microfocus X-ray source,” Rev. Sci. Instrum. 68(7), 2774–2782 (1997). [CrossRef]
26. Y. I. Nesterets, S. W. Wilkins, T. E. Gureyev, A. Pogany, and A. W. Stevenson, “On the optimization of experimental parameters for X-ray in-line phase-contrast imaging,” Rev. Sci. Instrum. 76(9), 093706 (2005). [CrossRef]
27. T. E. Gureyev, Y. I. Nesterets, A. W. Stevenson, P. R. Miller, A. Pogany, and S. W. Wilkins, “Some simple rules for contrast, signal-to-noise and resolution in in-line X-ray phase-contrast imaging,” Opt. Express 16(5), 3223–3241 (2008). [CrossRef] [PubMed]
28. J. W. Miao, R. L. Sandberg, and C. Y. Song, “Coherent X-ray diffraction imaging,” IEEE J. Sel. Top. Quantum Electron. 18(1), 399–410 (2012). [CrossRef]
29. M. R. Teague, “Deterministic phase retrieval: a Green's function solution,” J. Opt. Soc. Am. 73(11), 1434–1441 (1983). [CrossRef]
30. A. Olivo, S. E. Bohndiek, J. A. Griffiths, A. Konstantinidis, and R. D. Speller, “A non-free-space propagation X-ray phase contrast imaging method sensitive to phase effects in two directions simultaneously,” Appl. Phys. Lett. 94(4), 044108 (2009). [CrossRef]