Image magnification via twofold asymmetric Bragg reflection (a setup called the "Bragg Magnifier") is a recently established technique that achieves both sub-micrometer spatial resolution and phase contrast in X-ray imaging. The present article extends a previously developed theoretical formalism to account for partially coherent illumination. For a typical synchrotron setup, polychromatic illumination is identified as the main influence of partial coherence, and the implications for imaging characteristics are analyzed by numerical simulations. We show that contrast decreases by about 50% when compared to the monochromatic case, while sub-micrometer spatial resolution is preserved. The theoretical formalism is experimentally verified by correctly describing the dispersive interaction of the two orthogonal magnifier crystals, an effect that has to be taken into account for precise data evaluation.
© 2009 Optical Society of America
X-ray imaging with analyzer crystals as optical elements in the exit beam (analyzer-based imaging) provides not only sensitivity to phase contrast, but also a means to achieve image magnification with asymmetrically cut analyzers. The feasibility of this x-ray imaging tool was experimentally demonstrated by several groups in various setups [2, 3, 4, 5, 6, 7, 8, 9]. In recent years it turned out that a theoretical understanding of the imaging process is crucial and, thus, several models for the case of fully coherent illumination were developed [10, 11, 12, 13]. Generalized models accounting for polychromatic illumination were proposed in [14, 15] for the case of symmetric analyzers. More recently, a rigorous wave optics approach for the theoretical description of partial coherence in analyzer-based imaging was presented. However, the validity of this formalism is limited to symmetric Bragg reflection, since the varying propagation distances occurring with asymmetric Bragg reflection are not properly taken into account.
In the present article, we suggest an extension of our previously developed theoretical formalism based on the dynamical theory of x-ray diffraction, allowing for polychromaticity also under conditions of asymmetric Bragg reflection. Our model is based on a plane wave approximation, which we find to be simpler and yet as reliable as spherical-wave approximations. We further generalize the model to describe a setup with two successive analyzers acting in perpendicular diffraction planes. From the formalism, the decisive instrumental parameters turn out to be the incident divergence (i.e. the divergence as seen by one point of the sample) and the wavelength band incident at individual points in the sample plane, as these two quantities determine the relevant figures of merit of the imaging process such as contrast and resolution. The general treatment of partial coherence is based on a decomposition of the entire wave field into monochromatic plane waves, which are propagated through the system individually and summed up incoherently afterwards. Our treatment does not consider the camera, whose influence would be straightforward to include.
The remainder of this article is structured as follows. Section 2 introduces the instrumental and experimental background of analyzer-based imaging with Bragg magnification. Section 3 collects basic formulae of relevance for the case of polychromatic illumination. Section 4 represents the main theoretical part of the article. We first briefly review the previously developed formalism describing image formation for monochromatic illumination. The following subsections extend the formalism to the general case of illumination with incident divergence and wavelength spread. Section 5 applies the developed formalism in numerical investigations of the polychromatic response function. Simulations further serve to analyze the influence of polychromaticity on contrast and resolution. Section 6 establishes the link with experiment and compares the simulations with experimental results. After the general conclusions presented in Sec. 7, an appendix is dedicated to the formalism for inverse ray-tracing simulations.
2. Experimental background
In order to provide two-dimensional magnification, a Bragg Magnifier utilizes two asymmetric Bragg reflections with perpendicular diffraction planes downstream of the sample (see Fig. 1). This setup allows for a wide variety of design parameters such as the reflections used, the magnification factor or the x-ray photon energy. Thus, the quantitative influence of partially coherent illumination depends on the chosen parameters. Therefore, we will focus the quantitative discussion on the experimental parameters (see below) of the setup used by our group. However, we emphasize that the argumentation is valid in the general case.
The Bragg Magnifier built by our group utilizes a Si-224 Bragg reflection for vertical magnification and a Si-004 reflection for horizontal magnification. The asymmetry of both reflections was chosen to provide 40-fold magnification at a photon energy of 8.048 keV. By slightly increasing the x-ray photon energy and adjusting the angles of incidence accordingly, the setup can provide larger magnification factors up to 120×250. This choice of two different reflections allows a compact design of the setup, which is advantageous concerning the spatial resolution. For image acquisition a highly efficient Bruker AXS Smart Apex 2 CCD-camera with a pixel size of 15 µm is used. A typical magnification factor of 100 thus leads to effective geometrical pixel sizes of 0.15 µm. Table 1 shows some important quantities of the reflections of interest. In this configuration, we experimentally demonstrated a resolution of 0.4 µm and theoretically estimated a sensitivity to refraction angles on the order of microradians at 5% intensity contrast.
In a first step, we will analyze the influences of divergence and polychromaticity of the x-ray beam on imaging with the Bragg Magnifier separately. Since the coherence is strongly connected to the properties of the experimental conditions, the quantitative discussion will be carried out for two very different beamlines: the short TOPO-TOMO beamline at ANKA in Karlsruhe (Germany) with a bending magnet as the X-ray source, and the long ID19 beamline at the European Synchrotron Radiation Facility (ESRF) in Grenoble (France) with an undulator as the insertion device. This choice should give a good impression of the influence of partial coherence on the Bragg Magnifier for a wide variety of experimental situations.
At this point, it is important to note that for imaging the smearing out of a single object point on the detector determines the (possible) reduction of visible contrast and spatial resolution. Therefore, the wavelength band and the divergence of the beam as seen from a single object point (i.e. incident divergence) are the suitable quantities for the following discussion. Fig. 2 illustrates this argument and the corresponding quantitative values for the two example beamlines are shown in Tab. 2.
In principle, there are two possible consequences of a divergent beam. First, the angular deviation of the divergent beam corresponds to an angular offset on the reflection curve. For
the ID19 beamline the incident divergence is two orders of magnitude smaller than the Darwin widths of the corresponding reflections (cf. the values given in Tab. 1 and Tab. 2). Thus, this effect of divergence can be neglected in this case. For the TOPO-TOMO beamline the incident divergence is at least two times smaller than the corresponding Darwin widths and may be neglected in a first order approximation.
Secondly, the information about the sample is transported to different locations on the detector, which results in a smearing out of the image. To quantify this effect, one has to account for the fact that during asymmetric Bragg reflection the incident divergence σinc is transformed into the exit divergence σout according to
σout = σinc/b1 (1)
for the vertical direction and
σout = σinc/b2 (2)
for the horizontal direction, where b1 and b2 denote the magnification factors of the two reflections. The resulting values are at least one order of magnitude smaller than the geometrical pixel size of the detector used. Thus, the disadvantageous effect of beam divergence can legitimately be neglected for long beamlines (e.g. ID19) and in a first order approximation for short beamlines (e.g. TOPO-TOMO).
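As a quick numeric illustration: the exit divergence and the resulting smearing on the detector can be estimated in a few lines. The incident divergence below is an assumed order-of-magnitude value, since the values of Tab. 2 are not reproduced here; only the relation σout = σinc/b is taken from the text.

```python
# Assumed example values; only sigma_out = sigma_inc / b is taken from the text.
sigma_inc = 1.0e-6   # incident divergence [rad] (assumed, order of magnitude)
b = 40.0             # magnification factor of one reflection
d = 0.195            # analyzer-to-detector distance [m] (assumed)

sigma_out = sigma_inc / b      # exit divergence after asymmetric reflection
smear = sigma_out * d          # displacement on the detector [m], a few nm here
```

Even for a generous incident divergence the smearing stays far below a 15 µm detector pixel, consistent with the conclusion that beam divergence can be neglected.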
On the other hand, the wavelength bandwidth ζ incident on the sample is essentially given by the spectral width of the monochromator, if the spectral width of the source is much larger than the spectral acceptance of the monochromator crystals. The ID19 beamline utilizes a silicon 111 double monochromator in a non-dispersive (n,-n) setup, resulting in an effective bandwidth of ζ = 1.3×10⁻⁴ under the assumption of zero beam divergence. Comparing this to the spectral acceptance of the reflections of the analyzer crystals given in Tab. 1 clarifies that this effect is not negligible. Thus, polychromaticity is identified as the main influence of partially coherent illumination in the present context.
This discussion also defines a unique combination of imaging properties of the Bragg Magnifier in the field of phase-sensitive X-ray imaging techniques with parallel beam geometry, such as phase propagation imaging or grating interferometry. The Bragg Magnifier delivers sub-micron spatial resolution with comparably low requirements with respect to the maximum allowed beam divergence. In fact, the incident beam divergence may be as large as several tens of µrad (i.e. smaller than about half of the Darwin widths of the reflections), which is essentially due to the small sample-to-detector distance as well as to the utilization of asymmetric Bragg reflection. Therefore, the Bragg Magnifier opens the possibility of sub-micron resolution even for short beamlines like the TOPO-TOMO beamline.
3. Influence of polychromaticity at the analyzer crystal
Let us collect some relations which are important for the description of an asymmetric Bragg reflection. The magnification factor b of the asymmetric reflection is defined by
b = sin(θB + ρ) / sin(θB − ρ), (3)
where θB is the Bragg angle, ρ is the asymmetry angle between the crystal surface and the diffracting lattice planes, and the convention b > 1 for smaller incidence angles was used (see also Fig. 3).
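The definition can be sketched as a one-line function; the formula b = sin(θB + ρ)/sin(θB − ρ) is the standard expression consistent with the stated convention (b > 1 for incidence angles smaller than the Bragg angle), and the example angles are arbitrary:

```python
import numpy as np

def magnification(theta_B_deg, rho_deg):
    """Magnification factor b = sin(theta_B + rho) / sin(theta_B - rho)
    of an asymmetric Bragg reflection (b > 1 for grazing incidence)."""
    tB, r = np.radians(theta_B_deg), np.radians(rho_deg)
    return np.sin(tB + r) / np.sin(tB - r)

# symmetric reflection (rho = 0) gives no magnification
b_sym = magnification(44.0, 0.0)
# a strongly asymmetric cut gives a large magnification
b_asym = magnification(44.0, 42.0)
```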
As discussed in the previous section, only a narrow wavelength band ζ contributes to the visible intensity. Therefore, the differential Bragg equation
Δθ = (Δλ/λ) tan θB (4)
is a good approximation for the additional Bragg angle offset at the analyzer crystals (i.e. in the argument of the reflection curves). The differential Bragg equation can also be used to transform the Darwin width ωD (i.e. the angular acceptance) into the corresponding spectral acceptance (or wavelength band) ζ:
ζ = ωD / tan θB. (5)
We will now analyze the effect of asymmetric Bragg reflection of a perfectly collimated but polychromatic beam [24, 23]. For this purpose, we define the reference beam as the beam path for the reference wavelength (i.e. ΔθD = 0). It is important to carefully distinguish between the angular offset on the reflection curve ΔθD and the angular offset from the reference beam after reflection Δθad. From geometrical considerations as indicated by Fig. 3 we conclude that Δθad is given by
Δθad = (1 − 1/b) ΔθD, (6)
and it is obvious that the exit angle is in fact wavelength dependent. Thus even a perfectly collimated but polychromatic beam will be divergent after asymmetric Bragg reflection, an effect that may be called dispersion induced divergence. This effect can result in a degradation of the spatial resolution. The total blurring Δx is obviously proportional to the incident spectral width ζ and the distance d between analyzer crystal and detector:
Δx = ζ tan θB (1 − 1/b) d. (7)
At the synchrotron, the small incident spectral width ζ (see Tab. 1) and the analyzer-to-detector distance of about d = d2 + d3 = 195 mm for the first reflection lead to a total blurring of Δx ≈ 25 µm. Although this is comparable to the physical pixel size (15 µm) of the camera used, it is still smaller than the width of the point spread function (about two pixels) of the camera. Thus, the effect of dispersion induced divergence can be neglected in this case.
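The ≈25 µm estimate can be reproduced with a short calculation. The Si lattice constant, the hc conversion factor and the (1 − 1/b) geometry factor below are our assumptions, not values quoted from the article's tables:

```python
import numpy as np

a_si = 5.4311e-10                            # Si lattice constant [m] (assumed)
d224 = a_si / np.sqrt(2**2 + 2**2 + 4**2)    # Si-224 lattice-plane spacing [m]
lam = 12.398e-10 / 8.048                     # wavelength [m] at 8.048 keV (hc = 12.398 keV*A)
theta_B = np.arcsin(lam / (2 * d224))        # Bragg angle of Si-224 (~44 deg)

zeta = 1.3e-4                                # incident relative spectral width
b = 40.0                                     # magnification factor
d = 0.195                                    # analyzer-to-detector distance [m]

# dispersion induced divergence and total blurring on the detector
dx = zeta * np.tan(theta_B) * (1.0 - 1.0 / b) * d   # roughly 2.4e-5 m
```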
4. Theoretical formalism
In the following, we will lay out the theoretical description of the imaging process of the Bragg Magnifier in the dispersive case. We start with a brief review of the previously developed formalism that is valid for perfectly coherent illumination. Then we will extend the formalism to the dispersive case. In this first approach, we will limit the discussion to the one-dimensional case. The extension to two dimensions is straightforward and will be partly laid out in section 6. The one-dimensional treatment, however, will make it easier to analyze the influence of polychromaticity. Since only relative intensities are of interest, we will omit constant factors and global phase terms throughout the discussion.
4.1. Case of fully coherent illumination
The theoretical formalism for the case of perfectly coherent illumination was developed in two steps. First, we used a three-dimensional Fourier expansion of the wave fields of interest (i.e. the wave field before the reflections, between the first and second reflection, and after the second reflection) and linked them by dynamical diffraction theory. The result was the (one-dimensional) diffraction integral that connects the input wave field in Fourier space D̂in(q) to the output wave field Dout(x) on the detector
where K = 2π/λ is the modulus of the wave vector, R̂1 is the reflection curve of the analyzer crystal, ω1 is the angular position of the main beam direction on the reflection curve ("the working point") and the definition of the remaining quantities can be found in Fig. 4.
The implicit assumption of negligible free-space propagation after Bragg reflection in Eq. (9) is justified by the following argumentation. Free-space propagation contributes to the visible contrast if the propagation distance is comparable to or larger than zeff = a²/λ, where a denotes the size of typical sample features. After asymmetric Bragg reflection the image is expanded by the magnification factor b, so that zeff scales with b². This means that the propagation distance after asymmetric reflection is effectively decreased by b². For a magnification factor of 40, this implies an effective reduction of the propagation distance by the factor 1600. Thus, free-space propagation after asymmetric reflection can be neglected.
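The scaling argument is easy to check numerically; the feature size and wavelength below are assumed example values:

```python
a = 1.0e-6        # typical sample feature size [m] (assumed)
lam = 1.54e-10    # wavelength [m] near 8 keV
b = 40.0          # magnification factor

z_eff = a**2 / lam          # propagation starts to matter beyond ~6.5 mm
z_eff_mag = z_eff * b**2    # after 40-fold magnification: beyond ~10 m
```

Since analyzer-to-detector distances are a few hundred millimetres at most, propagation after the reflection is indeed negligible for micrometre-sized features.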
A peculiarity of the diffraction integral (9) is that it cannot be calculated by a single Fourier transform, but has to be evaluated for typically several hundred planes of constant z; interpolation is then necessary to retrieve the output wave field at the image position s. However, in a second step we have shown that it is possible to reduce the integral (9) to a simple Fourier transform by using the identities
and performing the substitution
The diffraction integral then becomes
where we have introduced the generalized propagator of the Bragg Magnifier as
Equations (11)–(13) deliver the theoretical background to numerically calculate the output wave field in the one-dimensional case with a single Fourier transform. For a known input wave field D̂in(q), which is usually defined on a regular grid in q, it is necessary to use interpolation to retrieve D̂in(q(f)), because the numerical Fourier transform requires a regular grid in f. If sufficiently many sampling points are used, linear interpolation is adequate and the entire calculation time is only slightly increased by the interpolation.
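The numerical recipe, one regridding interpolation followed by a single Fourier transform, can be sketched as follows. The mapping q(f) and the propagator are placeholders here (identity and unity) since their actual forms are given by Eqs. (11)–(13); the sketch only illustrates the interpolation step:

```python
import numpy as np

N = 2048
x = np.linspace(-50e-6, 50e-6, N)                  # object-plane grid [m]
f = 2 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])   # regular grid in f

D_in_x = np.exp(-x**2 / (2 * (1e-6)**2))           # some input wave field
q = f.copy()                                       # D_in is known on a regular q grid
D_in_q = np.fft.fft(D_in_x)

q_of_f = f                                         # PLACEHOLDER for the substitution q(f)
order = np.argsort(q)                              # np.interp needs ascending abscissae
D_in_qf = (np.interp(q_of_f, q[order], D_in_q.real[order])
           + 1j * np.interp(q_of_f, q[order], D_in_q.imag[order]))

P = np.ones(N)                                     # PLACEHOLDER for the propagator
D_out = np.fft.ifft(D_in_qf * P)                   # one Fourier transform
```

With the identity placeholders the output reproduces the input, which makes the sketch easy to verify; the real q(f) is nonlinear, so the interpolated samples no longer coincide with grid points.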
4.2. Case of partially coherent illumination
Under the condition of partial coherence due to a spatially extended source of finite bandwidth, it is possible to decompose the beam incident on the sample into monochromatic plane waves, each with a certain wavelength λ and a certain angular deviation Δα from the main beam direction. Each plane wave delivers an output wave field that is generally dependent on Δα and λ: Dout(x) = DΔα,λ(x). Then, the observable intensity is given by the incoherent sum over all incident directions and wavelengths
weighted with the wavelength and the angular distribution of the beam impinging on the sample Ib(Δα,λ). We would like to remind the reader that Ib(Δα,λ) is defined in the object plane (see Fig. 2) and in Appendix A we briefly lay out a ray-tracing approach to express Ib(Δα,λ) at the position of the sample in terms of monochromator and source parameters.
On the one hand, the implicit assumption of mutually incoherent wavelengths in Eq. (14) is justified by the fact that reasonable exposure times are orders of magnitude larger than the encountered coherence times. On the other hand, the use of plane waves instead of spherical waves is qualitatively justified by the following argumentation.
In principle, the width of the first Fresnel zone determines the validity of a plane wave description. In the present case, the first Fresnel zone has a width of about 100 µm at a wavelength of 1 Å, which seems to imply that the plane wave approximation is only valid within the same range. However, if the detectable interference phenomena due to a single object point extend over less than the first Fresnel zone, the plane wave approach is applicable over the entire field of view. Our experimental and theoretical results in the case of monochromatic illumination [13, 27] show that this is fulfilled with synchrotron radiation.
In order to establish the formalism, it is now necessary to quantify the output wave field DΔα,λ(x) as a function of the wavelength and the angular deviation Δα. Since the angular deviation from the main beam direction is proportional to the derivative of the phase ϕ(x) of the wave field,
Δα = (1/K) dϕ(x)/dx, (15)
a plane wave with angular deviation can be accounted for by multiplying the input wave field Din(x) with the corresponding phase factor, yielding
DΔα(x) = Din(x) exp(−iq0x) (16)
or in Fourier space with q0 = −ΔαK
D̂Δα(q) = D̂in(q + q0). (17)
Performing the substitution q→q−q0 in integral (9) and following a discussion analogous to section 4.1, an adapted relation between q and f
and the generalized propagator in the case of divergence
is obtained. The influence of a plane wave with an angular deviation of Δα illuminating the sample is three-fold: First, an additional angular offset occurs in the argument of the reflection curve. Secondly, the image position appears shifted on the detector, which is reflected in the additional q0 in Eq. (18). Lastly, the mean propagation distance is changed according to Eq. (19).
Since the effect of dispersion induced divergence at the analyzer crystals can be neglected at a synchrotron setup, the inclusion of polychromatic illumination is simpler. As stated in section 3, the differential Bragg equation is a good approximation for the additional angular offset due to polychromaticity in the argument of the reflection curve R̂. In the present context, the angular offset Δθλ has to be written in terms of the deviation from the reference wavelength λref and thus reads
Δθλ,i = [(λ − λref)/λref] tan θB,i, (20)
where i=1,2 applies for the first and second reflection, respectively (the second reflection will be used in section 6). This is the only influence of polychromatic illumination and therefore, the generalized propagator in the case of partially coherent illumination reads
The formalism developed in this section constitutes a theoretical description of the imaging process in the case of partially coherent illumination of the Bragg Magnifier in the one-dimensional case.
4.3. Case of polychromatic illumination
As it was shown in section 2, the effect of divergence is negligible in a synchrotron setup with a long beamline. Therefore, the numerical analysis of the influence of partial coherence will be carried out for the case of a perfectly collimated but polychromatic beam. Obviously, Δα and q0 equal zero and the angular and spectral distribution of the beam Ib can be approximated by
Ib(Δα, λ) = δD(Δα) W(λ), (22)
where δD denotes the Dirac δ-function and W(λ) is given by the spectral distribution of the double crystal reflection curve of the monochromator. With the approximations mentioned, the generalized propagator reduces to
The output wave field for a given wavelength equals
and the observable intensity can be calculated by
I(x) = ∫ dλ W(λ) |Dλ(x)|². (25)
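The incoherent sum over wavelengths translates directly into a weighted average of monochromatic intensities. In the sketch below the monochromatic output fields are toy stand-ins (a Gaussian image whose position shifts with wavelength, mimicking a dispersive image shift with an assumed dispersion coefficient); the real per-wavelength fields would come from the generalized propagator:

```python
import numpy as np

N = 1024
x = np.linspace(-20e-6, 20e-6, N)           # detector coordinate [m]
lam0, zeta = 1.54e-10, 1.3e-4               # reference wavelength, spectral width

lams = lam0 * (1 + np.linspace(-2 * zeta, 2 * zeta, 41))
W = np.exp(-0.5 * ((lams / lam0 - 1) / (zeta / 2.355))**2)  # Gaussian W(lam)
W /= W.sum()

def D_lam(lam):
    # TOY monochromatic output field: image center shifts with wavelength
    x0 = (lam / lam0 - 1) * 2e-2            # assumed dispersion of the image position
    return np.exp(-(x - x0)**2 / (2 * (1e-6)**2))

I_poly = sum(w * np.abs(D_lam(l))**2 for w, l in zip(W, lams))
I_mono = np.abs(D_lam(lam0))**2
```

Even in this toy model the polychromatic profile is visibly broader than the monochromatic one, the qualitative behaviour analyzed in the next section.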
5. Numerical investigations
It is well known that the general influence of partially coherent illumination is to decrease contrast as well as spatial resolution in experimental images. In this section we will theoretically investigate the influence of polychromaticity on the observable intensity and the spatial resolution for different cases. This will be done by numerical calculation of Eq. (25). The number of parameters that can be varied (e.g. wavelength band, reflection, magnification factor, working point, propagation distance, etc.) is quite large, so we decided to use one set of parameters that represents a typical experimental situation for the first reflection throughout the section. If not otherwise specified the parameters are as follows:
photon energy: 8.048 keV
magnification b: 40-fold
working point ω1: left slope of RC
mean propagation distance z0: 5 mm
shape of W(λ): Gaussian
wavelength band ζ: variable
5.1. Polychromatic response function
We start with the simplest case. The response function of an imaging system can be defined as the observable intensity distribution caused by a single object point. Using the Dirac δ-function as a single object point (Din(x) = δD(x), or in Fourier space D̂in(q) = 1) in Eq. (23) and Eq. (25) yields the polychromatic response function of the Bragg Magnifier
where P̂λ(q(f)) is given by Eq. (23).
Fig. 5 shows the numerically calculated polychromatic response function for three different spectral widths ζ, where ζ=0 corresponds to the case of monochromatic illumination. As expected, with increasing spectral width ζ the interference fringes cancel out and the width of the maximum is broadened. Generally speaking, this will tend to decrease visible contrast and degrade spatial resolution in the images. However, we emphasize the fact that this is only a qualitative and not a quantitative argument since this interpretation of the response function is implicitly based on the assumption of two incoherent object points. But for every given wavelength λ the imaging process is, in fact, coherent and only the contributions of different wavelengths are incoherent to each other.
5.2. Influence of polychromatic illumination on visible contrast
We will use a Gaussian-shaped sample in order to quantify the influence of polychromatic illumination on the visible contrast. Let the thickness distribution h(x) of the sample be given by
h(x) = h1 exp[−x²/(2σ²)], (27)
where h1 = 5 µm is the maximum thickness and σ = 1 µm corresponds to the width of the sample feature (see also Fig. 6(a)). Thus, the input wave field equals
Din(x) = exp[iK(n − 1)h(x)], (28)
where n is the complex refractive index of the material at the given x-ray photon energy. The material was chosen to be amorphous carbon, modeling a biological sample with a maximum attenuation of 0.2% and a maximum phase shift of about π/2. Fourier transformation and linear interpolation according to Eq. (11) were used to retrieve the input wave field in Fourier space D̂in(q(f)).
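The input wave field of this test sample can be written down directly. The refractive-index values below are indicative numbers for amorphous carbon near 8 keV, chosen here to reproduce the quoted ~0.2% attenuation and ~π/2 phase shift; they are our assumption, not values from the article:

```python
import numpy as np

N = 4096
x = np.linspace(-20e-6, 20e-6, N)
h1, sigma = 5e-6, 1e-6
h = h1 * np.exp(-x**2 / (2 * sigma**2))   # Gaussian thickness profile h(x)

lam = 1.54e-10                            # wavelength [m] near 8 keV
K = 2 * np.pi / lam
delta, beta = 7e-6, 5e-9                  # n = 1 - delta + i*beta (assumed values)

# D_in(x) = exp[iK(n-1)h(x)]: phase shift from delta, attenuation from beta
D_in = np.exp(-1j * K * delta * h) * np.exp(-K * beta * h)

max_phase = K * delta * h1                       # close to pi/2
max_attenuation = 1 - np.exp(-2 * K * beta * h1) # about 0.2% in intensity
```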
Figure 6(a) shows the visible intensity distribution as a function of the relative spectral width ζ. Although contrast decreases with increasing ζ, the general shape of the intensity distribution is preserved. Regarding the post-detection analysis of experimental images, this implies that the influence of polychromatic illumination leads to no (additional) qualitative artifacts.
An example for a post-detection analysis is a previously developed iterative phase reconstruction algorithm. We have shown the feasibility of this post-detection analysis by a simulated reconstruction of a one-dimensional model sample under monochromatic illumination. The computation time was about 14 minutes. The correct inclusion of polychromatic illumination would require the additional integral of Eq. (25) to be taken into account. Typically, this would increase the computation time by two orders of magnitude, rendering the algorithm infeasible. Since the shape of the intensity distribution is approximately the same for monochromatic and polychromatic illumination, a post-detection analysis of experimental images assuming monochromatic illumination does not lead to qualitative artifacts but only affects the quantitative values. Thus, the additional integration may be skipped in a first order approximation.
To quantify the contrast degradation due to polychromatic illumination, the contrast ratio between the monochromatic and polychromatic case is shown in Fig. 6(b). Even though the curves depend on the chosen parameters of the sample, the following rule of thumb may be deduced:
The visible contrast decreases to 50%, when the spectral width of the incident beam becomes comparable to the spectral acceptance of the reflection curves.
In the typical experimental situation the beam is monochromatized by a double crystal monochromator with Si-111 reflections and a corresponding spectral width of ζ = 1.3×10⁻⁴. The dashed lines in Fig. 6(b) indicate that the visible contrast degrades to about 30% (mean of both reflections) due to polychromatic instead of monochromatic illumination.
Although this result seems to suggest that the spectral width of the incident beam has to be chosen as small as possible, there is the disadvantage of drastically increasing the exposure time. Thus, in the practical implementation a compromise between the desired contrast and a feasible exposure time has to be found.
5.3. Influence of polychromatic illumination on spatial resolution
Similarly to the monochromatic case, we will use the Sparrow criterion for the theoretical estimation of the achievable spatial resolution. The Sparrow criterion states that two object points are resolved if the intensity distribution between the two corresponding image points shows a minimum.
This criterion has the advantage of simple numerical implementation: the amplitude of two object points separated by the distance x0 is represented by the sum of two Dirac δ-functions: Din(x) = δD(x) + δD(x + x0). The corresponding intensity distribution, which is calculated by Eqs. (23)–(25), can easily be checked for a minimum between the two image points. The distance x0 between the two object points can then be varied until the dip reaches a given contrast relative to the smaller of the two maxima. We have chosen the contrast to be 5%, which is experimentally well observable.
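The procedure can be sketched with a Gaussian stand-in for the system response; the real calculation would use the intensity obtained from Eqs. (23)–(25) for the double-point input. The PSF width, the grids and the incoherent-addition toy model are our assumptions; only the 5% dip criterion is taken from the text:

```python
import numpy as np

def resolved(I, contrast=0.05):
    """True if I shows a dip of at least `contrast` of the smaller maximum."""
    peaks = [i for i in range(1, len(I) - 1) if I[i] >= I[i - 1] and I[i] > I[i + 1]]
    if len(peaks) < 2:
        return False
    dip = I[peaks[0]:peaks[-1] + 1].min()
    smaller = min(I[peaks[0]], I[peaks[-1]])
    return (smaller - dip) / smaller >= contrast

x = np.linspace(-5e-6, 5e-6, 4001)
psf_sigma = 0.4e-6                       # assumed width of the system response

def image(x0):
    # toy model: incoherent sum of two shifted PSF intensities
    g = lambda c: np.exp(-(x - c)**2 / (2 * psf_sigma**2))
    return g(-x0 / 2) + g(x0 / 2)

seps = np.linspace(0.2e-6, 2.0e-6, 181)
limit = next(s for s in seps if resolved(image(s)))  # smallest resolved separation
```

For a Gaussian response the 5% criterion yields a limit slightly above 2σ of the PSF; for the Bragg Magnifier the toy PSF is replaced by the generally asymmetric polychromatic response function.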
Figure 7 shows the application of the Sparrow criterion to both the monochromatic and the polychromatic case. Due to the occurrence of varying propagation distances it is obvious that the resolution will also vary within an image. Therefore, the resolution limit is plotted as a function of the sample-to-analyzer distance. As expected, the resolution decreases in the polychromatic case, but it remains well in the sub-micrometer regime. Furthermore, it is surprising that the resolution may very well improve with increasing spectral width ζ. Although this is reminiscent of a focusing effect, we cannot explain it completely. We point out, however, that extensive investigations have shown that it is not a result of numerical artifacts.
6. Comparison with experiment
In this section, we will compare the theoretical formalism developed in the present article with experimental results. The comparison will be done in the simplest case, without a sample, allowing an analytical calculation of the monochromatic intensities (Eq. (24)). Obviously, we first have to extend the formalism to account for the second reflection.
We start with a one-dimensional treatment and include the reflection at the second analyzer crystal of the Bragg Magnifier later on. With no sample the input wave field is unity in direct space and proportional to the Dirac δD-function in Fourier space:
Din(x) = 1, D̂in(q) ∝ δD(q). (29)
Using the right hand side in Eq. (23) leads to the observable intensity for a particular wavelength
Iλ(x) = |R̂1(ω1 + Δθλ,1)|². (30)
As expected from the absence of a sample, the intensity is uniform over the field of view (i.e. there is no dependence on x on the right hand side), and Fresnel diffraction does not contribute. Since the diffraction planes of the first and second analyzer crystals of the Bragg Magnifier are perpendicular with respect to each other, the second reflection can be included by a comparable treatment, yielding
Iλ = |R̂1(ω1 + Δθλ,1)|² |R̂2(ω2 + Δθλ,2)|², (31)
where the dependence on x was dropped and the quantities Δθλ,2 and ω2 for the second reflection are defined analogously to the first. According to Eq. (25) the result is then given by
I(ω1, ω2) = ∫ dλ W(λ) |R̂1(ω1 + Δθλ,1)|² |R̂2(ω2 + Δθλ,2)|², (32)
which describes the observable intensity as a function of the working points ω1 and ω2 of the first and second reflection. Equation (32) can now be used to compare the theoretical and experimental result in the case of an absent sample.
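A toy version of this double-rocking map is easy to simulate. Box-shaped stand-ins replace the dynamical-theory reflection curves and a Gaussian replaces the monochromator curve; the widths and tan θB values are assumed, so the sketch reproduces the structure of Eq. (32), not the measured map:

```python
import numpy as np

dlam = np.linspace(-4e-4, 4e-4, 4001)             # relative wavelength offsets
W = np.exp(-0.5 * (dlam / (1.3e-4 / 2.355))**2)   # monochromator band, FWHM 1.3e-4

tan1, tan2 = 0.97, 0.62                           # assumed tan(theta_B) of the reflections
R1 = lambda t: (np.abs(t) < 7.5e-6).astype(float) # |R1|^2 as a 15 urad wide box
R2 = lambda t: (np.abs(t) < 5.0e-6).astype(float) # |R2|^2 as a 10 urad wide box

om1 = np.linspace(-80e-6, 80e-6, 81)              # working points of reflection 1
om2 = np.linspace(-80e-6, 80e-6, 81)              # working points of reflection 2

# Eq. (32)-like sum: integrate W * |R1|^2 * |R2|^2 over the wavelength band
I = np.array([[np.sum(W * R1(dlam * tan1 + o1) * R2(dlam * tan2 + o2))
               for o2 in om2] for o1 in om1])
```

Each row or column of I is a rocking curve of one crystal at a fixed working point of the other; with realistic reflection curves the half widths of these curves acquire the dependence on the other working point discussed in the following.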
The measurement was carried out at the beamline ID19 at the ESRF. An undulator was chosen as x-ray source in order to obtain high flux. A photon energy of 8.048 keV was selected by a Si-111 double crystal monochromator, corresponding to 40-fold magnification at both analyzer crystals. A two-dimensional scan with ω1 and ω2 as experimental parameters was performed, and by calculating the mean intensity of each experimental image, the CCD-detector was used as a point detector. The experimental intensity map of the scan is shown in Fig. 8(a).
The theoretical part of the comparison was realized by numerical integration of Eq. (32). The wavelength spectrum W(λ) offered to the Bragg Magnifier was taken as the Si-111 double-crystal reflection curve of the monochromator. The corresponding full width at half maximum (FWHM) of 47.4 µrad implies a wavelength band of ζ = 1.3×10⁻⁴ at the given photon energy. The reflection curves were calculated according to standard dynamical theory. The theoretical intensity map is shown in Fig. 8(b). The excellent agreement between experiment and theory validates the theoretical formalism developed in this article.
A further analysis of the experimental intensity map (see Fig. 8(a)) concerning the observable half widths reveals a surprising influence of polychromaticity on imaging with the Bragg Magnifier. Each horizontal line of the intensity map represents a rocking curve with ω1 as the scan parameter and ω2 as a fixed parameter. The half width of each rocking curve can be determined, and the result is the half width as a function of ω2: FWHMω1(ω2). The same can be done with the vertical lines, resulting in FWHMω2(ω1). Both half widths are shown in Fig. 9.
We conclude that the half width corresponding to one rocking curve depends on the chosen working point of the other reflection. This constitutes an unexpected influence of polychromatic illumination but can be understood as follows.
We will use the rocking curves corresponding to the second analyzer crystal (i.e. ω2 is the scan parameter; bottom curve in Fig. 9) for our explanation. Assuming for the moment that all reflection curves can be approximated by Gaussian functions, the integral in Eq. (32) in fact describes a convolution of the function |R̂2|² with the function |W×R̂1|². With this assumption, the observable half width after convolution (FWHMout) is given by
FWHMout² = FWHMWR1² + FWHMR2², (33)
where FWHMWR1 is the width of the function W×R̂1 and FWHMR2 is the width of the second reflection curve. But as Fig. 10 shows, FWHMWR1 depends on the chosen working point of the first reflection. This means that one reflection limits the available wavelength band for the other.
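For Gaussian profiles the quadrature addition of half widths is a standard property of convolutions and is quick to verify numerically; the two widths below are arbitrary example values:

```python
import numpy as np

x = np.linspace(-200e-6, 200e-6, 8001)

def gauss(fwhm):
    s = fwhm / (2 * np.sqrt(2 * np.log(2)))   # FWHM -> standard deviation
    return np.exp(-x**2 / (2 * s**2))

f1, f2 = 30e-6, 20e-6                 # example widths of |W x R1|^2 and |R2|^2
conv = np.convolve(gauss(f1), gauss(f2), mode="same")

half = conv >= conv.max() / 2
fwhm_out = x[half][-1] - x[half][0]   # numerically measured width of the convolution
fwhm_pred = np.sqrt(f1**2 + f2**2)    # quadrature prediction
```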
The dispersive interaction described above has important consequences for the two-dimensional diffraction-enhanced imaging (2D-DEI) algorithm introduced previously. With the 2D-DEI algorithm it is possible to separate the contributions of absorption, horizontal refraction and vertical refraction to the transmission image. In the theoretical groundwork it was assumed that the two reflections are independent of each other. However, the discussion presented above clearly shows that this is not strictly true, and an extension of the 2D-DEI algorithm to account for the dispersive interaction of the analyzer crystals will be shown in a forthcoming paper.
In summary, we have extended the formalism describing image formation in the Bragg Magnifier to the case of partially coherent illumination. This leads to a theoretical description of the imaging process that allows arbitrary instrumental setups in analyzer-based X-ray imaging, with and without image magnification, to be simulated and analyzed numerically.
Polychromaticity was identified as the dominant effect of partially coherent illumination, while beam divergence can be neglected, at least to first order, for typical imaging conditions at a synchrotron X-ray source. The latter property leads to the conclusion that the Bragg Magnifier offers a unique combination of sub-micrometer spatial resolution and low requirements on the incident divergence.
For this typical case of negligible divergence, numerical analyses showed that contrast decreases by typically 50% compared to the case of perfectly monochromatic illumination. The general shape of the intensity distribution is preserved, however, so that a simple first-order correction of experimental data is possible. The achievable spatial resolution was shown to be less sensitive to polychromaticity, remaining well within the sub-micrometer regime. Very good agreement between theory and experiment was achieved for the setup with two analyzer crystals. The dispersive interaction between the two perpendicular analyzer crystals leads to varying half widths under polychromatic illumination. Understanding this effect is essential for the correct analysis of images obtained by Bragg magnification and will be discussed in forthcoming papers.
We would like to acknowledge Lenny Sapei and Oskar Paris for providing us with a sample of a horsetail plant, and Jürgen Härtwig for his help during the experiments and for fruitful discussions. We further thank Samuel McDonald for his help during the preparation of this article.
The theoretical treatment of the influence of partial coherence as laid out in section 4.2 requires knowledge of the angular and spectral distribution Ib(Δα,λ) of the x-ray beam at the position of the sample. In this appendix, we present a ray-tracing approach that connects Ib(Δα,λ) to the parameters of the monochromator and the source. The direction and intensity of the rays correspond to the propagation direction and intensity of the plane waves used in section 4.2. We also investigate an implicit assumption about Ib made in section 4.2: in general, the spectral and angular distribution of the x-ray beam at the sample depends on the location x in the object plane, i.e. Ib(Δα,λ)=Ib(Δα,λ,x), whereas section 4.2 assumed Ib to be independent of x. The aim of this appendix is to justify this assumption. Definitions of quantities not given here can be found in section 3. Please note that we denote Δα as ϕ in the following.
The back-tracing is performed by tracing rays from the position of the sample through a double crystal monochromator (where each crystal is denoted by m=1,2) to the position on the source. Each ray is characterized by three quantities (see also Fig. 12), expressed as deviations from the reference beam: x, the starting position in the object plane; Δλ/λ, the relative deviation from the reference wavelength; and ϕ, the angular deviation from the reference beam. Thus, the reference beam (i.e. x=0, Δλ/λ=0, ϕ=0) defines the optical axis of the system.
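As an illustration, this ray parameterization can be sketched as a small data structure (a hypothetical helper for numerical back-tracing, not part of the original formalism):

```python
from dataclasses import dataclass

@dataclass
class Ray:
    """A ray in the back-tracing scheme, expressed as deviations from the
    reference beam (illustrative container, not from the original paper)."""
    x: float      # starting position in the object plane
    dlam: float   # relative wavelength deviation, Δλ/λ
    phi: float    # angular deviation from the reference beam, ϕ

    def is_reference(self) -> bool:
        # The reference beam (x = 0, Δλ/λ = 0, ϕ = 0) defines the optical axis.
        return self.x == 0.0 and self.dlam == 0.0 and self.phi == 0.0

print(Ray(0.0, 0.0, 0.0).is_reference())  # True
```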
A.1. Dispersion at the monochromator crystal
Since a single object point was the reference point throughout the discussion of this article, we have to connect a fixed exit direction to the corresponding incident direction at the monochromator crystal. This can be done by using the boundary condition for the corresponding wave vectors, which states that the components of the incident and exit wave vectors along the crystal surface must be equal. Figure 11 illustrates this condition, and we point out to readers familiar with dynamical theory that Fig. 11 does not show the Ewald sphere inside and outside of the crystal, but two spheres outside of the crystal for two different wavelengths.
As in section 3, we once again have to carefully distinguish between the angular offset in the argument of the reflection curve, Δθm, and the angular deviation from the reference beam, ϕ on the exit side and ϕ′ on the incidence side. Consistent with the theoretical formalism developed in section 4, we use a formulation of dynamical theory that takes the angular deviation at the incidence side of the reflection, Δθm, as the input argument of the reflection curve R̂cm. From Fig. 11 it can be concluded that Δθm is given by
while the angular deviation from the reference beam is given by
First, we trace a beam from the object plane through one monochromator crystal to an intermediate plane as shown in Fig. 12. Typically, the distance between sample and crystal is much larger than the extent of the crystal surface. Thus it is possible to use the mean distance, and from Fig. 12 it can be concluded that the position of the ray after reflection is given by
where Eq. (35) applies for ϕ′. We have thus connected the ray characterized by (x,ϕ,Δλ/λ) before reflection to the ray characterized by (x′,ϕ′,Δλ/λ) after reflection. Naturally, this can be done a second time in order to connect the intermediate plane with the source plane using the same equations. The result for the position x″ on the source plane is
and for the angular deviation
where it was assumed that the Bragg angle of both reflections is equal (i.e. θ 1=θ 2=θ B).
During both reflections the intensity associated with each ray is reduced according to its angular position Δθm (Eq. 34) on the corresponding rocking curve: |R̂cm(Δθm)|2 (with m=1,2). Assuming that the source is completely incoherent, the angular and spectral intensity distribution at the position of the sample is finally given by
where Is(ϕ″,λ,x″) denotes the intensity distribution of the source as a function of the emission angle ϕ″, the wavelength λ and the position x″. With the help of the ray-tracing approach developed here, it is possible to calculate the beam characteristics Ib for nearly all practically relevant setups.
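To make the summation concrete, the following sketch evaluates the wavelength band transmitted by the two parallel monochromator crystals for the symmetric case b1=b2=1 (so ϕ=ϕ′=ϕ″), using Gaussian stand-ins for |R̂cm|2, a flat incoherent source spectrum, and the standard differential Bragg dispersion Δθ = ϕ − tanθB·Δλ/λ; the Bragg angle and Darwin width are illustrative assumptions, not the paper's values:

```python
import numpy as np

theta_b = np.deg2rad(10.0)              # assumed Bragg angle
darwin_fwhm = 20e-6                     # assumed Darwin width [rad]
sigma = darwin_fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def refl2(dtheta):
    """Stand-in for |R_cm(Δθ)|^2 (Gaussian instead of a Darwin curve)."""
    return np.exp(-dtheta**2 / (2.0 * sigma**2))

dlam = np.linspace(-3e-4, 3e-4, 2001)   # Δλ/λ grid
phi = 0.0                               # a single fixed exit direction

# A wavelength offset Δλ/λ shifts the rocking-curve argument by
# Δθ = ϕ − tanθ_B · Δλ/λ (differential Bragg law).
dtheta = phi - np.tan(theta_b) * dlam

# Two parallel crystals in the non-dispersive setting see the same Δθ;
# with a flat incoherent source, I_b ∝ |R_c1|^2 |R_c2|^2.
i_b = refl2(dtheta) * refl2(dtheta)

# Width of the selected wavelength band (half-maximum threshold):
# analytically Darwin width / (sqrt(2) tanθ_B) for Gaussian curves.
band = dlam[i_b >= 0.5][-1] - dlam[i_b >= 0.5][0]
print(f"{band:.1e}")   # ≈ 8.0e-05 for the assumed parameters
```

The sketch illustrates the mechanism behind the dispersive interaction: each crystal acts as a wavelength filter for a given exit direction, so the band available to a subsequent reflection is already restricted.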
The implications of the back-tracing results for the experimental situation at the beamline ID19, which is mainly dealt with in this article, will now be analyzed. The double crystal monochromator utilizes two symmetric Si-111 reflections (i.e. b1=b2=1), which implies ϕ=ϕ′=ϕ″. Furthermore, the spectral acceptance of the monochromator crystals is much smaller than the spectrum provided by the source, and as shown in section 2 the influence of divergence is negligible. Thus, the intensity distribution of the source can be written as Is(ϕ″,λ,x″)=Is(x″)=I0 δD(x″), with I0 a factor of proportionality. According to Eq. (36) and Eq. (37), with x″=0 and zg=z1+z2+z3+z4, the angular offset ϕ is now given by ϕ=x/zg. This directional change corresponds to a shift of the center of the wavelength band (Eq. 34) of
along the position x on the sample plane. The field of view of the Bragg Magnifier is typically 1 mm in width, which results in a shift of the center of the wavelength band of about 2×10-5. This is one order of magnitude smaller than the wavelength band itself. Therefore, this effect is not visible in the experiment, and the assumption that Ib is independent of x is justified.
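The order of magnitude of this shift can be checked with a short calculation; the beamline parameters below (photon energy, total distance zg, field of view) are assumptions chosen to mimic a long synchrotron beamline, not values taken from the paper:

```python
import numpy as np

d_si111 = 3.1356e-10           # Si-111 lattice-plane spacing [m]
lam = 0.71e-10                 # assumed wavelength [m] (~17.5 keV)
theta_b = np.arcsin(lam / (2.0 * d_si111))   # Bragg's law

zg = 145.0                     # assumed total source-sample distance [m]
x = 0.5e-3                     # half of a ~1 mm field of view [m]

phi = x / zg                   # directional change across the field of view
shift = phi / np.tan(theta_b)  # differential Bragg law: Δλ/λ = cotθ_B · Δθ

si111_band = 1.3e-4            # intrinsic Si-111 bandwidth Δλ/λ
print(shift < si111_band)      # True: the shift stays below the band itself
```

With these assumed numbers the shift comes out in the 10-5 range, well below the intrinsic Si-111 bandwidth, consistent with the conclusion that the x-dependence of Ib is negligible.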
References and links
1. E. Förster, K. Goetz, and P. Zaumseil, “Double crystal diffractometry for the characterization of targets for laser fusion experiments,” Krist. Tech. 15, 937–945 (1980).
2. M. Kuriyama, R. C. Dobbyn, R. D. Spal, H. E. Burdette, and D.R. Black, “Hard x-ray microscope with submicrometer spatial resolution,” J. Res. Natl. Inst. Stand. Technol. 95, 559–574 (1990).
3. T. J. Davis, D. Gao, T. E. Gureyev, A. W. Stevenson, and S.W. Wilkins, “Phase-contrast imaging of weakly absorbing materials using hard X-rays,” Nature 373, 595–598 (1995).
4. V. N. Ingal and E. A. Beliaevskaya, “Imaging of biological objects in the plane-wave diffraction scheme,” Nuovo Cimento 19, 553–560 (1997).
5. K. Kobayashi, K. Izumi, H. Kimura, S. Kimura, T. Ibuki, Y. Yokoyama, Y. Tsusaka, Y. Kagoshima, and J. Matsui, “X-ray phase-contrast imaging with submicron resolution by using extremely asymmetric Bragg diffractions,” Appl. Phys. Lett. 78, 132–134 (2001).
6. R. Köhler and P. Schäfer, “Asymmetric Bragg reflection as magnifying optics,” Cryst. Res. Technol. 37, 734–746 (2002).
7. D. Korytár, P. Mikulík, C. Ferrari, J. Hrdý, T. Baumbach, A. Freund, and A. Kubena, “Two-dimensional x-ray magnification based on a monolithic beam conditioner,” J. Phys. D: Appl. Phys. 36, A65–A68 (2003).
8. M. Stampanoni, G. Borchert, and R. Abela, “Towards nanotomography with asymmetrically cut crystals,” Nucl. Instrum. Meth. A 551, 119–124 (2005).
11. J. Keyriläinen, M. Fernandez, and P. Suortti, “Refraction contrast in x-ray imaging,” Nucl. Instrum. Meth. A 488, 419–427 (2002).
12. Ya. I. Nesterets, T. E. Gureyev, D. Paganin, K. M. Pavlov, and S. W. Wilkins, “Quantitative diffraction-enhanced x-ray imaging of weak objects,” J. Phys. D: Appl. Phys. 37, 1262–1274 (2004).
13. P. Modregger, D. Lübbert, P. Schäfer, and R. Köhler, “Magnified x-ray phase imaging using asymmetric Bragg reflection: Experiment and theory,” Phys. Rev. B 74, 054107 (2006).
14. J.P. Guigay, E. Pagot, and P. Cloetens, “Fourier optics approach to X-ray analyser-based imaging,” Opt. Commun. 270, 180–188 (2007).
15. A. Bravin, V. Mocella, P. Coan, A. Astolfo, and C. Ferrero, “A numerical wave-optical approach for the simulation of analyzer-based x-ray imaging,” Opt. Express 15, 5641–5648 (2007).
16. Ya. I. Nesterets, T. E. Gureyev, and S. W. Wilkins, “Polychromaticity in the combined propagation-based/analyser-based phase-contrast imaging,” J. Phys. D: Appl. Phys. 38, 4259–4271 (2005).
17. A. Authier, Dynamical Theory of X-Ray Diffraction, Vol. 11 of IUCr Monographs on Crystallography, 2nd ed. (Oxford University Press, Oxford, 2001).
18. P. Modregger, D. Lübbert, P. Schäfer, and R. Köhler, “Spatial resolution in Bragg-magnified X-ray images as determined by Fourier analysis,” Phys. Status Solidi (a) 204, 2746–2752 (2007).
19. A. Rack, H. Riesemeier, S. Zabler, T. Weitkamp, B. Müller, G. Weidemann, P. Modregger, J. Banhart, L. Helfen, A. Danilewsky, H. Gräber, R. Heldele, B. Mayzel, J. Goebbels, and T. Baumbach, “The high resolution synchrotron-based imaging stations at the BAMline (BESSY) and TopoTomo (ANKA),” Proc. SPIE 7078, 70780X (2008).
20. P. Coan, E. Pagot, S. Fiedler, P. Cloetens, J. Baruchel, and A. Bravin, “Phase-contrast X-ray imaging combining free space propagation and Bragg diffraction,” J. Synch. Rad. 12, 241–245 (2005).
21. P. Cloetens, R. Barrett, J. Baruchel, J. P. Guigay, and M. Schlenker, “Phase objects in synchrotron radiation hard X-ray imaging,” J. Phys. D: Appl. Phys. 29, 133–146 (1996).
22. T. Weitkamp, A. Diaz, C. David, F. Pfeiffer, M. Stampanoni, P. Cloetens, and E. Ziegler, “X-ray phase imaging with a grating interferometer,” Opt. Express 13, 6296–6304 (2005).
23. M. Kuriyama, W. J. Boettinger, and G. G. Cohen, “Synchrotron radiation topography,” Annu. Rev. Mater. Sci. 12, 23–50 (1982).
24. J. Als-Nielsen and D. McMorrow, Elements of Modern X-ray Physics (Wiley & Sons, 2001).
25. P. Modregger, D. Lübbert, P. Schäfer, R. Köhler, T. Weitkamp, M. Hanke, and T. Baumbach, “Fresnel diffraction in the case of an inclined image plane,” Opt. Express 16, 5141–5149 (2008).
26. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, San Francisco, 1968), pp. 106–110.
27. P. Modregger, D. Lübbert, P. Schäfer, and R. Köhler, “Two dimensional diffraction enhanced imaging algorithm,” Appl. Phys. Lett. 90, 193501 (2007).
28. E. Wilson, Fourier Series and Optical Transform Techniques in Contemporary Optics (Wiley & Sons, 1995).
29. C. M. Sparrow, “On spectroscopic resolving power,” Astrophys. J. 44, 76–86 (1916).
30. B. Batterman and H. Cole, “Dynamical diffraction of x rays by perfect crystals,” Rev. Mod. Phys. 36, 681–716 (1964).