Investigation of discrete imaging models and iterative image reconstruction in differential X-ray phase-contrast tomography


Abstract

Differential X-ray phase-contrast tomography (DPCT) refers to a class of promising methods for reconstructing the X-ray refractive index distribution of materials that present weak X-ray absorption contrast. The tomographic projection data in DPCT, from which an estimate of the refractive index distribution is reconstructed, correspond to one-dimensional (1D) derivatives of the two-dimensional (2D) Radon transform of the refractive index distribution. There is an important need for the development of iterative image reconstruction methods for DPCT that can yield useful images from few-view projection data, thereby mitigating the long data-acquisition times and large radiation doses associated with use of analytic reconstruction methods. In this work, we analyze the numerical and statistical properties of two classes of discrete imaging models that form the basis for iterative image reconstruction in DPCT. We also investigate the use of one of the models with a modern image reconstruction algorithm for performing few-view image reconstruction of a tissue specimen.

© 2012 Optical Society of America

1. Introduction

Differential phase-contrast tomography (DPCT) employing hard X-rays [1–5] refers to a class of imaging methods for reconstructing the X-ray refractive index distribution of objects from knowledge of differential projection data. At hard X-ray energies, variations in the real component of the refractive index distribution of a light- or medium-density material are generally several orders of magnitude larger than the variations in the imaginary component (i.e., the X-ray absorption). Consequently, DPCT may permit the visualization and quantitation of objects that present very low or no X-ray absorption contrast. In recent years, there have also been advancements [6, 7] in implementing the method on the bench top by use of tube-based X-ray sources. This is particularly important if DPCT is to find widespread use in biomedical and nondestructive imaging applications.

The tomographic projection data in DPCT, from which an estimate of the refractive index distribution is reconstructed, correspond to one-dimensional (1D) derivatives with respect to the detector row coordinate of the two-dimensional (2D) Radon transform of the refractive index distribution. These data can be interpreted as the angles by which the probing X-ray beams are deflected, due to refraction by the object, within a plane perpendicular to the axis of tomographic scanning. Several methods are available for implementing DPCT by use of synchrotron- or tube-based X-ray sources. Such methods include those based on diffractive optics [8, 9] or interferometry [10]. When DPCT is implemented with optical wavefields, which has been referred to as beam-deflection tomography [11], techniques such as moiré deflectometry [12] have been employed for measuring the beam-deflection data.

It has been demonstrated that image reconstruction in DPCT can be achieved by use of modified filtered backprojection (FBP) algorithms [11, 13, 14]. An important observation by Faris and Byer [11] was that the 1D differentiation of the projection data is prescribed by the classic FBP algorithm. Accordingly, instead of integrating the differential projection data explicitly and then applying the classic FBP algorithm for reconstruction, they proposed a deflection filtered backprojection (DFBP) algorithm that acts directly on the differential projection data. In order to avoid image artifacts when employing this algorithm and other analytic reconstruction algorithms, tomographic measurements must typically be acquired at a large number of view angles. This is highly undesirable because it can result in long data-acquisition times, especially in bench top applications where the X-ray tube power is limited, and may also damage the sample due to the large radiation exposure. Iterative image reconstruction algorithms have been widely employed in mature tomographic imaging modalities for mitigating data-incompleteness and noise. However, there is a scarcity of studies of iterative image reconstruction in DPCT [6, 15] and there remains an important need to develop robust iterative reconstruction methods for this modality.

In this work, we analyze the numerical and statistical properties of two classes of discrete imaging models that form the basis for iterative image reconstruction in DPCT. The models differ in the choice of expansion functions that are employed to discretize the infinite-dimensional refractive index distribution that one seeks to estimate. One model employs conventional pixel expansion functions while the other employs Kaiser-Bessel window functions. The latter choice is shown to have the attractive feature that the 1D derivative operator in the DPCT imaging model can be computed analytically, thereby circumventing the need to numerically approximate it. This feature has also recently been identified by Köhler et al. [15]. A modern iterative reconstruction algorithm that seeks to minimize the total variation (TV) norm of the refractive index estimate is employed with a discrete imaging model for few-view image reconstruction. The effectiveness of the reconstruction method is demonstrated by use of experimental DPCT projection data corresponding to a biological tissue specimen.

2. Background: Imaging model for differential X-ray phase-contrast tomography

We will utilize the parallel-beam tomographic scanning geometry depicted in Fig. 1. However, the results that follow can readily be adapted to the case of spherical wave illumination in the paraxial limit [7]. The z-axis of the reference coordinate system (x,y,z) defines the axis of rotation of the tomographic scanning. The rotated coordinate system (xr, yr, z) is related to the reference system by xr = x cosθ + y sinθ, yr = y cosθ − x sinθ, where θ ∈ [0,π) is the tomographic view angle measured from the positive x-axis. A phase-amplitude object positioned at the origin is irradiated by an X-ray plane-wave with wavelength λ, or equivalently wavenumber k = 2π/λ, which propagates in the direction of the positive yr-axis.

Fig. 1 A schematic of differential phase-contrast imaging tomography. The black box represents the system of optical elements that is specific to the implementation.

2.1. Data function and imaging model in continuous form

Let δ(x,y,z) ≡ 1 − n(x,y,z) denote the compactly supported and bounded object function we seek to reconstruct, where n(x,y,z) is the real-valued refractive index distribution. We will employ the notation δ(r2; z) ≡ δ(x,y,z), where r2 = (x, y), as a convenient description of a transverse slice of the 3D object function.

In DPCT employing a grating interferometer [1, 8, 10, 16, 17] or X-ray crystal optics [9, 18–24], the wavefield transmitted through the object is perturbed by one or more optical elements. The intensity of the perturbed wavefield at view angle θ is measured in the (xr, z) plane located at yr = d and will be denoted by I(xr, z,θ;K). Here K represents an integer-valued index that specifies the state of the imaging system. For example, in crystal analyzer-based systems, distinct values of K would correspond to different orientations of the analyzer crystal. Alternatively, in grating interferometry when a phase-stepping procedure [1, 8] is employed, distinct values of K correspond to different translational positions of the grating that is being scanned.

From knowledge of the set of intensity measurements {I(xr, z, θ; K)}, K = 1, …, NK, with NK ≥ 1, methods are available [1, 25, 26] for computing a data function g(xr, z, θ) that, for a fixed value of z, is related to the sought-after object function δ(r2; z) as

$$g(x_r,\theta;z)=\frac{\partial}{\partial x_r}\int dy_r\,\delta(\mathbf{r}_2;z)\equiv\frac{\partial}{\partial x_r}\mathcal{R}\,\delta(\mathbf{r}_2;z).\tag{1}$$
Here, R denotes the 2D Radon transform operator. Equation (1) represents an idealized imaging model for DPCT in its continuous form that assumes a geometrical optics approximation. A discussion of the validity of this approximation is provided in Chapter 2 of reference [27]. Note that the right hand side of Eq. (1) corresponds to a stack, along the z-axis, of differentiated 2D Radon transforms of δ(r2; z). The coordinate z can be interpreted as a parameter that specifies a transverse slice and therefore the 3D imaging model can be described by a collection of 2D ones.

The image reconstruction task in DPCT is to determine an estimate of δ(r2; z) from knowledge of g(xr,θ;z). When g(xr,θ;z) is measured at a large number of view angles θ, this can be accomplished by use of analytic image reconstruction algorithms [11,28]. However, in the case of noisy and/or few-view measurement data, analytic reconstruction methods are known to be suboptimal and the use of iterative methods is warranted. The construction and investigation of discrete imaging models that form the basis for iterative image reconstruction in DPCT is described in the remainder of the article.

2.2. General forms of discrete imaging models

A natural way to obtain a discrete imaging model is to discretize the continuous model in Eq. (1). When a digital detector is employed, the measured intensity data and associated data function correspond to an ordered collection of numbers rather than a function of a continuous variable. We will denote the discrete data function as

$$g[s,t;h]\equiv g(x_r,\theta;z)\Big|_{x_r=s\Delta_d,\;\theta=t\Delta_\theta,\;z=h\Delta_d},\tag{2}$$
where s and h are integer-valued detector element indices and t is the tomographic view index. Here, Δd = L/Q denotes the detector element dimension in a square detector array of dimension L × L, and Q denotes the number of samples measured in each dimension. The quantity Δθ denotes the angular sampling interval between the uniformly distributed view angles. The reconstruction algorithms described below can also be applied to non-uniformly sampled measurement data; their general forms would remain unchanged, although the explicit forms of the system matrices would change. Although not indicated in Eq. (2), the measured discrete data will also be degraded by the averaging effects of the sampling aperture. Additionally, the effects of finite temporal and spatial beam coherence will effectively blur the data function g[s,t;h]. These effects can limit the attainable spatial resolution in the reconstructed DPCT images. Because the reconstruction problem is inherently 2D, we will consider the problem of reconstructing a transverse slice of the object function located at z = hΔd. Let the vector g ∈ ℝM denote a lexicographically ordered representation of g[s,t;h]. The dimension M is defined by the product of the number of detector row elements and the number of view angles.

Many iterative image reconstruction algorithms require a finite-dimensional approximate representation of the object function. A linear N-dimensional approximation of δ(r2; z = hΔd) can be formed as

$$\delta_a(\mathbf{r}_2;z=h\Delta_d)=\sum_{n=0}^{N-1}b_n^h\,\phi_n(\mathbf{r}_2),\tag{3}$$
where the subscript a indicates that δa(r2; z) is an approximation of δ(r2; z), {ϕn(r2)} are a set of expansion functions, and { bnh} are the corresponding expansion coefficients that depend on the slice index h. Let the 2D function δa(r2; z = hΔd) be contained within a disk of radius r0. The discrete data function satisfies
$$g[s,t;h]\approx\frac{\partial}{\partial x_r}\mathcal{R}\,\delta_a(\mathbf{r}_2;z=h\Delta_d)\Big|_{x_r=s\Delta_d,\;\theta=t\Delta_\theta},\tag{4}$$
assuming that Rδa(r2; z = hΔd) is differentiable ∀xr ∈ (−r0, r0). For certain choices of the expansion functions, such as the pixels described below, this differentiability requirement will not be met. Moreover, when computing Eq. (4), as required by iterative image reconstruction algorithms, the operator ∂/∂xr will generally be replaced by a numerical approximation. For use in these cases, a modified version of Eq. (4) is given by
$$g[s,t;h]\approx\frac{\partial}{\partial x_r}\mathcal{S}\mathcal{R}\,\delta_a(\mathbf{r}_2;z=h\Delta_d)\Big|_{x_r=s\Delta_d,\;\theta=t\Delta_\theta},\tag{5}$$
where S is a smoothing operator that acts with respect to the xr coordinate and ensures that SRδa(r2; z = hΔd) is differentiable. The composite operator ∂/∂xr S can be interpreted as a regularized derivative operator.

In the special case where Rϕn(r2) is differentiable ∀xr ∈ (−r0, r0), as satisfied by the Kaiser-Bessel expansion functions investigated below, Eq. (4) can be expressed as

$$g[s,t;h]\approx\sum_{n=0}^{N-1}b_n^h\,\frac{\partial}{\partial x_r}\big(\mathcal{R}\,\phi_n(\mathbf{r}_2)\big)(x_r,\theta)\Big|_{x_r=s\Delta_d,\;\theta=t\Delta_\theta}.\tag{6}$$
In matrix form, each of Eqs. (4)–(6) can be expressed as
$$\mathbf{g}=\mathbf{H}\mathbf{b},\tag{7}$$
where g is a lexicographically ordered representation of the sampled data function, H is an M × N system matrix, and b is an N × 1 vector of expansion coefficients whose n-th element is given by bnh.

Equation (5) or (6) describes a discrete imaging model for DPCT that can be employed with iterative image reconstruction algorithms for estimation of b from knowledge of g and H. From the estimated b, the object function estimate, i.e., the sought-after image, can be obtained by use of Eq. (3). In the special case in which the expansion functions are classical pixels, the estimates of b and δa(r2; z = hΔd) coincide. Explicit forms for the system matrix H are found by specifying the expansion functions ϕn(r2) and the implementation of the operator ∂/∂xr R or ∂/∂xr SR.

Below, we investigate the use of two different choices of expansion functions: the pixel basis function and Kaiser-Bessel window functions. Because the 3D reconstruction problem corresponds to a stack of 2D ones, we will focus on the reconstruction of a transverse slice of constant z = hΔd and the discrete index h will be suppressed hereafter. For use with the pixel basis functions, three different discrete implementations of the operator ∂/∂xr SR are considered and system matrices are established according to Eq. (5). For the case of the Kaiser-Bessel window expansion functions, the quantity ∂/∂xr Rϕn(r2) can be computed analytically and system matrices are established according to Eq. (6).

3. Construction of system matrices for iterative image reconstruction in DPCT

3.1. System matrix construction employing pixel basis functions

The classic pixel is a commonly employed expansion function and is defined as

$$\phi_n^{\rm pixel}(\mathbf{r}_2)=\mathrm{rect}\!\left(\frac{x-x_n}{\varepsilon}\right)\mathrm{rect}\!\left(\frac{y-y_n}{\varepsilon}\right),$$
where rect(x) = 1 for |x| ≤ 1/2 and zero elsewhere, (xn, yn) specifies the coordinate of the n-th lattice point on a uniform Cartesian lattice, and ɛ is the spacing between those lattice points. A description of the system matrix construction for use with pixel expansion functions is provided below. According to Eq. (5), this will require specifying methods for: (1) numerically approximating Rδa(r2; z) and (2) computing a regularized discrete derivative operator ∂/∂xr S.

Numerous standard numerical methods are available to compute approximations of Rδa(r2; z) [29–31]. Most of these numerical methods compute the projection data as

$$p[s,t]\equiv\big(\mathcal{R}\,\delta_a(\mathbf{r}_2)\big)[s,t]=\big(\mathcal{R}\,\delta_a(\mathbf{r}_2)\big)(x_r,\theta)\Big|_{x_r=s\Delta_d,\;\theta=t\Delta_\theta}\approx\sum_{j=0}^{N-1}w_{stj}\,b_j,\tag{8}$$
where wstj is the weighting factor that corresponds to the contribution of the j-th expansion function to the projection data recorded at detector location [s,t], and bj is the j-th component of b. By defining p ∈ ℝM to be a lexicographically ordered representation of p[s,t], Eq. (8) can be expressed in matrix form as
$$\mathbf{p}=\mathbf{H}_R\,\mathbf{b},\tag{9}$$
where
$$[\mathbf{H}_R]_{m=t\times S+s,\;n}=w_{stn},\tag{10}$$
in which S is the total number of discrete projection data for each view and the notation [HR]m,n denotes the element of HR corresponding to the m-th row and n-th column. In our numerical studies, we adopted a 'ray-driven' method to establish HR [30].
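For illustration, the sketch below assembles a sparse Radon-transform matrix HR in Python by marching along each ray and accumulating bilinear-interpolation weights. It is a simplified stand-in for the ray-driven scheme of Siddon [30] that was actually employed; the function name, the half-pixel marching step, and the centered geometry are illustrative assumptions rather than details taken from the original implementation.

```python
import numpy as np
from scipy.sparse import coo_matrix

def build_radon_matrix(n_pix, n_det, n_views, pix_size, det_pitch):
    """Sparse Radon-transform matrix H_R assembled by marching along each ray
    and accumulating bilinear-interpolation weights (a simplified stand-in for
    the ray-driven scheme used in the paper)."""
    rows, cols, vals = [], [], []
    thetas = np.arange(n_views) * np.pi / n_views
    half = 0.5 * (n_pix - 1)
    step = 0.5 * pix_size                          # marching step along each ray
    yr = np.arange(-half, half, 0.5) * pix_size    # sample positions along the ray
    for t, th in enumerate(thetas):
        c, s = np.cos(th), np.sin(th)
        for sd in range(n_det):
            xr = (sd - 0.5 * (n_det - 1)) * det_pitch
            x, y = xr * c - yr * s, xr * s + yr * c          # lab-frame coordinates
            ix, iy = x / pix_size + half, y / pix_size + half
            ok = (ix >= 0) & (ix < n_pix - 1) & (iy >= 0) & (iy < n_pix - 1)
            i0 = np.floor(ix[ok]).astype(int)
            j0 = np.floor(iy[ok]).astype(int)
            fx, fy = ix[ok] - i0, iy[ok] - j0
            row = t * n_det + sd
            for di, dj, w in ((0, 0, (1 - fx) * (1 - fy)), (1, 0, fx * (1 - fy)),
                              (0, 1, (1 - fx) * fy), (1, 1, fx * fy)):
                rows.append(np.full(i0.size, row))
                cols.append((j0 + dj) * n_pix + (i0 + di))
                vals.append(w * step)              # interpolation weight x path element
    rows, cols, vals = map(np.concatenate, (rows, cols, vals))
    # duplicate (row, col) pairs are summed when converting COO -> CSR
    return coo_matrix((vals, (rows, cols)),
                      shape=(n_views * n_det, n_pix * n_pix)).tocsr()

# Example call matching the geometry of the SVD study in Sec. 4.1 (units: mm):
# H_R = build_radon_matrix(n_pix=128, n_det=256, n_views=180,
#                          pix_size=50e-3, det_pitch=25e-3)
```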

We adopted a meshfree method known as smoothed particle hydrodynamics (SPH) [32, 33] for implementing ∂/∂xr S. Let p′ ∈ ℝM denote a 1D discrete derivative of p that approximates samples of ∂/∂xr S Rδa(r2; z = hΔd). The SPH method determines p′ as

$$p'_k=\frac{1}{\rho_k}\sum_{i=k-K/2}^{i=k+K/2}(p_i-p_k)\,\frac{\partial W(d_i-d_k,h)}{\partial x_r},\tag{11}$$
where p′k is the k-th element of p′, K is the total number of neighbouring particles, pi and pk are the i-th and k-th elements of p respectively, and W(xr, h) is a kernel function with a smoothing length h. In our studies we employed three different kernel functions of the form: linear, quadratic spline, and cubic spline [32, 33]. Explicit forms of the kernels are provided in the appendix. The density factor ρk is defined as
$$\rho_k=\sum_{i=k-K/2}^{i=k+K/2}W(d_i-d_k,h).\tag{12}$$
In matrix form, Eq. (11) is expressed as
$$\mathbf{p}'=\mathbf{H}_D\,\mathbf{p},\tag{13}$$
where explicit forms of HD are provided in the appendix that correspond to different choices of W(xr, h).
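As a concrete illustration of Eqs. (11) and (12), the following Python sketch evaluates the SPH derivative of a sampled projection for the linear kernel W1 with nd = 1/h. The sign convention adopted for ∂W/∂xr follows the standard SPH gradient estimate, and the default smoothing length h = 2Δd mirrors the study in Sec. 4.1; neither detail is spelled out explicitly in the text above, so both should be regarded as assumptions.

```python
import numpy as np

def sph_derivative(p, delta, h=None):
    """Regularized 1D derivative of sampled projection data p following the
    SPH rule of Eqs. (11)-(12), sketched for the linear kernel W_1 with
    n_d = 1/h.  `delta` is the detector sampling interval; h defaults to
    2*delta."""
    h = 2.0 * delta if h is None else h
    n = p.size
    d = np.arange(n) * delta                 # sample ("particle") positions
    dp = np.zeros(n)
    for k in range(n):
        lo, hi = max(0, k - 2), min(n, k + 3)   # neighbours with |d_i - d_k| < h
        rho, acc = 0.0, 0.0
        for i in range(lo, hi):
            r = d[i] - d[k]
            s = abs(r) / h
            if s < 1.0:
                rho += (1.0 - s) / h                      # W_1(d_i - d_k, h), Eq. (12)
                acc += (p[i] - p[k]) * np.sign(r) / h**2  # (p_i - p_k) * dW/dx_r
        dp[k] = acc / rho                                 # Eq. (11)
    return dp

# Example: differentiate a smooth test profile
xr = np.linspace(-1.0, 1.0, 257)
p = np.exp(-xr**2 / 0.1)
p_prime = sph_derivative(p, delta=xr[1] - xr[0])
```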

By use of Eqs. (9) and (13), the discrete imaging models for the case of pixel expansion functions are obtained as

$$\mathbf{g}\approx\mathbf{p}'=\mathbf{H}_{\rm pixel}\,\mathbf{b},\tag{14}$$
where
$$\mathbf{H}_{\rm pixel}\equiv\mathbf{H}_D\,\mathbf{H}_R.\tag{15}$$
The system matrix Hpixel is generally sparse, since only a few expansion functions contribute to one specific projection value pi.

3.2. System matrix construction employing generalized Kaiser-Bessel window functions

For Kaiser-Bessel window expansion functions, referred to hereafter as “blobs” [34, 35], ∂/∂xr Rϕn(r2) is continuous and can be computed analytically. In this case, Eq. (6) can be employed to establish the system matrix in which the derivative and Radon transform operators can be computed accurately.

The blob expansion functions are defined as

$$\phi_n^{\rm blob}(\mathbf{r}_2;m,a,\alpha)=\begin{cases}\dfrac{\left[\sqrt{1-(r_b/a)^2}\right]^{m}I_m\!\left[\alpha\sqrt{1-(r_b/a)^2}\right]}{I_m(\alpha)}, & r_b\le a,\\[1.5ex] 0, & \text{otherwise},\end{cases}\tag{16}$$
where Im(·) is the m-th order modified Bessel function, rb ≡ |r2rn| with rn = (xn, yn) denoting the blob center, and a and α determine the blob’s radius and specific shape.

Let ξ ≡ xr − xn cosθ − yn sinθ. As demonstrated by Lewitt [34], the 2D Radon transform of one window function is given by

$$\mathcal{R}\,\phi_n^{\rm blob}(\mathbf{r}_2;m,a,\alpha)=\frac{a}{I_m(\alpha)}\left(\frac{2\pi}{\alpha}\right)^{1/2}\left[\sqrt{1-(\xi/a)^2}\right]^{m+1/2}I_{m+1/2}\!\left(\alpha\sqrt{1-(\xi/a)^2}\right),\tag{17}$$
for |ξ| ≤ a and zero otherwise. As derived in Appendix B, the 1D derivative of this quantity is given by
$$\frac{\partial\big(\mathcal{R}\,\phi_n^{\rm blob}(\mathbf{r}_2;m,a,\alpha)\big)}{\partial x_r}=-\frac{(2\pi\alpha)^{1/2}}{I_m(\alpha)}\,\frac{\xi}{a}\left[\sqrt{1-(\xi/a)^2}\right]^{m-1/2}I_{m-1/2}\!\left(\alpha\sqrt{1-(\xi/a)^2}\right).\tag{18}$$

By use of Eqs. (18) and (6) the discrete imaging model is given by

$$g[s,t]\approx-\frac{(2\pi\alpha)^{1/2}}{I_m(\alpha)}\times\sum_{n=0}^{N-1}b_n\,\frac{\xi}{a}\left[\sqrt{1-(\xi/a)^2}\right]^{m-1/2}I_{m-1/2}\!\left(\alpha\sqrt{1-(\xi/a)^2}\right)\Bigg|_{\xi=s\Delta_d-x_n\cos(t\Delta_\theta)-y_n\sin(t\Delta_\theta)},\tag{19}$$
or, in matrix form,
$$\mathbf{g}\approx\mathbf{H}_{\rm blob}\,\mathbf{b},\tag{20}$$
where
$$[\mathbf{H}_{\rm blob}]_{m=t\times S+s,\;n}=-\frac{(2\pi\alpha)^{1/2}}{I_m(\alpha)}\times\frac{\xi}{a}\left[\sqrt{1-(\xi/a)^2}\right]^{m-1/2}I_{m-1/2}\!\left(\alpha\sqrt{1-(\xi/a)^2}\right)\Bigg|_{\xi=s\Delta_d-x_n\cos(t\Delta_\theta)-y_n\sin(t\Delta_\theta)},\tag{21}$$
and S is the total number of discrete projection data for each view. Similar to the pixel case, the system matrix Hblob is sparse because only relatively few blobs contribute to each component of g.

Note that the k-th order spatial derivative of ϕnblob(r2;m,a,α) is continuous when m > k [34]. In the studies below, m = 2 was chosen. This ensured that the first-order derivatives of the blobs were continuous.
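The analytic expressions of Eqs. (18) and (21) translate directly into code. The sketch below evaluates the differential blob projection with scipy's iv (the modified Bessel function of the first kind of real order) and assembles one row of Hblob; the function names, the vectorization over blob centres, and the default parameter values are illustrative assumptions (the original study was implemented in Matlab).

```python
import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind

def blob_diff_projection(xi, a, m=2, alpha=10.4):
    """Analytic 1D derivative of the 2D Radon transform of a single blob,
    Eq. (18), evaluated at signed distances xi from the blob centre projected
    onto the detector.  Returns 0 for |xi| >= a."""
    xi = np.asarray(xi, dtype=float)
    out = np.zeros_like(xi)
    inside = np.abs(xi) < a
    u = np.sqrt(1.0 - (xi[inside] / a) ** 2)
    out[inside] = (-np.sqrt(2.0 * np.pi * alpha) / iv(m, alpha)
                   * (xi[inside] / a) * u ** (m - 0.5) * iv(m - 0.5, alpha * u))
    return out

def blob_system_matrix_row(s, t, centres, delta_d, delta_theta, a, m=2, alpha=10.4):
    """One row of H_blob (Eq. (21)) for detector bin s at view t, given an
    (N, 2) array of blob centres (x_n, y_n)."""
    theta = t * delta_theta
    xi = s * delta_d - centres[:, 0] * np.cos(theta) - centres[:, 1] * np.sin(theta)
    return blob_diff_projection(xi, a, m, alpha)
```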

4. Comparison of numerical and statistical properties of system matrices

4.1. SVD analysis of the system matrices

In order to investigate how the different expansion functions influence the numerical properties of the imaging models described in Sections 3.1 and 3.2, the singular value decomposition (SVD) was employed. Specifically, the rates of decay of the singular values associated with the different system matrices were examined to gain insights into the stability of the associated reconstruction problems. It is well-known that the stability of a reconstruction problem is adversely affected by a rapid decay of singular values [36]. For the pixel basis function, three system matrices Hpixel were constructed as described in Sec. 3.1 for the cases where linear, quadratic spline, and cubic spline kernel functions W(xr, h) were employed [32, 33, 37]. Explicit forms of the three kernels are provided in the appendix. The scanning configuration assumed 180 equally spaced tomographic views and 256 samples along the detector array. The detector pixel pitch was 25 μm. The smoothing length h was chosen to be two, three, and four times the detector pixel pitch for the linear, quadratic spline, and cubic spline kernels, respectively. The object was assumed to be contained within an area of dimension 6.4 mm × 6.4 mm. For the pixel-based studies, a 128 × 128 array of 50 μm square pixels was employed to discretize the object. Accordingly, the system matrices Hpixel were of dimension 46080 (256 × 180) by 16384 (128 × 128). For the case of blob expansion functions, the same scanning configuration was considered. Six system matrices Hblob were constructed as described in Sec. 3.2 for the cases where the blob parameters were chosen as m = 2, radius a = 75 μm (1.5 times the sampling interval) or 100 μm (2 times the sampling interval), and α = 2, 6, or 10.4. Hereafter, we will refer to the blob radius relative to the image grid spacing; for example, a = 1.5 denotes a physical radius of 75 μm and a = 2 denotes a physical radius of 100 μm. The value α = 10.4 was chosen because it results in a quasi-bandlimited blob function that has been demonstrated to suppress artifacts in other tomographic image reconstruction applications [35, 38]. Similar values were employed in references [39, 40]. The distance between the centers of two neighboring blobs was fixed at 50 μm. The dimension of Hblob is the same as that of Hpixel. The spectrum of singular values was computed for all system matrices using the Matlab programming environment [41].
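The singular value spectra themselves can be computed directly once a system matrix is available. The snippet below normalizes the leading singular values of a sparse system matrix using scipy; this is a hedged Python sketch rather than the Matlab procedure actually used, and the truncation to the largest k values is an assumption adopted to keep the computation tractable for matrices of size 46080 × 16384 when only the decay behavior is of interest.

```python
import numpy as np
from scipy.sparse.linalg import svds

def normalized_singular_spectrum(H, k=500):
    """Leading k singular values of a (sparse) system matrix H, normalized by
    the largest value, for comparing decay rates between system matrices."""
    s = svds(H, k=k, return_singular_vectors=False)
    s = np.sort(s)[::-1]          # svds returns the values in ascending order
    return s / s[0]

# Example comparison of conditioning between two system matrices
# (H_pixel and H_blob assumed to be scipy.sparse matrices built as above):
# spec_pixel = normalized_singular_spectrum(H_pixel)
# spec_blob  = normalized_singular_spectrum(H_blob)
```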

Figures 2–4 display the computed normalized singular value spectra. Figure 2 shows the normalized singular value spectra for the different system matrices Hpixel for the three different weighting kernels. The matrix constructed by use of the cubic spline kernel is the most ill-conditioned, while the system matrix constructed by use of the linear kernel is the least ill-conditioned. This behavior is expected since the cubic spline kernel imposes the most smoothness on the data, followed by the quadratic spline and linear kernels.

Fig. 2 Singular value spectra associated with the system matrices Hpixel with pixel size 50 μm.

Fig. 3 Singular value spectra associated with the system matrices Hblob with m = 2, relative radius a = 1.5 (physical size 75 μm).

Fig. 4 Singular value spectra associated with the system matrices Hblob with m = 2, relative radius a = 2 (physical size 100 μm).

The spectra for the blob system matrices are shown in Figs. 3 and 4. Figure 3 displays the spectra when the blobs had a relative radius a = 1.5 and varying shape parameter α. These results indicate that the parameter α will generally affect the stability of the system matrix. In this case, α = 2.0 corresponds to the most poorly conditioned system matrix while α = 10.4 corresponds to the best conditioned system matrix. The spectra for the case when the blob relative radius a was increased to 2 (physical size 100 μm) are displayed in Fig. 4. The parameter α is again observed to have a significant effect on the stability of the system matrices. The system matrix corresponding to α = 2 is the most ill-conditioned, while the one corresponding to α = 10.4 is the least ill-conditioned. In order to gain insight into this behavior, one can examine the normalized differential projection profile of one blob as shown in Fig. 5. One observes that the profile is more localized when α increases from 2 to 10.4, which results in a better conditioned system matrix.

Fig. 5 Profiles of the differential projection value of one blob with m = 2, relative radius a = 2.

In order to facilitate the comparison of the pixel- and blob-based results, the three highest singular value spectra from Figs. 2–4 were re-plotted together in Fig. 6. Two of these spectra correspond to Hblob with α = 10.4 and relative radii a = 1.5 and a = 2, and the third to Hpixel employing the linear weighting kernel. The two blob-based spectra possess a slower rate of decay than the pixel-based spectrum, indicating that the blob-based system matrices will yield more stable reconstruction problems than will pixel-based ones.

Fig. 6 The three highest singular value spectra are replotted for comparison. Two of the spectra correspond to Hblob with α = 10.4, relative radius a = 1.5 and a = 2. The third spectrum corresponds to Hpixel based on the linear weighting kernel.

4.2. Investigation of image variance and spatial resolution

Computer-simulation studies were conducted to investigate the trade-offs between image variance and spatial resolution for images reconstructed by use of the different system matrices.

4.2.1. Simulation data and image reconstruction algorithm

The 2D numerical phantom displayed in Fig. 7 was employed to represent our object function δ(r2; z). The physical size of the phantom was 25.6 mm × 25.6 mm. The phantom was composed of nine uniform disks possessing different values and physical sizes, which were blurred with a Gaussian kernel of width 0.15 mm. From knowledge of the phantom, the elements of the differential projection data g were computed analytically. The scanning geometry employed assumed 180 tomographic views that were uniformly spaced over a π angular range. At each view, the detector was assumed to possess 1024 elements of pitch 25 μm.

Fig. 7 The numerical phantom employed in our simulation studies, with an ROI indicated.

There are several sources of noise in X-ray DPCT [42] that include phase stepping jitter, quantum noise, and noise from the detection electronics. One hundred noisy data vectors were computed as realizations of an uncorrelated zero-mean Gaussian random vector [43]. The standard deviation σn of each element of the random vector was constant and was set according to the rule σn = 0.2|g|mean, where $|g|_{\rm mean}=\frac{1}{M}\sum_{m=1}^{M}|g_m|$ with gm denoting the m-th component of the noiseless data vector g.
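A minimal sketch of this noise model, assuming only that the noise is additive, i.i.d., zero-mean Gaussian with a standard deviation of 0.2|g|mean; the helper name and the fixed random seed are illustrative.

```python
import numpy as np

def noisy_realizations(g, n_real=100, rel_sigma=0.2, seed=0):
    """Generate noisy copies of the noiseless data vector g by adding i.i.d.
    zero-mean Gaussian noise whose standard deviation is rel_sigma times the
    mean absolute value of g, as in the simulation studies."""
    rng = np.random.default_rng(seed)
    sigma = rel_sigma * np.mean(np.abs(g))
    return g[None, :] + sigma * rng.standard_normal((n_real, g.size))
```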

From the 100 noisy differential projection data vectors, the penalized least-squares (PLS) algorithm described in reference [44] was employed to reconstruct 100 noisy coefficient estimates b̂. The analytic solution of the PLS algorithm with L2 regularization can be written as a pseudo-inverse operator H+ acting on g. The pseudo-inverse operator H+ can be decomposed as a linear combination of certain outer-product operators, whose coefficients are the reciprocals of the singular values of the operator H [36, 45] that were analyzed in Sec. 4.1. The estimates b̂ represent approximate solutions of the optimization program

$$\hat{\mathbf{b}}=\arg\min_{\mathbf{b}}\;\|\mathbf{g}-\mathbf{H}\mathbf{b}\|^2+\gamma\,L(\mathbf{b}),\tag{22}$$
where γ is a regularization parameter,
$$L(\mathbf{b})=\sum_{n=0}^{N-1}\sum_{k\in\mathcal{N}_n}\big([\mathbf{b}]_n-[\mathbf{b}]_k\big)^2,\tag{23}$$
with the set 𝒩n containing the indices of the four nearest neighbours of the n-th element of b. From knowledge of b̂, estimates of the object function δa(r2; z) were obtained by use of Eq. (3). For the cases where blob expansion functions were employed, the estimates of δa(r2; z) were sampled by use of a 2D Dirac delta sampling function onto a Cartesian grid and the resulting values stored as a matrix for analysis and display.
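One way to compute PLS estimates of the form of Eqs. (22) and (23) is to rewrite the penalized problem as an ordinary least-squares problem with an augmented system matrix and solve it iteratively. The sketch below does this with LSQR; the finite-difference construction of the penalty, the parameter names (n_side, gamma), and the iteration limit are assumptions made for illustration and are not taken from reference [44].

```python
import numpy as np
from scipy.sparse import vstack, eye, kron, csr_matrix
from scipy.sparse.linalg import lsqr

def first_difference(n):
    """(n-1) x n sparse matrix of first differences along one axis."""
    return csr_matrix(eye(n, format="csr")[1:] - eye(n, format="csr")[:-1])

def pls_reconstruct(H, g, n_side, gamma):
    """Penalized least-squares estimate in the spirit of Eq. (22) with the
    quadratic roughness penalty of Eq. (23), written as
    || [H; sqrt(gamma) D] b - [g; 0] ||^2 and solved with LSQR.  D stacks
    horizontal and vertical neighbour differences on the n_side x n_side
    coefficient grid; the pairwise sum in Eq. (23) counts each neighbour pair
    twice, which here is simply absorbed into the value of gamma."""
    D1 = first_difference(n_side)
    I = eye(n_side, format="csr")
    D = vstack([kron(I, D1), kron(D1, I)])      # differences along both grid axes
    A = vstack([csr_matrix(H), np.sqrt(gamma) * D])
    rhs = np.concatenate([g, np.zeros(D.shape[0])])
    return lsqr(A, rhs, iter_lim=500)[0]
```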

Sets of images were reconstructed by use of the different system matrices Hpixel or Hblob. For the pixel-based studies, the object was represented by a 512 × 512 pixel array with a 50 μm pitch. Three different pixel-based matrices Hpixel were constructed corresponding to the weighting kernel functions described in Sec. 3.1. For the blob-based studies, six different system matrices were employed that corresponded to blob parameters of relative radius a = 1.5 (physical size 75 μm) or a = 2 (physical size 100 μm), and α = 2, 6, or 10.4. In all cases, 512 × 512 blobs were employed to represent the object function and the distance (sampling interval) between the blobs was 50 μm. For each system matrix, five sets of 100 noisy images were reconstructed for distinct values of the regularization parameter specified by γ = 10, 200, 1000, 2000, or 5000.

Computer-simulation studies were conducted to validate our reconstruction algorithm implementations that utilized the system matrices Hpixel and Hblob. Example images reconstructed from noisy data sets by use of Hpixel and Hblob are shown in Figs. 8(a) and 8(b). The system matrix Hpixel utilized linear interpolation and Hblob utilized blob parameters relative radius a = 2, m = 2, and α = 10.4. The regularization parameter was set at γ = 10 for both cases. Horizontal profiles through the centers of the images in Figs. 8(a) and 8(b) are shown in Fig. 9. The solid blue line (pixel-based result) appears to overshoot some of the boundaries and has more oscillations than the dashed red line (blob-based result). Note that the grey levels of the true object were recovered with good fidelity due to the fact that the object was contained within the field-of-view of the simulated imaging system and therefore there was no truncation of the data function with respect to the detector coordinate.

Fig. 8 Examples of images reconstructed by use of PLS algorithms based on the pixel system matrix Hpixel and the blob system matrix Hblob. Regularization parameter γ = 10 for both cases. (a) A reconstructed image produced by the pixel system matrix with linear interpolation. (b) A reconstructed image produced by blobs with relative radius a = 2 (physical size 100 μm), m = 2 and α = 10.4.

Fig. 9 (Color online) Profiles through the center row of the reconstructed images in Fig. 8. The solid blue line corresponds to Fig. 8(a). The dashed red and dash-dotted black lines correspond to Fig. 8(b) and the true phantom, respectively.

4.2.2. Empirical determination of image statistics and resolution measures

For each combination of system matrix and regularization parameter, the mean image and image variance were estimated [46] from the associated set of 100 noisy images within the 70×70 pixel region-of-interest (ROI) indicated by the white box in Fig. 7. The average value of the image variance map was computed to establish a scalar summary measure of the variance associated with the ROI. To quantify the spatial resolution, we fitted the profile in the mean image corresponding to the boundary indicated in Fig. 7. The profile was fit to a cumulative Gaussian function [47]:

$$G(x)=I_1+\frac{I_2-I_1}{2}\left(1+\mathrm{erf}\!\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right),\tag{24}$$
where x denotes the coordinate along the image profile, I1 and I2 indicate the image values on the two sides of the boundary with I1 < I2, μ is the boundary location, σ is the associated standard deviation, and erf(·) is the error function. We adopted the full-width at half-maximum (FWHM) of the Gaussian associated with the fitted error function as a summary measure of spatial resolution [47] at that location in image space, with smaller values indicating higher spatial resolution. Repeating these procedures for different choices of the regularization parameter γ produced a collection of (variance, FWHM) pairs for each system matrix, which were plotted to characterize the trade-offs between spatial resolution and noise levels in the reconstructed images.
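The edge-profile fit of Eq. (24) and the FWHM summary measure can be reproduced with a few lines of scipy; the initial-guess heuristics below are assumptions, and FWHM = 2√(2 ln 2) σ is the width of the Gaussian associated with the fitted error function.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def cumulative_gaussian(x, I1, I2, mu, sigma):
    """Edge-spread model of Eq. (24)."""
    return I1 + 0.5 * (I2 - I1) * (1.0 + erf((x - mu) / (sigma * np.sqrt(2.0))))

def edge_fwhm(x, profile):
    """Fit a boundary profile extracted from the mean image to Eq. (24) and
    return FWHM = 2*sqrt(2*ln 2)*sigma as the spatial-resolution measure."""
    p0 = (profile.min(), profile.max(),
          x[np.argmax(np.gradient(profile))],   # rough edge location
          x[1] - x[0])                          # rough edge width
    (I1, I2, mu, sigma), _ = curve_fit(cumulative_gaussian, x, profile, p0=p0)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)
```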

The variance-resolution curves for the pixel- and blob-based system matrices are shown in Figs. 10 and 11, respectively. The left-most point on each curve corresponds to γ = 10, while the right-most point on each curve corresponds to γ = 5000. As expected, when the value of γ increases, the image variance decreases at the cost of spatial resolution.

Fig. 10 Variance versus resolution curves corresponding to use of the system matrices Hpixel.

Fig. 11 Variance versus resolution curves corresponding to use of the system matrices Hblob. (a) Curves produced by blobs with relative radius a = 1.5 (physical size 75 μm), m = 2 and varying α. (b) Curves produced by blobs with relative radius a = 2 (physical size 100 μm), m = 2 and varying α.

For the pixel-based case, Fig. 10 reveals that the curve corresponding to the use of a linear weighting kernel is the lowest, followed by those corresponding to the quadratic and cubic spline kernels. Stated otherwise, the use of the linear interpolation-based system matrix Hpixel produced images with smaller variances at any of the attained spatial resolution values than did the other two system matrices. These observations are consistent with the singular value spectra displayed in Fig. 2, where the linear and cubic spline-based system matrices were demonstrated to yield the best and worst, respectively, conditioned system matrices for the pixel-based studies.

For the blob-based cases shown in Fig. 11, the variance-resolution curves corresponding to the shape parameter α = 10.4 were lower than those corresponding to the other α values for both relative radius a = 1.5 [Fig. 11(a)] and relative radius a = 2 [Fig. 11(b)]. The curves corresponding to the shape parameter α = 2.0 were higher than the others for both values of a. These observations are consistent with the singular value spectra displayed in Figs. 3 and 4, where the system matrices Hblob corresponding to α = 10.4 and α = 2.0 were demonstrated to yield the best and worst, respectively, conditioned system matrices for the blob-based studies.

In order to facilitate the comparison of the pixel- and blob-based results, the three best variance-resolution curves from Figs. 10 and 11 were superimposed and replotted in Fig. 12. Two of these curves correspond to Hblob with α = 10.4 and relative radii a = 1.5 and a = 2, and the third to Hpixel employing the linear weighting kernel. The two blob-based curves are everywhere lower than the pixel-based curve, indicating that images produced by use of Hblob can possess more favorable variance-resolution trade-offs than those produced by use of Hpixel. Below we demonstrate and investigate the use of Hblob for reconstructing images of biological tissue from few-view experimental differential projection data.

Fig. 12 The three best variance-resolution curves selected from the pixel- and blob-based cases.

5. Application to few-view image reconstruction

5.1. Experimental data and image reconstruction algorithm

In our studies of few-view image reconstruction, we utilized experimental DPCT data that were acquired previously [4] using a grating-based phase-contrast imaging system at the Swiss Light Source. A tissue sample corresponding to a rat brain was the imaged object. The tomographic scanning consisted of 720 tomographic view angles that were uniformly distributed over a 180 degree angular range. The differential projection data contained 1621 samples at each view angle, corresponding to a detector pitch of 7 μm. In the studies described below, certain subsets of these data were employed for few-view image reconstruction. A four-step phase-stepping procedure was employed to compute the differential projection data at each tomographic view angle. We refer the reader to reference [4] for additional details regarding the data-acquisition and sample preparation.

To obtain an estimate of the object function based on Eq. (20), the constrained total variation (TV) minimization program [48–50] was employed:

$$\hat{\mathbf{b}}=\arg\min_{\mathbf{b}}\;\|\mathbf{b}\|_{\rm TV}\quad\text{s.t.}\quad\|\mathbf{g}-\mathbf{H}_{\rm blob}\mathbf{b}\|\le\varepsilon,\tag{25}$$
in which ||b||TV represents the TV norm of the vector b and ɛ is the specified data tolerance. It has been demonstrated that this image reconstruction strategy can be highly effective at mitigating data-incompleteness for certain classes of objects [50–52]. The adaptive steepest-descent-projection onto convex sets (ASD-POCS) algorithm proposed by Sidky and Pan [51] was employed to determine approximate solutions of Eq. (25). Details regarding this algorithm and its implementation can be found in reference [51]. The system matrix Hblob with m = 2, relative radius a = 2 (physical size 14 μm, which is twice the sampling interval of 7 μm) and α = 10.4 was constructed as described in Sec. 3.2. The values of the data tolerance ɛ employed were 43.8 and 58.2 for the reconstruction problems involving 90 and 180 view angles, respectively. From knowledge of b̂, estimates of δa(r2; z) were obtained by use of Eq. (3) and were subsequently sampled by use of a 2D Dirac delta sampling function with a period of 7 μm onto a Cartesian grid for display. Because it is commonly employed in current applications of DPCT, we also reconstructed images by use of a modified FBP algorithm that acts directly on the differential projection data [11].
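For orientation, the following is a greatly simplified Python sketch of the kind of alternation that ASD-POCS performs: a SART-like data-consistency update followed by a few steepest-descent steps on a smoothed TV norm. The genuine algorithm of Sidky and Pan [51] adaptively balances the two step sizes against the data tolerance ɛ and includes additional safeguards; every parameter value and helper name below is an ad hoc assumption, not a description of the implementation used in this work.

```python
import numpy as np

def tv_gradient(img, eps=1e-8):
    """Gradient of a smoothed isotropic TV norm of a 2D image."""
    dx = np.diff(img, axis=1, append=img[:, -1:])
    dy = np.diff(img, axis=0, append=img[-1:, :])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    gx, gy = dx / mag, dy / mag
    div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
    return -div

def tv_pocs(H, g, shape, n_iter=50, n_tv=20, art_relax=0.2, tv_step=0.02):
    """Simplified stand-in for ASD-POCS: alternate a SART-like data-consistency
    update with steepest-descent steps on the TV norm of the image."""
    HT = H.T
    row_sums = np.asarray(np.abs(H).sum(axis=1)).ravel() + 1e-12
    col_sums = np.asarray(np.abs(H).sum(axis=0)).ravel() + 1e-12
    b = np.zeros(H.shape[1])
    for _ in range(n_iter):
        # data-consistency step (pushes ||g - Hb|| toward the tolerance)
        b += art_relax * (HT @ ((g - H @ b) / row_sums)) / col_sums
        # TV-reduction steps
        img = b.reshape(shape)
        for _ in range(n_tv):
            grad = tv_gradient(img)
            img -= tv_step * grad / (np.linalg.norm(grad) + 1e-12)
        b = img.ravel()
    return b
```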

5.2. Reconstructed images

The images reconstructed by use of the FBP algorithm and the TV algorithm from 90 view angles are displayed in Figs. 13(a) and 13(b). The corresponding images reconstructed from 180 view angles are displayed in Fig. 14. All of the images are displayed in the same grey scale window. The images reconstructed by use of the FBP algorithm [Figs. 13(a) and 14(a)] have streak artifacts due to the limited number of view angles employed, while those artifacts are suppressed in the images reconstructed by use of the ASD-POCS algorithm [Figs. 13(b) and 14(b)]. Because the object was embedded in a container that did not fit entirely in the field-of-view, there was effectively projection truncation. Therefore, we expect that our reconstruction algorithm will reconstruct δ(r) only up to a constant. Because the true values of δ(r) were not available, we did not investigate this. All the images presented were normalized into the same scale.

Fig. 13 Images reconstructed from 90 projections by use of the (a) FBP and (b) ASD-POCS algorithms. The dashed boxes indicate two ROIs chosen for comparison. All images are displayed in the same grey scale window [0 1].

Fig. 14 Images reconstructed from 180 projections by use of the (a) FBP and (b) ASD-POCS algorithms. Two dashed boxes indicate two ROIs chosen for comparison. All images are displayed in the same grey scale window [0 1].

In order to more easily visualize differences in the reconstructed images, two ROIs indicated by black dashed boxes in Fig. 13(a) were displayed. Figures 15(a) and 15(b) display the smaller ROIs corresponding to the images in Figs. 13(a) and 13(b), respectively, for the 90 view angle case. Subfigure (c) displays the smaller ROI extracted from an FBP image reconstructed from 720 view angles, which serves as a reference image. Figures 16(a) and 16(b) display the smaller ROIs corresponding to Figs. 14(a) and 14(b), respectively, for the 180 view angle case. Figure 16(c) displays the smaller ROI from the FBP reference image. As shown in Fig. 15(a), the visual appearance of the image reconstructed by use of the FBP algorithm from 90 projections is significantly degraded by noise and other artifacts. Some of the blood vessels (dark hole-like structures) may be difficult to detect due to the high artifact and noise levels in these images. The image reconstructed by use of the ASD-POCS algorithm from 90 tomographic views, shown in Fig. 15(b), has significantly reduced noise and artifact levels and possesses a visual appearance similar to the reference image that was reconstructed from the complete data set containing 720 views. Similar observations hold for the smaller ROI images corresponding to the 180 view angle case displayed in Fig. 16.

Fig. 15 Zoomed-in images of the smaller ROIs denoted in Figs. 13(a) and (b), reconstructed from 90 view angles, are displayed in subfigures (a) and (b). Subfigure (c) displays the corresponding ROI from a reference image reconstructed from 720 projections by use of a DPCT FBP algorithm. All images are displayed in the same grey scale window [0 1].

Fig. 16 Zoomed-in images of the smaller ROIs denoted in Figs. 14(a) and (b), reconstructed from 180 view angles, are displayed in subfigures (a) and (b). Subfigure (c) displays the corresponding ROI from a reference image reconstructed from 720 projections by use of a DPCT FBP algorithm. All images are displayed in the same grey scale window [0 1].

The larger ROIs are shown in Figs. 17 and 18. Figures 17(a) and 17(b) display the larger ROIs corresponding to the images in Figs. 13(a) and 13(b), respectively, for the 90 view angle case. Subfigure (c) displays the larger ROI extracted from the FBP image reconstructed from 720 view angles, which again serves as a reference image. Figures 18(a) and 18(b) display the larger ROIs corresponding to the images in Figs. 14(a) and 14(b), respectively, for the 180 view angle case. Subfigure (c) displays the larger ROI from the FBP reference image. Again, we observe that the image reconstructed by use of the ASD-POCS algorithm from 90 tomographic views, shown in Fig. 17(b), has significantly reduced noise and artifact levels and possesses a visual appearance similar to the reference image that was reconstructed from the complete data set containing 720 views. Similar observations hold for the larger ROI images corresponding to the 180 view angle case displayed in Fig. 18.

Fig. 17 Zoomed-in images of the larger ROIs denoted in Figs. 13(a) and (b), reconstructed from 90 view angles, are displayed in subfigures (a) and (b). Subfigure (c) displays the corresponding ROI from a reference image reconstructed from 720 projections by use of a DPCT FBP algorithm. All images are displayed in the same grey scale window [0 1].

Fig. 18 Zoomed-in images of the larger ROIs denoted in Figs. 14(a) and (b), reconstructed from 180 view angles, are displayed in subfigures (a) and (b). Subfigure (c) displays the corresponding ROI from a reference image reconstructed from 720 projections by use of a DPCT FBP algorithm. All images are displayed in the same grey scale window [0 1].

6. Summary

We have analyzed the numerical and statistical properties of two classes of discrete imaging models that form the basis for iterative image reconstruction in DPCT. The models differ in the choice of expansion functions that were utilized to discretize the sought-after object function. The models based on Kaiser-Bessel window functions (“blobs”) were demonstrated to produce images that possess more favorable variance-resolution trade-offs than images reconstructed by use of pixel-based imaging models. This observation was consistent with the results of an SVD analysis of the system matrices, which demonstrated that the blob-based system matrices can yield more stable reconstruction problems than do pixel-based ones.

A reconstruction algorithm that seeks solutions of a constrained TV minimization optimization program was employed with a blob-based imaging model for few-view image reconstruction. By use of few-view experimental data, it was demonstrated that this algorithm can produce images with significantly weaker artifacts and lower noise levels than the FBP algorithm that has been utilized in the majority of previously published studies. To our knowledge, this was the first published application of an iterative reconstruction method in X-ray DPCT for reconstruction of a biological specimen. We expect that the findings of our study will benefit the continued development of DPCT imaging systems by permitting reduction of data-acquisition times and radiation doses. Future research efforts will be required to identify blob parameters that are optimal for specific imaging tasks.

Appendix A: Explicit construction of the pixel-based system matrices

Below we describe how the matrices Hpixel employed in our numerical studies were constructed by use of Eq. (15). Specifically, because HR is defined by Eq. (10) with the elements provided in reference [30], we need only specify the explicit forms of the discrete derivative operator HD for the three kernel functions W(d, h) employed.

A general form of the matrix HD can be expressed as follows

$$\mathbf{H}_D=\begin{pmatrix}\mathbf{H}_{11} & & & \\ & \mathbf{H}_{22} & & \\ & & \ddots & \\ & & & \mathbf{H}_{TT}\end{pmatrix},$$
where Htt (t = 1, 2, …, T) is an S × S matrix, T is the total number of projection views, and S is the number of sampled projection data at each view. Explicit forms of Htt are determined by the different interpolation kernels W(d, h). The three types of Htt corresponding to the three kernels adopted in the paper are provided as follows:

Linear interpolation kernel

$$W_1(d,h)=n_d\begin{cases}1-s, & 0\le s<1,\\ 0, & s\ge 1,\end{cases}\qquad s=\frac{|d|}{h},$$
where nd is a normalization constant which is determined by the dimensionality and the smoothing length h. The value of h was set to 2 times the projection sampling interval, and nd is equal to 1/h. The explicit form of Htt corresponding to use of W1(d, h) can be expressed as
$$\mathbf{H}_{tt}=\begin{pmatrix}\text{boundary condition} & & & & \\ -\tfrac{1}{2} & 0 & \tfrac{1}{2} & & \\ & -\tfrac{1}{2} & 0 & \tfrac{1}{2} & \\ & & \ddots & \ddots & \ddots \\ & & & & \text{boundary condition}\end{pmatrix}_{S\times S},$$
where the boundary condition elements are appropriately defined. In our studies, the projection data were not truncated and the object was embedded in a uniform background medium. In this case, the boundary condition elements were set to zero.

Quadratic spline

$$W_2(d,h)=n_d\begin{cases}\dfrac{3}{4}-s^2, & 0\le s<\dfrac{1}{2},\\[1ex] \dfrac{9}{8}-\dfrac{3}{2}s+\dfrac{s^2}{2}, & \dfrac{1}{2}\le s<\dfrac{3}{2},\\[1ex] 0, & s\ge\dfrac{3}{2},\end{cases}\qquad s=\frac{|d|}{h},$$
where h was set to 2 times the projection sampling interval, and nd is equal to 1/h. The explicit form of Htt corresponding to use of W2(d, h) can be expressed as
$$\mathbf{H}_{tt}=\begin{pmatrix}\text{boundary condition} & & & & & \\ -\tfrac{1}{8} & -\tfrac{1}{4} & 0 & \tfrac{1}{4} & \tfrac{1}{8} & \\ & -\tfrac{1}{8} & -\tfrac{1}{4} & 0 & \tfrac{1}{4} & \tfrac{1}{8} \\ & & \ddots & \ddots & \ddots & \\ & & & & & \text{boundary condition}\end{pmatrix}_{S\times S}.$$

Cubic spline

$$W_3(d,h)=n_d\begin{cases}\dfrac{2}{3}-s^2+\dfrac{s^3}{2}, & 0\le s<1,\\[1ex] \dfrac{4}{3}-2s+s^2-\dfrac{s^3}{6}, & 1\le s<2,\\[1ex] 0, & s\ge 2,\end{cases}\qquad s=\frac{|d|}{h},$$
where h was set to 2 times the sampling interval, as in the linear interpolation case, and nd is equal to 1/h. The explicit form of Htt corresponding to use of W3(d, h) can be expressed as
$$\mathbf{H}_{tt}=\begin{pmatrix}\text{boundary condition} & & & & & & & \\ -\tfrac{1}{32} & -\tfrac{1}{8} & -\tfrac{5}{32} & 0 & \tfrac{5}{32} & \tfrac{1}{8} & \tfrac{1}{32} & \\ & -\tfrac{1}{32} & -\tfrac{1}{8} & -\tfrac{5}{32} & 0 & \tfrac{5}{32} & \tfrac{1}{8} & \tfrac{1}{32} \\ & & \ddots & \ddots & \ddots & \ddots & \ddots & \\ & & & & & & & \text{boundary condition}\end{pmatrix}_{S\times S}.$$
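For completeness, a short Python sketch that assembles the block-diagonal HD from the banded Htt blocks listed above. The antisymmetric sign pattern of the stencils and the zeroed boundary rows follow the matrices as reconstructed here, and the helper names are illustrative.

```python
import numpy as np
from scipy.sparse import diags, block_diag

# Per-row derivative stencils corresponding to the H_tt matrices above
STENCILS = {
    "linear":    [-1/2, 0, 1/2],
    "quadratic": [-1/8, -1/4, 0, 1/4, 1/8],
    "cubic":     [-1/32, -1/8, -5/32, 0, 5/32, 1/8, 1/32],
}

def make_Htt(S, kernel="linear"):
    """S x S banded derivative block H_tt for one view, with boundary rows set
    to zero (the object was embedded in a uniform background and the
    projection data were not truncated)."""
    w = STENCILS[kernel]
    half = len(w) // 2
    offsets = list(range(-half, half + 1))
    H = diags(w, offsets, shape=(S, S), format="lil")
    H[:half, :] = 0.0
    H[-half:, :] = 0.0
    return H.tocsr()

def make_HD(S, T, kernel="linear"):
    """Block-diagonal H_D = diag(H_11, ..., H_TT) acting on the stacked
    projection vector p (T views, S samples per view)."""
    return block_diag([make_Htt(S, kernel)] * T, format="csr")
```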

Appendix B: Derivation of Eq. (18) in Sec. 3.2

Let ξ ≡ xr − xn cosθ − yn sinθ. As demonstrated by Lewitt [34], the 2D Radon transform of one window function is given by

$$\mathcal{R}\,\phi_n^{\rm blob}(\mathbf{r}_2;m,a,\alpha)=\frac{a}{I_m(\alpha)}\left(\frac{2\pi}{\alpha}\right)^{1/2}\left[\sqrt{1-(\xi/a)^2}\right]^{m+1/2}I_{m+1/2}\!\left(\alpha\sqrt{1-(\xi/a)^2}\right),\tag{26}$$
for |ξ| ≤ a and zero otherwise. The modified Bessel functions satisfy the derivative relation [53]
$$\frac{d}{dz}\left\{z^{\pm m}I_m(z)\right\}=z^{\pm m}I_{m\mp1}(z),\tag{27}$$
where m is a real number. Let z = α√(1 − (ξ/a)²). Note that Eq. (26) can be re-expressed as
$$\mathcal{R}\,\phi_n^{\rm blob}(\mathbf{r}_2;m,a,\alpha)=\frac{a}{I_m(\alpha)}\left(\frac{2\pi}{\alpha}\right)^{1/2}\left(\frac{1}{\alpha}\right)^{m+1/2}z^{m+1/2}\,I_{m+1/2}(z).\tag{28}$$
By use of Eq. (27) and Eq. (28), along with the chain rule, it can be verified readily that
$$\frac{\partial\big(\mathcal{R}\,\phi_n^{\rm blob}(m,a,\alpha,\mathbf{r})\big)}{\partial x_r}=\frac{\partial\big(\mathcal{R}\,\phi_n^{\rm blob}(m,a,\alpha,\mathbf{r})\big)}{\partial z}\,\frac{\partial z}{\partial\xi}\,\frac{\partial\xi}{\partial x_r}=\frac{a}{I_m(\alpha)}\left(\frac{2\pi}{\alpha}\right)^{1/2}\left(\frac{1}{\alpha}\right)^{m+1/2}z^{m+1/2}I_{m-1/2}(z)\times\left(-\frac{\alpha^2}{a^2}\,\frac{\xi}{z}\right)=-\frac{(2\pi\alpha)^{1/2}}{I_m(\alpha)}\,\frac{\xi}{a}\left[\sqrt{1-(\xi/a)^2}\right]^{m-1/2}I_{m-1/2}\!\left(\alpha\sqrt{1-(\xi/a)^2}\right).\tag{29}$$

Acknowledgments

QFX and MA were supported in part by NIH awards EB009715 and EB010049. EYS and XP were supported in part by NIH awards CA120540 and EB000225, and a Pilot Project Award from NIH SPORE Grant P50CA090386. PM was supported by the Centre d'Imagerie Biomedicale (CIBM) of the UNIL, UNIGE, HUG, CHUV, EPFL and the Leenaards and Jeantet Foundations. The contents of this article are solely the responsibility of the authors and do not necessarily represent the official views of the National Institutes of Health.

References and links

1. T. Weitkamp, A. Diaz, C. David, F. Pfeiffer, M. Stampanoni, P. Cloetens, and E. Ziegler, “X-ray phase imaging with a grating interferometer,” Opt. Express 13, 6296–6304 (2005). [CrossRef]   [PubMed]  

2. A. Momose, W. Yashiro, Y. Takeda, Y. Suzuki, and T. Hattori, “Phase tomography by X-ray Talbot interferometry for biological imaging,” Jpn. J. Appl. Phys. 45, 5254–5262 (2006). [CrossRef]  

3. A. Momose, T. Takeda, Y. Itai, and K. Hirano, “Phase–contrast X–ray computed tomography for observing biological soft tissues,” Nat. Med. 2, 473–475 (1996). [CrossRef]   [PubMed]  

4. S. McDonald, F. Marone, C. Hintermuller, G. Mikuljan, C. David, F. Pfeiffer, and M. Stampanoni, “Advanced phase-contrast imaging using a grating interferometer,” J. Synchrotron Radiat. 16, 562–572 (2009). [CrossRef]   [PubMed]  

5. J. Brankov, M. Wernick, Y. Yang, J. Li, C. Muehleman, Z. Zhong, and M. A. Anastasio, “A computed tomography implementation of multiple-image radiography,” Med. Phys. 33, 278–289 (2006). [CrossRef]   [PubMed]  

6. Z. Qi, J. Zambelli, N. Bevins, and G. Chen, “A novel method to reduce data acquisition time in differential phase contrast computed tomography using compressed sensing,” Proc. SPIE 7258, A1–A8 (2009).

7. M. Engelhardt, J. Baumann, M. Schuster, C. Kottler, F. Pfeiffer, O. Bunk, and C. David, “High-resolution differential phase contrast imaging using a magnifying projection geometry with a microfocus x-ray source,” Appl. Phys. Lett. 90, 224101 (2007). [CrossRef]  

8. F. Pfeiffer, T. Weitkamp, O. Bunk, and C. David, “Phase retrieval and differential phase-contrast imaging with low-brilliance X-ray sources,” Nature Phys. 2, 258–261 (2006). [CrossRef]  

9. D. Chapman, W. Thomlinson, R. Johnston, D. Washburn, E. Pisano, N. Gmür, Z. Zhong, R. Menk, F. Arfelli, and D. Sayers, “Diffraction enhanced x-ray imaging,” Phys. Med. Biol. 42, 2015–2025 (1997). [CrossRef]   [PubMed]  

10. A. Momose, S. Kawamoto, I. Koyama, Y. Hamaishi, K. Takai, and Y. Suzuki, “Demonstration of X-ray Talbot interferometry,” Jpn. J. Appl. Phys. 42, 866–868 (2003). [CrossRef]  

11. G. Faris and R. Byer, “Three-dimensional beam-deflection optical tomography of a supersonic jet,” Appl. Opt. 27, 5202–5212 (1988). [CrossRef]   [PubMed]  

12. J. Stricker, “Analysis of 3-d phase objects by moiré deflectometry,” Appl. Opt. 23, 3657–3659 (1984). [CrossRef]   [PubMed]  

13. P. Zhu, K. Zhang, Z. Wang, Y. Liu, X. Liu, Z. Wu, S. McDonald, F. Marone, and M. Stampanoni, “Low-dose, simple, and fast grating-based x-ray phase-contrast imaging,” Proc. Natl. Acad. Sci. USA 107, 13576–13581 (2010). [CrossRef]   [PubMed]  

14. Z. Huang, K. Kang, L. Zhang, Z. Chen, F. Ding, Z. Wang, and Q. Fang, “Alternative method for differential phase-contrast imaging with weakly coherent hard x rays,” Phys. Rev. A 79, 013815 (2009). [CrossRef]  

15. T. Köhler, B. Brendel, and E. Roessl, “Iterative reconstruction for differential phase contrast imaging using spherically symmetric basis functions,” Med. Phys. 38, 4542–4545 (2011). [CrossRef]   [PubMed]  

16. A. Momose, “Demonstration of phase-contrast X-ray computed tomography using an X-ray interferometer,” Nucl. Instrum. Meth. A 352, 622–628 (1995). [CrossRef]  

17. A. Momose, “Phase-sensitive imaging and phase tomography using x-ray interferometers,” Opt. Express 11, 2303–2314 (2003). [CrossRef]  

18. T. Davis, D. Gao, T. Gureyev, A. Stevenson, and S. Wilkins, “Phase-contrast imaging of weakly absorbing materials using hard X-rays,” Nature 373, 595–598 (1995). [CrossRef]  

19. M. Wernick, O. Wirjadi, D. Chapman, Z. Zhong, N. Galatsanos, Y. Yang, J. Brankov, O. Oltulu, M. A. Anastasio, and C. Muehleman, “Multiple-image radiography,” Phys. Med. Biol. 48, 3875–3895 (2003). [CrossRef]  

20. F. Dilmanian, Z. Zhong, B. Ren, X. Wu, L. Chapman, I. Orion, and W. Thomlinson, “Computed tomography of x-ray index of refraction using the diffraction enhanced imaging method,” Phys. Med. Biol. 45, 933–946 (2000). [CrossRef]   [PubMed]  

21. K. Pavlov, C. Kewish, J. Davis, and M. Morgan, “A new theoretical approach to x-ray diffraction tomography,” J. Phys. D Appl. Phys. 33, 1596–1605 (2000). [CrossRef]  

22. S. Fiedler, A. Bravin, J. Keyriläinen, M. Fernández, P. Suortti, W. Thomlinson, M. Tenhunen, P. Virkkunen, and M. Karjalainen-Lindsberg, “Imaging lobular breast carcinoma: comparison of synchrotron radiation DEI-CT technique with clinical CT, mammography and histology,” Phys. Med. Biol. 49, 175–188 (2004). [CrossRef]   [PubMed]  

23. A. Maksimenko, M. Ando, S. Hiroshi, and T. Yuasa, “Computed tomographic reconstruction based on x-ray refraction contrast,” Appl. Phys. Lett. 86, 124105 (2005). [CrossRef]  

24. I. Koyama, A. Momose, J. Wu, T. Lwin, and T. Takeda, “Biological imaging by x-ray phase tomography using diffraction-enhanced imaging,” Jpn. J. Appl. Phys. 44, 8219–8221 (2005). [CrossRef]  

25. K. Creath, “Phase-measurement interferometry techniques,” Prog. Opt. 26, 349–393 (1988). [CrossRef]  

26. Y. Takeda, W. Yashiro, Y. Suzuki, S. Aoki, T. Hattori, and A. Momose, “X-ray phase imaging with single phase grating,” Jpn. J. Appl. Phys. 46, 89–91 (2007). [CrossRef]  

27. D. Paganin, Coherent X-ray Optics, (Oxford University Press, 2006). [CrossRef]  

28. M. A. Anastasio and X. Pan, “Region-of-interest imaging in differential phase-contrast tomography,” Opt. Lett. 32, 3167–3169 (2007). [CrossRef]   [PubMed]  

29. A. Kak and M. Slaney, Principles of Computerized Tomographic Imaging (IEEE Service Center, Piscataway, NJ, 1988).

30. R. Siddon, “Fast calculation of the exact radiological path for a three-dimensional CT array,” Med. Phys. 12, 252–255 (1985). [CrossRef]   [PubMed]  

31. S. Lo, “Strip and line path integrals with a square pixel matrix: A unified theory for computational CT projections,” IEEE Trans. Med. Imag. 7, 355–363 (1988). [CrossRef]  

32. J. Monaghan, “Smoothed particle hydrodynamics,” Rep. Prog. Phys. 68, 1703–1760 (2005). [CrossRef]  

33. A. Chaniotis and D. Poulikakos, “High order interpolation and differentiation using b-splines,” J. Comput. Phys. 197, 253–274 (2004). [CrossRef]  

34. R. Lewitt, “Multidimensional digital image representations using generalized Kaiser-Bessel window functions,” J. Opt. Soc. Am. A 7, 1834–1846 (1990). [CrossRef]   [PubMed]  

35. R. Lewitt, “Alternatives to voxels for image representation in iterative reconstruction algorithms,” Phys. Med. Biol. 37, 705–716 (1992). [CrossRef]   [PubMed]  

36. M. Bertero, Introduction to Inverse Problems in Imaging (Taylor & Francis, 1998). [CrossRef]  

37. R. Fatehi, M. Fayazbakhsh, and M. Manzari, “On discretization of second-order derivatives in smoothed particle hydrodynamics,” Proceedings of World Academy of Science, Engineering and Technology 30, 243–246 (2008).

38. S. Matej and R. Lewitt, “Image representation and tomographic reconstruction using spherically-symmetric volume elements,” in Nuclear Science Symposium and Medical Imaging Conference, 1992., Conference Record of the 1992 IEEE, (IEEE, 1992), pp. 1191–1193.

39. S. Matej and R. Lewitt, “Practical considerations for 3-D image reconstruction using spherically symmetric volume elements,” IEEE Trans. Med. Imag. 15, 68–78 (1996). [CrossRef]  

40. T. Obi, S. Matej, R. Lewitt, and G. Herman, “2.5-D simultaneous multislice reconstruction by series expansion methods from fourier-rebinned pet data,” IEEE Trans. Med. Imag. 19, 474–484 (2000). [CrossRef]  

41. D. Hanselman and B. Littlefield, Mastering Matlab 7 (Pearson Education, 2005).

42. V. Revol, C. Kottler, R. Kaufmann, U. Straumann, and C. Urban, “Noise analysis of grating-based x-ray differential phase contrast imaging,” Rev. Sci. Instrum. 81, 073709 (2010). [CrossRef]   [PubMed]  

43. T. Köhler, K. Engel, and E. Roessl, “Noise properties of grating-based x-ray phase contrast computed tomography,” Med. Phys. 38, S106–S116 (2011). [CrossRef]  

44. J. Fessler, “Penalized weighted least-squares image reconstruction for positron emission tomography,” IEEE Trans. Med. Imag. 13, 290–300 (1994). [CrossRef]  

45. H. Barrett, K. Myers, and S. Dhurjaty, Foundations of Image Science (Wiley-Interscience, 2003), 2nd ed.

46. M. A. Anastasio, M. Kupinski, and X. Pan, “Noise propagation in diffraction tomography: Comparison of conventional algorithms with a new reconstruction algorithm,” IEEE Trans. Nucl. Sci. 45, 2216–2223 (1998). [CrossRef]  

47. J. Zhang, M. A. Anastasio, P. La Rivière, and L. Wang, “Effects of different imaging models on least-squares image reconstruction accuracy in photoacoustic tomography,” IEEE Trans. Med. Imag. 28, 1781–1790 (2009). [CrossRef]  

48. E. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inform. Theory 52, 489–509 (2006). [CrossRef]  

49. E. Candes, J. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Commun. Pur. Appl. Math. 59, 1207–1223 (2006). [CrossRef]  

50. E. Y. Sidky, C. Kao, and X. Pan, “Accurate image reconstruction from few-views and limited-angle data in divergent-beam CT,” Journal of X-Ray Science and Technology 14, 119–139 (2006).

51. E. Y. Sidky and X. Pan, “Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization,” Phys. Med. Biol. 53, 4777–4807 (2008). [CrossRef]   [PubMed]  

52. X. Han, J. Bian, D. Eaker, T. Kline, E. Y. Sidky, E. Ritman, and X. Pan, “Algorithm-enabled low-dose micro-CT imaging,” IEEE Trans. Med. Imag. 30, 606–620 (2011). [CrossRef]  

53. M. Abramowitz and I. Stegun, Handbook of Mathematical Functions (Dover Publications, 1972).
