## Abstract

Propagation-based X-ray phase-contrast tomography (PCT) seeks to reconstruct information regarding the complex-valued refractive index distribution of an object. In many applications, a boundary-enhanced image is sought that reveals the locations of discontinuities in the real-valued component of the refractive index distribution. We investigate two iterative algorithms for few-view image reconstruction in boundary-enhanced PCT that exploit the fact that a boundary-enhanced PCT image, or its gradient, is often sparse. In order to exploit object sparseness, the reconstruction algorithms seek to minimize the *l*_{1}-norm or TV-norm of the image, subject to data consistency constraints. We demonstrate that the algorithms can reconstruct accurate boundary-enhanced images from highly incomplete few-view projection data.

© 2010 Optical Society of America

## 1. Introduction

X-ray phase-contrast tomography (PCT) methods [1–6] seek to reconstruct three-dimensional (3D) images that depict object features possessing little or no X-ray absorption-contrast. In boundary-enhanced PCT [2,7–9], information regarding the locations of boundaries in the refractive index distribution is sought, rather than an accurate estimate of the refractive index distribution itself [10,11]. Such boundary-enhanced images can, for example, facilitate the delineation of soft tissue structures in biomedical applications or material flaws in nondestructive testing applications.

It has been demonstrated that image reconstruction in boundary-enhanced PCT can be achieved by use of the parallel-beam filtered backprojection (FBP) algorithm [2,7,8] or other analytic reconstruction algorithms [12,13]. In order to avoid image artifacts when employing these algorithms, tomographic measurements must typically be acquired at a large number of view angles [14]. This is highly undesirable because it can yield long data-acquisition times and may also damage the sample due to the large radiation exposure. For these reasons, there remains an important need to develop reconstruction algorithms that can reconstruct accurate boundary-enhanced images in PCT from knowledge of measurement data acquired at a reduced number of tomographic view angles, *i.e*., few-view measurement data. A natural and effective way to accomplish this is to develop iterative image reconstruction algorithms that exploit *a priori* information regarding commonly possessed characteristics of boundary-enhanced images.

In this work, we investigate two iterative algorithms for few-view image reconstruction in boundary-enhanced PCT. The image reconstruction algorithms are inspired by the emerging field of compressive sensing [15], and are based on the premise that a boundary-enhanced PCT image, or the gradient of the image, is often sparse. By ‘sparse’, we mean that the discrete representation of the image possesses a relatively small number of non-zero voxels or other elements that are used to mathematically represent the image. In order to exploit object sparseness, the reconstruction algorithms seek to minimize the *l*_{1}-norm or TV-norm of the image, subject to data consistency constraints. Both algorithms employ a thresholding procedure to promote sparse solutions. We demonstrate that the algorithms can reconstruct accurate boundary-enhanced images from highly incomplete few-view projection data. The proposed algorithms are also demonstrated to produce significantly weaker image artifacts than those produced by a conventional iterative image reconstruction algorithm.

## 2. Background: Imaging model for boundary-enhanced PCT

#### 2.1. Imaging geometry and data function

We will utilize the parallel-beam tomographic scanning geometry depicted in Fig. 1. However, the results that follow can readily be adapted to the case of spherical wave illumination in the paraxial limit [16,17], i.e., a cone-beam geometry with a small cone-angle. The *z*-axis of the reference coordinate system (*x*,*y*,*z*) defines the axis of rotation of the tomographic scanning. The rotated coordinate system (*x*_{r},*y*_{r},*z*) is related to the reference system by *x*_{r} = *x* cos *ϕ* + *y* sin *ϕ*, *y*_{r} = *y* cos *ϕ* - *x* sin *ϕ*, where the tomographic view angle *ϕ* is measured from the positive *x*-axis. A weak phase-amplitude object positioned at the origin is irradiated by a plane-wave *U*_{i}(*x*_{r},*z*,*ϕ*) with wavelength *λ*, or equivalently wavenumber $k=\genfrac{}{}{0.1ex}{}{2\pi}{\lambda}$, which propagates in the direction of the positive *y*_{r}-axis. The intensity of the transmitted wavefield is measured in the (*x*_{r},*z*) plane located at *y*_{r} = *d*, and will be denoted by *I*(*x*_{r},*z*,*ϕ*). Unless a phase-object is considered, an additional intensity measurement *I*_{0}(*x*_{r},*z*,*ϕ*) on the contact plane *y*_{r} = 0 will also be acquired. A tomographic data set is obtained by measuring a collection of such intensity measurements for *ϕ* ∈ [0,*π*).

Let *δ*(*x*,*y*,*z*) ≡ *n*(*x*,*y*,*z*) - 1 denote the object function, where *n*(*x*,*y*,*z*) is the object’s real-valued refractive index distribution. We will employ the notation *δ*(**r**_{2};*z*) ≡ *δ*(*x*,*y*,*z*), where **r**_{2} = (*x*,*y*), as a convenient description of a transverse slice (of constant *z*) of the 3D object function. For a sufficiently small object-to-detector distance *d* and a weakly absorbing object [16], the data function

satisfies the imaging model

$$g(x_r, z, \phi) = \nabla_{x_r,z}^{2}\, R\, \delta(\mathbf{r}_{2}; z), \tag{2}$$

where *R* denotes the 2D Radon transform acting on a plane of constant *z* of *δ*(**r**_{2};*z*) and ${\nabla}_{{x}_{r},z}^{2}\equiv \genfrac{}{}{0.1ex}{}{{\partial}^{2}}{\partial {x}_{r}^{2}}+\genfrac{}{}{0.1ex}{}{{\partial}^{2}}{\partial {z}^{2}}$. We consider that both *I*(*x*_{r},*z*,*ϕ*) and *I*_{0}(*x*_{r},*z*,*ϕ*) are measured, and therefore *g*(*x*_{r},*z*,*ϕ*) can be regarded as a known quantity. By use of the commutativity of the Radon transform and the Laplacian operator [18], it can be verified that

$$g(x_r, z, \phi) = R\, \nabla^{2} \delta(\mathbf{r}_{2}; z), \tag{3}$$

where ∇^{2} is the 3D Laplacian operator. Equation (3) represents the imaging model for boundary-enhanced PCT in its continuous form. The image reconstruction task in boundary-enhanced PCT is to determine an estimate of ∇^{2}*δ*(**r**_{2};*z*) from knowledge of *g*(*x*_{r},*z*,*ϕ*). When *g*(*x*_{r},*z*,*ϕ*) is measured at a large number of view angles *ϕ*, this can readily be accomplished, for example, by use of the 2D parallel-beam FBP algorithm [2,7,8]. However, in the case of few-view measurement data, analytic reconstruction methods are known to be ineffective and the use of iterative methods is warranted.

Note that the right-hand side of Eq. (3) corresponds to a stack of 2D Radon transforms of ∇^{2}*δ*(**r**_{2};*z*) along the *z*-axis, and the coordinate *z* can be interpreted as a parameter that specifies a transverse slice. This reflects the fact that the 3D imaging model can be described by a collection of 2D ones.

#### 2.2. Discrete form of imaging model

When a digital detector is employed, the measured intensity data correspond to an ordered collection of numbers rather than a function of a continuous variable. We will denote the discrete data function as

$$g[r,s,t] \equiv g(x_r, z, \phi)\,\big|_{\,x_r = r\Delta_d,\; z = s\Delta_d,\; \phi = t\Delta_\theta}, \tag{4}$$

where *r* and *s* are integer-valued detector-element indices and *t* is the tomographic view index. Here, ${\Delta}_{d}=\genfrac{}{}{0.1ex}{}{L}{N}$ denotes the detector-element dimension in a square detector array of dimension *L* × *L*, and *N* denotes the number of samples measured in each dimension. The quantity Δ_{θ} denotes the angular sampling interval between the uniformly distributed view angles. The reconstruction algorithms described below can be applied in the case of non-uniformly sampled measurement data as well. Although not indicated in Eq. (4), the measured discrete data will also be degraded by the averaging effects of the sampling aperture.

Because the reconstruction problem is inherently 2D, we will consider the problem of reconstructing a transverse slice of the object function located at *z* = *s*_{n}Δ_{d}, which corresponds to the position of a detector row indexed by *s* = *s*_{n}. Let the vector **g** ∈ ℝ^{M1} denote a lexicographically ordered representation of *g*[*r*,*s*_{n},*t*]. The dimension *M*_{1} is defined by the product of the number of detector row elements and the number of view angles.

Because we will be focusing on iterative reconstruction algorithms, we will also require a discrete representation of the object. An *M*_{2}-dimensional approximate representation of ∇^{2}*δ*(**r**_{2};*z* = *s*_{n}Δ_{d}) can be described as

$$\nabla^{2}\delta(\mathbf{r}_{2}; z = s_n \Delta_d) \approx \sum_{l=1}^{\sqrt{M_2}} \sum_{m=1}^{\sqrt{M_2}} b_z[l,m]\, \Psi_{l,m}(\mathbf{r}_{2}), \tag{5}$$

where, without loss of generality, √*M*_{2} is assumed to be an integer. In Eq. (5), the expansion coefficients *b*_{z}[*l*,*m*] represent the discrete image values and {Ψ_{l,m}(**r**_{2}) ∣ *l* = 1,⋯,√*M*_{2}, *m* = 1,⋯,√*M*_{2}} represent a collection of expansion functions. As discussed in Section 3.1, in this work we will adopt conventional image pixels as the choice for Ψ_{l,m}(**r**_{2}), and *b*_{z}[*l*,*m*] will represent the **L**_{2} inner product of Ψ_{l,m}(**r**_{2}) and ∇^{2}*δ*(**r**_{2};*z* = *s*_{n}Δ_{d}). However, it should be noted that other sets of expansion functions [19] could be employed to form a finite-dimensional approximate object representation. The accuracy of the reconstructed tomographic image will generally depend, in a complicated and object-dependent manner, on the choice of expansion functions and reconstruction algorithm [20].

Let **b** ∈ ℝ^{M2} denote a lexicographically ordered representation of the discrete image *b*_{z}[*l*,*m*]. The system of linear algebraic equations that represents the discrete imaging model can be expressed as

$$\mathbf{g} = \hat{R}\,\mathbf{b}, \tag{6}$$

where the *M*_{1} × *M*_{2} system matrix *R*̂ is a discrete representation of the 2D Radon transform operator *R* [21]. In practice, *R*̂ can and should be modified to account for the spatial resolution characteristics of the imaging system and/or other physical factors that are well-characterized.
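As a concrete illustration of this style of discretization, the sketch below builds a small, dense system matrix for a parallel-beam geometry using a simple pixel-driven, nearest-bin model. The function name and the discretization choices are illustrative assumptions, not the (unspecified) model of Ref. [21]; in practice a more careful ray-driven or footprint model would be used.

```python
import numpy as np

def radon_system_matrix(n, n_bins, angles):
    """Build a dense M1 x M2 system matrix for a 2D parallel-beam Radon
    transform on an n x n pixel grid (pixel-driven, nearest-bin model).

    Rows are ordered view-by-view: row t*n_bins + r holds the weights for
    detector bin r at view angle angles[t]."""
    # pixel-center coordinates, origin at the grid center
    c = np.arange(n) - (n - 1) / 2.0
    x, y = np.meshgrid(c, c)
    R_hat = np.zeros((len(angles) * n_bins, n * n))
    for t, phi in enumerate(angles):
        # rotated detector coordinate x_r = x cos(phi) + y sin(phi)
        xr = x * np.cos(phi) + y * np.sin(phi)
        # map x_r onto the nearest detector bin (bin width = pixel width)
        bins = np.clip(np.round(xr + (n_bins - 1) / 2.0).astype(int),
                       0, n_bins - 1)
        flat = bins.ravel()
        for j in range(n * n):
            R_hat[t * n_bins + flat[j], j] += 1.0
    return R_hat
```

With this convention, applying the matrix to a lexicographically ordered image vector yields the stacked few-view sinogram **g**.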

## 3. Few-view boundary-enhanced image reconstruction in PCT

The reconstruction task we consider is to determine an estimate of the boundary-enhanced image **b** from knowledge of few-view measurement data **g**. Because in this situation Eq. (6) is typically underdetermined, conventional reconstruction algorithms can produce significant image artifacts. To circumvent this, we will exploit the fact that **b** or its gradient can often be sparse. The problem of determining sparse solutions to underdetermined linear systems of equations has received much attention in recent years, in large part due to the emerging field of compressive sensing and related signal processing applications [15,22–26]. Inspired by these works, we have investigated the image reconstruction approaches described below.

#### 3.1. Object sparseness in boundary-enhanced PCT

A vector **b** is said to be sparse if it contains a relatively small number of non-zero elements. Sparseness has been exploited to effectively regularize the ill-conditioned or singular nature of certain classes of discrete linear inverse problems [27, 28]. In our study of boundary-enhanced PCT, sparsity will be exploited in two different ways.

In many cases, the boundary-enhanced image ∇^{2}*δ*(**r**_{2};*z*) will be naturally sparse; i.e., the number of non-zero terms in Eq. (5) will be much less than *M*_{2}. For example, when *δ*(**r**_{2};*z*) is piecewise constant and pixels are employed as the expansion functions, the non-zero terms in Eq. (5) correspond to locations of boundaries in *δ*(**r**_{2};*z*), while the expansion functions that correspond to uniform regions will be weighted by a zero value of *b*_{z}[*l*,*m*]. As demonstrated below, by reconstructing an estimate of ∇^{2}*δ*(**r**_{2};*z*) that is sparse, image artifacts due to data incompleteness can be effectively mitigated. This reconstruction strategy is mathematically formulated in Section 3.2.

It is interesting to note that, in many applications of image processing, a Laplacian operator is explicitly applied to an image in order to establish a sparse image representation [29], which can facilitate the solution of an inverse problem. In boundary-enhanced PCT, however, we observe that the Laplacian operator is implicitly applied by the wave-propagation physics. Accordingly, a sparse object representation is automatically embodied by the imaging model as described by Eq. (3) or (6).

Alternatively, data incompleteness can be mitigated by exploiting the sparseness of the gradient of an object rather than the object itself [15, 30–32]. If an object is sparse, its gradient image will also be sparse; however, the converse is not necessarily true. As demonstrated below, by reconstructing an estimate of ∇^{2}*δ*(**r**_{2};*z*) that possesses a sparse gradient, noise in the measurement data can be effectively mitigated. This reconstruction strategy is mathematically formulated in Section 3.3.

#### 3.2. Constrained, l_{0}-norm minimization

A sparse estimate of **b** can be obtained by solving the following optimization problem:

$$\mathbf{b}_{l_0} = \mathop{\arg\min}_{\mathbf{b}}\, \|\mathbf{b}\|_{0} \quad \text{subject to} \quad \|\hat{R}\mathbf{b} - \mathbf{g}\|_{2} \leq \varepsilon, \tag{7}$$

where **b**_{l0} represents the reconstructed image estimate, ∥**b**∥_{0} = lim_{p→0} ∑^{M}_{m=1} ∣*b*_{m}∣^{p} denotes the *l*_{0}-norm of **b** = (*b*_{1},*b*_{2},⋯,*b*_{M}), and the data error tolerance *ε* is a parameter of the optimization problem that needs to be specified. The optimization problem in Eq. (7) yields the image estimate with minimum *l*_{0}-norm, constrained such that its projection data are within a given *l*_{2}-distance *ε* of the measured projection data. As the *l*_{0}-norm presents a formidable computational challenge, a fruitful strategy has been to relax this norm to *l*_{1}-minimization [15,22], or to the non-convex *l*_{p}-norm with *p* < 1 [33].

For this work, we aim at solving Eq. (7) through a strategy similar to the iterative-hard-thresholding (IHT) algorithm by Blumensath and Davies [34], where the sparsity of the image **b** is fixed at *s*, the number of non-zero pixels. Consider the optimization problem:

$$\mathbf{b}_{s} = \mathop{\arg\min}_{\mathbf{b}}\, \|\hat{R}\mathbf{b} - \mathbf{g}\|_{2} \quad \text{subject to} \quad \|\mathbf{b}\|_{0} = s, \tag{8}$$

which finds the *s*-sparse image that minimizes the data error. Clearly, the data residual *ε*(*s*) = ∥*R*̂**b**_{s} - **g**∥_{2} will depend on *s*. When *s* is smaller than the actual number of non-zero pixels, then *ε*(*s*) ≫ 0. As *s* increases, *ε*(*s*) will decrease, eventually leveling off at a minimum value permitted by inherent inconsistency in the data. The desired image **b**_{s} occurs for a value of *s* where *ε*(*s*) levels off.
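The sparsity-enforcing operator and the *ε*(*s*) sweep described above can be sketched as follows. This is a minimal illustration; `residual_curve` and its `solve` argument are hypothetical helpers, not the authors' implementation.

```python
import numpy as np

def hard_threshold(b, s):
    """H_s operator: keep the s largest-magnitude elements of b, zero the rest."""
    out = np.zeros_like(b)
    if s > 0:
        keep = np.argsort(np.abs(b))[-s:]   # indices of the s largest |b_m|
        out[keep] = b[keep]
    return out

def residual_curve(R_hat, g, s_values, solve):
    """Sweep the sparsity level and record eps(s) = ||R_hat b_s - g||_2,
    where `solve` is any routine returning an s-sparse estimate b_s."""
    return [np.linalg.norm(R_hat @ solve(R_hat, g, s) - g) for s in s_values]
```

Plotting the returned residuals against `s_values` and locating the level-off point is the heuristic used to select the sparsity parameter.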

#### 3.3. Constrained, TV-norm minimization

Solving the optimization problem Eq. (8) for a series of *s*-values can effectively find sparse solutions, but it does nothing to control the noise level in the image. Our previous experience with TV-minimization methods in CT [23, 30, 31] has shown that they not only find images possessing a sparse spatial gradient but can also effectively denoise the image. In order to introduce some control over noise, one must first decide on a sparsity *s*^{*} of the solution as described above. Next, we consider the following TV-minimization problem:

$$\mathbf{b}_{TV} = \mathop{\arg\min}_{\mathbf{b}}\, \|\mathbf{b}\|_{TV} \quad \text{subject to} \quad \|\hat{R}\mathbf{b} - \mathbf{g}\|_{2} \leq \varepsilon^{\prime} \;\; \text{and} \;\; \|\mathbf{b}\|_{0} \leq s^{*}, \tag{9}$$

where *ε*^{′} ≥ *ε*(*s*^{*}). Here, ∥**b**∥_{TV} ≡ ∥∇**b**∥_{1} denotes the TV-norm of **b**, which is defined as the *l*_{1}-norm of the discrete gradient-image ∇**b**. The sparsity of the image is constrained to remain at or below *s*^{*}, and as *ε*^{′} increases the resulting image becomes increasingly denoised. Because boundary-enhanced images are often sparse, the sparsity constraint is expected to help mitigate artifacts due to data incompleteness. Unlike *s*^{*}, which depends only on the sparsity of the object, the optimal value of *ε*^{′} depends on many factors, such as data quality and object properties such as contrast and structural complexity. As a result, *ε*^{′} is an important parameter of the reconstruction algorithm that needs to be varied and optimized on a case-by-case basis, taking into account each particular imaging task.
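For concreteness, the TV-norm ∥∇**b**∥_{1} of a discrete image can be computed as below. This sketch uses the simple anisotropic forward-difference form; an isotropic variant would instead sum √(dx² + dy²) over pixels, and either may differ in detail from the discretization of Ref. [31].

```python
import numpy as np

def tv_norm(b):
    """||b||_TV = ||grad b||_1 for a 2D image array b, using an
    anisotropic forward-difference discrete gradient."""
    dx = np.diff(b, axis=1)          # horizontal finite differences
    dy = np.diff(b, axis=0)          # vertical finite differences
    return np.abs(dx).sum() + np.abs(dy).sum()
```

A constant image has zero TV-norm, while each unit-height edge contributes its length in pixels, which is why TV-minimization favors piecewise-constant solutions.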

## 4. Description of image-reconstruction algorithms

The IHT algorithm solves Eq. (7) by alternating between a gradient descent step on the data residual, ∥*R*̂**b** - **g**∥^{2}_{2}, and hard thresholding, keeping the *s* largest elements in the image. For our implementation, the gradient descent step is replaced by projection onto convex sets (POCS). For the problem at hand, we have found empirically that POCS is more efficient than using gradient descent in the IHT algorithm. On the other hand, we offer no proof of convergence for the present variation of IHT. While standard IHT is proven to converge under certain conditions on the isometry constants, it can be demonstrated that the isometry constants of the operator *R*̂ lie well outside the range for which convergence has been proven. It has, however, been noted in the literature that the derived restricted isometry properties, though sufficient, are far from being necessary conditions. For the IHT-POCS algorithm, detailed below, we have performed extensive testing with computer-simulated phantoms under difficult scanning conditions where only 25 projection views are used. Under conditions of ideal, noiseless data we were able to accurately recover several test phantoms of similar sparsity to the images shown in the results section. We tested robustness of the recovery by varying the starting image, and by designing difficult test phantoms in which a component **n** with the same sparsity, chosen to have a minimal value of ∥*R*̂**n**∥_{2}, is added to the test image. These simulation results give us confidence in using IHT-POCS for reconstructing data sets with 90 views.

We refer to the present IHT implementation as IHT-POCS. An example pseudo-code of IHT-POCS is:

The symbol := is used to represent replacement, i.e., the variable on the left is replaced by the quantity on the right-hand side. The parameter *β* is the relaxation parameter for the POCS algorithm on line 4. No relaxation is used here, hence it is set to one, but implementations of POCS often use a relaxation schedule for *β*. For example, one might use *β* = 1.0 · 0.99^{i}, where *i* indicates the iteration number. The size of the data **g** is *N*_{d}, and in line 4 each data measurement is indexed by *j*. The vector (**R**)_{j} is a row of the system matrix *R*̂ corresponding to the single data measurement *g*_{j}. The operator *H*_{s} at line 6 is responsible for enforcing *s*-sparseness; the *s* largest elements, in terms of absolute value, of its argument are kept, and all other elements are set to zero. The IHT-POCS algorithm was run to convergence, meaning that there was no longer any appreciable change in the image. This point generally occurred within 1000 iterations for the results shown below.
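A minimal Python sketch of the iteration just described, a sequential POCS (ART-style) sweep over the data followed by the *H*_{s} thresholding, might look as follows. The function name, fixed iteration count, and dense-matrix representation are simplifying assumptions rather than the authors' implementation.

```python
import numpy as np

def iht_pocs(R_hat, g, s, n_iter=500, beta=1.0):
    """IHT-POCS sketch: sweep a row-action POCS update over every data
    measurement g_j, then apply H_s (keep the s largest-magnitude pixels)."""
    b = np.zeros(R_hat.shape[1])
    row_norms = (R_hat ** 2).sum(axis=1)
    for _ in range(n_iter):
        for j in range(R_hat.shape[0]):
            if row_norms[j] > 0:
                # b := b + beta * (g_j - R_j . b) / |R_j|^2 * R_j
                b = b + beta * (g[j] - R_hat[j] @ b) / row_norms[j] * R_hat[j]
        # H_s: zero all but the s largest-magnitude elements
        keep = np.argsort(np.abs(b))[-s:]
        thresholded = np.zeros_like(b)
        thresholded[keep] = b[keep]
        b = thresholded
    return b
```

On a small, well-conditioned random system with consistent data, this iteration recovers an exactly *s*-sparse vector to high accuracy; for the ill-conditioned Radon operator the text's empirical caveats apply.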

To perform the TV-minimization in Eq. (9) we employ our ASD-POCS algorithm described in Ref. [31], but we alter the POCS portion of the algorithm to incorporate image sparseness. Our previous implementation of ASD-POCS included steps for enforcing image positivity. As this assumption does not apply here, the positivity-step is removed. Instead, we substitute, in place of the positivity-step, the *H*_{s} operator to enforce *s*-sparseness of the image. Strictly speaking, the *H*_{s} operator does not perform a projection onto a convex set, and as a result it is not likely that ASD-POCS converges mathematically to the solution of Eq. (9); however, we find empirically that the algorithm is effective, and optimality conditions of the resulting image can be checked. Further information on constrained-TV minimization and optimality conditions can be found in Ref. [31]. Here, we refer to this version of ASD-POCS as the IHT-POCS-TV algorithm.

## 5. Guidance from simulations in designing the image reconstruction algorithm framework

At first glance, the image-reconstruction problem posed by phase-contrast imaging seems to be an ideal application for recent compressive sensing (CS) algorithms, because the object function is dominated by edges and is therefore typically sparse. The application of CS methods, however, is complicated by two main factors: (1) the inherent ill-posedness of the few-view image reconstruction problem, and (2) the fact that CS theory deals with object representation using a finite expansion set. Specifically, on the second point, the underlying object function that we wish to recover is represented in a pixel expansion set, which can only approximately represent the object. The following set of simulations illustrates how we arrived at the algorithms described in the previous section. As the justification of the algorithms is based on simulation, we have performed numerous tests, most of which cannot be presented in this article. The simulations are performed with a phantom composed of randomly placed ellipses of high eccentricity, somewhat resembling, in terms of sparsity, the foam object scanned in the next section. The first set of simulations is performed on an idealization, where the phantom is first pixelized on a 512×512 grid prior to generating the projection data. The second set of simulations generates projection data from analytical ray-ellipse intersection formulas. As will be seen, the discretization of the image array involves a significant approximation, and its impact can be greater than that due to noise.

#### 5.1. Discrete, ellipse phantom

The first set of simulations involves the same scanning configuration used later with actual experimental data. The projection data were computed as a 2D discrete Radon transform acquired on a 2048-bin detector, and few-view reconstruction is studied with 90 projection views taken at equal intervals over 180°. The ellipse phantom shown in Fig. 2 is discretized on a 512×512 array. The resulting test image has a sparsity of 16257, or approximately 2^{14} pixels. We first investigate the restricted isometry property (RIP) for *R*̂ with the above system parameters, then demonstrate the exact recovery of this test image using IHT-POCS.

#### 5.1.1. The restricted isometry properties of the Radon transform

The RIP studies presented here have two purposes: (1) to put the present image-reconstruction problem in the context of state-of-the-art CS algorithms, and (2) to help generate difficult test phantoms with which to explore the robustness of the image-reconstruction algorithms. It is clear that solving Eq. (6) may admit a large number of solutions, but the null-space of *R*̂ may be empty when the set of images on which *R*̂ operates is restricted to only those with a sparsity *s*. In fact, the sparsity restriction will, in general, make *R*̂ closer to an isometry: an operator that does not change the size of **f**_{s}, where **f**_{s} represents an image of sparsity *s*. One of the central properties of an operator that CS makes use of is the isometry constant *δ*_{s}, which is a number for which the following inequalities hold

$$(1 - \delta_s)\,\|\mathbf{f}_s\|^{2} \leq \|\hat{R}\,\mathbf{f}_s\|^{2} \leq (1 + \delta_s)\,\|\mathbf{f}_s\|^{2} \tag{10}$$

for all images **f**_{s} [35]. A smaller *δ*_{s} leads to better recovery; a value of *δ*_{s} = 1 means that there is an *s*-sparse image in the null space and exact recovery is impossible. For proving convergence of algorithms, it is often important to consider isometry constants of multiples of *s*, such as *δ*_{2s}. This is because the difference of two *s*-sparse images will in general be 2*s*-sparse.

Two problems associated with practical use of the RIP are that isometry constants are not invariant to scalings of the system matrix *R*̂, and that finding isometry constants for all but certain classes of random matrices can be computationally prohibitive; one essentially has to search all *s*-sparse images. The search issue for the present problem can be simplified by using the fact that the Radon transform has the greatest difficulty in distinguishing neighboring pixels. To assess isometry constants for the Radon transform, we design a search using only neighboring pixels. The scaling issue can be handled by obtaining a distribution of *σ*_{s} ≡ (∥*R*̂**f**_{s}∥/∥**f**_{s}∥)^{2} and multiplying *R*̂ by a constant that yields the minimum *δ*_{s}, where the largest and smallest values straddle 1.0.

To find an estimate of *δ*_{s}, we start by assessing *δ*_{1}, which can be done simply by projecting all images with one non-zero pixel (the one-pixel images are normalized to unity so that the denominator in the definition of *σ*_{1} is not needed), finding the image that shrinks the most under *R*̂,

$$\mathbf{f}_{1}^{(\min)} = \mathop{\arg\min}_{\mathbf{p}}\, \|\hat{R}\,\mathbf{p}\|, \tag{11}$$

and the one that expands the most under *R*̂,

$$\mathbf{f}_{1}^{(\max)} = \mathop{\arg\max}_{\mathbf{p}}\, \|\hat{R}\,\mathbf{p}\|, \tag{12}$$

where **p** ranges over all unit, one-pixel images. Correspondingly, we have *σ*_{1}^{(min)} = ∥*R*̂**f**_{1}^{(min)}∥^{2} and *σ*_{1}^{(max)} = ∥*R*̂**f**_{1}^{(max)}∥^{2}. With the constant scaling, the isometry constant for *s* = 1 becomes *δ*_{1} = (*σ*_{1}^{(max)} - *σ*_{1}^{(min)})/(*σ*_{1}^{(max)} + *σ*_{1}^{(min)}).
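Since a unit one-pixel image simply picks out a single column of *R*̂, the exhaustive *s* = 1 search reduces to comparing squared column norms. A sketch, with an illustrative function name:

```python
import numpy as np

def delta_1(R_hat):
    """s = 1 isometry constant of R_hat after the optimal rescaling.

    Projecting a unit one-pixel image extracts one column of R_hat, so
    sigma_1 for every such image is a squared column norm.  Choosing the
    scaling c so that c*sigma_max = 1 + delta and c*sigma_min = 1 - delta
    makes the extreme values straddle 1 and minimizes delta_1."""
    sigma = (R_hat ** 2).sum(axis=0)   # sigma_1 for every one-pixel image
    s_min, s_max = sigma.min(), sigma.max()
    return (s_max - s_min) / (s_max + s_min)
```

An orthonormal matrix gives *δ*_{1} = 0, while strongly unequal column norms push *δ*_{1} toward 1.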

The result for *δ*_{1} is accurate because it is feasible to search all one-pixel images. The following iterative procedure, however, yields approximate estimates of *δ*_{s} that are less than or equal to the true values. Given **f**_{s-1}^{(min)}, we find **f**_{s}^{(min)} = cos *α* **f**_{s-1}^{(min)} + sin *α* **p** by solving:

$$\min_{\alpha,\,\mathbf{p}}\, \left\| \hat{R}\left( \cos\alpha\; \mathbf{f}_{s-1}^{(\min)} + \sin\alpha\; \mathbf{p} \right) \right\|, \tag{13}$$

where *α* varies in [0,2*π*) and **p** represents all unit, one-pixel images where the non-zero pixel is both located at a zero-valued pixel and adjacent to a non-zero pixel in **f**_{s-1}^{(min)}. Note that the search combination with the trigonometric functions automatically preserves normalization. The image **f**_{s}^{(max)} can be searched for in a similar manner, and the *δ*_{s} estimate is found from the corresponding values of *σ*_{s}^{(min)} and *σ*_{s}^{(max)}. The above procedure gives a lower bound on *δ*_{s}, as the search space is restricted and the one-parameter optimization in Eq. (13) is also limited; lower norms of *R*̂**f**_{s}^{(min)} can likely be found by allowing variation of the coefficients of all non-zero pixels in the test image.

For the present configuration with the 512×512 image, we carried out the *δ*_{s} estimation to *s* = 100. This is well short of the sparsity of the ellipse phantom, but far enough to make an important point. In Fig. 3, the estimated isometry constant is plotted as a function of sparsity. The obvious feature of this graph is that the isometry constants are large. An article presenting the CS algorithm GraDeS [36], similar to IHT, has a concise summary table of CS algorithms and isometry-constant values for which these algorithms are proven to converge to the exact solution under noiseless conditions. The largest of these constraints, for algorithms that can be applied to the present tomographic system, is *δ*_{2s} < 1/3. This value is exceeded already at very low sparsity. On the other hand, it is known that the RIP is a sufficient condition, not a necessary one, and it appears that the gap between the two is quite large. Thus, at this point in the development of CS algorithm applications to tomography, the only course of action is to perform exhaustive simulation. A side benefit of performing the RIP analysis above is that we can use the images **f**_{100}^{(min)} and **f**_{100}^{(max)} as test phantoms, shown in Fig. 4, to increase the difficulty of reconstruction.

#### 5.1.2. Accurate image-reconstruction with IHT-POCS, and algorithm stress tests

One of the basic tests of any CS algorithm involves exact image recovery from ideal data. We have performed multiple tests of this sort under many conditions and with many different phantoms; we show some of these results here. For the first set of tests the ellipse phantom is discretized on the 512×512 image array, and from this image the projection data are generated with exactly the same discrete projection operator as used in the reconstruction algorithms. The results of the IHT algorithm and the IHT-POCS algorithm are shown in Fig. 5. For both algorithms the hard-thresholding employed the exact sparsity of the phantom, because this test attempts to demonstrate recoverability of the exact image. Although the plots show convergence numbers up to 1000 iterations, the IHT-POCS and IHT algorithms were run to 1000 and 5000 iterations, respectively, to the point where the change in error went to zero. The plots in Fig. 5 show faster convergence for IHT-POCS than for IHT for this example. For the present POCS implementation, we employed sequential access of the data, but it may be possible to accelerate convergence further by adopting other data access strategies [37–39].

From Fig. 5 it is clear that image recovery by IHT-POCS is highly accurate even though, as was demonstrated above, the isometry constant for the present phantom’s sparsity is quite close to unity. As we are operating IHT well outside of its proven range of isometry constant values, we do not expect it to yield an accurate reconstruction. The IHT results show recovery of the ellipse boundaries, but there is significant speckle noise overlaying these structures. The performance of IHT-POCS is surprisingly good, and we conjecture that it may be possible to prove exact convergence for a large range of isometry constants or based on some other CS principle.

To test robustness of the IHT-POCS algorithm we varied the starting image, using uniform zero values and random values. In each case the image recovery was highly accurate, to numerical precision. We also performed reconstructions with IHT and IHT-POCS on the ellipse phantom in Fig. 4 with **f**_{100}^{(min)} and **f**_{100}^{(max)} added. The results in Fig. 6 again show a surprisingly good recovery with the IHT-POCS algorithm. Note that the blown-up region of the image, showing reconstructions of **f**_{100}^{(min)} at the lower left of each panel, shows remarkable recovery by IHT-POCS while the same component of the phantom is practically invisible to IHT.

Again, we stress here that these simulations are just that: simulations. We cannot generalize such conclusions to arbitrary system matrices, but we are confident from multiple simulations that, for discrete Radon transform data on discrete sparse objects, IHT-POCS is a useful and robust algorithm. The results of this section also point to a few interesting theoretical problems: is it possible to find exact isometry constants of the discrete Radon transform, and is it possible to prove exact recovery with IHT-POCS, and under what conditions?

#### 5.2. Continuous, ellipse phantom

This section describes some important issues in the application of CS to Radon transform inversion. The above example points out one of these issues. The highly accurate image reconstruction, above, was performed on data generated from the *discretized* ellipse phantom. When the ellipse phantom data are generated by applying the continuous operator *R*, even when no noise is introduced, the projection data are not consistent with the 512×512 image matrix. The magnitude of the resulting inconsistency can be larger than that due to signal noise. In this section, we discuss the impact of this type of inconsistency, the motivation of going to IHT-POCS-TV, and how we view the role of the image-reconstruction algorithm.

#### 5.2.1. Image reconstruction on the 512×512 grid

The results of applying IHT and IHT-POCS to data generated by the continuous Radon transform are shown in Fig. 7. Both algorithms are able to recover the basic structure of the phantom, but it is clear that there are artifacts in both images. The comparison between IHT and IHT-POCS also illustrates how each algorithm handles data corruption. The fact that the main structures are clearly visible is evidence of the robustness of each algorithm, and the lack of streak artifacts is noteworthy. The absence of streak artifacts clearly depends on the fact that we input the correct sparsity into each algorithm, so a good estimate of image sparsity is important for streak-artifact reduction and is part of the proposed algorithm.

The speckle noise present in the images, however, can obscure small structures. That each of the algorithms has speckle noise is not surprising, because there is no element in either IHT or IHT-POCS that controls variations amongst neighboring pixels. An obvious way to reduce this type of data inconsistency would be to decrease the pixel size in order to better approximate continuous objects. Another motivation of increasing the number of pixels is to fully utilize the resolution of the 2048-bin projections. But implementing this modification leads to fundamental difficulties requiring us to change the sparsification principle.

#### 5.2.2. Going to larger grid sizes

In order to reconstruct images of dimension 2048×2048 pixels, we employ the above-mentioned IHT-POCS-TV algorithm. Using a pixel basis, the problem with exploiting pixel-sparsity is that the number of non-zero pixels scales inversely with the square of the pixel width for 2D images. Indeed, embedding the ellipse phantom into a 2048×2048 array yields 260,513 non-zero pixels, roughly a factor of 16 increase over the 512×512 embedding. This number of non-zero pixels already exceeds the number of measurement rays, 90×2048=184,320, so exact recovery based on pixel sparsity is impossible. This issue of projection data arising from continuous objects complicates the oft-made claim in the CS literature about sub-Nyquist sampling [40].
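The quadratic scaling of pixel sparsity with grid dimension can be checked with a simple stand-in phantom. The sketch below is our own illustration (a filled disc rather than the paper's ellipse phantom; the radius fraction and grid sizes are arbitrary choices):

```python
import numpy as np

def disc_phantom(n, r_frac=0.3):
    # Filled disc on an n-by-n grid; a stand-in for the ellipse phantom.
    y, x = np.mgrid[:n, :n]
    c = (n - 1) / 2.0
    return ((x - c) ** 2 + (y - c) ** 2 <= (r_frac * n) ** 2).astype(float)

# Count non-zero pixels at the two grid sizes discussed in the text.
counts = {n: int(np.count_nonzero(disc_phantom(n))) for n in (512, 2048)}
print(counts)

# Going from 512 to 2048 quarters the pixel width, so the non-zero
# count grows by roughly (2048/512)^2 = 16 and quickly exceeds the
# 90 x 2048 = 184,320 measured rays.
print(counts[2048] / counts[512], counts[2048] > 90 * 2048)
```

The factor-of-16 growth mirrors the increase reported for the ellipse phantom, which is what rules out exact recovery from pixel sparsity alone at this resolution.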

Exploiting, instead, sparsity in the image gradient has three advantages: (1) the sparsity at each level of embedding is lower, at least for this type of piecewise-constant phantom; (2) the sparsity scaling with image dimension is linear in the pixel width instead of quadratic; and (3) algorithms for TV-minimization explicitly penalize large variations between neighboring pixels, so even when scanning conditions do not meet the conditions of exact recovery, images with reduced speckle noise can still be obtained.
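Advantages (1) and (2) can be verified numerically. Below is a sketch, again using a filled disc as a piecewise-constant stand-in for the phantom (our own illustration; the forward-difference gradient and isotropic TV definitions are standard choices, not taken from the paper):

```python
import numpy as np

def disc_phantom(n, r_frac=0.3):
    y, x = np.mgrid[:n, :n]
    c = (n - 1) / 2.0
    return ((x - c) ** 2 + (y - c) ** 2 <= (r_frac * n) ** 2).astype(float)

def grad_mag(img):
    # Forward-difference gradient magnitude; the last row/column is
    # replicated so the difference there is zero.
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return np.hypot(gx, gy)

def tv_seminorm(img):
    # Isotropic TV: the l1-norm of the gradient-magnitude image.
    return float(grad_mag(img).sum())

grad_counts = {}
for n in (512, 2048):
    img = disc_phantom(n)
    grad_counts[n] = int(np.count_nonzero(grad_mag(img)))
    print(n, grad_counts[n], tv_seminorm(img))

# The non-zero gradient count lives only on the boundary, so it grows
# ~4x from 512 to 2048 (linear in 1/pixel-width), versus ~16x for
# pixel sparsity.
```

For a piecewise-constant object the gradient is supported on boundaries (a 1D set), which is exactly why gradient sparsity scales so much more favorably than pixel sparsity.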

Our previous algorithm for constrained TV-minimization in Ref. [31] cannot be applied directly and must be modified. The previous application to CT image reconstruction included a positivity constraint. That constraint cannot be used here because the object function is the Laplacian of the refractive index distribution, which can take on negative values. If we can determine a ball-park estimate of object sparsity, we can effectively use a sparsity constraint with the TV-minimization to control streak artifacts in the image. Although we cannot expect IHT or IHT-POCS to recover an accurate 2048×2048 image, we may be able to use these algorithms to estimate image sparsity. Shown in Fig. 8 is the data error as a function of the sparsity parameter for IHT-POCS. For this image resolution, we know that the sparsity is slightly less than 2^{18}, and in the plot we see that it is around this value that there is a clear change in the slope of the data error as a function of image sparsity. Not only does this plot suggest a range of sparsities *s* to explore, but it also yields lower bounds on *ε* for IHT-POCS-TV, as seen from the optimization problem, Eq. (9), which the algorithm is designed to solve approximately.
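The sparsity-enforcing step that the IHT-style algorithms rely on, keeping only the *s* largest-magnitude components, can be sketched as follows (a minimal illustration; the function name is ours, not from the paper):

```python
import numpy as np

def hard_threshold(x, s):
    # Keep the s largest-magnitude entries of x; zero out the rest.
    # This is the thresholding step of IHT-style iterations.
    out = np.zeros_like(x)
    if s > 0:
        keep = np.argpartition(np.abs(x), -s)[-s:]
        out[keep] = x[keep]
    return out

# Unlike a positivity constraint, this operation is sign-agnostic, so it
# remains valid for object functions that take on negative values, such
# as the Laplacian of the refractive index distribution.
b = np.array([0.1, -3.0, 2.0, 0.05, -0.5])
print(hard_threshold(b, 2))  # keeps the entries -3.0 and 2.0
```

The sign-agnostic nature of this operation is what allows it to replace the positivity constraint used in the earlier CT work.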

The way we view the application of IHT-POCS-TV is that *s* and *ε* are parameters of the algorithm that need to be explored to find optimal settings for a particular application. The sparsity parameter *s* is used to control streak artifacts, while *ε* controls image roughness (larger *ε* allows greater data error to achieve lower image TV). The plot in Fig. 8 is used as a guide to help choose appropriate values of *ε* and *s*.
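The parameter sweep over *s* underlying Fig. 8 can be mimicked on a small synthetic system. In the toy sketch below (our own construction: a random Gaussian matrix stands in for the discrete projection operator, and the basic IHT iteration and dimensions are arbitrary choices), the data error stays large while *s* is below the true sparsity and drops sharply once *s* exceeds it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear system: m measurements of an n-pixel image of true sparsity k.
m, n, k = 60, 128, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
g = A @ x_true

step = 0.9 / np.linalg.norm(A, 2) ** 2  # safe gradient step size

def iht(s, n_iter=1000):
    # Basic IHT: gradient step on ||A x - g||^2, then keep s largest entries.
    x = np.zeros(n)
    for _ in range(n_iter):
        x = x + step * A.T @ (g - A @ x)
        keep = np.argpartition(np.abs(x), -s)[-s:]
        pruned = np.zeros(n)
        pruned[keep] = x[keep]
        x = pruned
    return x

residuals = {}
for s in (2, 5, 10, 20, n):
    residuals[s] = float(np.linalg.norm(A @ iht(s) - g))
    print(s, residuals[s])
```

Plotting the printed residuals against *s* produces the same qualitative knee as Fig. 8: the slope change near the true sparsity is what guides the choice of *s* and the corresponding lower bound on *ε*.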

## 6. Investigation of algorithm performance using experimental data

### 6.1. Experimental data

An experimental investigation of boundary-enhanced PCT was conducted at the 2-BM microtomography beamline at the Advanced Photon Source (Argonne National Laboratory). A detailed description of this imaging system has been published elsewhere [14]. A piece of foam was imaged with *λ* = 1.0 × 10^{-10} m, and untruncated PCT projection data were acquired at 1440 view angles. A few-view data set was obtained by keeping only 90 measurements that were evenly-spaced over the interval [0,180°). The detector contained 2048 × 2048 pixels with an effective pixel dimension of 6.5 microns.

### 6.2. Implementation of reconstruction algorithms

We numerically implemented the POCS, IHT-POCS, and IHT-POCS-TV reconstruction algorithms described above and employed them to reconstruct boundary-enhanced images from few-view measurement data. As a benchmark, we also reconstructed images from the complete data set containing 1440 view angles by use of the FBP algorithm. All images were reconstructed on a 2048 × 2048 Cartesian grid with pixel dimension of 6.5 microns.

The performances of the IHT-POCS and IHT-POCS-TV algorithms are strongly influenced by the choice of the threshold parameter *s*. We adopted a systematic method for determining an effective value for this parameter, as described in Section 5.2.2. Estimates of the image **b** were reconstructed assuming different values of *s*. From knowledge of the reconstructed **b**, the data residual was computed as *ε*_{b} = ∣*R̂***b** − **g**∣ and plotted as a function of *s*, as shown in Fig. 9. A relatively small value of *s* is sought, due to our knowledge that **b** is generally sparse. However, if the chosen value is too small, the data residual will sharply increase, indicating that **b** cannot be accurately represented by the number of pixels that were permitted to take on non-zero values. For the object considered in our experimental studies, Fig. 9 suggested that *s* = 2^{18} specified a reasonable compromise between object sparsity and representation error.

### 6.3. Reconstructed images

Reconstructed boundary-enhanced images corresponding to a transverse slice through the object are displayed in Fig. 10. The image reconstructed from the complete data by use of the FBP algorithm is shown in Fig. 10(a). The images reconstructed from the few-view data by use of the POCS, IHT-POCS, and IHT-POCS-TV algorithms are shown in Figs. 10(b)–10(d), respectively. In order to more easily visualize some image features and artifacts, 550 × 900 pixel regions positioned near the center of the images in Figs. 10(a)–10(d) are re-displayed in Figs. 11(a)–11(d). As expected [2, 7], the image reconstructed by use of the FBP algorithm [Fig. 10(a)] clearly reveals the locations of the boundaries in the foam sample. Due to the sharp boundaries in the object, certain streak-like artifacts can be observed in the zoomed-in image [Fig. 11(a)]. Similarly, the image reconstructed by use of the standard POCS algorithm [Figs. 10(b) and 11(b)] contains streak-like artifacts whose magnitudes are enhanced due to the use of few-view measurement data. The images reconstructed by use of the IHT-POCS algorithm [Figs. 10(c) and 11(c)] and the IHT-POCS-TV algorithm [Figs. 10(d) and 11(d)] do not contain the conspicuous artifacts produced by the POCS algorithm and contain most of the object features that are visible in the image reconstructed from the complete data by use of the FBP algorithm [Fig. 10(a)]. The images show some variation in the grey-level scalings, which is due to differences in how the discretization error is propagated through each algorithm. Although the image produced by the IHT-POCS algorithm clearly depicts the image boundaries, it does contain some perceptible numerical errors such as isolated bright spots in the background. Such artifacts are not present in the image produced by the IHT-POCS-TV algorithm because they would increase the value of the image TV, which the algorithm seeks to minimize.

Regions-of-interest within the reconstructed images corresponding to two additional transverse slices are shown in Figs. 12 and 13. The sizes of the displayed regions are 500×900 and 600×350 pixels, respectively. In each figure, the image in subfigure (a) was reconstructed by use of the FBP algorithm from the complete data, and the images in subfigures (b) and (c) were reconstructed by use of the IHT-POCS and IHT-POCS-TV algorithms from the few-view data. The boundary-enhanced images reconstructed by use of the IHT-POCS and IHT-POCS-TV algorithms from few-view data contain almost all of the object features that are visible in the image reconstructed by use of the FBP algorithm from the complete data. Moreover, as observed and explained above, streak-like and other image artifacts are very effectively mitigated by the IHT-POCS-TV algorithm.

## 7. Summary

We have proposed and investigated two iterative algorithms for few-view image reconstruction in boundary-enhanced PCT. The image-reconstruction algorithms are based on the premise that a boundary-enhanced PCT image is typically sparse or possesses a sparse gradient image. In order to exploit object sparseness, the reconstruction algorithms seek to minimize the *l*_{1}-norm or TV-norm of the image, subject to data consistency constraints. Both algorithms employ a thresholding procedure to promote sparse solutions. To our knowledge, this is the first attempt to mitigate data incompleteness in PCT by use of object sparsity constraints.

By use of experimental data, we demonstrated that the algorithms can reconstruct accurate boundary-enhanced images from highly incomplete few-view projection data. The proposed algorithms were also demonstrated to produce significantly weaker image artifacts than those produced by a conventional iterative image reconstruction algorithm. The proposed reconstruction algorithms will benefit applications of boundary-enhanced PCT by permitting a significant reduction in data-acquisition times and minimizing the exposure of the sample to damaging radiation. In future studies, the reconstruction algorithms can be investigated for use in boundary-enhanced PCT employing polychromatic sources [9] and in applications that involve other types of data incompleteness.

## Acknowledgments

MA was supported in part by National Science Foundation (NSF) Awards CBET-0546113 and CBET-0854430. EYS and XP were supported in part by NIH R01 grants CA120540 and EB000225, and by an Illinois Department of Public Health Ticket for the Cure Grant. EYS was also supported in part by a Career Development Award from NIH SPORE grant CA125183-03. The authors thank Dr. Francesco De Carlo for his assistance in acquiring the experimental data and Qiaofeng Xu for assistance in data pre-processing. Use of the Advanced Photon Source was supported by the U. S. Department of Energy, Office of Science, Office of Basic Energy Sciences, under Contract No. DE-AC02-06CH11357. Partial funding for this work was provided by NIH grants S10 RR021039 and P30 CA14599. The contents of this paper are solely the responsibility of the authors and do not necessarily represent the official views of any of the supporting organizations.

## References and links

**1. **A. Barty, K. Nugent, A. Roberts, and D. Paganin, “Quantitative phase tomography,” Opt. Commun. **175**(4), 329–336 (2000). [CrossRef]

**2. **T. Weitkamp, C. Rau, A. Snigirev, B. Brenner, T. Gunzler, M. Kuhlmann, and C. Schroer, “In-line phase contrast in synchrotron-radiation microradiography and tomography,” in Developments in X-ray Tomography III, Proceedings of the SPIE , **vol. 4503**, pp. 92–102 (2002).

**3. **P. Cloetens, R. Barrett, J.-P. Guigay, and M. Schlenker, “Phase objects in synchrotron radiation hard x-ray imaging,” J. Phys. D: Appl. Phys. **29**, 133–146 (1996). [CrossRef]

**4. **A. V. Bronnikov, “Theory of quantitative phase-contrast computed tomography,” J. Opt. Soc. Am. A **19**(3), 472–480 (2002). [CrossRef]

**5. **T. E. Gureyev, D. M. Paganin, G. R. Myers, Y. I. Nesterest, and S. W. Wilkins, “Phase-and-amplitude computer tomography,” Appl. Phys. Lett. **89**, 034102 (2006). [CrossRef]

**6. **M. A. Anastasio, D. Shi, Y. Huang, and G. Gbur, “Image Reconstruction in Spherical Wave Intensity Diffraction Tomography,” J. Opt. Soc. Am. A **22**, 2651–2661 (2005). [CrossRef]

**7. **P. Cloetens, M. Pateyron-Salome, J. Y. Buffiere, G. Peix, J. Baruchel, F. Peyrin, and M. Schlenker, “Observation of microstructure and damage in materials by phase sensitive radiography and tomography,” J. Appl. Phys. **81**, 5878–5886 (1997). [CrossRef]

**8. **P. Spanne, C. Raven, I. Snigireva, and A. Snigirev, “In-line holography and phase-contrast microtomography with high energy x-rays,” Phys. Med. Biol. **44**(3), 741–749 (1999). [CrossRef] [PubMed]

**9. **E. F. Donnelly, R. R. Price, K. G. Lewis, and D. R. Pickens, “Polychromatic phase-contrast computed tomography,” Med. Phys. **34**(8), 3165–3168 (2007). [CrossRef] [PubMed]

**10. **G. R. Myers, D. M. Paganin, T. Gureyev, and S. C. Mayo, “Phase-contrast tomography of single-material objects from few projections,” Opt. Express **16**, 908–919 (2008). [CrossRef] [PubMed]

**11. **G. R. Myers, T. E. Gureyev, D. M. Paganin, and S. C. Mayo, “The binary dissector: phase contrast tomography of two- and three-material objects from few projections,” Opt. Express **16**, 10736–10749 (2008). [CrossRef] [PubMed]

**12. **M. A. Anastasio, D. Shi, F. D. Carlo, and X. Pan, “Analytic image reconstruction in local phase-contrast tomography,” Phys. Med. Biol. **49**, 121–144 (2004). [CrossRef] [PubMed]

**13. **D. Shi, M. Anastasio, and X. Pan, “Reconstruction of refractive index discontinuities from truncated phase-contrast tomography projections,” Appl. Phys. Lett. **86**, 034102 (2005). [CrossRef]

**14. **Y. Wang, F. D. Carlo, D. Mancini, I. McNulty, B. Tieman, J. Bresnahan, I. Foster, J. Insley, P. Lane, G. von Laszewski, C. Kesselman, M. Su, and M. Thiebaux, “A high-throughput x-ray microtomography system at the Advanced Photon Source,” Rev. Sci. Instrum. **72**, 2062–2068 (2001). [CrossRef]

**15. **E. Candes, J. Romberg, and T. Tao, “Exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory **52**, 3526–3530 (2006). [CrossRef]

**16. **A. Pogany, D. Gao, and S. W. Wilkins, “Contrast and resolution in imaging with a microfocus x-ray source,” Rev. Sci. Instrum. **68**, 2774–2782 (1997). [CrossRef]

**17. **T. E. Gureyev and S. W. Wilkins, “On x-ray phase imaging with a point source,” J. Opt. Soc. Am. A **15**(3), 579–585 (1998). [CrossRef]

**18. **F. Natterer, *The Mathematics of Computerized Tomography* (Wiley, New York, 1986).

**19. **M. Liebling, T. Blu, and M. Unser, “Fresnelets: New Multiresolution Wavelet Bases for Digital Holography,” IEEE Trans. Image Process. **12**(1), 29–43 (2003). [CrossRef]

**20. **H. Barrett and K. Myers, *Foundations of Image Science* (Wiley Series in Pure and Applied Optics, 2004).

**21. **A. Kak and M. Slaney, *Principles of Computerized Tomographic Imaging* (IEEE Press, 1988).

**22. **J. Tropp, “Just relax: Convex programming methods for identifying sparse signals,” IEEE Trans. Info. Theory **51**, 1030–1051 (2006). [CrossRef]

**23. **E. Y. Sidky, R. Chartrand, and X. Pan, “Image reconstruction from few views by non-convex optimization,” IEEE Nuc. Sci. Conf. Rec. **5**, 3526–3530 (2007).

**24. **R. Baraniuk, “Compressive Sensing,” IEEE Sig. Process. Mag. **24**, 118–121 (2007). [CrossRef]

**25. **M. A. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems,” IEEE J. Sel. Top. Sig. Process. **1**, 586–598 (2007). [CrossRef]

**26. **M. Li, H. Yang, and H. Kudo, “An accurate iterative reconstruction algorithm for sparse objects: Application to 3D blood vessel reconstruction from a limited number of projections,” Phys. Med. Biol. **47**, 2599–2609 (2002). [CrossRef] [PubMed]

**27. **S. Chen, D. Donoho, and M. Saunders, “Atomic decomposition by basis pursuit,” SIAM J. Sci. Comput. **20**, 33–61 (1998). [CrossRef]

**28. **F. Santosa and W. Symes, “Linear inversion of band-limited reflection histograms,” SIAM J. Sci. Stat. Comput. **7**, 1307–1330 (1986). [CrossRef]

**29. **W. Souidene, A. Aissa-El-Bey, K. Abed-Meraim, and A. Beghdadi, “Blind Image Separation using Sparse Representation,” in ICIP 2007. IEEE International Conference on Image Processing, **vol. 3**, pp. III -125–III -128 (2007).

**30. **E. Y. Sidky, C.-M. Kao, and X. Pan, “Accurate image reconstruction from few-views and limited-angle data in divergent-beam CT,” J. X-ray Sci. Tech. **14**, 119–139 (2006).

**31. **E. Y. Sidky and X. C. Pan, “Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization,” Phys. Med. Biol. **53**, 4777–4807 (2008). [CrossRef] [PubMed]

**32. **G.-H. Chen, J. Tang, and S. Leng, “Prior image constrained compressed sensing (PICCS): A method to accurately reconstruct dynamic CT images from highly undersampled projection data sets,” Med. Phys. **35**(2), 660–663 (2008). [CrossRef] [PubMed]

**33. **R. Chartrand, “Exact reconstruction of sparse signals via nonconvex minimization,” IEEE Sig. Process. Lett. **14**, 707–710 (2007). [CrossRef]

**34. **T. Blumensath and M. Davies, “Iterative hard thresholding for compressed sensing,” Applied and Computational Harmonic Analysis **27**, 265–274 (2009). [CrossRef]

**35. **E. J. Candès and M. B. Wakin, “An Introduction To Compressive Sampling,” IEEE Sig. Proc. Mag. **25**, 21–30 (2008). [CrossRef]

**36. **R. Garg and K. Khandekar, “Gradient Descent with Sparsification: An iterative algorithm for sparse recovery with restricted isometry property,” Proceedings of the 26th Annual International Conference on Machine Learning **382**, 337–344 (2009).

**37. **C. Hamaker and D. C. Solmon, “Angles between null spaces of x-rays,” J. Math. An. App. **62**, 1–23 (1978). [CrossRef]

**38. **G. T. Herman and L. B. Meyer, “Algebraic reconstruction techniques can be made computationally efficient,” IEEE Trans. Med. Imag. **12**, 600–609 (1993). [CrossRef]

**39. **H. Q. Guan and R. Gordon, “A projection access order for speedy convergence of ART (algebraic reconstruction technique) - a multilevel scheme for computed-tomography,” Phys. Med. Biol. **39**, 2005–2022 (1994). [CrossRef] [PubMed]

**40. **X. Pan, E. Y. Sidky, and M. Vannier, “Why do commercial CT scanners still employ traditional, filtered back-projection for image reconstruction?” Inv. Prob. **25**, 123009 (2009). [CrossRef]