
Constrained regularized reconstruction of X-ray-DPCI tomograms with weighted-norm

Open Access

Abstract

In this paper, we introduce a new reconstruction algorithm for X-ray differential phase-contrast imaging (DPCI). Our approach is based on 1) a variational formulation with a weighted data term and 2) a variable-splitting scheme that allows for fast convergence while reducing reconstruction artifacts. In order to improve the quality of the reconstruction, we take advantage of higher-order total-variation regularization. In addition, prior information on the support and positivity of the refractive index is taken into account, which yields a significant improvement. We test our method in two reconstruction experiments involving real data; the results demonstrate its potential for in-vivo and medical imaging.

© 2013 Optical Society of America

1. Introduction

X-ray differential phase-contrast imaging (DPCI) is a tomographic technique that was first proposed by David et al. [1] and Momose et al. [2]. Among its advantages are its compatibility with regular laboratory X-ray sources and its high sensitivity.

The data provided by DPCI corresponds to the first derivative of the Radon transform of the refractive index of a sample. Thus, in practical applications, the common reconstruction scheme for DPCI is based on a variant of the filtered back-projection (FBP) algorithm. While FBP is a fast (non-iterative) method, it typically requires a large number of projections to achieve a good reconstruction quality [3]. This implies long exposure times which could damage the specimen.

Recently, several authors have proposed iterative techniques that exploit prior knowledge on the specimen to significantly reduce the number of required projections [4–7]. Their approaches are all based on a penalized maximum-likelihood formulation with a standard ℓ2-norm data-fidelity term. In this paper, we aim at further reducing the number of projections by proposing an improved iterative reconstruction algorithm for DPCI. The contributions of this paper are summarized as follows:

  1. Formulation of the reconstruction as a variational problem using a weighted norm for the data term, which ensures that the iteration matrix is well conditioned and speeds up the algorithm considerably.
  2. Use of a non-quadratic ℓ1 regularization that consists either of total variation (TV) for piecewise-constant images or of a higher-order extension that is better matched to biological specimens.
  3. Inclusion of support and positivity constraints in the reconstruction algorithm.
  4. Design of a novel variable-splitting scheme for the constrained optimization problem, which improves the reconstruction quality compared to [4]. As a result, the last step of every iteration is a denoising operation, which is beneficial for suppressing artifacts. Furthermore, this splitting is inherently matched to our preconditioner, leading to a fast-converging numerical scheme.
Finally, we conduct experiments on real data to evaluate the proposed method. A preliminary version of this paper was presented at ISBI 2013 [8]. The only overlap is the use of a weighted norm (item 1); the presented algorithm and the other refinements (items 2–4) are specific to this paper.

The remainder of the paper is organized as follows: In Section 2, we review the continuous-domain formulation of the DPCI forward model and state an important relationship between the reconstruction and the measurements. In Section 3, we describe the discretization scheme of the forward model, propose a novel formulation of the reconstruction problem, and discuss the details of the algorithm. We evaluate the proposed reconstruction scheme in two experiments involving real data in Section 4. We summarize and conclude the paper in Section 5.

2. Continuous-domain forward model

Let f denote the 2D distribution of the refractive index of the object. In DPCI, the measurement g is the first derivative of the Radon transform (FDRT) of this function, with

$$ g(y,\theta) = \mathcal{R}^{(1)}\{f\}(y,\theta) = \frac{\partial\,\mathcal{R}\{f\}(y,\theta)}{\partial y}, \tag{1} $$
where $\mathcal{R}\{f\}(y,\theta) = \int f(y\mathbf{u}_\theta + t\mathbf{v}_\theta)\,\mathrm{d}t$ and $(\mathbf{u}_\theta, \mathbf{v}_\theta)$ are orthonormal vectors that form an angle θ with some reference coordinate system. An important property of the continuous-domain FDRT is that it can be inverted formally using a 1D convolution operator followed by the adjoint of the FDRT. This leads to
$$ f = \mathcal{R}^{(1)*}\{h *_y g\}. \tag{2} $$
Here, $*_y$ denotes a 1D convolution with respect to the variable y. The frequency response of the convolution kernel h is given by $\hat{h}(\omega) = 1/|\omega|$.
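
To see why this kernel inverts the FDRT (a short justification added for completeness; normalization constants are omitted), one can move the derivatives onto the kernel:

$$ \mathcal{R}^{(1)*}\{h *_y \mathcal{R}^{(1)}f\} = \mathcal{R}^{*}\{-\partial_y (h *_y \partial_y \mathcal{R}f)\} = \mathcal{R}^{*}\{(-\partial_y^2 h) *_y \mathcal{R}f\}, $$

and since $-\partial_y^2 h$ has the frequency response $-(\mathrm{i}\omega)^2\,\hat{h}(\omega) = \omega^2/|\omega| = |\omega|$, the right-hand side is the classical ramp-filtered back-projection of $\mathcal{R}f$, which recovers f up to a constant.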

3. Model discretization and image reconstruction

3.1. Model discretization

In order to discretize the forward model Eq. (1), we expand f as

$$ f(\mathbf{x}) = \sum_{\mathbf{k}} c_{\mathbf{k}}\,\beta^{n}(\mathbf{x}-\mathbf{k}), \tag{3} $$
where βn is a tensor-product polynomial B-spline of degree n. In practice, the domain of f is bounded. Therefore, we are only interested in a finite number of expansion coefficients ck, which we gather in a vector c.

The measurement g is only known at discrete locations and we collect the corresponding values in a vector g. Then, in the absence of noise, Eq. (1) implies the linear relationship g = Hc, where H is a matrix that represents the discretized FDRT in the B-spline basis [4].
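
For illustration, the following NumPy sketch builds a simplified stand-in for the forward model g = Hc: the object is represented on a pixel grid (rather than in the B-spline basis of [4]) and the FDRT is approximated by a finite difference of the Radon transform along the detector axis. The helper name fdrt and the toy phantom are ours, not part of the original implementation.

    # Simplified stand-in for the DPCI forward model (pixel basis, finite differences).
    import numpy as np
    from skimage.transform import radon

    def fdrt(image, thetas):
        """Approximate first derivative of the Radon transform (FDRT)."""
        sino = radon(image, theta=thetas, circle=True)  # rows: detector position y
        return np.gradient(sino, axis=0)                # d/dy along the detector axis

    thetas = np.linspace(0.0, 180.0, 72, endpoint=False)  # 72 projection angles
    phantom = np.zeros((128, 128))
    phantom[40:90, 50:80] = 1.0                           # toy piecewise-constant object
    g = fdrt(phantom, thetas)                             # sinogram derivative, shape (128, 72)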

3.2. Image reconstruction

We formulate the reconstruction as a constrained optimization problem with a generalized weighted ℓ2-norm data term. Specifically, we aim at finding the vector c0 such that

$$ \mathbf{c}_0 = \arg\min_{\mathbf{c}\in\mathcal{C}} \left\{ J(\mathbf{c}) \triangleq \frac{1}{2}\|\mathbf{H}\mathbf{c}-\mathbf{g}\|_{\mathbf{W}}^{2} + \lambda_1 \Psi_1(\mathbf{c}) + \lambda_2 \Psi_2(\mathbf{c}) \right\}, \tag{4} $$
where g is the (M × 1) data vector, H is the (M × N) forward-projection matrix, $\|\cdot\|_{\mathbf{W}}^{2}$ is the weighted norm defined by $\|\mathbf{x}\|_{\mathbf{W}}^{2} = \langle \mathbf{W}\mathbf{x}, \mathbf{x}\rangle$, and 𝒞 is a convex set that enforces the support and positivity constraints. Since reconstruction from a limited number of projections is an ill-posed problem, we introduce regularization terms to take advantage of prior information. The regularizers Ψ1(c) and Ψ2(c) are smooth and non-smooth, respectively. The parameters λ1, λ2 ∈ ℝ control the strength of the regularization.

For rapid convergence, we propose to use a weighting matrix W that is the discrete counterpart of the convolution operator h in Eq. (2), with a slight modification of the frequency-domain singularity at zero: we replace the frequency response by 1/(|ω| + β), where β is an appropriate positive parameter. The resulting operator is positive-definite.
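
Assuming that W is applied independently to each projection as a 1D filter along the detector axis (which is how we read the construction above), a minimal sketch is

    # Weighting operator W: 1D filtering of each projection with response 1/(|omega|+beta).
    # The function name apply_W and the default beta are illustrative choices.
    import numpy as np

    def apply_W(sino, beta=0.1):
        n_det = sino.shape[0]
        omega = 2.0 * np.pi * np.fft.fftfreq(n_det)   # detector-axis frequencies
        w_hat = 1.0 / (np.abs(omega) + beta)          # regularized 1/|omega| response
        return np.real(np.fft.ifft(w_hat[:, None] * np.fft.fft(sino, axis=0), axis=0))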

We solve the nonlinear regularized problem by defining an auxiliary variable u and using an augmented-Lagrangian (AL) scheme.

This, in turn, is equivalent to finding a critical point of the augmented Lagrangian

$$ \mathcal{L}_\mu(\mathbf{c},\mathbf{u},\boldsymbol{\alpha}) = \frac{1}{2}\|\mathbf{H}\mathbf{u}-\mathbf{g}\|_{\mathbf{W}}^{2} + \lambda_1\Psi_1(\mathbf{u}) + \lambda_2\Psi_2(\mathbf{c}) + \boldsymbol{\alpha}^{T}(\mathbf{u}-\mathbf{c}) + \frac{\mu}{2}\|\mathbf{u}-\mathbf{c}\|_2^2, \tag{5} $$
where α is a vector of Lagrange multipliers that imposes the constraint u = c. The classical AL scheme alternates between a joint minimization step and an update step, so that
$$ \left\{ \begin{array}{l} (\mathbf{c}^{k+1},\mathbf{u}^{k+1}) \leftarrow \arg\min_{\mathbf{c}\in\mathcal{C},\,\mathbf{u}} \mathcal{L}_\mu(\mathbf{c},\mathbf{u},\boldsymbol{\alpha}^{k}) \\ \boldsymbol{\alpha}^{k+1} \leftarrow \boldsymbol{\alpha}^{k} + \mu(\mathbf{u}^{k+1}-\mathbf{c}^{k+1}). \end{array} \right. \tag{6} $$
Moreover, we use the alternating direction method of multipliers (ADMM) [9] to separate the joint minimization into a succession of simpler partial problems:
$$ \left\{ \begin{array}{ll} \mathbf{u}^{k+1} \leftarrow \arg\min_{\mathbf{u}} \mathcal{L}_\mu(\mathbf{c}^{k},\mathbf{u},\boldsymbol{\alpha}^{k}) & \text{(Step 1)} \\ \mathbf{c}^{k+1} \leftarrow \arg\min_{\mathbf{c}\in\mathcal{C}} \mathcal{L}_\mu(\mathbf{c},\mathbf{u}^{k+1},\boldsymbol{\alpha}^{k}) & \text{(Step 2)} \\ \boldsymbol{\alpha}^{k+1} \leftarrow \boldsymbol{\alpha}^{k} + \mu(\mathbf{u}^{k+1}-\mathbf{c}^{k+1}). & \text{(Step 3)} \end{array} \right. \tag{7} $$
Since the zero frequency is in the null space of the forward operator, we use the Tikhonov regularization term $\Psi_1(\mathbf{u}) = \tfrac{1}{2}\|\mathbf{u}\|_2^2$.

In Step 1, c^k and α^k are fixed; therefore, ℒ_μ(c^k, u, α^k) is a quadratic function of u whose gradient is

$$ \nabla_{\mathbf{u}}\mathcal{L}_\mu(\mathbf{c}^{k},\mathbf{u},\boldsymbol{\alpha}^{k}) = \left(\mathbf{H}^{T}\mathbf{W}\mathbf{H} + (\mu+\lambda_1)\mathbf{I}\right)\mathbf{u} - \left(\mathbf{H}^{T}\mathbf{W}\mathbf{g} - (\boldsymbol{\alpha}^{k} - \mu\mathbf{c}^{k})\right). \tag{8} $$
We use the conjugate-gradient (CG) method to solve this step. Since the condition number of the matrix H^T W H + (μ + λ1)I is quite small, the corresponding iterative algorithm converges rapidly.
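
A minimal sketch of this step, assuming matrix-free callables H, Ht, and W for the forward operator, its adjoint, and the weighting (names and signatures are ours), is

    # Step 1: solve (H^T W H + (mu+lambda1) I) u = H^T W g - (alpha - mu c) by warm-started CG.
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, cg

    def admm_step1(H, Ht, W, g, c_k, alpha_k, u_warm, mu=1.0, lam1=1e-5, n_cg=20):
        N = c_k.size
        A = LinearOperator((N, N), matvec=lambda u: Ht(W(H(u))) + (mu + lam1) * u)
        b = Ht(W(g)) - (alpha_k - mu * c_k)            # right-hand side from Eq. (8)
        u_next, _ = cg(A, b, x0=u_warm, maxiter=n_cg)  # warm initialization
        return u_next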

Step 2 of ADMM, which minimizes ℒ_μ(c, u^{k+1}, α^k) with respect to c, is the constrained denoising problem

$$ \arg\min_{\mathbf{c}\in\mathcal{C}} \mathcal{L}_\mu(\mathbf{c},\mathbf{u}^{k+1},\boldsymbol{\alpha}^{k}) = \arg\min_{\mathbf{c}\in\mathcal{C}} \left\{ \boldsymbol{\alpha}^{k\,T}(\mathbf{u}^{k+1}-\mathbf{c}) + \frac{\mu}{2}\|\mathbf{u}^{k+1}-\mathbf{c}\|_2^2 + \lambda_2\Psi_2(\mathbf{c}) \right\} = \arg\min_{\mathbf{c}\in\mathcal{C}} \left\{ \frac{1}{2}\left\|\mathbf{u}^{k+1}+\frac{\boldsymbol{\alpha}^{k}}{\mu}-\mathbf{c}\right\|_2^2 + \frac{\lambda_2}{\mu}\Psi_2(\mathbf{c}) \right\}. \tag{9} $$
The common expression for the regularizer is
$$ \Psi_2(\mathbf{c}) = \|\mathbf{R}\mathbf{c}\|, \tag{10} $$
where ‖·‖ is a non-quadratic norm and R : ℝ^N → ℝ^(N×K) is the regularization operator (e.g., the gradient with K = 2 or the Hessian with K = 2 × 2). For the identity regularization operator, R = I, Eq. (9) typically admits a direct threshold-based solution.
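
As one concrete instance (our simplification, not a result from the paper), take R = I, ‖·‖ the ℓ1 norm, and 𝒞 the positive orthant; the solution of Eq. (9) then reduces to component-wise soft-thresholding followed by clipping to nonnegative values, which in this case coincides with max(z − λ, 0):

    import numpy as np

    def prox_l1_positive(z, thresh):
        c = np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)  # soft-thresholding
        return np.maximum(c, 0.0)                             # positivity constraint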

For a general regularization operator, we aim at solving the denoising problem, which is equivalent to evaluating the proximal map

$$ \operatorname{prox}_{\|\mathbf{R}\cdot\|,\lambda,\mathcal{P}_{\mathcal{C}}}(\mathbf{z}) = \arg\min_{\mathbf{c}\in\mathcal{C}} \left\{ \frac{1}{2}\|\mathbf{z}-\mathbf{c}\|_2^2 + \lambda\|\mathbf{R}\mathbf{c}\| \right\}, \tag{11} $$
where 𝒫_𝒞 is the convex projection that corresponds to the constraint. In order to find the solution of Eq. (11), we use Fenchel duality to rewrite the regularization term as
$$ \|\mathbf{R}\mathbf{c}\| = \max_{\mathbf{p}\in\mathcal{B}} \langle \mathbf{R}^{T}\mathbf{p}, \mathbf{c}\rangle, \tag{12} $$
where R^T : ℝ^(N×K) → ℝ^N is the adjoint of the operator R, p ∈ ℝ^(N×K), and ℬ := {p ∈ ℝ^(N×K) : ‖p‖_* ≤ 1} with ‖·‖_* the dual norm.

It can be shown that the solution of Eq. (11) is 𝒫_𝒞(z − λR^T p*), where

$$ \mathbf{p}^{*} = \arg\min_{\mathbf{p}\in\mathcal{B}} f(\mathbf{p}), \tag{13} $$
with ∇f(p) = −λR𝒫_𝒞(z − λR^T p). We apply the fast iterative shrinkage-thresholding algorithm (FISTA) [10] to solve Eq. (13). The step size is constrained by the Lipschitz constant L of ∇f(p), which depends on the regularization operator R. The other important component is the orthogonal projection onto the ball ℬ specified by the chosen norm; we denote it by 𝒫_ℬ. Algorithm 1 describes the denoising algorithm.


Algorithm 1: Denoising algorithm
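
A minimal sketch of this dual FISTA denoiser is given below, under the assumption that callables R, Rt (the regularization operator and its adjoint), proj_C (projection onto 𝒞), and proj_B (projection onto ℬ) are available; the names are illustrative, and the fixed step 1/L is the standard FISTA choice (the paper uses the more conservative τ given in Section 3.3).

    import numpy as np

    def dual_fista_denoise(z, lam, R, Rt, proj_C, proj_B, L, n_iter=50):
        p = np.zeros_like(R(z))
        q = p.copy()
        t = 1.0
        for _ in range(n_iter):
            grad_f = -lam * R(proj_C(z - lam * Rt(q)))     # gradient of the dual objective
            p_new = proj_B(q - grad_f / L)                 # projected gradient step
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            q = p_new + ((t - 1.0) / t_new) * (p_new - p)  # FISTA momentum
            p, t = p_new, t_new
        return proj_C(z - lam * Rt(p))                     # P_C(z - lam R^T p*)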

The benefits of the proposed splitting are: 1) it transforms a complex reconstruction problem into a sequence of simpler optimizations in which the constraint is applied as a simple projection at each iteration of the denoising step; 2) any regularization term can be handled, provided its corresponding denoising function is known; 3) the output of the algorithm is the solution of the denoising step, which results in an improved quality of reconstruction.

The reconstruction method is summarized in Algorithm 2. There, the starting point of each inner CG run is the outcome of the previous one, a strategy known as warm initialization.


Algorithm 2: DPCI-constrained regularized reconstruction with weighted norm (CRWN).
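
To make the structure of the method concrete, a schematic rendering of the outer loop (not the exact pseudocode of the table) could look as follows; step1 and denoise stand for the CG solver and the constrained denoiser sketched above and are assumed callables.

    import numpy as np

    def crwn_reconstruct(step1, denoise, N, n_outer=50, mu=1.0, lam2=1e-4):
        c = np.zeros(N); u = np.zeros(N); alpha = np.zeros(N)
        for _ in range(n_outer):
            u = step1(c, alpha, u_warm=u)           # Step 1: warm-started CG solve
            c = denoise(u + alpha / mu, lam2 / mu)  # Step 2: constrained denoising
            alpha = alpha + mu * (u - c)            # Step 3: multiplier update
        return c                                    # output of the final denoising step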

As for the regularization, we consider two distinct options:

  • 1) Our first option is a total-variation (TV) regularization term that enhances the edges in the reconstructed image. We therefore set
    $$ \Psi_2(\mathbf{c}) = \|\mathbf{L}\mathbf{c}\|_{1,1} \tag{14} $$
    with $\|\mathbf{L}\mathbf{c}\|_{1,1} = \sum_i \|\{\mathbf{L}\mathbf{c}\}_i\|_1$, where the sum runs over all B-spline coefficients and $\{\mathbf{L}\mathbf{c}\}_i \in \mathbb{R}^2$ is the gradient vector of the image at position i. The discrete gradient operator L : ℝ^N → ℝ^(N×2) is computed according to Proposition 2 in [4].

    Here, the regularization operator is the discrete gradient and the mixed ℓ1,1 norm is chosen as the potential function. As the dual norm of the ℓ1 norm is the ℓ∞ norm, the dual ball is defined as

    $$ \mathcal{B}_{\infty,\infty} = \left\{ \mathbf{p} = [\mathbf{p}_1^{T}, \mathbf{p}_2^{T}, \ldots, \mathbf{p}_N^{T}]^{T} \in \mathbb{R}^{N\times 2} : \|\mathbf{p}_i\|_{\infty} \le 1,\; i = 1,2,\ldots,N \right\}. \tag{15} $$
    Therefore, the orthogonal projection of $\mathbf{y} = [\mathbf{y}_1^{T}, \mathbf{y}_2^{T}, \ldots, \mathbf{y}_N^{T}]^{T} \in \mathbb{R}^{N\times 2}$ onto this ball is $\tilde{\mathbf{y}} = \mathcal{P}_{\mathcal{B}_{\infty,\infty}}(\mathbf{y})$ with
    $$ [\tilde{\mathbf{y}}_i]_j = \mathrm{sgn}\!\left([\mathbf{y}_i]_j\right)\,\min\!\left(\left|[\mathbf{y}_i]_j\right|, 1\right), \qquad i = 1,2,\ldots,N,\; j = 1,2, \tag{16} $$
    where [·]_j is the j-th entry of the corresponding vector and $\tilde{\mathbf{y}} = [\tilde{\mathbf{y}}_1^{T}, \tilde{\mathbf{y}}_2^{T}, \ldots, \tilde{\mathbf{y}}_N^{T}]^{T}$. This regularization is well matched to piecewise-constant images. (A code sketch of this projection, together with the corresponding projection for the Hessian case of the second option, is given after this list.)

  • 2) Because biological and medical specimens consist mostly of filament-like and otherwise complicated structures, we also investigate higher-order extensions of total variation. As our second option, we apply Hessian Schatten-norm (HS) regularization. It eliminates the staircase effect of TV regularization and yields piecewise-smooth variations of intensity in the reconstructed image. We set
    $$ \Psi_2(\mathbf{c}) = \|\mathbf{H}\mathbf{c}\|_{1,\mathcal{S}_1}, \tag{17} $$
    where H : ℝ^N → ℝ^(N×2×2) is the discrete Hessian operator and $\|\mathbf{H}\mathbf{c}\|_{1,\mathcal{S}_1}$ is the mixed ℓ1–Schatten-1 (nuclear) norm. It can be computed as $\|\mathbf{H}\mathbf{c}\|_{1,\mathcal{S}_1} = \sum_i (\sigma_{1,i} + \sigma_{2,i})$, where σ1,i and σ2,i are the singular values of the Hessian matrix at position i. The corresponding unit-norm dual ball is therefore defined as
    $$ \mathcal{B}_{\infty,\mathcal{S}_\infty} = \left\{ \mathbf{p} = [\mathbf{p}_1^{T}, \mathbf{p}_2^{T}, \ldots, \mathbf{p}_N^{T}]^{T} \in \mathbb{R}^{N\times 2\times 2} : \|\mathbf{p}_i\|_{\mathcal{S}_\infty} \le 1,\; i = 1,2,\ldots,N \right\}, \tag{18} $$
    where $\|\cdot\|_{\mathcal{S}_\infty}$ is the ℓ∞-norm of the singular values of the corresponding matrix (for more details, we refer the reader to [11]).
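
The two dual-ball projections 𝒫_ℬ can be implemented as follows (a dense NumPy sketch with illustrative names; the actual implementation of the paper may differ):

    import numpy as np

    def proj_linf_linf(p):
        """Projection onto B_{inf,inf}: clip every gradient component to [-1, 1]."""
        return np.clip(p, -1.0, 1.0)

    def proj_linf_Sinf(p):
        """Projection onto B_{inf,S_inf}: clip the singular values of each 2x2
        Hessian (p has shape (N, 2, 2)) to at most 1."""
        u, s, vt = np.linalg.svd(p)          # batched SVD over the N matrices
        s = np.minimum(s, 1.0)
        return u @ (s[..., None] * vt)       # rebuild U diag(s) V^T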

3.3. Parameter setting

The proposed algorithm has several parameters. We adjust them as follows:

  • parameters λ1 and λ2: we use the approach proposed in [4] and set λ1 = 10⁻⁵ and λ2 = 10⁻⁴ ‖g‖. Our experimental results suggest that this choice of parameters yields the optimal performance.
  • parameter μ: this parameter affects the convergence speed of ADMM. Since the algorithm is not too sensitive to it, we use a fixed value (μ = 1).
  • parameter λ: this is the parameter of the proximal map in Eq. (11). Since the second step of ADMM amounts to solving Eq. (9), we have λ = λ2/μ.
  • Lipschitz constant L: the Lipschitz constant of ∇f(p) = −λR𝒫_𝒞(z − λR^T p) is approximated by the Lipschitz constant of the same operator without the convex projection 𝒫_𝒞 (see the bound sketched after this list). Thus, its value is
    $$ L = \lambda^{2} \times \lambda_{\max}(\mathbf{R}\mathbf{R}^{T}), \tag{19} $$
    where λmax(A) is the maximum eigenvalue of the matrix A. For our regularization schemes, λmax(RR^T) = 8 for the TV regularization, and its value is 64 for the HS regularization, as computed in [11] for two-dimensional problems.
  • parameter τ: we set it to τ = (1/10) L⁻¹.
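
The values quoted for λmax(RR^T) are consistent with standard spectral bounds. For the discrete gradient built from 1D finite-difference operators D_x and D_y, for instance (an informal argument; constants as usually stated),

$$ \lambda_{\max}(\mathbf{L}\mathbf{L}^{T}) = \lambda_{\max}(\mathbf{L}^{T}\mathbf{L}) = \lambda_{\max}\!\left(\mathbf{D}_x^{T}\mathbf{D}_x + \mathbf{D}_y^{T}\mathbf{D}_y\right) \le \|\mathbf{D}_x\|^{2} + \|\mathbf{D}_y\|^{2} \le 4 + 4 = 8, $$

since the eigenvalues of D^T D for a 1D finite-difference operator are bounded by 4. An analogous argument with second-order differences leads to the value 64 used for the HS regularization, as derived in [11].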

4. Experimental results

We compared the proposed algorithm to FBP and to ADMM-PCG, which appears to be the current state of the art for the reconstruction of X-ray-DPCI tomograms [4]. All experiments involved real data acquired at the TOMCAT beam line of the Swiss Light Source at the Paul Scherrer Institute in Villigen, Switzerland. The common approach for these experiments is to use a reconstruction from a large number of projections as a reference for evaluating results obtained with significantly fewer projections. In addition, the convex constraints that we apply are the positivity of the refractive index combined with the support-related constraint that the solution should be zero outside the tube that contains the specimen.

In order to identify the benefits of the proposed algorithm (CRWN), we first tested the algorithms under extreme conditions: we used only 72 projections as input, while the reference was reconstructed from 1,200 projections. For this first experiment, we used a phantom composed of a tube and three cylinders containing liquids with different refractive indices, as shown in Fig. 1(a).


Fig. 1 Two reference samples (a) and (b).


The performances of the different algorithms are compared in Table 1. Clearly, the new method outperforms ADMM-PCG [4]. Applying the convex constraint (CC) improves the signal-to-noise ratio (SNR) and the structural similarity index measure (SSIM) [12] even further. The result of the algorithm proposed in [8] is the same as that of CRWN-TV without CC, but the algorithm is slower since it uses FISTA. As expected, owing to the piecewise-constant structure of the sample, TV outperforms HS regularization.


Table 1. Performance of the different reconstruction techniques applied to the Phantom and Scaffold samples.

We conducted another experiment on a coronal section of a scaffold that is used for surgery. The reference image was built from 2,000 projections, as depicted in Fig. 1(b). The algorithms were then benchmarked on a subset of 250 projections. Although these conditions are less severe, FBP still produces the high-frequency patterns that are visible in Fig. 2(a). ADMM-PCG suppresses these artifacts almost completely, at the expense of a slight smoothing, as shown in Fig. 2(b). Overall, CRWN yields sharper images (Figs. 2(c) and 2(d)), which is also reflected in the quality metrics. In addition, the Hessian-type regularization eliminates the staircase effect of TV, which is most visible in the selected region of interest.


Fig. 2 Scaffold reconstruction with 250 projections using (a) FBP, (b) ADMM-PCG, (c) CRWN with HS regularization and (d) CRWN with TV regularization.


Figure 3(a) shows that CRWN minimizes the cost functional significantly faster than the standard FISTA algorithm. In addition, the convergence speed appears to be rather insensitive to the number of inner iterations, since we use warm initialization. We illustrate in Fig. 3(b) the robustness of CRWN with respect to the number of projections, in terms of SNR. Owing to the poor performance of FBP in reconstructing boundaries, we compute the SNR over the region indicated by the dashed circle in Fig. 1(b).


Fig. 3 Reconstruction performance in terms of speed (a) and quality (b).


5. Conclusion

We have proposed a new iterative method for the reconstruction of X-ray DPCI tomograms whose experimental performance exceeds that of recent algorithms. We have shown that side information, such as support constraints and the positivity of the refractive index, can significantly improve the quality of the reconstruction; we incorporated this side information as a convex constraint in a variational formulation. In addition, we took advantage of Hessian-type regularization to reduce the staircase effect of TV and to enhance the quality of higher-order structures in the sample. Our results demonstrate the potential of the method for in-vivo and medical imaging.

Acknowledgments

This work was funded (in part) by the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) / ERC Grant Agreement No. 267439, by the Swiss National Science Foundation under Grant 200020-144355, and by the Center for Biomedical Imaging of the Geneva-Lausanne Universities and EPFL.

References and links

1. C. David, B. Nöhammer, H. H. Solak, and E. Ziegler, "Differential X-ray phase contrast imaging using a shearing interferometer," Appl. Phys. Lett. 81, 3287–3289 (2002).

2. A. Momose, S. Kawamoto, I. Koyama, Y. Hamaishi, K. Takai, and Y. Suzuki, "Demonstration of X-ray Talbot interferometry," Jpn. J. Appl. Phys. 42, L866–L868 (2003).

3. F. Pfeiffer, O. Bunk, C. Kottler, and C. David, "Tomographic reconstruction of three-dimensional objects from hard X-ray differential phase contrast projection images," Nucl. Instrum. Methods Phys. Res. 580(2), 925–928 (2007).

4. M. Nilchian, C. Vonesch, P. Modregger, M. Stampanoni, and M. Unser, "Fast iterative reconstruction of differential phase contrast X-ray tomograms," Opt. Express 21, 5511–5528 (2013).

5. Z. Qi, J. Zambelli, N. Bevins, and G. Chen, "A novel method to reduce data acquisition time in differential phase contrast computed tomography using compressed sensing," Proc. SPIE 7258, 72584A (2009).

6. T. Köhler, B. Brendel, and E. Roessl, "Iterative reconstruction for differential phase contrast imaging using spherically symmetric basis functions," Med. Phys. 38, 4542–4545 (2011).

7. Q. Xu, E. Y. Sidky, X. Pan, M. Stampanoni, P. Modregger, and M. A. Anastasio, "Investigation of discrete imaging models and iterative image reconstruction in differential X-ray phase-contrast tomography," Opt. Express 20, 10724–10749 (2012).

8. M. Nilchian, C. Vonesch, P. Modregger, M. Stampanoni, and M. Unser, "Iterative FBP for improved reconstruction of X-ray differential phase-contrast tomograms," in Proc. of ISBI'13 (2013), pp. 1248–1251.

9. Y. Wang, J. Yang, W. Yin, and Y. Zhang, "A new alternating minimization algorithm for total variation image reconstruction," SIAM J. Imaging Sci. 1, 248–272 (2008).

10. A. Beck and M. Teboulle, "A fast iterative shrinkage-thresholding algorithm for linear inverse problems," SIAM J. Imaging Sci. 2, 183–202 (2009).

11. S. Lefkimmiatis, J. P. Ward, and M. Unser, "Hessian Schatten-norm regularization for linear inverse problems," IEEE Trans. Image Process. 22, 1873–1888 (2013).

12. Z. Wang and A. Bovik, "A universal image quality index," IEEE Signal Process. Lett. 9, 81–84 (2002).
