
Simultaneous algebraic reconstruction technique based on guided image filtering


Abstract

The challenge of computed tomography is to reconstruct high-quality images from few-view projections. Guided image filtering uses a prior guidance image to smooth an image while preserving its edge features, and this prior guidance image can be incorporated into the reconstruction process to improve image quality. We propose a new simultaneous algebraic reconstruction technique based on guided image filtering. Specifically, the prior guidance image is updated during the reconstruction, merging information iteratively. To validate the algorithm's practicality and efficiency, experiments were performed with numerical phantom projection data and real projection data. The results demonstrate that the proposed method is effective and efficient for nondestructive testing and rock mechanics.

© 2016 Optical Society of America

1. Introduction

X-ray computed tomography (CT) is an effective non-destructive technique for imaging the internal structure of an object. CT reconstruction algorithms are classified as analytic or iterative. An analytic reconstruction algorithm, such as filtered back projection (FBP), is derived from the inverse Radon transform, while an iterative reconstruction algorithm solves a linear system obtained by discretizing the Radon transform. The FBP algorithm requires complete projection data and reconstructs quickly, but its results become less accurate when the projection data are incomplete. Incompleteness of projection data may result from environmental constraints on scanning in industrial CT, or from reducing the X-ray radiation dose in medical CT. It is therefore desirable to find ways to reconstruct high-quality CT images from incomplete projection data. Compared with the FBP algorithm, the main advantage of iterative methods is that they can be applied to inconsistent, incomplete, and noisy projection data [1–4].

The discretized linear system of CT is underdetermined and ill-posed for an incomplete-data problem [5], and a regularization method is usually adopted in these circumstances [6]. Other methods include principal component analysis and anisotropic diffusion models [7,8]. An appropriate use of prior information improves the reconstructed image quality; the choice of prior term depends on the specific object examined and on the desire to preserve or emphasize particular features [9]. In industrial and clinical tomographic imaging applications, the image itself is often sparse or has a sparse representation in a transform domain, and sparse prior information has attracted considerable interest [10]. One example in the field of image reconstruction is the total variation (TV) norm [11,12]. TV minimization is well suited to piecewise-constant images, but when applied to images containing complex textures it may lose fine structures and produce "staircase" artifacts. These problems can be mitigated by better sparsity methods, including high-order TV [13], edge-preserving dictionaries [14], total generalized variation [15], curvelets [16], and shearlets [17].

Clinical CT applications often provide a high-quality prior image, and the use of a prior image for X-ray CT is a promising research topic. For instance, Chen et al. proposed a prior-image constrained compressed sensing (PICCS) algorithm that accommodates substantial undersampling and exposure reduction more robustly than conventional compressed sensing (CS)-based techniques [18]. For repeated CT scans, a prior-image induced nonlocal (PINL) regularization for statistical iterative reconstruction was proposed to reduce radiation dose [19]. Ma et al. [20] proposed a low-dose CT image filtering method that uses a high-quality normal-dose scan as prior information to restore the current low-dose CT image based on nonlocal means criteria. A good image-filtering method smoothes an image by suppressing noise while preserving important features such as edges, ridges, and corners; this is a considerable challenge, since smoothing often conflicts with feature preservation [21]. Recently, a guided image-filtering method was proposed [22] that achieves both quality and efficiency in many applications, including noise reduction, detail smoothing, image matting, haze removal, and joint upsampling. The guided filter has good edge-preserving smoothing properties and a further advantage: with the help of the prior guidance image, it can make the filtering output more structured and less smoothed than the input. Inspired by that work, we propose here a "simultaneous algebraic reconstruction technique" (SART) [23,24] algorithm based on guided image filtering for reconstructing images from few-view projection data. This new algorithm is referred to as the SART-G algorithm.

The SART-G algorithm is designed mainly for CT image reconstruction in cases where a prior image is known, small parts of the imaged object have changed, and the changed image is to be reconstructed from few-view projection data using the prior image as the guidance image. When applying the SART-G algorithm to nondestructive testing and rock mechanics, it is also assumed that the prior guidance image and the reconstructed image share the same geometrical coordinates; when this assumption does not hold, as in many industrial and medical applications, an image-registration step is indispensable.

The SART-G algorithm has two advantages over the SART algorithm. First, the initial prior guidance image constrains the image reconstruction process. Second, the prior guidance image is updated during the reconstruction (see Section 2.5). As the number of SART iterations increases, the reconstructed image converges to the minimum-norm solution of the underdetermined linear system. The updated guidance image merges the information contained in the initial prior guidance image with that of the image reconstructed by the SART algorithm, and the guided filter transfers this merged information to the filtering output, so that the changed parts of the prior guidance image can be reconstructed effectively.

We analyzed the characteristics of the SART-G algorithm using numerical phantom projection data and real projection data from volcanic rock (provided by the Beijing Higher Institution Engineering Research Center of Testing and Imaging). The results demonstrate that, for few-view data, the SART-G algorithm reduces noise and preserves fine structures better than the SART-TV [12] and SART algorithms. Each SART-G iteration involves two steps: a SART step, which uses the SART algorithm to solve the discretized linear system of CT, and a guided image-filtering step, which reduces noise and preserves the detailed structure of the reconstructed image.

The rest of this paper is organized as follows. In Section 2, we present the SART-G algorithm. In Section 3, the numerical simulation and real experimental data results are presented for SART-G. Finally, Section 4 closes with concluding remarks.

2. Method

2.1 CT system

Figure 1 shows our experimental fan-beam setup. The detector samples data points at regular spatial intervals. SO denotes the distance between the X-ray source and the object, and OD the object-detector distance. An X-ray fan beam is produced while the X-ray source and detector are made to rotate fully around the object.

Fig. 1. Schematic scan setup.

2.2 CT image reconstruction method

A CT imaging system can be modeled as

$$A f = p \qquad (1)$$

where $A = (a_{ij})$ denotes an $M \times N$ measurement matrix, $a_{ij}$ the contribution of the $j$th pixel to the $i$th X-ray, $p \in \mathbb{R}^M$ the projection data collected by the detector after the necessary processing, and $f \in \mathbb{R}^N$ the linear attenuation coefficients of the measured object. The task of CT imaging is to reconstruct $f$ from $p$.

We consider the convergence of the following simultaneous iterative scheme [25,26]:

$$f^{(K+1)} = f^{(K)} + \lambda_K V^{-1} A^{*} W \left( p - A f^{(K)} \right) \qquad (2)$$

where $K = 0, 1, \ldots$, $\lambda_K > 0$ is the relaxation coefficient, and $V$ and $W$ are two positive definite diagonal matrices of order $N$ and $M$, respectively.

The SART algorithm is a special case of Eq. (2), and its formula [23,24] can be expressed as

$$f_j^{(K+1)} = f_j^{(K)} + \frac{\lambda_K}{\sum_{i=1}^{M} a_{ij}} \sum_{i=1}^{M} \left[ \frac{p_i - \sum_{n=1}^{N} a_{in} f_n^{(K)}}{\sum_{n=1}^{N} a_{in}} \right] a_{ij} \qquad (3)$$

with $V$ and $W$ defined as

$$V = \operatorname{diag}(V_1, V_2, \ldots, V_N) \qquad (4)$$

$$W = \operatorname{diag}(W_1, W_2, \ldots, W_M) \qquad (5)$$

and with

$$V_j = \sum_{i=1}^{M} a_{ij}, \quad j = 1, \ldots, N \qquad (6)$$

$$\frac{1}{W_i} = \sum_{n=1}^{N} a_{in}, \quad i = 1, \ldots, M \qquad (7)$$

The SART relaxation parameter $\lambda_K$ was chosen in accordance with the following convergence theorem [27]:

Theorem 1: Assume that $W_i > 0$ for all $i = 1, 2, \ldots, M$. If, for all $K \ge 0$,

$$0 < \varepsilon \le \lambda_K \le \frac{2 - \varepsilon}{\max\{ W_i \mid i = 1, 2, \ldots, M \}} \qquad (8)$$

where $\varepsilon$ is an arbitrarily small constant, then any sequence $\{ f^{(K)} \}_{K=0}^{\infty}$ generated by SART converges to the weighted least-squares solution $f^{*} = \arg\min \{ \| A f - p \|_W^2 \mid f \in \mathbb{R}^N \}$.
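To make the SART update of Eq. (3) concrete, the following is a minimal C++ sketch of one SART sweep. It is not the paper's implementation: the dense row-major system matrix and the names (sart_sweep, colSum, rowSum) are illustrative assumptions, and a practical CT code would use a sparse matrix with ray-driven projectors.

```cpp
#include <vector>
#include <cstddef>

// One SART sweep per Eq. (3): f_j <- f_j + lambda/colSum_j * sum_i a_ij (p_i - (Af)_i)/rowSum_i.
void sart_sweep(const std::vector<double>& A,   // M x N system matrix, row-major (illustrative)
                const std::vector<double>& p,   // M measured projections
                std::vector<double>& f,         // N image pixels, updated in place
                std::size_t M, std::size_t N,
                double lambda)                  // relaxation parameter, chosen per Theorem 1
{
    // Column sums V_j = sum_i a_ij and row sums 1/W_i = sum_n a_in (Eqs. (6)-(7)).
    std::vector<double> colSum(N, 0.0), rowSum(M, 0.0);
    for (std::size_t i = 0; i < M; ++i)
        for (std::size_t j = 0; j < N; ++j) {
            colSum[j] += A[i * N + j];
            rowSum[i] += A[i * N + j];
        }

    // Back-project the row-normalized residual.
    std::vector<double> correction(N, 0.0);
    for (std::size_t i = 0; i < M; ++i) {
        if (rowSum[i] <= 0.0) continue;
        double Af_i = 0.0;
        for (std::size_t j = 0; j < N; ++j) Af_i += A[i * N + j] * f[j];
        const double residual = (p[i] - Af_i) / rowSum[i];
        for (std::size_t j = 0; j < N; ++j) correction[j] += A[i * N + j] * residual;
    }

    // Relaxed, column-normalized update.
    for (std::size_t j = 0; j < N; ++j)
        if (colSum[j] > 0.0) f[j] += lambda * correction[j] / colSum[j];
}
```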

2.3 SART algorithm based on guided image filtering

Guided filtering [22] is a local affine model that connects the filtering output $f^{out}$ to the guidance image $I^{guide}$:

$$f_j^{out} = a_k I_j^{guide} + b_k, \quad \forall j \in w_k \qquad (9)$$

where $(a_k, b_k)$ are linear coefficients assumed to be constant in a square window $w_k$ of radius $r$. Determining the linear coefficients $(a_k, b_k)$ requires constraints from the filtering input $f^{in}$: a solution is sought that minimizes the difference between $f^{in}$ and $f^{out}$ while maintaining the linear model of Eq. (9).

We propose a SART technique based on guided image filtering, denoted SART-G. It is described by the following optimization program:

$$\min_{f^{out}} \sum_{j \in w_k} \left( (f_j^{out} - f_j^{in})^2 + \varepsilon a_k^2 \right) \quad \text{subject to Eq. (1): } A f^{out} = p \qquad (10)$$

The regularization parameter $\varepsilon$ is a positive constant. The following cost function is minimized in the window $w_k$:

$$E(a_k, b_k) = \sum_{j \in w_k} \left( (f_j^{out} - f_j^{in})^2 + \varepsilon a_k^2 \right) = \sum_{j \in w_k} \left( (a_k I_j^{guide} + b_k - f_j^{in})^2 + \varepsilon a_k^2 \right) \qquad (11)$$
The optimization program (Eq. (10)) is solved by applying the following minimization procedure, which constitutes the SART-G algorithm:

  • 1: An iterative reconstruction algorithm (SART) is used to solve Eq. (1);
  • 2: E(ak,bk) is minimized by linear regression.

2.4 Guided image filtering

Equation (11) is a linear ridge regression model, and its solution is given by linear regression [22,28]:

$$a_k = \frac{\frac{1}{|w|} \sum_{j \in w_k} I_j^{guide} f_j^{in} - \eta_k \tilde{f}_k}{\sigma_k^2 + \varepsilon} \qquad (12)$$

$$b_k = \tilde{f}_k - a_k \eta_k \qquad (13)$$

where $\eta_k$ and $\sigma_k^2$ are, respectively, the mean and variance of $I^{guide}$ in $w_k$, $|w|$ is the number of pixels in $w_k$, and $\tilde{f}_k = \frac{1}{|w|} \sum_{j \in w_k} f_j^{in}$ is the mean of $f^{in}$ in $w_k$.

Because several windows $w_k$ overlap a given pixel, the value of $f_j^{out}$ in Eq. (9) is not identical when computed in different windows. A simple strategy is to average all the possible values of $f_j^{out}$: after computing $(a_k, b_k)$ for all windows $w_k$ in the image, the filter output is computed as

$$f_j^{out} = \frac{1}{|w|} \sum_{k : j \in w_k} \left( a_k I_j^{guide} + b_k \right) = \bar{a}_j I_j^{guide} + \bar{b}_j \qquad (14)$$

where $\bar{a}_j = \frac{1}{|w|} \sum_{k \in w_j} a_k$ and $\bar{b}_j = \frac{1}{|w|} \sum_{k \in w_j} b_k$ are the coefficients averaged over all windows that overlap pixel $j$.

Equations (12), (13), and (14) taken together define guided image filtering, whose main computational burden is a box filter of window radius $r$, denoted $f_{boxfilter}$. The box filter is computed efficiently by a simple moving-sum method.

The workflow for a moving sum with a one-dimensional box filter is as follows (a code sketch is given after the list):

  • Input: signal s^in, radius r
  • Output: filtered signal f_boxfilter(s^out)
  • 1: f_boxfilter(s^out_0) = Σ_{i=0}^{r} s^in_i
  • 2: for i = 1 to end
  • 3: f_boxfilter(s^out_i) = f_boxfilter(s^out_{i-1}) + s^in_{i+r} - s^in_{i-r-1}
  • 4: end for
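This moving sum can be written compactly in C++ as in the sketch below; it is only illustrative (the name boxfilter1d is ours), and samples falling outside the signal are treated as contributing zero. Applying the same pass first along image rows and then along columns yields the two-dimensional box filter used in the next listing.

```cpp
#include <vector>
#include <cstddef>

// 1-D moving-sum box filter: out[i] = sum of s[i-r .. i+r], out-of-range samples count as zero.
std::vector<double> boxfilter1d(const std::vector<double>& s, std::size_t r)
{
    const std::size_t n = s.size();
    std::vector<double> out(n, 0.0);
    if (n == 0) return out;

    // Step 1: initial window [0, r].
    double running = 0.0;
    for (std::size_t i = 0; i <= r && i < n; ++i) running += s[i];
    out[0] = running;

    // Steps 2-4: slide the window, adding the entering sample and removing the leaving one.
    for (std::size_t i = 1; i < n; ++i) {
        if (i + r < n)  running += s[i + r];       // sample entering on the right
        if (i >= r + 1) running -= s[i - r - 1];   // sample leaving on the left
        out[i] = running;
    }
    return out;
}
```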

The workflow for the guided image filtering is as follows (a sketch implementation is given after the list):

  • Inputs: input image f^in for filtering, guidance image I^guide, radius r, regularization parameter ε
  • Output: filtered output f^out
  • 1: boxfilter(I^guide) = f_boxfilter(I^guide)
       boxfilter(f^in) = f_boxfilter(f^in)
       corr(I^guide) = f_boxfilter(I^guide .* I^guide)
       corr(I^guide, f^in) = f_boxfilter(I^guide .* f^in)
  • 2: var(I^guide) = corr(I^guide) - boxfilter(I^guide) .* boxfilter(I^guide)
       cov(I^guide, f^in) = corr(I^guide, f^in) - boxfilter(I^guide) .* boxfilter(f^in)
  • 3: a = cov(I^guide, f^in) / (var(I^guide) + ε)
       b = boxfilter(f^in) - a .* boxfilter(I^guide)
  • 4: boxfilter(a) = f_boxfilter(a)
       boxfilter(b) = f_boxfilter(b)
  • 5: f^out = boxfilter(a) .* I^guide + boxfilter(b)

where .* denotes element-by-element multiplication, / denotes element-by-element division, and f_boxfilter denotes a box filter of square window radius r. The abbreviations corr (correlation), var (variance), and cov (covariance) are self-explanatory.
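Under the reading that each boxfilter output in the listing denotes the windowed mean (consistent with Eqs. (12)-(14)), Steps 1-5 can be sketched in C++ as follows. For brevity the box sums are written as direct separable sums rather than the moving sums above, border windows are normalized by their actual pixel count, and all names (boxsum, guided_filter) are illustrative rather than the authors' code.

```cpp
#include <vector>
#include <cstddef>

using Image = std::vector<double>;   // row-major image of size rows * cols

// Separable box SUM over a (2r+1) x (2r+1) window; samples outside the image contribute zero.
static Image boxsum(const Image& img, int rows, int cols, int r)
{
    Image tmp(img.size(), 0.0), out(img.size(), 0.0);
    for (int y = 0; y < rows; ++y)                        // horizontal pass
        for (int x = 0; x < cols; ++x) {
            double s = 0.0;
            for (int dx = -r; dx <= r; ++dx)
                if (x + dx >= 0 && x + dx < cols) s += img[y * cols + x + dx];
            tmp[y * cols + x] = s;
        }
    for (int x = 0; x < cols; ++x)                        // vertical pass
        for (int y = 0; y < rows; ++y) {
            double s = 0.0;
            for (int dy = -r; dy <= r; ++dy)
                if (y + dy >= 0 && y + dy < rows) s += tmp[(y + dy) * cols + x];
            out[y * cols + x] = s;
        }
    return out;
}

// Guided filter following Steps 1-5, with "boxfilter" taken to be the windowed mean.
Image guided_filter(const Image& I, const Image& fin, int rows, int cols, int r, double eps)
{
    const std::size_t n = I.size();
    Image ones(n, 1.0), prodII(n), prodIf(n);
    for (std::size_t k = 0; k < n; ++k) { prodII[k] = I[k] * I[k]; prodIf[k] = I[k] * fin[k]; }

    Image N      = boxsum(ones,   rows, cols, r);   // per-pixel window size (handles borders)
    Image meanI  = boxsum(I,      rows, cols, r);
    Image meanf  = boxsum(fin,    rows, cols, r);
    Image corrII = boxsum(prodII, rows, cols, r);
    Image corrIf = boxsum(prodIf, rows, cols, r);

    Image a(n), b(n);
    for (std::size_t k = 0; k < n; ++k) {
        meanI[k] /= N[k]; meanf[k] /= N[k]; corrII[k] /= N[k]; corrIf[k] /= N[k];
        const double varI  = corrII[k] - meanI[k] * meanI[k];     // Step 2
        const double covIf = corrIf[k] - meanI[k] * meanf[k];
        a[k] = covIf / (varI + eps);                              // Step 3, cf. Eq. (12)
        b[k] = meanf[k] - a[k] * meanI[k];                        //          cf. Eq. (13)
    }

    Image meanA = boxsum(a, rows, cols, r), meanB = boxsum(b, rows, cols, r);
    Image out(n);
    for (std::size_t k = 0; k < n; ++k)                           // Steps 4-5, cf. Eq. (14)
        out[k] = (meanA[k] / N[k]) * I[k] + meanB[k] / N[k];
    return out;
}
```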

2.5 Steps of the SART-G algorithm

The SART-G algorithm has two steps: a SART step and a guided image-filtering step. The reconstructed image is obtained by iterating these two steps in turn. The SART-G algorithm proceeds as follows:

  • 1: Initialization:

    The number of SART-step loop iterations is denoted N_1, the iteration index K ∈ [1, N_1], the relaxation parameters λ_K, the regularization parameter ε, and the radius of the square window r;

  • 2: SART step:

    The SART step used in this paper is described by Eq. (3) and solves Eq. (1). The reconstructed image converges towards the minimum-norm solution f_K^SART;

  • 3: Update of the prior guidance image I^guide-update:

    I^guide-update = I^guide-initial × [v_1 + v_2 × (K-1)]/M + f_K^SART × [v_3 + v_4 × (K-1)]/M    (15)

    where v_1 < v_3, v_2 > v_4, M = v_1 + v_2 × (K-1) + v_3 + v_4 × (K-1), and K is the SART iteration index. The prior guidance image I^guide-initial is reconstructed by SART using complete projection data;

  • 4: Guided image-filtering step:

    f_K^SART-G = f_boxfilter(a) .* I^guide-update + f_boxfilter(b)    (16)

    with a and b defined as

    a = cov(I^guide-update, f_K^SART) / (var(I^guide-update) + ε)    (17)

    b = f_boxfilter(f_K^SART) - a .* f_boxfilter(I^guide-update)    (18)

    where f_boxfilter denotes a box filter of square window radius r. The box-filter window radius r and the regularization parameter ε in Eq. (17) determine which edges or high-variance patches are preserved;

  • 5: Initialization of the next loop:

    f_{K+1}^SART = f_K^SART-G    (19)

    The procedure is repeated from step 2 until the stopping criterion is satisfied.

The effectiveness of the SART-G algorithm hinges on the content of the updated guidance image. The guidance image is updated using Eq. (15), which is in effect a weighted sum of the initial guide image (previously reconstructed by SART from sufficient-view data) and the image reconstructed by SART-G from few-view data. The weights are controlled by the positive parameters v_1, v_2, v_3, and v_4. To bring information about the changed parts of the object into the guide image, more weight is given to the SART-reconstructed image f_K^SART in the first iteration (K = 1) by choosing v_1 < v_3. As the iteration index increases, the artifacts typical of few-view CT become more pronounced; to limit their appearance, the weight of the initial guide image I^guide-initial is made to grow with K by choosing v_2 > v_4.
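Putting the pieces together, the loop of Section 2.5 can be sketched as below. It reuses the sart_sweep and guided_filter sketches given earlier; the signatures, names, and the zero initial estimate are illustrative assumptions rather than the authors' implementation.

```cpp
#include <vector>
#include <cstddef>

// Sketches defined earlier (Sections 2.2 and 2.4).
void sart_sweep(const std::vector<double>& A, const std::vector<double>& p,
                std::vector<double>& f, std::size_t M, std::size_t N, double lambda);
std::vector<double> guided_filter(const std::vector<double>& I, const std::vector<double>& fin,
                                  int rows, int cols, int r, double eps);

std::vector<double> sart_g(const std::vector<double>& A, const std::vector<double>& p,
                           const std::vector<double>& guideInitial,   // I^guide-initial
                           std::size_t M, std::size_t Npix, int rows, int cols,
                           int N1, double lambda, int r, double eps,
                           double v1, double v2, double v3, double v4)
{
    std::vector<double> f(Npix, 0.0);                      // current estimate f_K
    for (int K = 1; K <= N1; ++K) {
        sart_sweep(A, p, f, M, Npix, lambda);              // Step 2: SART update, Eq. (3)

        // Step 3: weighted combination of the initial guide and the current SART image, Eq. (15).
        const double wGuide = v1 + v2 * (K - 1);
        const double wSart  = v3 + v4 * (K - 1);
        const double norm   = wGuide + wSart;
        std::vector<double> guide(Npix);
        for (std::size_t j = 0; j < Npix; ++j)
            guide[j] = (guideInitial[j] * wGuide + f[j] * wSart) / norm;

        // Steps 4-5: guided filtering of the SART image with the updated guide, Eqs. (16)-(19).
        f = guided_filter(guide, f, rows, cols, r, eps);
    }
    return f;
}
```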

2.6 Comparison method - SART algorithm based on total variation (SART-TV)

The CT reconstruction problem of Eq. (1) can also be solved by a CS-based reconstruction method that minimizes the image TV subject to consistency with the projections. This is equivalent to solving the following optimization program:

$$\min_{f} \| f \|_{TV} \quad \text{subject to} \quad A f = p, \; f_j > 0 \qquad (20)$$

The model described by Eq. (20) was investigated using the alternating minimization procedure [11]:

  • 1: a SART step to solve Eq. (1);
  • 2: descending the TV-gradient to minimize the image TV.

The gradient descent is controlled via the step-size parameter μ [11]. If the step size is too large, the image becomes uniform and inconsistent with the projection data; if it is too small, the algorithm reduces to standard SART with a positivity constraint. The SART-step loop is executed N_1 times, and N_grad denotes the total number of gradient-descent steps.
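The paper does not spell out the discrete TV gradient, so the sketch below uses one common choice (a smoothed isotropic TV, in the spirit of Sidky and Pan-style implementations) purely for illustration: nGrad normalized gradient-descent steps of size mu, followed by a positivity clamp matching Eq. (20). The names and the smoothing constant delta are assumptions, not the authors' code.

```cpp
#include <vector>
#include <cmath>
#include <cstddef>

using Image = std::vector<double>;   // row-major image of size rows * cols

// Pixel access with zero outside the image.
static double px(const Image& f, int rows, int cols, int y, int x)
{
    return (y < 0 || y >= rows || x < 0 || x >= cols) ? 0.0 : f[y * cols + x];
}

// nGrad descent steps on TV(f) = sum sqrt((f[y][x]-f[y-1][x])^2 + (f[y][x]-f[y][x-1])^2 + delta).
void tv_descent(Image& f, int rows, int cols, int nGrad, double mu, double delta = 1e-8)
{
    for (int iter = 0; iter < nGrad; ++iter) {
        Image g(f.size(), 0.0);
        double norm = 0.0;
        for (int y = 0; y < rows; ++y)
            for (int x = 0; x < cols; ++x) {
                const double c  = px(f, rows, cols, y, x);
                const double up = px(f, rows, cols, y - 1, x), lf = px(f, rows, cols, y, x - 1);
                const double dn = px(f, rows, cols, y + 1, x), rt = px(f, rows, cols, y, x + 1);
                const double dnl = px(f, rows, cols, y + 1, x - 1);  // neighbour of the (y+1,x) term
                const double rtu = px(f, rows, cols, y - 1, x + 1);  // neighbour of the (y,x+1) term
                // derivative of every TV term that contains f[y][x]
                double v = ((c - up) + (c - lf))
                         / std::sqrt((c - up) * (c - up) + (c - lf) * (c - lf) + delta);
                v -= (dn - c) / std::sqrt((dn - c) * (dn - c) + (dn - dnl) * (dn - dnl) + delta);
                v -= (rt - c) / std::sqrt((rt - c) * (rt - c) + (rt - rtu) * (rt - rtu) + delta);
                g[y * cols + x] = v;
                norm += v * v;
            }
        norm = std::sqrt(norm);
        if (norm == 0.0) return;
        for (std::size_t k = 0; k < f.size(); ++k) {
            f[k] -= mu * g[k] / norm;        // descend the normalized TV gradient
            if (f[k] < 0.0) f[k] = 0.0;      // positivity constraint of Eq. (20)
        }
    }
}
```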

3. Numerical results

We assessed the performance of the proposed approach using both simulated data and real CT data. All data sets were reconstructed by SART, SART-G, and SART-TV. All computationally intensive parts of the algorithms were implemented in C++ on a PC (4.0 GB memory, 3.4 GHz CPU).

To test the feasibility of the method, the mean-squared error (MSE) and peak signal-to-noise ratio (PSNR) were calculated from the reconstructed images:

$$\mathrm{MSE} = \frac{1}{I \times J} \sum_{i=1}^{I} \sum_{j=1}^{J} (t_{i,j} - r_{i,j})^2 \qquad (21)$$

$$\mathrm{PSNR} = 10 \log_{10} \frac{\max_{1 \le i \le I,\, 1 \le j \le J} (r_{i,j})}{\mathrm{MSE}} \qquad (22)$$

where $I \times J$ is the image size, and $t_{i,j}$ and $r_{i,j}$ are, respectively, the pixel intensities of the original and reconstructed images.
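For reference, Eqs. (21) and (22) translate directly into code. The sketch below follows Eq. (22) as written, with the peak taken as the maximum pixel value of the reconstructed image; the function names are illustrative.

```cpp
#include <vector>
#include <cmath>
#include <cstddef>
#include <algorithm>

// Mean-squared error between a reference image t and a reconstruction r of equal size, Eq. (21).
double mse(const std::vector<double>& t, const std::vector<double>& r)
{
    double sum = 0.0;
    for (std::size_t k = 0; k < t.size(); ++k) { const double d = t[k] - r[k]; sum += d * d; }
    return sum / static_cast<double>(t.size());
}

// Peak signal-to-noise ratio as defined in Eq. (22).
double psnr(const std::vector<double>& t, const std::vector<double>& r)
{
    const double peak = *std::max_element(r.begin(), r.end());   // max_{i,j} r_{i,j}
    return 10.0 * std::log10(peak / mse(t, r));
}
```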

3.1 Simulation data

The object model was created to validate that the SART-G algorithm is effective and efficient for nondestructive testing. As shown in Fig. 2, the object-model image consists of areas of uniform intensity and forms the initial prior guidance image. Figure 3(a) shows the object-model reference image used to measure the performance of the SART-G algorithm; it includes areas of uniform intensity and a complex pattern of cracks. Taking the example of a nondestructive test of an aircraft engine [29], the initial object image (free of damage and cracks) can be reconstructed from sufficient projection data before installation, and the image of the object after operation, now containing damage and cracks, is then to be reconstructed. The damage-free image is taken as the initial prior guidance image for the SART-G algorithm, and the reconstruction process is guided by an updated guidance image derived from it.

Fig. 2. Initial guided image.

Fig. 3. Comparison of image quality: (a) reference reconstructed image; (b) SART; (c) SART-G; (d) SART-TV (μ = 0.02); (e) SART-TV (μ = 0.06); (f) SART-TV (μ = 0.1).

To detect cracks in the test-object model, images were reconstructed using SART, SART-TV, and SART-G. The reconstruction matrix had dimensions 256 × 256 in the simulation studies, with a detector pixel size of 0.5 mm. To simulate few-view projection data, we collected 32 projections evenly distributed over a 2π range, using the fan-beam scanning geometry of Fig. 1. The distance between the X-ray source and the reconstructed object was 400 mm, and between the X-ray source and the detector 1400 mm. The SART, SART-TV, and SART-G algorithms all performed N_1 = 20 iterations. For SART-TV we compared three different empirical values of μ, which specifies the gradient-descent step size; the total number of gradient-descent steps was N_grad = 20. The parameter ε for guided image filtering was set empirically to 0.0016, the box-filter window radius to r = 4, and the parameters v_i (i = 1, 2, 3, 4) of the guide-image update to v_1 = 0.3, v_2 = 8.5, v_3 = 0.7, and v_4 = 0.1. The relaxation parameter for SART, SART-TV, and SART-G was set to λ_K = λ = 0.15, in accordance with the convergence theorem.

Figures 3(b)-3(f) show the images reconstructed by the different algorithms from 32 noise-free projections. Streak artifacts, caused by the use of few-view projection data, are apparent in the SART reconstruction (Fig. 3(b)); such artifacts are better suppressed by SART-G. Figure 3(c) shows that the SART-G algorithm does not force the complex crack structure into piecewise-constant features, in contrast to SART-TV (Figs. 3(d)-3(f), solid arrow). Figures 3(c)-3(f) show a similar performance by SART-G and SART-TV in regions of uniform intensity (dashed arrows).

To compare the results quantitatively, we computed the PSNR, which is widely used to measure image quality. Table 1 shows that the image reconstructed by SART-G is of the best quality.

Table 1. PSNR for the numerical scanning data shown in Fig. 3.

Figures 4(g)-4(l) show magnified images of a small region of interest. In terms of the suppression of streak artifacts, SART-G outperforms SART and SART-TV (Figs. 4(h)-4(l)). SART-TV produces smoother uniform regions, while SART-G reconstructs the region of interest more precisely. The parameter μ in SART-TV was varied over μ ∈ {0.02, 0.06, 0.1}; as shown in Figs. 4(j)-4(l), a smaller μ yields smoother uniform regions.

Fig. 4. Partial magnification: (g) reference reconstructed image; (h) SART; (i) SART-G; (j) SART-TV (μ = 0.02); (k) SART-TV (μ = 0.06); (l) SART-TV (μ = 0.1).

To quantify the convergence rate of the reconstructions, the MSE was calculated. Figure 5 indicates that the SART-G algorithm converges fastest.

Fig. 5. Convergence analysis: mean-squared error (MSE).

3.2 Real CT data

By extending the procedure used for such nondestructive tests in industrial CT, images in other contexts can also be reconstructed by the SART-G algorithm from few-view projection data. For example, the SART-G algorithm can be applied to rock mechanics. To monitor the evolution of internal cracks in rock in real time, the same image slice is scanned repeatedly under load [30,31]. In the first scan, the rock sample is scanned without load, and its projection data are sufficient for an exact image reconstruction; this exactly reconstructed rock image serves as the initial prior guidance image. In subsequent scans, the loaded rock sample is scanned and its internal structure is reconstructed by the SART-G algorithm from few-view projection data.

For the real CT data, we used volcanic-rock projection data provided by the Beijing Higher Institution Engineering Research Center of Testing and Imaging. These data were collected from 1440 projections, evenly distributed over a 2π range, using the fan-beam geometry of Fig. 1. The distance from the X-ray source to the reconstructed object was 82.7 mm, and from the X-ray source to the detector 600 mm. The reconstruction matrix had dimensions 1024 × 1024, with a detector pixel size of 0.127 mm.

As shown in Fig. 6, the initial guidance image of the volcanic rock was reconstructed by the SART algorithm using the full set of 1440 projections, with a total of 20 SART iterations. To measure the properties of the SART-G algorithm, we monitored the evolution of the internal structure of volcanic rock samples with simple and complex crack patterns. The reconstruction process was guided by successively updating the initial guidance image.

Fig. 6. Initial guided image.

To study the few-view problem, we selected 90 of the 1440 available projections, evenly distributed over the 2π view range. The total number of iterations for SART, SART-TV, and SART-G was N_1 = 20. The parameter μ was empirically set to three different values for comparison, and the total number of gradient-descent steps was N_grad = 20. The parameter ε for guided image filtering was empirically set to ε = 0.003, the box-filter window radius to r = 4, and the parameters v_i (i = 1, 2, 3, 4) of the guide-image update to v_1 = 0.3, v_2 = 2.5, v_3 = 0.7, and v_4 = 0.1. The relaxation parameter for SART, SART-TV, and SART-G was λ_K = λ = 0.15, in accordance with the convergence theorem.

3.2.1 Volcanic rock with a simple crack

Figure 7(a) is the reference image, reconstructed by the SART algorithm using 1440 projections. It consists of a simple shape with a region of uniform intensity (solid arrow) and a crack (dashed arrow). To study the few-view problem, 90 projections were used for the reconstructions in Figs. 7(b)-7(f); the SART-G reconstruction was guided by an updated guidance image. SART-G yields a higher-quality reconstruction than the other methods when applied to few-view data. To further validate the SART-G reconstruction accuracy, one region of interest (ROI) was magnified for a more detailed comparison. Figure 8(g) shows the ROI of the reference image, and Fig. 8(h) shows that SART produces more noise. The result in Fig. 8(i) suggests that the SART-G algorithm is better at preserving cracks (dashed arrow) and at denoising than the other methods. Figures 8(i)-8(l) show similar performance from SART-G and SART-TV in the simple structure with uniform intensity (solid arrow); however, SART-TV blurs the details of the crack structure.

Fig. 7. Comparison of image quality: (a) reference reconstructed image; (b) SART; (c) SART-G; (d) SART-TV (μ = 0.02); (e) SART-TV (μ = 0.04); (f) SART-TV (μ = 0.06).

Fig. 8. Partial magnification: (g) reference reconstructed image; (h) SART; (i) SART-G; (j) SART-TV (μ = 0.02); (k) SART-TV (μ = 0.04); (l) SART-TV (μ = 0.06).

To evaluate the reconstruction results objectively, the PSNR of each reconstructed image is listed in Table 2, which shows that SART-G performs better than the other methods. The convergence of SART, SART-G, and SART-TV is displayed in Fig. 9; the SART-G algorithm converges fastest. The parameter μ for SART-TV was varied over μ ∈ {0.02, 0.04, 0.06}, with smaller values yielding faster convergence.

Table 2. PSNR for the real scanning data shown in Fig. 7.

Fig. 9. Convergence analysis: mean-squared error (MSE).

3.2.2 Volcanic rock with complex cracks

Figure 10 shows reconstructions of a slice containing more complex cracks than the one in Fig. 7. The 1024 × 1024 image in Fig. 10(a) is the reference image, reconstructed by the SART algorithm using 1440 projections, while the others were reconstructed from 90 projections. Comparing the slice images in Figs. 10(b) and 10(c), the cracks in the volcanic rock are reconstructed better and streak artifacts are suppressed by the SART-G algorithm, whereas the SART reconstruction is contaminated by streak noise. Figures 10(d)-10(f) show that SART-TV produces smoother uniform regions.

Fig. 10. Comparison of image quality: (a) reference reconstructed image; (b) SART; (c) SART-G; (d) SART-TV (μ = 0.02); (e) SART-TV (μ = 0.04); (f) SART-TV (μ = 0.06).

Figure 11 shows magnifications of a ROI; Fig. 11(g) is the ROI of the reference image. It is clear from Figs. 11(h)-11(l) that SART-G outperforms SART and SART-TV in terms of suppressing streak artifacts. However, the crack regions (solid arrow) still show some artifacts when the SART-G algorithm is used (Fig. 11(i)), implying that there remains scope for improving the SART-G algorithm in the case of complex cracks and few-view data.

Fig. 11. Partial magnification: (g) reference reconstructed image; (h) SART; (i) SART-G; (j) SART-TV (μ = 0.02); (k) SART-TV (μ = 0.04); (l) SART-TV (μ = 0.06).

From Table 3, we see that the reconstructed image obtained by the SART-G algorithm is of the best quality, and Fig. 12 shows that the SART-G algorithm converges fastest.

Table 3. PSNR for the real scanning data shown in Fig. 10.

Fig. 12. Convergence analysis: mean-squared error (MSE).

4. Conclusion

Few-view image reconstruction is a standard problem in CT. Our proposed SART-G algorithm demonstrably limits noise and preserves fine structures in few-view data better than the SART-TV and SART algorithms. Despite the overall improvement in image quality, the SART-G algorithm still generates artifacts in the case of complex cracks under few-view conditions. To tackle this problem, further studies are under way to investigate and develop refinements to the SART-G algorithm.

Funding

This work was supported by the National Natural Science Foundation of China (NSFC) (81371549, 61271012, 30900333, and 11501415); the Natural Science Foundation of Tianjin City in China (16JCYBJC28600); the Instrument Developing Project of the Chinese Academy of Sciences (YZ201410) and the SRF for ROCS, SEM; and the IHEP-CAS Scientific Research Foundation (2013IHEPYJRC801).

Acknowledgments

The authors thank the Beijing Higher Institution Engineering Research Center of Testing and Imaging for their kind assistance with our experiments.

References and links

1. L. Flores, V. Vidal, and G. Verdú, “Iterative reconstruction from few-view projections,” Procedia Comput. Sci. 51(1), 703–712 (2015). [CrossRef]  

2. G. R. Myers, D. L. Thomas, D. M. Paganin, T. E. Gureyev, and J. G. Clement, “A general few-projection method for tomographic reconstruction of samples consisting of several distinct materials,” Appl. Phys. Lett. 96(2), 021105 (2010). [CrossRef]  

3. G. R. Myers, D. M. Paganin, T. E. Gureyev, and S. C. Mayo, “Phase-contrast tomography of single-material objects from few projections,” Opt. Express 16(2), 908–919 (2008). [CrossRef]   [PubMed]  

4. G. R. Myers, T. E. Gureyev, D. M. Paganin, and S. C. Mayo, “The binary dissector: phase contrast tomography of two- and three-material objects from few projections,” Opt. Express 16(14), 10736–10749 (2008). [CrossRef]   [PubMed]  

5. E. Y. Sidky, R. Chartrand, and X. C. Pan, “Image reconstruction from few views by non-convex optimization,” in IEEE Nuclear Science Symposium Conference Record Nuclear Science Symposium (IEEE, 2007), pp. 3526–3530. [CrossRef]  

6. S. J. Wright, R. D. Nowak, and M. A. T. Figueiredo, “Sparse Reconstruction by Separable Approximation,” IEEE Trans. Signal Process. 57(7), 2479–2493 (2009). [CrossRef]  

7. M. Endrizzi, Y. I. Nesterets, S. C. Mayo, A. Trinchi, and T. G. Gureyev, “Multi-energy computed tomography using pre-reconstruction decomposition and iterative reconstruction algorithms,” J. Phys. D Appl. Phys. 45(47), 475103 (2012). [CrossRef]  

8. D. Ji, C. Hu, and H. Yang, “Image reconstruction algorithm for in-line phase contrast imaging computed tomography with an improved anisotropic diffusion model,” J. XRay Sci. Technol. 23(3), 311–320 (2015). [CrossRef]   [PubMed]  

9. G. H. Chen, J. Tang, and S. Leng, “Prior image constrained compressed sensing (PICCS): a method to accurately reconstruct dynamic CT images from highly undersampled projection data sets,” Med. Phys. 35(2), 660–663 (2008). [CrossRef]   [PubMed]  

10. W. Dong, Z. Li, and D. Xiang, “Prior image constrained low-rank matrix decomposition method in limited-angle reverse helical cone-beam CT,” J. XRay Sci. Technol. 23(6), 759–772 (2015). [CrossRef]   [PubMed]  

11. S. J. LaRoque, E. Y. Sidky, and X. Pan, “Accurate image reconstruction from few-view and limited-angle data in diffraction tomography,” J. Opt. Soc. Am. A 25(7), 1772–1782 (2008). [CrossRef]   [PubMed]  

12. H. Yu and G. Wang, “A soft-threshold filtering approach for reconstruction from a limited number of projections,” Phys. Med. Biol. 55(13), 3905–3916 (2010). [CrossRef]   [PubMed]  

13. J. Yang, H. Yu, M. Jiang, and G. Wang, “High-order total variation minimization for interior tomography,” Inverse Probl. 26(3), 0350131 (2010). [CrossRef]   [PubMed]  

14. B. Liu, H. Yu, S. S. Verbridge, L. Sun, and G. Wang, “Dictionary-learning-based reconstruction method for electron tomography,” Scanning 36(4), 377–383 (2014). [CrossRef]   [PubMed]  

15. S. Niu, Y. Gao, Z. Bian, J. Huang, W. Chen, G. Yu, Z. Liang, and J. Ma, “Sparse-view x-ray CT reconstruction via total generalized variation regularization,” Phys. Med. Biol. 59(12), 2997–3017 (2014). [CrossRef]   [PubMed]  

16. M. Wieczorek, J. Frikel, J. Vogel, E. Eggl, F. Kopp, P. B. Noël, F. Pfeiffer, L. Demaret, and T. Lasser, “X-ray computed tomography using curvelet sparse regularization,” Med. Phys. 42(4), 1555–1565 (2015). [CrossRef]   [PubMed]  

17. B. Vandeghinste, B. Goossens, R. Van Holen, C. Vanhove, A. Pižurica, S. Vandenberghe, and S. Staelens, “Iterative CT reconstruction using shearlet-based regularization,” IEEE Trans. Nucl. Sci. 60(5), 3305–3317 (2013). [CrossRef]  

18. P. T. Lauzier and G. H. Chen, “Characterization of statistical prior image constrained compressed sensing (PICCS): II. Application to dose reduction,” Med. Phys. 40(2), 021902 (2013). [CrossRef]   [PubMed]  

19. H. Zhang, J. Huang, J. Ma, Z. Bian, Q. Feng, H. Lu, Z. Liang, and W. Chen, “Iterative reconstruction for x-ray computed tomography using prior-image induced nonlocal regularization,” IEEE Trans. Biomed. Eng. 61(9), 2367–2378 (2014). [CrossRef]   [PubMed]  

20. J. Ma, J. Huang, Q. Feng, H. Zhang, H. Lu, Z. Liang, and W. Chen, “Low-dose computed tomography image restoration using previous normal-dose scan,” Med. Phys. 38(10), 5713–5731 (2011). [CrossRef]   [PubMed]  

21. B. Dong and Z. Shen, “Image restoration: a data-driven perspective,” in Proceedings of the International Congress of Industrial and Applied Mathematics (ICIAM, 2015), pp. 65–108.

22. K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Trans. Pattern Anal. Mach. Intell. 35(6), 1397–1409 (2013). [CrossRef]   [PubMed]  

23. A. C. Kak and M. Slaney, “Principles of computerized tomographic imaging,” Soc. Industrial Appl. Mathematics, (2001).

24. A. H. Andersen and A. C. Kak, “Simultaneous algebraic reconstruction technique (SART): a superior implementation of the art algorithm,” Ultrason. Imaging 6(1), 81–94 (1984). [CrossRef]   [PubMed]  

25. G. Qu, C. Wang, and M. Jiang, “Necessary and sufficient convergence conditions for algebraic image reconstruction algorithms,” IEEE Trans. Image Process. 18(2), 435–440 (2009). [CrossRef]   [PubMed]  

26. G. Han, G. Qu, and M. Jiang, “Relaxation strategy for the Landweber method,” Signal Process. 125(C), 87–96 (2016). [CrossRef]  

27. Y. Censor, T. Elfving, G. T. Herman, and T. Nikazad, “On Diagonally Relaxed Orthogonal Projection Methods,” SIAM J. Sci. Comput. 30(1), 473–504 (2008). [CrossRef]  

28. N. Draper and H. Smith, Applied Regression Analysis (John Wiley, 1981).

29. D. Chen, H. Li, Q. Wang, P. Zhang, and Y. Zhu, “Computed tomography for high-speed rotation object,” Opt. Express 23(10), 13423–13442 (2015). [CrossRef]   [PubMed]  

30. J. X. Ren, X. R. Ge, Y. B. Pu, and Y. Zhu, “Real-time CT test on the meso-damage evolution of rock under uniaxial compression,” Chin. Civil Eng. J. 33(6), 99–104 (2000).

31. H. Kawakata, A. T. Cho, and M. S. Yanagidani, “The observation of faulting in westerly granite under triaxial compression by X-ray CT scan,” Int. J. Rock Mech. Min. Sci. 34(3–4), 151 (1997).
