Practical cone-beam algorithm

Abstract

A convolution-backprojection formula is deduced for direct reconstruction of a three-dimensional density function from a set of two-dimensional projections. The formula is approximate but has useful properties, including errors that are relatively small in many practical instances and a form that leads to convenient computation. It reduces to the standard fan-beam formula in the plane that is perpendicular to the axis of rotation and contains the point source. The algorithm is applied to a mathematical phantom as an example of its performance.

© 1984 Optical Society of America

1. INTRODUCTION

Direct reconstruction in three dimensions from projection data has been the subject of many studies.[1] As used here, direct three-dimensional (3D) reconstruction implies use of two-dimensional (2D) projection data with reconstruction on a 3D mesh of points, which may or may not be organized as parallel slices. In medical applications, direct 3D reconstruction is at the forefront of investigation[2]; to our knowledge, no commercial tomographic scanners employ direct 3D reconstruction. Industrial use of tomography as a nondestructive evaluation (NDE) technique is still quite limited. Use of x-ray image-intensifier images as tomographic input is a natural extension of electronic radiography and is being pursued in our laboratory in a system of minimum mechanical complexity. The system consists of a microfocus x-ray source, a single-axis rotational stage, and the x-ray image intensifier with associated electronics. Hence a fast and reliable direct reconstruction procedure should be valuable.

Several methods of attacking the full 3D problem have been proposed. Altschuler et al.[3] have reviewed many of these. Colsher[4] described iterative methods that can be applied to the problem. Altschuler et al.[5] proposed a basis-function method of reducing the dimensionality of the problem; however, no examples of application to real or mathematical objects have appeared. Altschuler et al.[6] have also used an iterative technique based on Bayesian optimization. The “twin cone-beam” geometry has been discussed by Schlindwein[7] (iterative method) and by Kowalski[8] (basis-function method). Knutsson et al.[9] have discussed a different method for a geometry similar to that used in traditional (noncomputerized) axial transverse tomography. Their method is based on the projection-slice theorem, applied as if rays from the source were parallel, and involves 2D filtering and weighting. Nalcioglu and Cho[10] and Denton et al.[11] have presented convolution-backprojection methods that are applicable if the source positions encompass a sphere about the object, rather than just a circle as in the present case. Tuy[12] has given an inversion formula that is appropriate when the source positions lie on two intersecting circles. Minerbo[13] has used the 3D form of the Radon inversion formula to derive an approximate solution for the geometry of interest here, i.e., a single circle of source positions. Unfortunately, his method is computationally intensive, and his derivation contains an error that cannot easily be rectified. Herman[1] and Lewitt and McKay[14] have described use of the unmodified fan-beam reconstruction method to handle cone-beam data. The latter authors also have described a modification in which backprojection in three dimensions is used. When the cone angle is small, as it is in the scanner being developed at the Mayo Clinic,[2] these convolution-backprojection procedures work quite well.

Since none of the methods mentioned above is fully satisfactory for our applications, we describe in this paper an algorithm appropriate to our geometry that can be implemented easily. Like that of Lewitt and McKay,[14] our procedure involves convolution and 3D backprojection; it further includes the crucial step of correctly weighting the data. This is extremely important if the cone angle is large and ensures that certain desirable properties will be present. Our method is necessarily faster than iterative methods and for reasonable cone angles produces reconstructions not significantly inferior to those of slice-by-slice reconstruction using parallel- or fan-beam data, acquisition of which for 3D reconstruction requires a system of significantly greater mechanical complexity.

In Section 2 we review the fan-beam method from our perspective. In Section 3 we present the cone-beam algorithm and discuss some of its important properties. In Section 4 we present results of application to a mathematically constructed phantom.

2. FAN-BEAM RECONSTRUCTION FORMULA

In this section, we rewrite the Radon transform for two dimensions in the form of a convolution and backprojection. This results in a fan-beam reconstruction formula and is a step preliminary to the determination of an algorithm for 3D reconstruction. The planar detector system may, for convenience, be represented by its projection on a plane parallel to it that contains the axis of rotation. This axis is a distance d from the x-ray point source, as in Fig. 1. Hereafter, we refer to the detector plane as if it were in the plane containing the rotation axis. A simple scaling by d/D converts data from the actual physical arrangement to this form. We refer to the direction defined by the axis of rotation as the axial (or z) direction and refer to the plane that contains the point source and is perpendicular to the axis as the midplane. Planes parallel and perpendicular to the midplane are called horizontal and vertical, respectively. The intersection of the axis and the midplane is taken as the origin of coordinates. The angle of rotation is Φ, and the coordinate along the detector that specifies the point of detection is Y. Only projection data (Fig. 2) along the intersection (aa′) of the midplane and the detector plane apply to the fan-beam case; a point along the line of intersection is defined by a coordinate Y. We take the object to be fixed and the source and detector to rotate about it with angle Φ. The perpendicular distance l from the origin to the ray that intersects the detector plane at Y is related to Y by (see Fig. 2)

$$ l = \frac{Yd}{(d^{2}+Y^{2})^{1/2}} \qquad \text{or} \qquad Y = \frac{ld}{(d^{2}-l^{2})^{1/2}}. \tag{1} $$

The angle θ from the x axis (fixed with respect to the object) to the perpendicular is given by

$$ \theta = \Phi + \pi/2 + \alpha, \tag{2} $$

where

$$ \alpha = \tan^{-1}(Y/d) = \tan^{-1}\!\left[\,l/(d^{2}-l^{2})^{1/2}\right]. \tag{3} $$

The value of the projection is denoted by PΦ(Y), or by p(l, θ), where in cylindrical coordinates

$$ p(l,\theta) = \int r\,dr\,d\phi\; f(r,\phi)\,\delta[r\cos(\theta-\phi)-l], \tag{4} $$

i.e., a line integral of the density f(r, ϕ). Clearly, we should assume that for r > d, f(r, ϕ) vanishes. Consequently,

$$ p(l,\theta) = \begin{cases} P_{\Phi}(Y), & |l| < d \\ 0, & |l| > d. \end{cases} \tag{5} $$

From the projection data, the density can be reconstructed according to the Radon transform[15]:

$$ f(r,\phi) = \frac{1}{4\pi^{2}}\int d\theta\; ⨍_{-\infty}^{\infty} \frac{dl}{r\cos(\theta-\phi)-l}\,\frac{\partial}{\partial l}\,p(l,\theta). \tag{6} $$

The symbol ⨍ represents the principal value of the integral.

Let us now introduce the Fourier transform q(ω, θ) by writing

$$ p(l,\theta) = \int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\,\exp(i\omega l)\,q(\omega,\theta). \tag{7} $$

Substituting into Eq. (6) and performing the principal-value integration, we find that

$$ f(r,\phi) = \frac{1}{8\pi^{2}}\int d\theta \int_{-\infty}^{\infty}|\omega|\,d\omega\; q(\omega,\theta)\,\exp[i\omega r\cos(\theta-\phi)]. \tag{8} $$
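The principal-value integration leading to Eq. (8) is not written out in the text. As a brief intermediate step (added here for clarity), substitution of Eq. (7) into Eq. (6) brings in a factor iω from ∂p/∂l, and the l integral reduces to a standard result,

$$ ⨍_{-\infty}^{\infty}\frac{\exp(i\omega l)}{r\cos(\theta-\phi)-l}\,dl = -i\pi\,\mathrm{sgn}(\omega)\,\exp[i\omega r\cos(\theta-\phi)], $$

which, together with the factor iω, produces the |ω| weighting and the prefactor 1/8π² of Eq. (8).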

The final form in terms of the coordinates (l, θ) is obtained by noting first that the density is, of course, real and then by inverting the Fourier transform:

$$ f(r,\phi) = \frac{1}{4\pi^{2}}\,\mathrm{Re}\int d\theta \int_{0}^{\infty}\omega\,d\omega \int_{-\infty}^{\infty} dl\; p(l,\theta)\,\exp\{i\omega[r\cos(\theta-\phi)-l]\}, \tag{9} $$

where Re indicates taking of the real part. If one were dealing with the parallel-beam case, Eq. (9) would be appropriate. However, since in the actual situation (divergent-beam case) the convenient coordinates are (Y, Φ), we must change variables. From Eqs. (1)–(3) we can determine the Jacobian, so that we have

$$ d\theta\,dl = d\Phi\,dY\,\frac{d^{3}}{(d^{2}+Y^{2})^{3/2}}. \tag{10} $$
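As a quick check of Eq. (10) (the intermediate differentiation is not shown in the text), ∂θ/∂Φ = 1, ∂l/∂Φ = 0, and differentiating Eq. (1) gives

$$ \frac{\partial l}{\partial Y} = \frac{d}{(d^{2}+Y^{2})^{1/2}} - \frac{Y^{2}d}{(d^{2}+Y^{2})^{3/2}} = \frac{d^{3}}{(d^{2}+Y^{2})^{3/2}}, $$

so the Jacobian determinant reduces to this single term.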

Making a further change of variables, which is just a scaling of the frequency

$$ \omega' = \omega\,\frac{d + r\cos(\phi-\Phi)}{(d^{2}+Y^{2})^{1/2}}, \tag{11} $$

we find (dropping the prime) that

$$ f(r,\phi) = \frac{1}{4\pi^{2}}\,\mathrm{Re}\int d\Phi\,\frac{d^{2}}{[d + r\cos(\phi-\Phi)]^{2}} \int_{0}^{\infty}\omega\,d\omega \int_{-\infty}^{\infty} dY\,\frac{d}{(d^{2}+Y^{2})^{1/2}}\,P_{\Phi}(Y)\, \exp\!\left[i\omega\!\left(\frac{d\,r\sin(\phi-\Phi)}{d + r\cos(\phi-\Phi)} - Y\right)\right]. \tag{12} $$
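The algebra connecting Eqs. (9) and (12) is compressed; one intermediate identity worth recording (it follows from Eqs. (1)–(3) and is added here for clarity) is

$$ r\cos(\theta-\phi) - l = \frac{d + r\cos(\phi-\Phi)}{(d^{2}+Y^{2})^{1/2}}\left[\frac{d\,r\sin(\phi-\Phi)}{d + r\cos(\phi-\Phi)} - Y\right], $$

so that the rescaling of Eq. (11) leaves the argument of the exponential in exactly the form appearing in Eq. (12), while ω dω combined with the Jacobian of Eq. (10) produces the remaining weights.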

Since the detector spacing d_s is finite, the frequency range is limited to ω < π/d_s, and it becomes necessary to band limit Eq. (12). We can therefore write

$$ f(r,\phi) = \frac{1}{4\pi^{2}}\int d\Phi\,\frac{d^{2}}{[d + r\cos(\phi-\Phi)]^{2}}\,\tilde P_{\Phi}[Y(r,\phi)], \tag{13} $$

where

$$ Y(r,\phi) = \frac{d\,r\sin(\phi-\Phi)}{d + r\cos(\phi-\Phi)}, \tag{14} $$
$$ \tilde P_{\Phi}(Y) = \int_{-\infty}^{\infty} dY'\,\frac{d}{(d^{2}+Y'^{2})^{1/2}}\,P_{\Phi}(Y')\,g(Y - Y'), \tag{15} $$

and

$$ g(Y) = \mathrm{Re}\int_{0}^{\omega_{y0}} \exp(i\omega Y)\,\omega\,d\omega. \tag{16} $$

The function g(Y) is the convolution function, and P̃_Φ(Y) is the convolution of g with the weighted data P_Φ(Y). Equation (13), in which Y depends explicitly on the reconstruction point (r, ϕ), represents the backprojection of the convoluted data and is the desired result. This result, although written differently, is substantially the same as that given by Herman et al.[15]

In practice, P_Φ(Y) is sampled for discrete values of Φ and Y (or averages about such values). Hence we have {P_{Φ_i}(Y_j)}, with ΔΦ = Φ_i − Φ_{i−1} and ΔY = Y_j − Y_{j−1}, assuming equally spaced samples. We assume that P_Φ(Y) cos θ is slowly varying compared with g(Y) and define [cos θ ≡ d/(d² + Y²)^{1/2}]

$$ \tilde P_{\Phi_i}(Y_j) = \sum_{j'} P_{\Phi_i}(Y_{j'})\cos\theta_{j'} \int_{Y_{j'}-\Delta Y/2}^{Y_{j'}+\Delta Y/2} g(Y_j - Y)\,dY. \tag{15'} $$

We take ω_{y0} = π/ΔY. For g(Y) as in Eq. (16), defining P̃_{Φ_i}(Y_j) as in Eq. (15′) is equivalent to use of the Shepp–Logan[16] window.
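As a concrete illustration of Eqs. (13)–(16) and the discrete form (15′), the following minimal Python sketch (a schematic only; the function and variable names are invented for this example and are not taken from the text) weights one fan-beam projection by cos θ and convolves it with the cell-averaged kernel. Carrying out the cell integral of Eq. (16) with ω_{y0} = π/ΔY gives, after a short calculation not reproduced in the text, the closed form h(n) = 4/[ΔY(1 − 4n²)], the Shepp–Logan-type kernel used below.

```python
import numpy as np

def cell_averaged_kernel(n_taps, dY):
    """h(n) = integral of g(Y_j - Y) over one detector cell of width dY, Eq. (15').

    With g from Eq. (16) and w_y0 = pi/dY this evaluates to 4 / [dY (1 - 4 n^2)]
    (assumption: equally spaced samples).
    """
    n = np.arange(-n_taps, n_taps + 1)
    return 4.0 / (dY * (1.0 - 4.0 * n**2))

def filter_fan_projection(P, Y, d):
    """Apply the cos(theta) weight and the convolution of Eq. (15') to one projection P_Phi(Y_j)."""
    dY = Y[1] - Y[0]
    nY = len(Y)
    weighted = P * d / np.sqrt(d**2 + Y**2)      # cos(theta_j') weighting
    h = cell_averaged_kernel(nY - 1, dY)         # kernel indices n = -(nY-1) ... (nY-1)
    full = np.convolve(weighted, h)              # discrete convolution of Eq. (15')
    return full[nY - 1 : 2 * nY - 1]             # P~_Phi(Y_j) for j = 0 ... nY-1
```

The reconstructed density then follows from the discrete form of Eq. (13): each filtered projection is backprojected with the weight d²/[d + r cos(ϕ − Φ)]² and the angular increment ΔΦ/4π².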

3. CONE-BEAM RECONSTRUCTION FORMULA

The purpose of this section is to make plausible an algorithm for 3D reconstruction. First we present a heuristic development. We subsequently derive analytically some important properties of the algorithm and demonstrate numerically some measure of its performance. No rigorous proof exists for what follows, since the result is approximate.

The procedure of this section is as follows. From the results of Section 2, we determine the incremental contribution δf to the reconstructed density from the projection data for a small increment δΦ of rotation angle. From the projection data along the intersection of the detector plane and the midplane (Z = 0), the contribution at points that lie in the midplane can be calculated. The projections that intersect the detector plane along a line parallel to the midplane, but not in it (constant, nonzero Z), themselves define a plane. This plane is treated as if it were the midplane of another, tilted arrangement. [Of course, if we had a complete set of projections (i.e., all rotation angles about the normal) for such a tilted plane, we could reconstruct the density for this plane by using the Radon transform. This would entail sweeping the source around the sample along a circle in the tilted plane.] We must correct for the difference between the actual rotation δΦ about the vertical axis and the equivalent rotation δΦ′ about the normal to the plane. Further, the source-to-detector distance in the tilted plane must be substituted into the Radon transform. Making these corrections, we obtain the increment of reconstructed density. The total density at a point r is taken to be the sum of the incremental contributions from all planes (one for each rotation angle) that pass through r. The planes of projection that contribute to the reconstruction at a given r may be visualized as forming a sheaf. Except for points in the midplane, the sheaf for each reconstruction point is unique.

First we rewrite Eq. (12) using a vector notation. Let ρ extend from the origin to the reconstruction point, let m̂ be a unit vector along the ray from the source to the axis of rotation (i.e., normal to the detector), let n̂ be along the detector in the plane of rotation (the Y direction), and let k̂ be normal to the plane. The three unit vectors m̂, n̂, and k̂ form a right-handed, orthonormal set (see Fig. 3), so k̂ = m̂ × n̂.

In the midplane, the density is, from Eq. (12),

$$ f(\boldsymbol\rho) = \frac{1}{4\pi^{2}}\,\mathrm{Re}\int d\Phi\,\frac{d^{2}}{(d + \boldsymbol\rho\cdot\hat{\mathbf m})^{2}} \int_{0}^{\infty}\omega\,d\omega \int_{-\infty}^{\infty} dY\,\frac{d}{(d^{2}+Y^{2})^{1/2}}\,P_{\Phi}(Y, Z{=}0)\, \exp\!\left[i\omega\!\left(\frac{d\,\boldsymbol\rho\cdot\hat{\mathbf n}}{d + \boldsymbol\rho\cdot\hat{\mathbf m}} - Y\right)\right], \tag{17} $$

where we note specifically that it is the z = 0 plane that is under consideration.

Next we consider all projections for some angle Φ of the source that pass through the detector plane along a line Z = constant ≠ 0 (see Fig. 4). These projections define a plane in which the unit vector m̂ now points along the ray from the source to the detector at (Y = 0, Z) and n̂ is along the Y direction. The normal is given by k̂ = m̂ × n̂ and is tilted with respect to the z axis (the axis of rotation). Note that the redefined unit vectors m̂, n̂, and k̂ reduce to those of Fig. 3 for Z = 0.

Now a small range of rotation δΦ about the z axis is equivalent to a rotation of δΦ′ about k^. To first order,

$$ \delta\hat{\mathbf m} = \delta\Phi\,\hat{\mathbf z}\times\hat{\mathbf m} = \delta\Phi\,\frac{d}{(d^{2}+Z^{2})^{1/2}}\,\hat{\mathbf n} \tag{18} $$

and

$$ \delta\hat{\mathbf m} = \delta\Phi'\,\hat{\mathbf k}\times\hat{\mathbf m} = \delta\Phi'\,\hat{\mathbf n}. \tag{19} $$

Equating Eqs. (18) and (19), we have

$$ \delta\Phi' = \delta\Phi\,\frac{d}{(d^{2}+Z^{2})^{1/2}}. \tag{20} $$

Let us calculate the contribution to the reconstructed density. Any reconstruction point in the plane can be written as

$$ \mathbf r = \boldsymbol\rho' + Z\hat{\mathbf z}, \tag{21} $$

where ρ′ lies in the shaded plane of Fig. 4, i.e.,

$$ \boldsymbol\rho'\cdot\hat{\mathbf k} = 0. \tag{22} $$

For Z = 0, ρ′ is identical with the ρ of Eq. (17). The distance from the source to the axis, measured within the tilted plane, is d′ = (d² + Z²)^{1/2}. From Eq. (17), we have

$$ \delta f(\boldsymbol\rho' + Z\hat{\mathbf z}) = \frac{1}{4\pi^{2}}\,\mathrm{Re}\;\delta\Phi'\,\frac{{d'}^{2}}{(d' + \boldsymbol\rho'\cdot\hat{\mathbf m})^{2}} \int_{0}^{\infty}\omega\,d\omega \int_{-\infty}^{\infty} dY\,\frac{d'}{({d'}^{2}+Y^{2})^{1/2}}\,P_{\Phi}(Y, Z)\, \exp\!\left[i\omega\!\left(\frac{d'\,\boldsymbol\rho'\cdot\hat{\mathbf n}}{d' + \boldsymbol\rho'\cdot\hat{\mathbf m}} - Y\right)\right]. \tag{23} $$

From Fig. 4 and Eqs. (21) and (22),

$$ \boldsymbol\rho'\cdot\hat{\mathbf m} = d'\,\mathbf r\cdot\hat{\mathbf x}/d \tag{24} $$

and

$$ \frac{d'}{({d'}^{2}+Y^{2})^{1/2}} = \frac{(d^{2}+Z^{2})^{1/2}}{(d^{2}+Y^{2}+Z^{2})^{1/2}}. \tag{25} $$
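Equations (24) and (25) are stated without proof; a short verification (added here) follows from writing the source position as −d x̂, so that m̂ = (d x̂ + Z ẑ)/d′, and using ρ′ = r − Z ẑ from Eq. (21):

$$ \boldsymbol\rho'\cdot\hat{\mathbf m} = \frac{(\mathbf r - Z\hat{\mathbf z})\cdot(d\,\hat{\mathbf x} + Z\hat{\mathbf z})}{d'} = \frac{d\,(\mathbf r\cdot\hat{\mathbf x}) + Z(z - Z)}{d'} = \frac{d'\,(\mathbf r\cdot\hat{\mathbf x})}{d}, $$

where the last step uses Z(d + r·x̂) = zd, i.e., the geometric relation that appears below as Eq. (27). Equation (25) is simply the identity d′² + Y² = d² + Y² + Z².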

Substituting into Eq. (23), we find, for any reconstruction point r = (x, y, z) in the plane, that

$$ \delta f(\mathbf r) = \frac{1}{4\pi^{2}}\,\mathrm{Re}\;\delta\Phi\,\frac{d^{2}}{(d + \mathbf r\cdot\hat{\mathbf x})^{2}} \int_{0}^{\infty}\omega\,d\omega \int_{-\infty}^{\infty} dY\,\frac{d}{(d^{2}+Y^{2}+Z^{2})^{1/2}}\,P_{\Phi}(Y, Z)\, \exp\!\left[i\omega\!\left(\frac{d\,\mathbf r\cdot\hat{\mathbf y}}{d + \mathbf r\cdot\hat{\mathbf x}} - Y\right)\right], \tag{26} $$

where

$$ Z = \frac{z\,d}{d + \mathbf r\cdot\hat{\mathbf x}}. \tag{27} $$

Here we have made a change of notation, setting n̂ = ŷ, since it corresponds to the rotated y axis.

Following the usual procedures of convolution and backprojection, we simply sum Eq. (26) over all projection data, obtaining

$$ f(\mathbf r) = \frac{1}{4\pi^{2}}\int d\Phi\,\frac{d^{2}}{(d + \mathbf r\cdot\hat{\mathbf x})^{2}}\,\tilde P_{\Phi}[Y(\mathbf r), Z(\mathbf r)], \tag{28} $$

where

$$ Y(\mathbf r) = \frac{\mathbf r\cdot\hat{\mathbf y}\; d}{d + \mathbf r\cdot\hat{\mathbf x}}, \tag{29} $$
$$ Z(\mathbf r) = \frac{\mathbf r\cdot\hat{\mathbf z}\; d}{d + \mathbf r\cdot\hat{\mathbf x}}, \tag{30} $$
$$ \tilde P_{\Phi}(Y,Z) = \int_{-\infty}^{\infty} dY' \int_{-\infty}^{\infty} dZ'\; g_y(Y-Y')\,g_z(Z-Z')\, P_{\Phi}(Y',Z')\,\frac{d}{(d^{2}+Y'^{2}+Z'^{2})^{1/2}}, \tag{31} $$
$$ g_y(Y) = \mathrm{Re}\int_{0}^{\omega_{y0}} \omega\,d\omega\,\exp(i\omega Y), \tag{32} $$

and

$$ g_z(Z) = \frac{\sin\omega_{z0}Z}{\pi Z}. \tag{33} $$

As in Eq. (16) for Y, band limiting due to the finite detector spacing has been introduced in Eqs. (32) and (33) for both the Y and Z directions.
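For later reference (the normalization is invoked in Appendix A), note that g_z as defined in Eq. (33) integrates to unity; this is the standard Dirichlet integral, stated here explicitly:

$$ \int_{-\infty}^{\infty} g_z(Z)\,dZ = \int_{-\infty}^{\infty}\frac{\sin\omega_{z0}Z}{\pi Z}\,dZ = 1, \qquad \omega_{z0} > 0. $$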

It is clear that, as d → ∞, Eq. (28) goes over into the correct slice-by-slice form of the parallel-projection case. Furthermore, for the z = 0 plane, Eq. (28) is the exact result for any d (aside from the limitations imposed by band limiting). It is shown in Appendix A that Eq. (28) gives the correct result for the intensity integrated in the axial direction, ∫_{−∞}^{∞} dz f(r). From this we conclude that density is conserved along lines parallel to the axis of rotation. This is an important property, as it implies that the principal distortion to be expected is blurring in the axial direction. Further, it can be shown that each reconstructed horizontal slice of an object with vertical translational symmetry will be identical to the midplane slice. Using these properties in addition to linearity, we conclude that blurring in the axial direction occurs only for that portion of an object that remains when the translationally symmetric portion is subtracted.

Discretization of the cone-beam algorithm proceeds as before. The sampled data {P_{Φ_i}(Y_j, Z_k)} are convoluted similarly to Eq. (15′):

$$ \tilde P_{\Phi_i}(Y_j, Z_k) = \sum_{j',k'} P_{\Phi_i}(Y_{j'}, Z_{k'})\cos\theta_{j'k'} \int_{Y_{j'}-\Delta Y/2}^{Y_{j'}+\Delta Y/2} g_y(Y_j - Y)\,dY \int_{Z_{k'}-\Delta Z/2}^{Z_{k'}+\Delta Z/2} g_z(Z_k - Z)\,dZ, \tag{34} $$

where ΔZ = Z_k − Z_{k−1} and cos θ_{jk} = d/(d² + Y_j² + Z_k²)^{1/2}. As before, we have averaged the convolution functions over intervals centered on the sampling points. It appears that averaging in the axial direction is not crucial; replacing the last integral in Eq. (34) by g_z(Z_k − Z_{k′})ΔZ and taking ω_{z0} = π/ΔZ effectively eliminates the Z convolution, since g_z(Z_k − Z_{k′})ΔZ = δ_{kk′}. As in the fan-beam case, linear interpolation on {P̃_{Φ_i}(Y_j, Z_k)} is used to determine P̃_{Φ_i}(Y, Z).
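To make the discretized procedure of Eqs. (28)–(34) concrete, the following Python sketch pre-weights the data, convolves each row in Y, and backprojects with the d²/(d + r·x̂)² weight. It is an illustrative reading of the algorithm only, not a transcription of the implementation described in Section 4; the array names, the placement of the source at (−d cos Φ, −d sin Φ, 0), and the clamping of rays that leave the detector are choices made for this example. The angles Φ are assumed to cover a full rotation.

```python
import numpy as np

def cone_beam_reconstruct(projections, Phi, Yc, Zc, d, xs, ys, zs):
    """Convolution-backprojection sketch of Eqs. (28)-(34) (illustrative only).

    projections : (n_angles, nY, nZ) array of P_Phi(Y_j, Z_k), with the detector
                  coordinates already scaled to the plane through the rotation axis.
    Phi         : source angles covering a full rotation (radians).
    Yc, Zc      : detector sample coordinates; xs, ys, zs: reconstruction grid.
    d           : source-to-rotation-axis distance.
    """
    nY, nZ = len(Yc), len(Zc)
    dPhi, dY, dZ = Phi[1] - Phi[0], Yc[1] - Yc[0], Zc[1] - Zc[0]
    Yg, Zg = np.meshgrid(Yc, Zc, indexing="ij")

    # Pre-weighting by cos(theta) = d / (d^2 + Y^2 + Z^2)^{1/2}  [Eqs. (31), (34)]
    weight = d / np.sqrt(d**2 + Yg**2 + Zg**2)

    # Cell-averaged kernel from Eq. (32) with w_y0 = pi/dY (Shepp-Logan type);
    # the Z convolution is omitted, i.e., g_z is treated as a Kronecker delta (see text).
    n = np.arange(-(nY - 1), nY)
    gy = 4.0 / (dY * (1.0 - 4.0 * n**2))

    Xv, Yv, Zv = np.meshgrid(xs, ys, zs, indexing="ij")
    f = np.zeros_like(Xv)

    for P, phi in zip(projections, Phi):
        Pw = P * weight
        Pt = np.empty_like(Pw)
        for kz in range(nZ):                      # row-by-row convolution in Y, Eq. (34)
            full = np.convolve(Pw[:, kz], gy)
            Pt[:, kz] = full[nY - 1 : 2 * nY - 1]

        # r.x_hat and r.y_hat for a source assumed at (-d cos(phi), -d sin(phi), 0)
        rx = Xv * np.cos(phi) + Yv * np.sin(phi)
        ry = -Xv * np.sin(phi) + Yv * np.cos(phi)
        U = d + rx

        Yp = ry * d / U                           # Eq. (29)
        Zp = Zv * d / U                           # Eq. (30)

        # Bilinear interpolation on the filtered projection (rays falling outside
        # the detector are simply clamped here; a real implementation would drop them)
        iy = np.clip((Yp - Yc[0]) / dY, 0, nY - 2)
        iz = np.clip((Zp - Zc[0]) / dZ, 0, nZ - 2)
        j, k = iy.astype(int), iz.astype(int)
        fy, fz = iy - j, iz - k
        val = ((1 - fy) * (1 - fz) * Pt[j, k] + fy * (1 - fz) * Pt[j + 1, k]
               + (1 - fy) * fz * Pt[j, k + 1] + fy * fz * Pt[j + 1, k + 1])

        # Backprojection with the d^2/(d + r.x_hat)^2 weight, Eq. (28)
        f += dPhi / (4.0 * np.pi**2) * (d**2 / U**2) * val

    return f
```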

4. APPLICATION TO A MATHEMATICAL PHANTOM

The cone-beam algorithm described above has been in use in our laboratory for over a year. It has been used to process experimentally obtained projection data taken on a developmental system designed primarily to explore applications of x-ray tomography to industrial nondestructive evaluation. The system is also being applied successfully to the examination of bone biopsy specimens.

Implementation of the algorithm is relatively straightforward. The weighting of the projection data is performed with point-by-point multiplication by a precalculated 2D array. The convolution step may be carried out as a series of one-dimensional convolutions. The backprojection is performed by projecting a reconstruction point into the detector plane and then interpolating among the four relevant detector positions. The interpolation itself separates into horizontal and vertical parts. Since the horizontal interpolation coefficients are the same for all reconstruction points with the same (x, y), the associated computation time is shared among the number of vertical reconstruction points. Consequences of this are that (1) the amount of computation per reconstruction point is comparable with or less than that for the fan-beam case, (2) the efficiency of the procedure increases with increasing number of vertical reconstruction positions, and (3) a single vertically oriented plane requires much less computation than a horizontally oriented plane with the same number of sampling points. We have programmed the procedure for a general-purpose computer (VAX 11/780) with attached array processor (FPS AP-120B). The array processor is typically used for the computationally intensive operations of convolution and backprojection, the host computer being used to control the apparatus and preprocess the incoming data.
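The bookkeeping behind points (1)–(3) above can be sketched as follows; the helper below is hypothetical (names and the in-place update are invented for this illustration), but it shows how the horizontal detector coordinate, its interpolation weights, and the backprojection weight depend only on (x, y) and can be computed once and reused for the whole vertical column of reconstruction points.

```python
import numpy as np

def backproject_column(f_col, Pt, view_weight, d, rx, ry, zs, Yc, Zc):
    """Accumulate one filtered view into the column of voxels at fixed (x, y).

    f_col       : 1D array over zs, updated in place.
    Pt          : filtered projection P~_Phi, shape (len(Yc), len(Zc)).
    view_weight : dPhi / (4 pi^2) for this view.
    rx, ry      : r.x_hat and r.y_hat of the column (independent of z).
    """
    dY, dZ = Yc[1] - Yc[0], Zc[1] - Zc[0]
    U = d + rx
    w = view_weight * d**2 / U**2                           # Eq. (28) weight, shared by the column
    iy = np.clip((ry * d / U - Yc[0]) / dY, 0, len(Yc) - 2) # Eq. (29), a single value per column
    j, fy = int(iy), iy - int(iy)                           # horizontal interpolation, computed once

    iz = np.clip((zs * d / U - Zc[0]) / dZ, 0, len(Zc) - 2) # Eq. (30) for the whole column at once
    k = iz.astype(int)
    fz = iz - k

    row = (1 - fy) * Pt[j, :] + fy * Pt[j + 1, :]           # horizontal interpolation, reused
    f_col += w * ((1 - fz) * row[k] + fz * row[k + 1])
```

Because the work that depends on (x, y) alone is amortized over all z values in the column, the cost per reconstruction point approaches that of the fan-beam case, in line with observation (2) above.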

In order to suggest the efficacy of the algorithm, we illustrate its application to a mathematically derived phantom based closely on that given by Schlindwein[7]; ours differs only in that it is constructed purely of superposed ellipsoids. Table 1 gives the details of the phantom. This particular phantom embodies potential difficulties, such as abrupt large density changes, asymmetrically placed objects, and an object with a density differing only slightly from that of the surrounding region. Use of this phantom also provides an opportunity to compare the present method with a previously described procedure.

Projection data were formed from the phantom as analytically derived line integrals of the density along the ray from the source to each detector position. The detector arrangement described in Table 1 approximates the amount of information used in Ref. [7].
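Such projections can be generated in closed form for an ellipsoidal phantom; the sketch below is an illustration only (the actual ellipsoid parameters of Table 1 are not reproduced here, and the function names are invented). It uses the fact that the line integral of a constant-density ellipsoid is its density times the chord length of the ray, summed over the superposed ellipsoids.

```python
import numpy as np

def ellipsoid_line_integral(src, det, center, semi_axes, density):
    """Density times chord length of the ray src -> det through an axis-aligned ellipsoid."""
    src, det = np.asarray(src, float), np.asarray(det, float)
    ax = np.asarray(semi_axes, float)
    u = det - src
    u /= np.linalg.norm(u)                       # unit ray direction (t measured in real units)
    p = (src - np.asarray(center, float)) / ax   # scale so the ellipsoid becomes a unit sphere
    v = u / ax
    a, b, c = v @ v, 2.0 * (p @ v), p @ p - 1.0  # |p + t v|^2 = 1 is a quadratic in t
    disc = b * b - 4.0 * a * c
    if disc <= 0.0:
        return 0.0                               # ray misses the ellipsoid
    return density * np.sqrt(disc) / a           # chord length t2 - t1 = sqrt(disc)/a

def cone_beam_projection(ellipsoids, phi, Yc, Zc, d):
    """P_Phi(Y_j, Z_k) for superposed ellipsoids; detector plane taken through the rotation axis."""
    src = np.array([-d * np.cos(phi), -d * np.sin(phi), 0.0])
    P = np.zeros((len(Yc), len(Zc)))
    for j, Y in enumerate(Yc):
        for k, Z in enumerate(Zc):
            det = np.array([-Y * np.sin(phi), Y * np.cos(phi), Z])   # detector point, lab frame
            P[j, k] = sum(ellipsoid_line_integral(src, det, c0, ax, rho)
                          for (c0, ax, rho) in ellipsoids)
    return P
```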

The reconstruction mesh bears no necessary relationship to the detector array. To encompass the objects of interest, the space extends 40 cm by 40 cm horizontally and 20 cm vertically and contains 99 × 99 × 49 points. For display, we have chosen five horizontal planes, equally spaced and symmetrically located about the midplane.

In Fig. 5 we compare the reconstruction of the selected planes with an exact, digitized representation of the phantom. The large range of density encompassed in the gray scale used here for display renders difficult the observation of low-contrast features such as the sphere (object 4), which is only 5% denser than its surroundings. However, as the plot of a single selected line shows, the density difference is indeed reflected in the reconstruction. As expected, band limiting results in rounding of abrupt density changes. Radial streak artifacts are also visible, and considerable noise is apparent in the reconstruction of the dense outer shell. Note, however, that these deficiencies are similarly evident in the midplane slice, which amounts to a standard fan-beam reconstruction.

The reconstruction shown is substantially superior to those in Ref. [7], in which the low-contrast sphere was clearly visible only when the projection data were altered to conform with the assumptions of the algebraic method.

The reconstruction noise and the streaks are both characteristic of convolution-based methods and may be reduced by increasing the number of angles. (If relatively few angles of data are used, as in Fig. 5, techniques such as angular averaging may be employed to reduce the streaking and the noise with a small loss in edge definition.)

The reconstruction of edges may be improved by increasing the frequency content of the projection data. In Fig. 6 we display the effect of doubling the number of detector positions in each direction as well as increasing the number of angles to 128. The reconstruction is seen to be quite good. Although it is not apparent here, a slight blurring in the vertical direction exists and would persist regardless of increases in the amount of data. For moderate cone angles, the effect is negligible, especially when compared to uncertainties inherent to experimentally acquired projection data. Furthermore, since the blurring adversely affects only that structure that differs from the background structure, it seldom should cause a feature to be missed altogether.

Reconstruction errors may occur in this algorithm by virtue of the incomplete data with which it operates. Although generalizations are difficult to make because of the dependence of such errors on the shape, density, and position of the object as well as the source-axis distance, we offer the following observations. The greatest errors might be expected with a flat object lying parallel to and far from the midplane. The reconstruction of such an object will be smeared in the vertical direction over a distance related to (but much smaller than) its shadow on the detector plane; hence, for a given source-axis distance, the smearing will increase with increasing object distance from the midplane. Interestingly, the smearing tends to be less if the object is located away from the axis of rotation. This may be understood by noting that the smearing is a consequence of incompletely canceled spurious density from the backprojection process. That is, the shadow of the outer portion of the object results in apparent density above and below the object. If the object lies near the rotational axis, the incompletely canceled density tends to accumulate with each angle rather than being diluted or undergoing destructive interference. An extreme example of this would occur with a torus located parallel to the midplane and centered on the axis of rotation. In this case, spurious density will result on and near the axis and will be undiminished by an increase in the number of angles.

Having mentioned the difficult cases, we hasten to add that the present method remains a significant improvement over the unmodified fan-beam approach, in which the shadowing effect is completely uncorrected. In Fig. 7 we display vertical slices of the phantom described above along with the corresponding slices reconstructed with the present algorithm and, for comparison, with the stack-of-fans method, in which horizontal lines of projection data are treated as conventional fan-beam projections. In order to illustrate the comments made above, we have augmented the phantom with two oblate spheroids, one of which is on the axis. We also show the error of each reconstruction in the form of an absolute difference. As in the horizontal slices, the most apparent deviations in reconstruction by our algorithm amount to the smoothing of abrupt density changes, a direct consequence of band limiting. The flat objects are reasonably well defined, although some blurring is evident, especially if the difference images are examined. Each ellipsoid tends to spin off low-level artifacts; these generally are reduced as the number of angles is increased and could be reduced further by spatial averaging. The fan-beam approximation results in considerably greater distortion, especially for ellipsoids whose centers are off axis. The flat ellipsoids are essentially unrecognizable.

In Fig. 8, we show what happens when the cone angle is increased by reducing the source-axis distance by one third, such that it is equal to the diameter of the outer cylinder and also equal to the projection of the reconstructed volume on the Z axis. This represents a cone angle of about 53 deg. The detector system is extended to accommodate the increased projected area. The present method continues to result in well-defined objects, although blurring greater than in Fig. 7 is evident. Under these conditions, the stack-of-fans method has become quite unreliable. Because of its lack of normalization (the latter is achieved in our method by the weighting employed), the unmodified fan-beam approach can generate uncontrolled spurious structure; note particularly the structure produced by the upper flat ellipsoid.

The present algorithm holds up reasonably well for even larger cone angles. As a practical matter, this may not be of great importance, since current microfocus x-ray sources of which the authors are aware have angular limits in the same range as the cone angle represented by Fig. 8.

Although the results are not shown here, we have also compared the present algorithm with conventional (slice-by-slice) fan-beam results. When the calculations are made under the same conditions, the reconstruction is nearly identical with that of Fig. 7, the principal difference being that it has slightly better defined flat ellipsoids and slightly less noticeable spin-off artifacts. The treatment of abrupt density changes is essentially identical (i.e., the edges are practically invisible in images of the difference between corresponding results). It is thus to be expected that much of what is understood about conventional convolution-backprojection behavior will likewise apply to the present method.

5. CONCLUSIONS

We have presented an easily implemented, practical algorithm for tomographic reconstruction from 2D projection data and have illustrated its application to numerically generated projection data. Its performance is shown to be generally comparable with that of the standard fan-beam algorithm, of which it may be regarded as a natural extension.

APPENDIX A

In this appendix, we prove two properties of the 3D algorithm: (1) the integral ∫_{−∞}^{∞} dz f(r) is reproduced accurately, and (2) if the density is independent of z, the algorithm is exact.

As a preliminary, consider a 2D reconstruction for a density δ(ρρ0) (Dirac delta function). Since PΦ(Y) is the line integral of the density,

$$ P_{\Phi}(Y) = \frac{(d^{2}+Y^{2})^{1/2}}{d + \boldsymbol\rho_0\cdot\hat{\mathbf x}}\; \delta\!\left[Y - \frac{\boldsymbol\rho_0\cdot\hat{\mathbf y}\;d}{d + \boldsymbol\rho_0\cdot\hat{\mathbf x}}\right]. \tag{A1} $$

The reconstruction based on substituting Eq. (A1) into Eq. (15) in Section 2 gives [from Eq. (13)]

$$ \frac{1}{4\pi^{2}}\int d\Phi\,\frac{d^{2}}{(d + \boldsymbol\rho\cdot\hat{\mathbf x})^{2}}\; g\!\left[\frac{\boldsymbol\rho\cdot\hat{\mathbf y}\;d}{d + \boldsymbol\rho\cdot\hat{\mathbf x}} - \frac{\boldsymbol\rho_0\cdot\hat{\mathbf y}\;d}{d + \boldsymbol\rho_0\cdot\hat{\mathbf x}}\right] \frac{d}{d + \boldsymbol\rho_0\cdot\hat{\mathbf x}} \simeq \delta(\boldsymbol\rho - \boldsymbol\rho_0), \tag{A2} $$

to within the limitations of the 2D algorithm. Band limiting in the convolution function g eliminates the components with high spatial frequency. As ω_{y0} → ∞, expression (A2) becomes exact. We use this result in the proof of property (1).

In three dimensions, we can write [r0 = (ρ0, z0)]

$$ P_{\Phi}(Y,Z) = \int d^{2}\rho_0\, dz_0\; f_0(\mathbf r_0)\, \frac{d\,(d^{2}+Y^{2}+Z^{2})^{1/2}}{(d + \boldsymbol\rho_0\cdot\hat{\mathbf x})^{2}}\; \delta\!\left[Y - \frac{\boldsymbol\rho_0\cdot\hat{\mathbf y}\;d}{d + \boldsymbol\rho_0\cdot\hat{\mathbf x}}\right] \delta\!\left[Z - \frac{z_0\, d}{d + \boldsymbol\rho_0\cdot\hat{\mathbf x}}\right], \tag{A3} $$

where f0(r) is the actual density. [If the algorithm were exact, the reconstruction f(r) would equal f0(r).] Substituting Eq. (A3) into Eq. (31) in Section 3 and taking the integral over z of Eq. (28), we find that

$$ \int dz\, f(\mathbf r) = \frac{1}{4\pi^{2}}\int d\Phi\,\frac{d^{2}}{(d + \mathbf r\cdot\hat{\mathbf x})^{2}} \int d^{2}\rho_0\, dz_0\; f_0(\mathbf r_0)\; g_y\!\left[Y(\mathbf r) - \frac{\boldsymbol\rho_0\cdot\hat{\mathbf y}\;d}{d + \boldsymbol\rho_0\cdot\hat{\mathbf x}}\right] \frac{d^{2}}{(d + \boldsymbol\rho_0\cdot\hat{\mathbf x})^{2}} \int dz\; g_z\!\left[\frac{z\,d}{d + \mathbf r\cdot\hat{\mathbf x}} - \frac{z_0\,d}{d + \boldsymbol\rho_0\cdot\hat{\mathbf x}}\right]. \tag{A4} $$
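To spell out the step used in the next sentence, change variables to u = zd/(d + r·x̂) in the last integral of Eq. (A4); with c denoting z₀d/(d + ρ₀·x̂),

$$ \int_{-\infty}^{\infty} dz\; g_z\!\left[\frac{zd}{d+\mathbf r\cdot\hat{\mathbf x}} - c\right] = \frac{d+\mathbf r\cdot\hat{\mathbf x}}{d}\int_{-\infty}^{\infty} g_z(u - c)\,du = \frac{d+\mathbf r\cdot\hat{\mathbf x}}{d}. $$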

Since g_z is normalized [see Eq. (33)], the last integral on the right-hand side of Eq. (A4) can be replaced by (d + r·x̂)/d. Then, rewriting Eq. (A4) and letting r = (ρ, z), we have

$$ \int dz\, f(\mathbf r) = \int d^{2}\rho_0\, dz_0\; f_0(\mathbf r_0)\,\frac{1}{4\pi^{2}} \int d\Phi\,\frac{d}{d + \boldsymbol\rho\cdot\hat{\mathbf x}}\; g_y\!\left[\frac{\boldsymbol\rho\cdot\hat{\mathbf y}\;d}{d + \boldsymbol\rho\cdot\hat{\mathbf x}} - \frac{\boldsymbol\rho_0\cdot\hat{\mathbf y}\;d}{d + \boldsymbol\rho_0\cdot\hat{\mathbf x}}\right] \frac{d^{2}}{(d + \boldsymbol\rho_0\cdot\hat{\mathbf x})^{2}}. \tag{A5} $$

The Φ integral is the same as in expression (A2) except that ρ and ρ0 are interchanged. (Recall that gy = g is an even function.) Hence it can be replaced by δ(ρ0ρ), and Eq. (A5) becomes

$$ \int dz\, f(\mathbf r) \simeq \int dz_0\; f_0(\boldsymbol\rho, z_0). \tag{A6} $$

In the limit ω_{y0} → ∞, expression (A6) should be exact, and property (1) is proved. Note that taking ω_{z0} → ∞ is not required. This is a mathematical result for the continuous case only; it does not imply anything about the detector spacing in the z direction.

Property (2) follows from observing that

$$ P_{\Phi}(Y,Z) = \frac{(d^{2}+Y^{2}+Z^{2})^{1/2}}{(d^{2}+Y^{2})^{1/2}}\, P_{\Phi}(Y, Z{=}0) \tag{A7} $$

when the density is uniform in the z direction. The factor in front of PΦ(Y, Z = 0) is due merely to path-length scaling. Again, since gz is normalized, we obtain, from substituting Eq. (A7) into Eq. (31),

$$ \tilde P_{\Phi}(Y,Z) = \int_{-\infty}^{\infty} dY'\; g_y(Y - Y')\, P_{\Phi}(Y', Z{=}0)\,\frac{d}{(d^{2}+Y'^{2})^{1/2}}, \tag{A8} $$

which is independent of Z and is identical to P˜Φ, the appropriate convolution of the projection data for the two-dimensional problem. Further substitution into Eq. (28) gives a z-independent f(r) that is correct for the plane z = 0 and hence correct for all z.

Figures and Table

Fig. 1 Schematic physical arrangement of the 3D tomographic system. The source-to-rotation axis distance is d; the source-to-detector plane distance is D = d + d′.

Fig. 2 Geometry in the midplane for derivation of the fan-beam formula. The detector system is here represented by its projection on a line (aa′) through the origin and parallel to the actual detector line.

Fig. 3 Coordinate system for describing projection and reconstruction in the midplane of a 3D system. The unit vectors m̂, n̂, and k̂ form an orthonormal set. The axis of rotation is along k̂.

Fig. 4 Coordinate system for projections above the midplane. The axis of rotation is along z. The vector n̂ is parallel to the midplane. The vector k̂ is inclined with respect to z and is given by k̂ = m̂ × n̂. ρ′ lies in the shaded plane.

Fig. 5 Comparison of representative slices of a phantom with its reconstruction. The phantom and the detector array used are defined in Table 1. The lower row of slices is an exact digitized representation of the phantom, with the corresponding reconstruction just above. The horizontal line defines the position corresponding to the line drawing, in which the density of the phantom (solid line) is compared with that of the reconstruction (points). The scale is linear with a range of 0.0 to 1.0.

Fig. 6 Same as Fig. 5 except that 128 angles and twice the linear detector density were employed.

Fig. 7 Comparison of vertical (x = constant) slices of the exact phantom (middle row) with corresponding slices from the present method (row below middle) and the unmodified fan-beam method (row above middle). The display scale has been concentrated in the density range 0.64 to 1.44 for clarity. Absolute differences from exact are shown in the bottom row (present method) and the top row (unmodified fan-beam method). In order to highlight small differences, the scale is linear from 0.0 to 0.2. The phantom contains the two additional ellipsoids noted in Table 1.

Fig. 8 Same as Fig. 7 except that the source-axis distance d has been reduced to d = 40.0 with a corresponding increase in the detector coverage.

Table 1. Detector Array and Phantom Used in Algorithm Test

REFERENCES

1. See, for example, references contained in G. T. Herman, Image Reconstruction from Projections (Academic, New York, 1980), Chap. 14.

2. R. A. Robb, A. H. Lent, B. K. Gilbert, and A. Chu, “The dynamic spatial reconstructor,” J. Med. Syst. 4, 253–288 (1980).

3. M. D. Altschuler, Y. Censor, G. T. Herman, A. Lent, R. M. Lewitt, S. N. Srihari, H. Tuy, and J. K. Udupa, “Mathematical aspects of image reconstruction from projections,” Department of Computer Science Tech. Rep. MIPG56 (State University of New York at Buffalo, Buffalo, N.Y., May 1981).

4. J. G. Colsher, “Iterative three-dimensional image reconstruction from tomographic projections,” Comput. Graphics Image Processing 6, 513–537 (1977).

5. M. D. Altschuler, G. T. Herman, and A. Lent, “Fully three-dimensional reconstruction from cone-beam sources,” in Proceedings of the Conference on Pattern Recognition and Image Processing (IEEE Computer Society, Long Beach, Calif., 1978), pp. 194–199.

6. M. D. Altschuler, Y. Censor, P. P. B. Eggermont, G. T. Herman, Y. H. Kuo, R. M. Lewitt, M. McKay, H. Tuy, J. Udupa, and M. M. Yau, “Demonstration of a software package for the reconstruction of the dynamically changing structure of the human heart from cone-beam x-ray projections,” Department of Computer Science Tech. Rep. MIPG32 (State University of New York at Buffalo, Buffalo, N.Y., August 1979).

7. M. Schlindwein, “Iterative three-dimensional reconstruction from twin-cone beam projections,” IEEE Trans. Nucl. Sci. NS-25, 1135–1143 (1978).

8. G. Kowalski, “Multislice reconstruction from twin-cone beam scanning,” IEEE Trans. Nucl. Sci. NS-26, 2895–2903 (1979).

9. H. E. Knutsson, P. Edholm, G. H. Granlund, and C. Petersson, “Ectomography—a new radiographic reconstruction method, I, II,” IEEE Trans. Biomed. Eng. BME-27, 640–655 (1980).

10. O. Nalcioglu and Z. H. Cho, “Reconstruction of 3-D objects from cone-beam projections,” Proc. IEEE 66, 1584–1585 (1978).

11. R. V. Denton, B. Friedlander, and A. J. Rockmore, “Direct three-dimensional image reconstruction from divergent rays,” IEEE Trans. Nucl. Sci. NS-26, 4695–4703 (1979).

12. H. K. Tuy, “An inversion formula for cone-beam reconstruction,” Department of Computer Science Tech. Rep. MIPG57 (State University of New York at Buffalo, Buffalo, N.Y., June 1981).

13. G. N. Minerbo, “Convolutional reconstruction from cone-beam projection data,” IEEE Trans. Nucl. Sci. NS-26, 2682–2684 (1979).

14. R. M. Lewitt and M. R. McKay, “Description of a software package for computing cone-beam x-ray projections of time-varying structures, and for dynamic three-dimensional image reconstruction,” Department of Computer Science Tech. Rep. MIPG45 (State University of New York at Buffalo, Buffalo, N.Y., May 1980).

15. G. T. Herman, A. V. Lakshminarayanan, and A. Naparstek, “Convolution reconstruction techniques for divergent beams,” Comput. Biol. Med. 6, 259–271 (1976).

16. L. A. Shepp and B. F. Logan, “The Fourier reconstruction of a head section,” IEEE Trans. Nucl. Sci. NS-21, 21–43 (1974).
