Optica Publishing Group

Singular-value decomposition of a tomosynthesis system

Open Access

Abstract

Tomosynthesis is an emerging technique with potential to replace mammography, since it gives 3D information at a relatively small increase in dose and cost. We present an analytical singular-value decomposition of a tomosynthesis system, which provides the measurement component of any given object. The method is demonstrated on an example object. The measurement component can be used as a reconstruction of the object, and can also be utilized in future observer studies of tomosynthesis image quality.

©2010 Optical Society of America

1. Introduction

The tomosynthesis method [1, 2] is currently considered to be of great potential, especially in the detection of breast cancer [3]. The idea is to replace mammography, which gives a two-dimensional (2D) projection, with tomosynthesis, which gives three-dimensional (3D) information, e.g. as slice images. The ultimate purpose is to improve the detection of breast cancer, at a relatively small increase in dose and cost. Prototype tomosynthesis systems have already been developed and pilot clinical trials performed [4–7]. These trials have so far not demonstrated distinct advantages in detection compared to mammography, but indicate that tomosynthesis images give better visualization of structures without masking by overlapping tissue. Improved detection has been clearly demonstrated in studies on mastectomy specimens [8].

Tomosynthesis can be said to be half-way between a single projection as used in mammography, and the full-angle scan performed in computed tomography (CT). The first produces a 2D representation and the other a 3D representation. Tomosynthesis, where a number of projections are taken at a limited angle, provides partial 3D information — the resolution in one direction must be limited. The hope is that tomosynthesis will provide the advantage of 3D information over conventional mammography, without the increased dose and cost of a full CT.

The 3D object, for example, the breast tissue, must be reconstructed from the 2D projection images. Several techniques are used [9,10], mainly back-projection methods, iterative methods, or statistical methods. The reconstruction task is more difficult than in tomography, since the information is incomplete and a true inverse transform cannot be found. Especially in analytical methods such as back-projection, this has a tendency to produce artifacts. We propose to use singular-value decomposition (SVD) [11–13] for the reconstruction, in a manner very similar to the technique we developed for through-focus imaging systems [14]. The result produced is the measurement component of the object, which could be regarded as its pseudoinverse [11] reconstruction and is as close to the inverse as we can get. The measurement component is the part of the object that is actually transmitted into the image. An advantage of the method is its mathematical stringency: what does not show up in the measurement component simply cannot be reconstructed, and the measurement component is unique for each set of measurements. A disadvantage is that the measurement component differs from a conventional reconstruction in that it contains negative values. The measurement component could be used for further studies on image quality and detectability, using a channelized Hotelling observer [11, 15, 16].
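The role of the measurement component can be illustrated with a small discrete analogue (our own toy example, not the tomosynthesis system analyzed below): for a linear system g = Hf, the measurement component is the projection of f onto the row space of H, and for noise-free data it coincides with the pseudoinverse reconstruction:

```python
import numpy as np

# Toy discrete analogue of the measurement component: 2 measurements of a
# 3-voxel object, so the null space is non-trivial and reconstruction is
# necessarily incomplete.
H = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])      # hypothetical system matrix
f = np.array([0.5, 1.0, 2.0])        # "true" object
g = H @ f                            # noise-free data

U, sv, Vt = np.linalg.svd(H, full_matrices=False)
f_meas = Vt.T @ (Vt @ f)             # expansion in the measurable singular vectors
f_pinv = np.linalg.pinv(H) @ g       # pseudoinverse reconstruction

assert np.allclose(f_meas, f_pinv)   # the two agree for noise-free data
assert np.allclose(H @ f_meas, g)    # same images as the true object
```

The last assertion is the defining property: the system cannot distinguish the object from its measurement component, since their difference lies in the null space.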

At present, we are analyzing a parallel-path setup where the x-ray source moves in a plane parallel to the detector plane. Other geometries [2] could be analyzed but it would require some work to adjust the analysis. We consider the geometrical approximation, disregarding any effects of diffraction. The object is seen as purely absorbing, with no scattering or phase contribution. Scattering cannot be included, since it breaks the lateral shift-invariance. As we analyze the ideal case, we do not yet consider noise.

2. SVD analysis

We consider a geometry where all source positions lie in one transverse plane, and the images are captured in a second plane parallel to the first and placed behind the object. This geometry is illustrated in Fig. 1. In the figure, the object is shown as limited in the x, y and z directions. We present this case of limited object and image support as the most general, but will mainly consider the simplified case of infinite transverse extent. The support in the z direction is by definition limited to at most the distance between the source and image planes. The variables used are r = (x, y) for the transverse coordinates in object space and z for the longitudinal coordinate in object space. The image plane is placed at the origin (z = 0) and has transverse coordinates r_d = (x_d, y_d). The source plane is placed at z = −s, and source position number m is given by r_m = (χ_m, ψ_m). The source is assumed to be a point. The object f(r, z) is identical to the absorption coefficient, the image is g_m(r_d) for the image taken using source position r_m, and g(r_d) is the vector of all images.

Fig. 1 Geometry of the considered tomographic system. The different source positions are χ_m, and the screen is placed along the z axis.

A relation between object and image can be found from the steady-state solution of the Boltzmann transport equation in absorbing media, assuming a point source [11]:

$$I_m(\mathbf{r}_d) = \frac{A\cos\theta}{|\mathbf{r}_1-\mathbf{r}_0|^2}\,\exp\!\left[-\int_0^{|\mathbf{r}_1-\mathbf{r}_0|}\mathrm{d}l'\; f\!\left(\mathbf{r}_1-\frac{\mathbf{r}_1-\mathbf{r}_0}{|\mathbf{r}_1-\mathbf{r}_0|}\,l'\right)\right],$$

where Im(r d) is the irradiance, A is the strength of the point source in emitted flux per unit volume per unit solid angle, r 1 = (r d, 0) = (xd, yd, 0) is the position of the image point in three dimensions and r 0 = (r m, −s) = (χm, ψm, −s) the position of the source. The angle θ is the angle of incidence of the ray from r m to r d onto the detector plane. A change of variables to our chosen coordinate system, namely z = −l′s/|r 1r 0|, leads to

$$\log\!\left(\frac{I_m(\mathbf{r}_d)\,|\mathbf{r}_1-\mathbf{r}_0|^2}{A\cos\theta}\right) = \frac{|\mathbf{r}_1-\mathbf{r}_0|}{s}\int_0^{-s}\mathrm{d}z\; f\!\left(\mathbf{r}_1+(\mathbf{r}_1-\mathbf{r}_0)\frac{z}{s}\right).$$

Using cos θ = s/|r_1 − r_0| and |r_1 − r_0| = (|r_d − r_m|² + s²)^{1/2}, and changing notation from the three-dimensional r_1 and r_0 to the two-dimensional r_d and r_m, leads to

$$\log\!\left[\frac{1}{As}\left(|\mathbf{r}_d-\mathbf{r}_m|^2+s^2\right)^{3/2} I_m(\mathbf{r}_d)\right] = \frac{\left(|\mathbf{r}_d-\mathbf{r}_m|^2+s^2\right)^{1/2}}{s}\int_0^{-s}\mathrm{d}z\; f\!\left(\mathbf{r}_d+(\mathbf{r}_d-\mathbf{r}_m)\frac{z}{s},\,z\right)$$

We introduce a relation between the irradiance Im(r d) registered in the image plane and the image function gm(r d):

$$g_m(\mathbf{r}_d) = -\frac{s}{\left(s^2+|\mathbf{r}_d-\mathbf{r}_m|^2\right)^{1/2}}\,\log\!\left[\frac{1}{As}\left(s^2+|\mathbf{r}_d-\mathbf{r}_m|^2\right)^{3/2} I_m(\mathbf{r}_d)\right].$$

This definition of gm(r d) is designed to yield a linear relation between object and image, and inserting Eq. (2) into Eq. (1) leads to the propagation integral from object to image

$$g_m(\mathbf{r}_d) = [\mathcal{H}f]_m(\mathbf{r}_d) = \int_{z_1}^{z_2}\mathrm{d}z\int\mathrm{d}^2r\; b_f(\mathbf{r},z)\, f(\mathbf{r},z)\, h_m(\mathbf{r},\mathbf{r}_d,z),$$

and its adjoint (or backprojection) operator

$$[\mathcal{H}^\dagger g](\mathbf{r},z) = \sum_{m=1}^{M}\int\mathrm{d}^2r_d\; b_g(\mathbf{r}_d)\, g_m(\mathbf{r}_d)\, h_m^{*}(\mathbf{r},\mathbf{r}_d,z),$$

where z 1 and z 2 limit the object extension in the z direction, −s < z 1 < z 2 < 0, M is the number of source positions, bf(r,z) and bg(r d) define the object and image support respectively, the asterisk denotes complex conjugate, and the impulse-response function h is the two-dimensional Dirac delta function

$$h_m(\mathbf{r},\mathbf{r}_d,z) = \delta\!\left(\mathbf{r}-\mathbf{r}_d-(\mathbf{r}_d-\mathbf{r}_m)\frac{z}{s}\right).$$

The object and image support contained in b_f(r, z) and b_g(r_d) can in general be adjusted according to specific situations, but the support in the axial direction can never extend beyond the source or image plane, as indicated by the limits z_1 and z_2. Eqs. (3)–(5) give a complete description of the propagation, and consequently also of the reconstruction. Now the SVD method for multiple images, earlier presented for telecentric optical imaging systems [14], can be applied. However, this method requires shift invariance in the transverse direction, so we apply a change of variables [17]:

$$\tilde{\mathbf{r}} = (\tilde{x},\tilde{y}) = \left(\frac{s}{s+z}\,x,\ \frac{s}{s+z}\,y\right),\qquad \tilde{z} = \frac{s}{s+z}\,z,$$

with the inverse relations

$$\mathbf{r} = (x,y) = \left(\frac{s}{s-\tilde{z}}\,\tilde{x},\ \frac{s}{s-\tilde{z}}\,\tilde{y}\right),\qquad z = \frac{s}{s-\tilde{z}}\,\tilde{z}.$$

Intuitively, the transformation in Eq. (6) means the source plane is moved to minus infinity and the rays from one source point become parallel as illustrated in Fig. 2. The position r m of the source determines the inclination of the rays. The object is distorted by this transformation, but the image remains the same. In these coordinates, the impulse-response function becomes

$$\tilde{h}_m(\tilde{\mathbf{r}}-\mathbf{r}_d,\tilde{z}) = \left(\frac{s-\tilde{z}}{s}\right)^{2}\delta\!\left(\tilde{\mathbf{r}}-\mathbf{r}_d+\frac{\tilde{z}}{s}\,\mathbf{r}_m\right),$$

Fig. 2 The change of variables moves the source plane to minus infinity. Rays originating from two source positions, one on-axis and one off-axis, are shown. The object in this figure is distorted compared to the object in Fig. 1, but the image is not.

where h̃_m(r̃ − r_d, z̃) = h_m(r, r_d, z). Since the Jacobian of the transformation is

$$\left|\frac{\partial(x,y,z)}{\partial(\tilde{x},\tilde{y},\tilde{z})}\right| = \frac{s^{4}}{(s-\tilde{z})^{4}},$$

the integral relations now become

$$g_m(\mathbf{r}_d) = [\mathcal{H}\tilde{f}]_m(\mathbf{r}_d) = \int_{\tilde{z}_1}^{\tilde{z}_2}\mathrm{d}\tilde{z}\int\mathrm{d}^2\tilde{r}\;\frac{s^{4}}{(s-\tilde{z})^{4}}\,\tilde{b}_f(\tilde{\mathbf{r}},\tilde{z})\,\tilde{f}(\tilde{\mathbf{r}},\tilde{z})\,\tilde{h}_m(\tilde{\mathbf{r}}-\mathbf{r}_d,\tilde{z}),$$

and

$$[\mathcal{H}^\dagger g](\tilde{\mathbf{r}},\tilde{z}) = \sum_{m=1}^{M}\int\mathrm{d}^2r_d\; b_g(\mathbf{r}_d)\, g_m(\mathbf{r}_d)\,\tilde{h}_m^{*}(\tilde{\mathbf{r}}-\mathbf{r}_d,\tilde{z}),$$

where f̃(r̃, z̃) = f(r, z) is the distorted object function and b̃_f(r̃, z̃) = b_f(r, z) the distorted object-support function. The integration limits are z̃_1 = s z_1/(s + z_1) and z̃_2 = s z_2/(s + z_2), and fulfill −∞ < z̃_1 < z̃_2 < 0. We note that, due to the change of variables, the distorted object- and image-space functions are defined on Hilbert spaces with the weighted inner products [18]

$$\langle \tilde{f}_1,\tilde{f}_2\rangle_{\mathrm{obj}} = \int\mathrm{d}^2\tilde{r}\int_{\tilde{z}_1}^{\tilde{z}_2}\mathrm{d}\tilde{z}\;\tilde{b}_f(\tilde{\mathbf{r}},\tilde{z})\,\tilde{f}_1(\tilde{\mathbf{r}},\tilde{z})\,\tilde{f}_2^{*}(\tilde{\mathbf{r}},\tilde{z})\,\frac{s^{4}}{(s-\tilde{z})^{4}}$$

and

$$\langle g_1,g_2\rangle_{\mathrm{im}} = \sum_{m=1}^{M}\int\mathrm{d}^2r_d\; b_g(\mathbf{r}_d)\, g_{1m}(\mathbf{r}_d)\, g_{2m}^{*}(\mathbf{r}_d).$$

The weighted inner product of Eq. (11) is required in order to fulfill the adjoint relation

$$\langle \mathcal{H}\tilde{f},\, g\rangle_{\mathrm{im}} = \langle \tilde{f},\, \mathcal{H}^\dagger g\rangle_{\mathrm{obj}}.$$

Combining Eqs. (9) and (10) finally yields the image-space Hermitian operator as

$$[\mathcal{H}\mathcal{H}^\dagger g]_m(\mathbf{r}_d) = \sum_{m'=1}^{M}\int\mathrm{d}^2r_d'\; b_g(\mathbf{r}_d')\, g_{m'}(\mathbf{r}_d')\, k_{mm'}(\mathbf{r}_d,\mathbf{r}_d'),$$

where

$$k_{mm'}(\mathbf{r}_d,\mathbf{r}_d') = \int_{\tilde{z}_1}^{\tilde{z}_2}\mathrm{d}\tilde{z}\int\mathrm{d}^2\tilde{r}\;\tilde{b}_f(\tilde{\mathbf{r}},\tilde{z})\,\frac{s^{4}}{(s-\tilde{z})^{4}}\,\tilde{h}_m(\tilde{\mathbf{r}}-\mathbf{r}_d,\tilde{z})\,\tilde{h}_{m'}^{*}(\tilde{\mathbf{r}}-\mathbf{r}_d',\tilde{z}).$$

Similarly, the object-space Hermitian operator will be

$$[\mathcal{H}^\dagger\mathcal{H}\tilde{f}](\tilde{\mathbf{r}},\tilde{z}) = \int_{\tilde{z}_1}^{\tilde{z}_2}\mathrm{d}\tilde{z}'\int\mathrm{d}^2\tilde{r}'\;\tilde{b}_f(\tilde{\mathbf{r}}',\tilde{z}')\,\tilde{f}(\tilde{\mathbf{r}}',\tilde{z}')\, p(\tilde{\mathbf{r}},\tilde{\mathbf{r}}',\tilde{z},\tilde{z}'),$$

where

$$p(\tilde{\mathbf{r}},\tilde{\mathbf{r}}',\tilde{z},\tilde{z}') = \frac{s^{4}}{(s-\tilde{z}')^{4}}\sum_{m=1}^{M}\int\mathrm{d}^2r_d\; b_g(\mathbf{r}_d)\,\tilde{h}_m^{*}(\tilde{\mathbf{r}}-\mathbf{r}_d,\tilde{z})\,\tilde{h}_m(\tilde{\mathbf{r}}'-\mathbf{r}_d,\tilde{z}').$$

2.1. Infinite object and image

If both object and image are considered infinite in the transverse direction, and the object is confined in the longitudinal direction by z_1 < z < z_2, both support functions b_f(r, z) and b_g(r_d) are identically equal to one. While this assumption is obviously not true, its effects on the resulting analysis are surprisingly small; its main consequence is edge effects. Theoretically, the object and images are assumed to be infinite, but for the evaluation, finite objects and images must be used. This leads to unreliable results close to the edges, so the rim of the resulting images should be discarded. The assumption simplifies the analysis considerably, since the system is now transversally shift-invariant and the kernels of the Hermitian operators may be written k_mm′(r_d − r_d′) and p(r̃ − r̃′, z̃, z̃′). Insertion of Eq. (8) into Eq. (13) yields, after performing the integral in r̃,

$$k_{mm'}(\mathbf{r}_d-\mathbf{r}_d') = \int_{\tilde{z}_1}^{\tilde{z}_2}\mathrm{d}\tilde{z}\;\delta\!\left(\mathbf{r}_d-\mathbf{r}_d'-\frac{\tilde{z}}{s}\,(\mathbf{r}_m-\mathbf{r}_{m'})\right).$$

This integral could easily be evaluated, but the subsequent analysis gets easier if it is left as it is. Assuming singular functions of the form

$$v_{\rho,j}(\mathbf{r}_d) = V_j(\boldsymbol{\rho})\,\exp(2\pi i\,\boldsymbol{\rho}\cdot\mathbf{r}_d)$$

yields, after insertion into Eq. (12), the eigenequation [14]

$$K(\boldsymbol{\rho})\,V_j(\boldsymbol{\rho}) = \mu_{\rho,j}\,V_j(\boldsymbol{\rho}),$$

where K(ρ) is an M × M matrix with elements

$$K_{mm'} = \int\mathrm{d}^2r_d\; k_{mm'}(\mathbf{r}_d)\,\exp(-2\pi i\,\boldsymbol{\rho}\cdot\mathbf{r}_d).$$

Inserting Eq. (14) into Eq. (17) and performing the integral in r d yields

$$K_{mm'} = \int_{\tilde{z}_1}^{\tilde{z}_2}\mathrm{d}\tilde{z}\;\exp\!\left(-2\pi i\,\boldsymbol{\rho}\cdot(\mathbf{r}_m-\mathbf{r}_{m'})\,\frac{\tilde{z}}{s}\right),$$

which is integrated to yield

$$K_{mm'} = \tilde{z}_2-\tilde{z}_1$$

for m = m′ or ρ = (0,0), and

$$K_{mm'} = \frac{is}{2\pi\,\boldsymbol{\rho}\cdot(\mathbf{r}_m-\mathbf{r}_{m'})}\left\{\exp\!\left[-2\pi i\,\boldsymbol{\rho}\cdot(\mathbf{r}_m-\mathbf{r}_{m'})\,\frac{\tilde{z}_2}{s}\right]-\exp\!\left[-2\pi i\,\boldsymbol{\rho}\cdot(\mathbf{r}_m-\mathbf{r}_{m'})\,\frac{\tilde{z}_1}{s}\right]\right\}$$

for m ≠ m′. Thus the elements of K have been found, and Eq. (16) can be solved numerically to yield the vectors V_j(ρ). The image-space singular functions have been found, in exactly the same manner as in Burvall et al. [14]. The object-space eigenfunctions can then be found by projection of the image-space eigenfunctions using the adjoint operator in Eq. (10).
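As a sketch of this step (geometry taken from Sec. 3.1 below; `K_matrix` is our own helper name), the elements of K can be assembled from the closed-form expressions above and the eigenproblem solved with a standard Hermitian solver:

```python
import numpy as np

# Geometry from Sec. 3.1: s = 0.3 m, 11 sources along x in [-0.1, 0.1] m,
# object between z1 = -0.1 m and z2 = -0.01 m.
s, M = 0.3, 11
chi = np.linspace(-0.1, 0.1, M)                    # source x-positions (m)
z1, z2 = -0.1, -0.01
z1t, z2t = s * z1 / (s + z1), s * z2 / (s + z2)    # distorted z-limits

def K_matrix(rho_x):
    """M x M matrix K(rho) assembled from the closed-form elements above."""
    d = rho_x * (chi[:, None] - chi[None, :])      # rho . (r_m - r_m')
    with np.errstate(divide="ignore", invalid="ignore"):
        K = (1j * s / (2 * np.pi * d)
             * (np.exp(-2j * np.pi * d * z2t / s)
                - np.exp(-2j * np.pi * d * z1t / s)))
    K[d == 0] = z2t - z1t                          # m = m' (or rho = 0)
    return K

K = K_matrix(200.0)                                # rho_x = 200 cycles/m
assert np.allclose(K, K.conj().T)                  # K is Hermitian
mu, V = np.linalg.eigh(K)                          # eigenvalues and V_j(rho)
assert mu.min() > -1e-12                           # positive semidefinite
```

The choice ρ_x = 200 cycles/m is arbitrary; K must be rebuilt and re-diagonalized for every transverse frequency of interest.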

The solution can also be obtained in object space. The results will be identical to those obtained from the image-space analysis, but we still outline the analysis required. The kernel of the Hermitian operator is

$$p(\tilde{\mathbf{r}}-\tilde{\mathbf{r}}',\tilde{z},\tilde{z}') = \left(\frac{s-\tilde{z}}{s-\tilde{z}'}\right)^{2}\sum_{m=1}^{M}\delta\!\left(\tilde{\mathbf{r}}-\tilde{\mathbf{r}}'+\mathbf{r}_m\,\frac{\tilde{z}-\tilde{z}'}{s}\right),$$

leading to the object-space eigenequation

$$\int_{\tilde{z}_1}^{\tilde{z}_2}\mathrm{d}\tilde{z}'\left(\frac{s-\tilde{z}}{s-\tilde{z}'}\right)^{2} U_j(\boldsymbol{\rho},\tilde{z}')\sum_{m=1}^{M}\exp\!\left(2\pi i\,\boldsymbol{\rho}\cdot\mathbf{r}_m\,\frac{\tilde{z}-\tilde{z}'}{s}\right) = \mu_{\rho,j}\,U_j(\boldsymbol{\rho},\tilde{z}),$$

where it has been assumed the singular functions can be written as

$$u_{\rho,j}(\tilde{\mathbf{r}},\tilde{z}) = U_j(\boldsymbol{\rho},\tilde{z})\,\exp(2\pi i\,\boldsymbol{\rho}\cdot\tilde{\mathbf{r}}).$$

Introducing Û_j(ρ, z̃) = U_j(ρ, z̃)/(s − z̃)², we find the integral equation

$$\mu_{\rho,j}\,\hat{U}_j(\boldsymbol{\rho},\tilde{z}) = \int_{\tilde{z}_1}^{\tilde{z}_2}\mathrm{d}\tilde{z}'\;\hat{U}_j(\boldsymbol{\rho},\tilde{z}')\sum_{m=1}^{M}\exp\!\left(2\pi i\,\frac{\tilde{z}}{s}\,\boldsymbol{\rho}\cdot\mathbf{r}_m\right)\exp\!\left(-2\pi i\,\frac{\tilde{z}'}{s}\,\boldsymbol{\rho}\cdot\mathbf{r}_m\right).$$

This integral equation has a degenerate kernel, and such equations can be solved using standard methods [19].
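The degeneracy is easy to verify numerically: discretizing the kernel on a z̃ grid (geometry of Sec. 3.1; the grid size and the frequency ρ_x = 200 cycles/m are our own choices) gives a matrix of rank at most M, so at most M eigenvalues differ from zero:

```python
import numpy as np

# The kernel is a finite sum of M products a_m(z~) a_m*(z~'), so its
# discretization has rank at most M.
s, M, rho_x = 0.3, 11, 200.0
chi = np.linspace(-0.1, 0.1, M)
z1, z2 = -0.1, -0.01
z1t, z2t = s * z1 / (s + z1), s * z2 / (s + z2)

zt = np.linspace(z1t, z2t, 400)                # z~ quadrature grid
dz = zt[1] - zt[0]
e = np.exp(2j * np.pi * zt[:, None] * rho_x * chi[None, :] / s)
kernel = e @ e.conj().T                        # sum over m of exp(..)exp(-..)
mu = np.linalg.eigvalsh(kernel * dz)           # quadrature-weighted operator
n_nonzero = int(np.sum(np.abs(mu) > 1e-9 * np.abs(mu).max()))
assert n_nonzero == M                          # exactly M non-zero eigenvalues
```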

Equation (19) is the kernel of the object-space Hermitian operator, but it can also be seen as the impulse-response function of the system. A delta function at (r̃′, z̃′) in the object will cause a distribution p(r̃ − r̃′, z̃, z̃′) in the reconstruction. The impulse-response function is a sum of M lines whose intensity varies with z̃, and which all meet at r̃ = r̃′, z̃ = z̃′.

3. Numerical results

In section 2, the singular functions were found for the simplified case of infinite object and image. These results can be illustrated numerically. Under certain circumstances, the SVD technique is not very time-consuming and can be run on a normal desktop computer. The calculation times are on the order of minutes or tens of minutes. The most time-consuming part is actually the simulation of the object (see section 3.2). The technique for the numerical SVD calculations is very similar to that of Burvall et al. [14].

3.1. Numerical SVD

The geometry is the same as in Fig. 1. We have assumed a distance of 30 cm from source to image plane, and that the object is located between 10 and 1 cm from the image plane, i.e. s = 0.3 m, z_1 = −0.1 m, and z_2 = −0.01 m. Furthermore, it is assumed that all the sources are placed along a line or arc rather than in a 2D array. While it would be interesting to investigate other arrangements, this option simplifies the analysis significantly and is often employed in tomosynthesis. Assuming the source distribution is along the x axis, we have r_m = (χ_m, 0). The outermost source positions are at χ_m = ±0.1 m, with the source positions evenly distributed in between. The examples below are for 11 source positions, i.e., M = 11.

The assumption that r_m = (χ_m, 0) is used to simplify the analysis by reducing large parts of it from 3D to 2D. A closer look at Eq. (18) reveals that it depends on ρ only through ρ · (r_m − r_m′) = ρ_x(χ_m − χ_m′), so the elements of K depend only on ρ_x and not on ρ_y. Comparison to Eq. (16) shows that neither V_j(ρ) = V_j(ρ_x) nor μ_{ρ,j} = μ_{ρx,j} depends on ρ_y either.

Once the image-space singular functions are known, the object-space singular functions u_{ρ,j}(r̃, z̃) can be found from the projection

$$[\mathcal{H}^\dagger v_{\rho,j}](\tilde{\mathbf{r}},\tilde{z}) = \sqrt{\mu_{\rho,j}}\;u_{\rho,j}(\tilde{\mathbf{r}},\tilde{z}).$$

This equation is also known as part of the shifted-eigenvalue equations [20]. Insertion of Eqs. (9), (8) and (15) into Eq. (20) yields

$$u_{\rho,j}(\tilde{\mathbf{r}},\tilde{z}) = \exp(2\pi i\,\boldsymbol{\rho}\cdot\tilde{\mathbf{r}})\,U_j(\boldsymbol{\rho},\tilde{z}),$$

where

$$U_j(\boldsymbol{\rho},\tilde{z}) = \frac{1}{\sqrt{\mu_{\rho,j}}}\left(\frac{s-\tilde{z}}{s}\right)^{2}\sum_{m=1}^{M}\left[V_j(\boldsymbol{\rho})\right]_m\exp\!\left(2\pi i\,\frac{\tilde{z}}{s}\,\boldsymbol{\rho}\cdot\mathbf{r}_m\right).$$

So Uj(ρ,z̃) = Uj(ρx,z̃) only needs to be calculated for ρ = (ρx, 0) and then the same 2D matrix can be used for all ρy. Figure 3 shows the eigenvalues μρx,j while Fig. 4 contains the absolute values of some of the singular functions Uj(ρx,z̃). We note that the eigenvalues are approximately zero for low frequencies, and that this band of zero eigenvalues grows broader with increasing j. The eigenvalues are a combination of discrete and continuous spectra: discrete in j which handles the axial dimension, but continuous in ρx which handles the transverse dimensions.
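As a consistency check of this construction (a self-contained numpy sketch; K(ρ) is re-assembled from the closed-form expressions of Sec. 2.1, and the quadrature grid and the frequency ρ_x = 200 cycles/m are our own choices), the U_j built this way should be orthonormal in the weighted object-space inner product:

```python
import numpy as np

# Geometry of Sec. 3.1.
s, M, rho_x = 0.3, 11, 200.0
chi = np.linspace(-0.1, 0.1, M)
z1, z2 = -0.1, -0.01
z1t, z2t = s * z1 / (s + z1), s * z2 / (s + z2)

# K(rho) from the closed-form elements of Sec. 2.1, then its eigenvectors.
d = rho_x * (chi[:, None] - chi[None, :])
with np.errstate(divide="ignore", invalid="ignore"):
    K = (1j * s / (2 * np.pi * d)
         * (np.exp(-2j * np.pi * d * z2t / s)
            - np.exp(-2j * np.pi * d * z1t / s)))
K[d == 0] = z2t - z1t
mu, V = np.linalg.eigh(K)

# U_j(rho_x, z~) from the projection formula above, on a fine z~ grid.
zt = np.linspace(z1t, z2t, 32001)
phase = np.exp(2j * np.pi * zt[:, None] * rho_x * chi[None, :] / s)
U = ((s - zt[:, None]) / s) ** 2 * (phase @ V) / np.sqrt(np.abs(mu))

# Gram matrix under the weight s^4/(s - z~)^4 (trapezoid rule).
wt = (s**4 / (s - zt) ** 4) * (zt[1] - zt[0])
wt[0] *= 0.5
wt[-1] *= 0.5
G = (U.conj().T * wt) @ U
assert np.allclose(G, np.eye(M), atol=1e-3)        # <u_j, u_j'> = delta_jj'
```

The weight s⁴/(s − z̃)⁴ in the Gram matrix is exactly the one appearing in the object-space inner product; without it the U_j are not orthogonal.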

Fig. 3 Eigenvalues μ_{ρx,j} for j ranging from 1 to 11.

Fig. 4 Absolute values of the singular functions (a) U_1(ρx, z̃), (b) U_2(ρx, z̃), (c) U_6(ρx, z̃), and (d) U_11(ρx, z̃).

Unlike the 3D imaging case [14], where the singular functions are real, these functions are complex. In Fig. 4 their absolute values are shown. Interpreting the results intuitively is fairly difficult, as there are no obvious image planes and the coordinates are a combination of spatial (z̃) and frequency (ρ_x) coordinates. It is easier to interpret the expansion of an example object into these eigenfunctions, i.e., its measurement component. This requires a relevant sample object.

3.2. Sample object

We illustrate the method using an example of at least some relevance to the future task, namely tomosynthesis of breast tissue. The object consists of two parts: a designer nodule and a clustered lumpy background (CLB). This approach is chosen since the expected advantage of tomosynthesis is the ability to detect low-contrast masses, such as the nodule, that are masked by overlapping tissue, such as the clustered lumpy background. The designer nodule has a projection onto the image plane given by [21]

$$g_m(\mathbf{r}_d) = A\,\mathrm{rect}\!\left(\frac{|\mathbf{r}_d|}{2R}\right)\left(1-\frac{|\mathbf{r}_d|^{2}}{R^{2}}\right)^{\nu},$$

where A is the amplitude of the nodule, R its radius, and ν a real, non-negative shape factor that determines the sharpness of the edges. We have chosen R = 3 mm, A = 0.03, and ν = 1.5. This value of ν is reported to give the best fit to average tumor profiles [21]. The 3D object that gives this projection is [22]

$$f(\mathbf{r}) = A\,\frac{3}{4R}\,\mathrm{rect}\!\left(\frac{|\mathbf{r}|}{2R}\right)\left(1-\frac{|\mathbf{r}|^{2}}{R^{2}}\right),$$

as can be shown by finding its projection for a far-away source, i.e., by integrating Eq. (22) with respect to z.
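This relation is easy to verify numerically for the chosen parameters: integrating the 3D profile along z at some transverse radius reproduces the designer-nodule projection (the test radius is our own choice):

```python
import numpy as np

# Check that the 3D nodule profile integrates along z to the designer-nodule
# projection for nu = 1.5, with A = 0.03 and R = 3 mm from the text.
A, R, nu = 0.03, 3e-3, 1.5

def g_profile(r):
    """2D projection profile A (1 - r^2/R^2)^nu inside radius R."""
    return A * np.clip(1 - r**2 / R**2, 0, None) ** nu

def f_profile(r):
    """3D object profile (3A/4R) (1 - r^2/R^2) inside radius R."""
    return A * 3 / (4 * R) * np.clip(1 - r**2 / R**2, 0, None)

rho = 1e-3                                     # transverse radius of test ray (m)
z = np.linspace(-R, R, 20001)
vals = f_profile(np.sqrt(rho**2 + z**2))
g_num = (z[1] - z[0]) * (vals.sum() - 0.5 * (vals[0] + vals[-1]))
assert abs(g_num - g_profile(rho)) < 1e-6 * A  # matches the 2D profile
```

Note that the agreement is exact only for ν = 3/2: the exponent of the 3D profile is ν − 1/2, which happens to equal one here.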

The CLB simulates the 2D projection of the background by distributing lumps according to a statistical model [23]. The projection is given by

$$g(\mathbf{r}_d) = \sum_{k=1}^{K}\sum_{n=1}^{N_k} b\!\left(\frac{1}{a_{kn}}\left(\mathbf{r}_d-\mathbf{r}_k-\mathbf{r}_{kn}\right),\,R_{\theta_{kn}}\right),$$

where K is the number of clusters, each centered on r_k, and N_k is the number of blobs or lumps in each cluster, their positions within the cluster given by r_kn. The function b describes the shape of each blob, a_kn its strength, and R_θkn the rotation matrix of the random angle θ_kn. The theory of this 2D background is thoroughly described by Bochud et al. [23]. However, no proper 3D model exists, and we need to model the 3D object rather than its projection. Since the intention in the current paper is only to provide an illustration, we have extended the model in the simplest possible way. The lumps are made three-dimensional and placed in a volume by the 3D vectors r_k and r_kn. They are also rotated in 3D rather than in 2D. Apart from that, we follow the classic CLB as introduced by Bochud et al. [23]. The number K is a Poisson random variable with expectation value 150, and N_k a Poisson random variable with expectation value 20. The scale a_kn is set to 1 for all blobs, and the position of the cluster r_k is a uniform random variable within the object volume, which is 40 × 40 × 90 mm. The position within the cluster r_kn is Gaussian with standard deviation 3.6 mm. The function b is an asymmetrical exponential blob b(r, R_θkn) = exp(−α|R_θkn r|^β / L(R_θkn r)), as given by Bochud et al. [23], of length L_x = 15 mm and width L_y = L_z = 6 mm. The parameters of the blob function are α = 2.1 and β = 0.5. The blob is rotated by an angle θ_kn = θ_k (a uniform random variable on [0, 2π]) in the xy-plane, and by an angle ϕ_kn = ϕ_k (a uniform random variable on [0, π]) in the xz-plane.
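A sketch of one realization of this simplified 3D CLB (parameter values from the text; the directional length L(·) is implemented as an ellipsoidal length, which is our own assumption since only L_x, L_y and L_z are specified, and all function names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
vol = np.array([0.040, 0.040, 0.090])          # object volume (m)
alpha, beta = 2.1, 0.5
Lx, Ly, Lz = 0.015, 0.006, 0.006               # blob length and widths (m)

def rotation(theta, phi):
    """Rotation by theta in the xy-plane followed by phi in the xz-plane."""
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(phi), np.sin(phi)
    Rz = np.array([[ct, -st, 0.0], [st, ct, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, -sp], [0.0, 1.0, 0.0], [sp, 0.0, cp]])
    return Ry @ Rz

def blob(r, Rm):
    """Asymmetric exponential blob exp(-alpha |Rm r|^beta / L(Rm r))."""
    u = Rm @ r
    dist = np.linalg.norm(u)
    if dist == 0.0:
        return 1.0
    n = u / dist                               # direction of the rotated vector
    L = 1.0 / np.sqrt((n[0] / Lx) ** 2 + (n[1] / Ly) ** 2 + (n[2] / Lz) ** 2)
    return np.exp(-alpha * dist**beta / L)

def realization():
    """Blob centres and rotation matrices of one 3D CLB realization."""
    blobs = []
    for _ in range(rng.poisson(150)):                  # clusters
        rk = rng.uniform(0.0, 1.0, 3) * vol            # cluster centre
        Rm = rotation(rng.uniform(0, 2 * np.pi), rng.uniform(0, np.pi))
        for _ in range(rng.poisson(20)):               # blobs in the cluster
            blobs.append((rk + rng.normal(0.0, 0.0036, 3), Rm))
    return blobs

def clb_value(r, blobs):
    """Background value at point r (slow reference implementation)."""
    return sum(blob(r - c, Rm) for c, Rm in blobs)

blobs = realization()
assert clb_value(vol / 2, blobs) >= 0.0
```

In practice the background would be evaluated on a voxel grid, vectorizing the blob evaluation; the per-point loop above is only meant to make the model explicit.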

An example object f̃(r̃, z̃) is shown in Fig. 5, in the distorted coordinates r̃ and z̃. It is shown in two slices: one through the middle of the designer nodule in the x̃ỹ plane, and one through the middle of the designer nodule in the x̃z̃ plane. The object is not a proper 3D breast model, since it is not connected enough for a breast CT slice [24], but it will give reasonable results once projected and will serve as an illustration.

Fig. 5 Example object f̃(r̃, z̃), including a designer nodule and a 3D clustered lumpy background. (a) Section through the object at z̃ = −60 mm, or z = −50 mm. (b) Section through the object at ỹ = y = 0. Both sections go through the center of the designer nodule.

3.3. Numerical results

Once an object has been generated, and the singular functions of the system found, they can be combined to generate the measurement component. It is an expansion of the object into the object-space singular functions according to

$$\tilde{f}_{\mathrm{meas}}(\tilde{\mathbf{r}},\tilde{z}) = \int\mathrm{d}^2\rho\sum_{j=1}^{M} A_j(\boldsymbol{\rho})\,U_j(\boldsymbol{\rho},\tilde{z})\,\exp(2\pi i\,\boldsymbol{\rho}\cdot\tilde{\mathbf{r}}),$$

where the integral is numerically found as a Fast Fourier Transform. The expansion coefficients Aj(ρ) are found as

$$A_j(\boldsymbol{\rho}) = \int_{\tilde{z}_1}^{\tilde{z}_2}\mathrm{d}\tilde{z}\;\tilde{F}(\boldsymbol{\rho},\tilde{z})\,U_j^{*}(\boldsymbol{\rho},\tilde{z})\,\frac{s^{4}}{(s-\tilde{z})^{4}},$$

where F̃(ρ, z̃) is the 2D Fourier transform of the object f̃(r̃, z̃) with respect to r̃. Two things should be noted, as they differ from the expression used in Ref. [14]. First, the complex conjugate in Eq. (23) was omitted in [14], since the singular functions there were real, but it must be included here. Second, the weighting function w(r̃, z̃) = s⁴/(s − z̃)⁴ follows from the earlier change of variables. Slices through the measurement component are shown in Fig. 6. The slices are taken at the same positions as those in Fig. 5. In part (b), the low axial resolution shows up as elongation of both nodule and background.
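Because the expansion is a projection, computing the coefficients of the measurement component returns the same A_j. A self-contained sketch at a single frequency (ρ_x = 200 cycles/m and the test profile are our own choices; K and U_j are rebuilt from the closed-form expressions of Secs. 2.1 and 3.1):

```python
import numpy as np

s, M, rho_x = 0.3, 11, 200.0
chi = np.linspace(-0.1, 0.1, M)
z1, z2 = -0.1, -0.01
z1t, z2t = s * z1 / (s + z1), s * z2 / (s + z2)

# K(rho) and its eigenvectors, from the closed-form elements of Sec. 2.1.
d = rho_x * (chi[:, None] - chi[None, :])
with np.errstate(divide="ignore", invalid="ignore"):
    K = (1j * s / (2 * np.pi * d)
         * (np.exp(-2j * np.pi * d * z2t / s)
            - np.exp(-2j * np.pi * d * z1t / s)))
K[d == 0] = z2t - z1t
mu, V = np.linalg.eigh(K)

# Singular functions U_j(rho_x, z~) and weighted trapezoid quadrature.
zt = np.linspace(z1t, z2t, 32001)
phase = np.exp(2j * np.pi * zt[:, None] * rho_x * chi[None, :] / s)
U = ((s - zt[:, None]) / s) ** 2 * (phase @ V) / np.sqrt(np.abs(mu))
wt = (s**4 / (s - zt) ** 4) * (zt[1] - zt[0])
wt[0] *= 0.5
wt[-1] *= 0.5

def coeffs(F):
    """A_j: weighted inner product of F(z~) with U_j(z~)."""
    return (U.conj().T * wt) @ F

F = np.exp(-((zt - zt.mean()) / 0.02) ** 2)        # arbitrary test profile
A = coeffs(F)
F_meas = U @ A                                     # measurement component
assert np.allclose(coeffs(F_meas), A, atol=1e-2 * np.abs(A).max())
```

In the full computation this is done for every (ρ_x, ρ_y) sample, with the transverse Fourier transforms evaluated as FFTs.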

Fig. 6 Measurement component of the object in Fig. 5. (a) Section through the measurement component at z̃ = −60 mm, or z = −50 mm. (b) Section through the measurement component at ỹ = y = 0. Both sections go through the center of the designer nodule.

The zero-angle projection of the measurement component is shown in Fig. 7(a), along with the same projection of the object in part (b). The images are generated by direct numerical propagation of the quantities involved, using a sampling of 400 points in the transverse directions and 300 points in the longitudinal direction. This serves as a test of both the analysis and the numerical accuracy: the image of an object should be identical to the image of its measurement component. As can be seen in Fig. 8, the difference between them is below 1% except at the edges, where edge effects can cause it to rise to around 10%.

Fig. 7 (a) Image or projection of the measurement component in Fig. 6. (b) Image or projection of the object in Fig. 5. Both are calculated through direct numerical propagation of the measurement component and object respectively, for the source placed on the z-axis.

Fig. 8 Difference between the projection of the measurement component (see Fig. 7(a)) and the projection of the object (see Fig. 7(b)). The maximum absolute value is around 3 × 10−6, compared to the maximum of the projections, which is around 2 × 10−4. This gives an error of roughly 1%.

An interesting point is how the slice through the measurement component is a hybrid between the projection and the similar object slice. This illustrates the very principle of tomosynthesis. Unlike tomography, which provides a full reconstruction of every plane of the object, tomosynthesis provides only partial 3D information. The longitudinal resolution will always be low, and contributions from close-by transverse planes will be present in every slice of the measurement component.

4. Discussion and future work

The measurement component of an object provides information on how well this object is transmitted through the imaging system. Comparing the measurement component in Fig. 6 to its object in Fig. 5, we can see that the information received in the image plane is not complete. This illustrates the fundamental limit of the acquisition process, and serves as an estimate of the best reconstruction that could be achieved. We can also see that the measurement component looks a lot like the object, but that it also borrows traits from the projection in Fig. 7. This reflects the small angle used in tomosynthesis and its effect on the results. The reconstruction, while given in 3D, is in many ways an improved projection or a projection with some separation of information into different layers. It is not the full 3D reconstruction that could be achieved from e.g. tomography.

It is also becoming evident that there is a lack of three-dimensional models of breast tissue. Statistical models for the normal breast, such as the clustered lumpy background [23], are made for 2D and not for 3D. Since mammography has been the dominant examination method, it has simply been sufficient to work only on the 2D projection through the breast. Extensions to 3D are either intuitive and lack the statistical backing required for a systematic study, like the example presented in this paper, or advanced models designed to produce one really good phantom rather than the large number of samples required for a statistical study. There is a need for a model of the breast tissue that has the same statistical properties as experimental images, in 3D as well as in its 2D projections, that looks similar to experimental images in 3D and in its 2D projections, and that allows for the generation of numerous realizations. Strong candidates at the moment are a more sophisticated extension of the clustered lumpy background [23], the threshold model by Abbey and Boone [25], or the breast phantoms by Zhang et al. [26]. The example object in Fig. 5 is sufficient as an illustration of the SVD method, but is not interconnected enough for a real slice through breast tissue.

The next step would be to use the SVD of a more realistic breast tissue model in an image-quality study, including the effect of noise. The SVD model is particularly suitable for this since the statistics of the object background are relatively easily transferred to the image through the use of characteristic functionals [11, Sec. 8.5.3]. The SVD singular functions can also be used as channels for a channelized observer [27]. This allows for many comparisons, for example how or if the detection seems to improve with tomosynthesis compared to mammography, or how the SVD method compares to other reconstruction algorithms.

5. Conclusions

The mixed continuous and discrete operators of a tomosynthesis system have been derived from the Boltzmann equation, and used to develop an analytical singular-value decomposition of a tomosynthesis system. In combination with numerical calculations, it provides the measurement component of any object, where object refers to the breast tissue. The measurement component of an example object has been presented. Although the example object is not truly realistic, interesting information on the tomosynthesis process and its effects on reconstruction has been found. Future prospects involve more realistic 3D breast tissue models, and an image-quality study involving a channelized Hotelling observer.

Acknowledgments

C. Dainty acknowledges support from Science Foundation Ireland Grant No 07/IN.1/1906, H. Barrett from the Center for Gamma-Ray Imaging Grant 5 P41EB002035-11 and the SPECT Imaging and Parallel Computing Grant 5 R37 EB000803-19, and A. Burvall from the Swedish Research Council.

References and links

1. J. T. Dobbins III and D. J. Godfrey, “Digital x-ray tomosynthesis: current state of the art and clinical potential,” Phys. Med. Biol. 48, R65–R106 (2003). [PubMed]  

2. J. T. Dobbins III, “Tomosynthesis: at translational crossroads,” Med. Phys. 36, 1956–1967 (2009). [PubMed]  

3. L. T. Niklason, "Digital tomosynthesis in breast imaging," Radiology 205, 399–406 (1997).

4. G. Gennaro, A. Toledano, C. di Maggio, E. Baldan, E. Bezzon, M. La Grassa, L. Pescarini, I. Polico, A. Proietti, A. Toffoli, and P. C. Muzzio, “Digital breast tomosynthesis versus digital mammography: a clinical performance study,” Eur. Radiol. (2009). [PubMed]  

5. I. Andersson, D. M. Ikeda, S. Zackrisson, M. Ruschin, T. Svahn, P. Timberg, and A. Tingberg, “Breast tomosynthesis and digital mammography: a comparison of breast cancer visibility and BIRADS classification in a population of cancers with subtle mammographic findings,” Eur. Radiol. 18, 2817–2825 (2008). [PubMed]  

6. W. F. Good, G. S. Abrams, V. J. Catullo, D. M. Chough, M. A. Ganott, C. M. Hakim, and D. Gur, "Digital breast tomosynthesis: a pilot observer study," Am. J. Roentgenol. 190, 865–869 (2008).

7. S. P. Poplack, T. D. Tosteson, C. A. Kogel, and H. M. Nagy, "Digital breast tomosynthesis: initial experience in 98 women with abnormal digital screening mammography," Am. J. Roentgenol. 189, 616–623 (2007).

8. A. S. Chawla, J. Y. Lo, J. A. Baker, and E. Samei, “Optimized image acquisition for breast tomosynthesis in projection and reconstruction space,” Med. Phys. 36, 4859–4869 (2009). [PubMed]  

9. T. Wu, R. H. Moore, E. A. Rafferty, and D. B. Kopans, “A comparison of reconstruction algorithms for breast tomosynthesis,” Med. Phys. 31, 2636–2647 (2004). [PubMed]  

10. Y. Zhang, H.-P. Chan, B. Sahiner, J. Wei, M. M. Goodsitt, L. M. Hadjiiski, J. Ge, and C. Zhou, "A comparative study of limited-angle cone-beam reconstruction methods for breast tomosynthesis," Med. Phys. 33, 3781–3795 (2006). [PubMed]

11. H. H. Barrett and K. J. Myers, Foundations of Image Science (John Wiley, Hoboken, New Jersey, 2004).

12. M. Bertero and P. Boccacci, Inverse Problems in Imaging (Institute of Physics Publishing, Bristol, UK, 1998)

13. H. H. Barrett, J. N. Aarsvold, and T. J. Roney, “Null functions and eigenfunctions: tools for the analysis of imaging systems,” Lect. Notes. Comput. Sci. 11, 211–226 (1991).

14. A. Burvall, H. H. Barrett, C. Dainty, and K. J. Myers, “Singular-value decomposition for through-focus imaging systems,” J. Opt. Soc. Am. A 23, 2440–2448 (2006).

15. J. Yao and H. H. Barrett, “Predicting human performance by a channelized Hotelling observer,” Proc. SPIE 1768, 161–168 (1992).

16. H. H. Barrett, J. Yao, J. P. Rolland, and K. J. Myers, “Model observers for assessment of image quality,” Proc. Natl. Acad. Sci. USA 90, 9758–9765 (1993). [PubMed]  

17. M. Y. Chiu, H. H. Barrett, R. G. Simpson, C. Chou, J. W. Arendt, and G. R. Gindi, “Three-dimensional radio-graphic imaging with a restricted view angle,” J. Opt. Soc. Am. 69, 1323–1333 (1979).

18. R. Pierri, A. Liseno, F. Soldovieri, and R. Solimene, “In-depth resolution for a strip source in the Fresnel zone,” J. Opt. Soc. Am. A 18, 352–359 (2001).

19. A. D. Polyanin and A. V. Manzhirov, Handbook of Integral Equations (CRC Press, Florida, 1998), chapter 11.2.

20. C. Lanczos, Linear differential operators (Van Nostrand, London, 1961).

21. A. E. Burgess, “Visual signal detection with two-component noise: low-pass spectrum effects,” J. Opt. Soc. Am. A 16, 694–704 (1999).

22. I. Reiser and R. M. Nishikawa, “Task-based assessment of breast tomosynthesis: Effects of acquisition parameters and quantum noise,” Med. Phys. 37, 1591–1600 (2010). [PubMed]  

23. F. O. Bochud, C. K. Abbey, and M. P. Eckstein, “Statistical texture synthesis of mammographic images with clustered lumpy backgrounds,” Opt. Express 4, 33–43 (1998).

24. K. G. Metheany, C. K. Abbey, N. Packard, and J. M. Boone, “Characterizing anatomical variability in breast CT images,” Med. Phys. 35, 4685–4694 (2008). [PubMed]  

25. C. K. Abbey and J. M. Boone, “An ideal observer for a model of x-ray imaging in breast parenchymal tissue,” (E.A. Krupinski, Ed.): IWDM 2008, LNCS5116, 393–400 (2008).

26. C. Zhang, P. R. Bakic, and A. D. A. Maidment, “Development of an anthropomorphic breast software phantom based on region growing algorithm,” Proc. SPIE 6918, 69180V (2008).

27. S. Park, J. M. Witten, and K. J. Myers, "Singular vectors of a linear imaging system as efficient channels for the Bayesian ideal observer," IEEE Trans. Med. Imaging 28, 657–668 (2009). [PubMed]

Cited By

Optica participates in Crossref's Cited-By Linking service. Citing articles from Optica Publishing Group journals and other participating publishers are listed here.

Alert me when this article is cited.


Figures (8)

Fig. 1 Geometry of the considered tomographic system. The different source positions are x_m, and the screen is placed along the z axis.

Fig. 2 The change of variables moves the source plane to minus infinity. Rays originating from two source positions, one on-axis and one off-axis, are shown. The object in this figure is distorted compared to the object in Fig. 1, but the image is not.

Fig. 3 Eigenvalues μ_{ρ_x,j} for j ranging from 1 to 11.

Fig. 4 Absolute values of the singular functions (a) U_1(ρ_x, z̃), (b) U_2(ρ_x, z̃), (c) U_6(ρ_x, z̃), and (d) U_11(ρ_x, z̃).

Fig. 5 Example object f̃(r̃, z̃), including a designer nodule and a 3D clustered lumpy background. (a) Section through the object at z̃ = −60 mm, or z = −50 mm. (b) Section through the object at ỹ = y = 0. Both sections go through the center of the designer nodule.

Fig. 6 Measurement component of the object in Fig. 5. (a) Section through the measurement component at z̃ = −60 mm, or z = −50 mm. (b) Section through the measurement component at ỹ = y = 0. Both sections go through the center of the designer nodule.

Fig. 7 (a) Image or projection of the measurement component in Fig. 6. (b) Image or projection of the object in Fig. 5. Both are calculated through direct numerical propagation of the measurement component and the object, respectively, for the source placed on the z axis.

Fig. 8 Difference between the projection of the measurement component (see Fig. 7(a)) and the projection of the object (see Fig. 7(b)). The maximum absolute value is around 3 × 10⁻⁶, compared to the maximum of the projections, which is around 2 × 10⁻⁴; the relative error is thus roughly 1.5%.

Equations (39)


\[ I_m(\mathbf{r}_d) = \frac{A\cos\theta}{|\mathbf{r}_1-\mathbf{r}_0|^2}\,\exp\!\left[-\int_0^{|\mathbf{r}_1-\mathbf{r}_0|} dl\; f\!\left(\mathbf{r}_1-\frac{\mathbf{r}_1-\mathbf{r}_0}{|\mathbf{r}_1-\mathbf{r}_0|}\,l\right)\right], \]

\[ -\log\!\left(\frac{I_m(\mathbf{r}_d)\,|\mathbf{r}_1-\mathbf{r}_0|^2}{A\cos\theta}\right) = \frac{|\mathbf{r}_1-\mathbf{r}_0|}{s}\int_0^s dz\; f\!\left(\mathbf{r}_0+(\mathbf{r}_0-\mathbf{r}_1)\,\frac{z}{s}\right). \]

\[ -\log\!\left[\frac{1}{As}\left(|\mathbf{r}_d-\mathbf{r}_m|^2+s^2\right)^{3/2} I_m(\mathbf{r}_d)\right] = \frac{\left(s^2+|\mathbf{r}_d-\mathbf{r}_m|^2\right)^{1/2}}{s}\int_0^s dz\; f\!\left(\mathbf{r}_d+(\mathbf{r}_d-\mathbf{r}_m)\,\frac{z}{s},\; z\right), \]

\[ g_m(\mathbf{r}_d) = -\frac{s}{\left(s^2+|\mathbf{r}_d-\mathbf{r}_m|^2\right)^{1/2}}\,\log\!\left[\frac{1}{As}\left(s^2+|\mathbf{r}_d-\mathbf{r}_m|^2\right)^{3/2} I_m(\mathbf{r}_d)\right]. \]
\[ g_m(\mathbf{r}_d) = [\mathcal{H}f]_m(\mathbf{r}_d) = \int_{z_1}^{z_2} dz \int d^2r\; b_f(\mathbf{r},z)\, f(\mathbf{r},z)\, h_m(\mathbf{r},\mathbf{r}_d,z), \]

\[ [\mathcal{H}^\dagger g](\mathbf{r},z) = \sum_{m=1}^{M} \int d^2r_d\; b_g(\mathbf{r}_d)\, g_m(\mathbf{r}_d)\, h_m(\mathbf{r},\mathbf{r}_d,z), \]

\[ h_m(\mathbf{r},\mathbf{r}_d,z) = \delta\!\left(\mathbf{r}-\mathbf{r}_d-(\mathbf{r}_d-\mathbf{r}_m)\,\frac{z}{s}\right). \]
\[ \tilde{\mathbf{r}} = (\tilde{x},\tilde{y}) = \left(\frac{s}{s+z}\,x,\ \frac{s}{s+z}\,y\right), \qquad \tilde{z} = \frac{s}{s+z}\,z, \]

\[ \mathbf{r} = (x,y) = \left(\frac{s}{s-\tilde{z}}\,\tilde{x},\ \frac{s}{s-\tilde{z}}\,\tilde{y}\right), \qquad z = \frac{s}{s-\tilde{z}}\,\tilde{z}. \]

\[ h_m(\tilde{\mathbf{r}}-\mathbf{r}_d,\tilde{z}) = \left(\frac{s-\tilde{z}}{s}\right)^2 \delta\!\left(\tilde{\mathbf{r}}-\mathbf{r}_d+\frac{\tilde{z}}{s}\,\mathbf{r}_m\right), \]

\[ \left|\frac{\partial(x,y,z)}{\partial(\tilde{x},\tilde{y},\tilde{z})}\right| = \frac{s^4}{(s-\tilde{z})^4}. \]
\[ g_m(\mathbf{r}_d) = [\mathcal{H}\tilde{f}]_m(\mathbf{r}_d) = \int_{\tilde{z}_1}^{\tilde{z}_2} d\tilde{z} \int d^2\tilde{r}\; \frac{s^4}{(s-\tilde{z})^4}\, b_f(\tilde{\mathbf{r}},\tilde{z})\, \tilde{f}(\tilde{\mathbf{r}},\tilde{z})\, h_m(\tilde{\mathbf{r}}-\mathbf{r}_d,\tilde{z}), \]

\[ [\mathcal{H}^\dagger g](\tilde{\mathbf{r}},\tilde{z}) = \sum_{m=1}^{M} \int d^2r_d\; b_g(\mathbf{r}_d)\, g_m(\mathbf{r}_d)\, h_m(\tilde{\mathbf{r}}-\mathbf{r}_d,\tilde{z}), \]

\[ \langle f_1, f_2\rangle_{\mathrm{obj}} = \int d^2\tilde{r} \int_{\tilde{z}_1}^{\tilde{z}_2} d\tilde{z}\; b_f(\tilde{\mathbf{r}},\tilde{z})\, \tilde{f}_1^{\,*}(\tilde{\mathbf{r}},\tilde{z})\, \tilde{f}_2(\tilde{\mathbf{r}},\tilde{z})\, \frac{s^4}{(s-\tilde{z})^4}, \]

\[ \langle g_1, g_2\rangle_{\mathrm{im}} = \sum_{m=1}^{M} \int d^2r_d\; b_g(\mathbf{r}_d)\, g_{1m}^{*}(\mathbf{r}_d)\, g_{2m}(\mathbf{r}_d). \]

\[ \langle \mathcal{H}\tilde{f}, g\rangle_{\mathrm{im}} = \langle \tilde{f}, \mathcal{H}^\dagger g\rangle_{\mathrm{obj}}. \]
\[ [\mathcal{H}\mathcal{H}^\dagger g]_m(\mathbf{r}_d) = \sum_{m'=1}^{M} \int d^2r_d'\; b_g(\mathbf{r}_d')\, g_{m'}(\mathbf{r}_d')\, k_{mm'}(\mathbf{r}_d,\mathbf{r}_d'), \]

\[ k_{mm'}(\mathbf{r}_d,\mathbf{r}_d') = \int_{\tilde{z}_1}^{\tilde{z}_2} d\tilde{z} \int d^2\tilde{r}\; b_f(\tilde{\mathbf{r}},\tilde{z})\, \frac{s^4}{(s-\tilde{z})^4}\, h_m(\tilde{\mathbf{r}}-\mathbf{r}_d,\tilde{z})\, h_{m'}(\tilde{\mathbf{r}}-\mathbf{r}_d',\tilde{z}). \]

\[ [\mathcal{H}^\dagger\mathcal{H}\tilde{f}](\tilde{\mathbf{r}},\tilde{z}) = \int_{\tilde{z}_1}^{\tilde{z}_2} d\tilde{z}' \int d^2\tilde{r}'\; b_f(\tilde{\mathbf{r}}',\tilde{z}')\, \tilde{f}(\tilde{\mathbf{r}}',\tilde{z}')\, p(\tilde{\mathbf{r}},\tilde{\mathbf{r}}',\tilde{z},\tilde{z}'), \]

\[ p(\tilde{\mathbf{r}},\tilde{\mathbf{r}}',\tilde{z},\tilde{z}') = \frac{s^4}{(s-\tilde{z}')^4} \sum_{m=1}^{M} \int d^2r_d\; b_g(\mathbf{r}_d)\, h_m(\tilde{\mathbf{r}}-\mathbf{r}_d,\tilde{z})\, h_m(\tilde{\mathbf{r}}'-\mathbf{r}_d,\tilde{z}'). \]

\[ k_{mm'}(\mathbf{r}_d-\mathbf{r}_d') = \int_{\tilde{z}_1}^{\tilde{z}_2} d\tilde{z}\; \delta\!\left(\mathbf{r}_d-\mathbf{r}_d'-\frac{\tilde{z}}{s}\,(\mathbf{r}_m-\mathbf{r}_{m'})\right). \]
\[ v_{\boldsymbol{\rho},j}(\mathbf{r}_d) = V_j(\boldsymbol{\rho})\,\exp(2\pi i\,\boldsymbol{\rho}\cdot\mathbf{r}_d), \]

\[ \mathbf{K}(\boldsymbol{\rho})\,V_j(\boldsymbol{\rho}) = \mu_{\boldsymbol{\rho},j}\,V_j(\boldsymbol{\rho}), \]

\[ K_{mm'}(\boldsymbol{\rho}) = \int d^2r_d\; k_{mm'}(\mathbf{r}_d)\,\exp(-2\pi i\,\boldsymbol{\rho}\cdot\mathbf{r}_d). \]

\[ K_{mm'}(\boldsymbol{\rho}) = \int_{\tilde{z}_1}^{\tilde{z}_2} d\tilde{z}\;\exp\!\left(-2\pi i\,\boldsymbol{\rho}\cdot(\mathbf{r}_m-\mathbf{r}_{m'})\,\frac{\tilde{z}}{s}\right), \]

\[ K_{mm}(\boldsymbol{\rho}) = \tilde{z}_2-\tilde{z}_1, \]

\[ K_{mm'}(\boldsymbol{\rho}) = \frac{is}{2\pi\,\boldsymbol{\rho}\cdot(\mathbf{r}_m-\mathbf{r}_{m'})}\left\{\exp\!\left[-2\pi i\,\boldsymbol{\rho}\cdot(\mathbf{r}_m-\mathbf{r}_{m'})\,\frac{\tilde{z}_2}{s}\right]-\exp\!\left[-2\pi i\,\boldsymbol{\rho}\cdot(\mathbf{r}_m-\mathbf{r}_{m'})\,\frac{\tilde{z}_1}{s}\right]\right\}, \quad m\neq m'. \]
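As a numerical cross-check of the diagonal and off-diagonal elements above, the M × M matrix K(ρ) can be assembled and diagonalized to obtain the eigenvalues μ_{ρ,j}. This is a minimal sketch; M, s, the source positions, and the z̃ range below are assumed illustrative values, not the paper's geometry:

```python
import numpy as np

# Assemble the M x M matrix K(rho) from the diagonal value (z2 - z1) and
# the oscillatory off-diagonal elements, then diagonalize it.
# M, s, the source positions, and the z~ range are assumed values.
M = 11
s = 300.0                               # source-detector distance (assumed, mm)
xm = np.linspace(-100.0, 100.0, M)      # source x positions (assumed, mm)
z1, z2 = -80.0, -40.0                   # object extent in z~ (assumed, mm)

def K_matrix(rho_x):
    """K(rho) for a frequency vector rho = (rho_x, 0), sources on the x axis."""
    d = rho_x * (xm[:, None] - xm[None, :])   # rho . (r_m - r_m')
    K = np.full((M, M), z2 - z1, dtype=complex)
    off = ~np.eye(M, dtype=bool)
    a = 2j * np.pi * d[off] / s
    K[off] = (np.exp(-a * z1) - np.exp(-a * z2)) / a   # integral of exp(-a z~)
    return K

K = K_matrix(0.05)                      # example frequency (cycles/mm)
mu = np.linalg.eigvalsh(K)              # K is Hermitian; mu are the mu_{rho,j}
```

K(ρ) is a Gram matrix of complex exponentials, hence positive semidefinite: the μ_{ρ,j} come out real and nonnegative and sum to the trace M(z̃₂ − z̃₁). Only a few of them are significant, consistent with the limited-angle geometry.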
\[ p(\tilde{\mathbf{r}}-\tilde{\mathbf{r}}',\tilde{z},\tilde{z}') = \left(\frac{s-\tilde{z}}{s-\tilde{z}'}\right)^2 \sum_{m=1}^{M} \delta\!\left(\tilde{\mathbf{r}}-\tilde{\mathbf{r}}'+\mathbf{r}_m\,\frac{\tilde{z}-\tilde{z}'}{s}\right), \]

\[ \int_{\tilde{z}_1}^{\tilde{z}_2} d\tilde{z}' \left(\frac{s-\tilde{z}}{s-\tilde{z}'}\right)^2 U_j(\boldsymbol{\rho},\tilde{z}') \sum_{m=1}^{M} \exp\!\left(2\pi i\,\boldsymbol{\rho}\cdot\mathbf{r}_m\,\frac{\tilde{z}-\tilde{z}'}{s}\right) = \mu_{\boldsymbol{\rho},j}\,U_j(\boldsymbol{\rho},\tilde{z}), \]

\[ u_{\boldsymbol{\rho},j}(\tilde{\mathbf{r}},\tilde{z}) = U_j(\boldsymbol{\rho},\tilde{z})\,\exp(2\pi i\,\boldsymbol{\rho}\cdot\tilde{\mathbf{r}}). \]

\[ \mu_{\boldsymbol{\rho},j}\,\hat{U}_j(\boldsymbol{\rho},\tilde{z}) = \int_{\tilde{z}_1}^{\tilde{z}_2} d\tilde{z}'\;\hat{U}_j(\boldsymbol{\rho},\tilde{z}') \sum_{m=1}^{M} \exp\!\left(-\frac{2\pi i\,\tilde{z}'}{s}\,\boldsymbol{\rho}\cdot\mathbf{r}_m\right) \exp\!\left(\frac{2\pi i\,\tilde{z}}{s}\,\boldsymbol{\rho}\cdot\mathbf{r}_m\right). \]

\[ [\mathcal{H}^\dagger v_{\boldsymbol{\rho},j}](\tilde{\mathbf{r}},\tilde{z}) = \sqrt{\mu_{\boldsymbol{\rho},j}}\; u_{\boldsymbol{\rho},j}(\tilde{\mathbf{r}},\tilde{z}), \]

\[ u_{\boldsymbol{\rho},j}(\tilde{\mathbf{r}},\tilde{z}) = \exp(2\pi i\,\boldsymbol{\rho}\cdot\tilde{\mathbf{r}})\,U_j(\boldsymbol{\rho},\tilde{z}), \]

\[ U_j(\boldsymbol{\rho},\tilde{z}) = \frac{1}{\sqrt{\mu_{\boldsymbol{\rho},j}}}\left(\frac{s-\tilde{z}}{s}\right)^2 \sum_{m=1}^{M} [V_j(\boldsymbol{\rho})]_m\,\exp\!\left(\frac{2\pi i\,\tilde{z}}{s}\,\boldsymbol{\rho}\cdot\mathbf{r}_m\right). \]
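Given the eigenvectors V_j(ρ) of K(ρ), the last relation tabulates the object-space singular profiles U_j(ρ, z̃) directly. The sketch below (same assumed geometry as before, not the paper's) also verifies that the resulting U_j are orthonormal under the object-space weight s⁴/(s − z̃)⁴; numerically negligible eigenvalues are discarded before the division by √μ:

```python
import numpy as np

# Build U_j(rho, z~) from the eigenvectors V_j of K(rho) and check
# orthonormality under the object-space weight s^4/(s - z~)^4.
# All geometry values are assumed for illustration.
M, s = 11, 300.0
xm = np.linspace(-100.0, 100.0, M)      # source x positions (assumed, mm)
z1, z2 = -80.0, -40.0                   # object extent in z~ (assumed, mm)
rho_x = 0.05                            # example spatial frequency (cycles/mm)

# K(rho): (z2 - z1) on the diagonal, integrated exponentials off it
d = rho_x * (xm[:, None] - xm[None, :])
K = np.full((M, M), z2 - z1, dtype=complex)
off = ~np.eye(M, dtype=bool)
a = 2j * np.pi * d[off] / s
K[off] = (np.exp(-a * z1) - np.exp(-a * z2)) / a

mu, V = np.linalg.eigh(K)               # columns of V are the V_j(rho)
keep = mu > 1e-2 * mu.max()             # drop numerically negligible modes
mu, V = mu[keep], V[:, keep]

zt = np.linspace(z1, z2, 4001)          # z~ quadrature grid
dz = zt[1] - zt[0]
wq = np.full(zt.size, dz)               # trapezoid quadrature weights
wq[0] = wq[-1] = dz / 2
phase = np.exp(2j * np.pi * zt[:, None] * rho_x * xm[None, :] / s)
U = ((s - zt[:, None]) / s) ** 2 * (phase @ V) / np.sqrt(mu)

w = s**4 / (s - zt) ** 4                # object-space weight
G = (U.conj().T * (w * wq)) @ U         # Gram matrix, expected ~ identity
```

The Gram matrix G reduces analytically to Vⱼ†K Vⱼ′/√(μⱼμⱼ′) = δⱼⱼ′, so the quadrature result should match the identity to within the discretization error.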
\[ g_m(\mathbf{r}_d) = A\,\mathrm{rect}\!\left(\frac{|\mathbf{r}_d|}{2R}\right)\left(1-\frac{|\mathbf{r}_d|^2}{R^2}\right)^{\nu}, \]

\[ f(\mathbf{r}) = A\,\frac{3}{4R}\,\mathrm{rect}\!\left(\frac{|\mathbf{r}|}{2R}\right)\left(1-\frac{|\mathbf{r}|^2}{R^2}\right), \]

\[ g(\mathbf{r}_d) = \sum_{k=1}^{K}\sum_{n=1}^{N_k} b\!\left(\frac{1}{a_{kn}}\,(\mathbf{r}_d-\mathbf{r}_k-\mathbf{r}_{kn}),\; R_{\theta_{kn}}\right), \]
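For concreteness, the rotationally symmetric designer-nodule profile above is easy to sample on a detector grid; the amplitude A, radius R, exponent ν, and grid spacing below are illustrative values, not taken from the paper:

```python
import numpy as np

# Sample the designer-nodule profile A * (1 - |r|^2/R^2)^nu inside
# radius R (the role of the rect factor), and 0 outside.
# A, R, nu, and the grid are assumed illustrative values.
A, R, nu = 1.0, 5.0, 1.5            # amplitude, radius (mm), profile exponent
x = np.linspace(-10.0, 10.0, 201)   # detector coordinates (mm)
X, Y = np.meshgrid(x, x)
r2 = (X**2 + Y**2) / R**2
g = A * np.maximum(1.0 - r2, 0.0) ** nu   # clamping at the edge implements rect
```

Clamping the parenthesis at zero enforces the finite support that the rect factor expresses, while avoiding fractional powers of negative numbers outside the nodule.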
\[ f_{\mathrm{meas}}(\tilde{\mathbf{r}},\tilde{z}) = \int d^2\rho \sum_{j=1}^{M} A_j(\boldsymbol{\rho})\,U_j(\boldsymbol{\rho},\tilde{z})\,\exp(2\pi i\,\boldsymbol{\rho}\cdot\tilde{\mathbf{r}}), \]

\[ A_j(\boldsymbol{\rho}) = \int_{\tilde{z}_1}^{\tilde{z}_2} d\tilde{z}\; F(\boldsymbol{\rho},\tilde{z})\,U_j^{*}(\boldsymbol{\rho},\tilde{z})\,\frac{s^4}{(s-\tilde{z})^4}. \]
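The last two equations amount to projecting the object's transverse spectrum F(ρ, z̃) onto the span of the U_j at each frequency, and applying that projection twice changes nothing. A sketch at a single ρ, with the same assumed geometry as above and a synthetic profile standing in for F:

```python
import numpy as np

# Expansion coefficients A_j(rho) by quadrature, and the measurement
# component F_meas(rho, z~) = sum_j A_j(rho) U_j(rho, z~) at one frequency.
# Projecting F_meas again must reproduce the same coefficients.
# Geometry and the object spectrum F are synthetic assumptions.
M, s = 11, 300.0
xm = np.linspace(-100.0, 100.0, M)
z1, z2 = -80.0, -40.0
rho_x = 0.05

d = rho_x * (xm[:, None] - xm[None, :])
K = np.full((M, M), z2 - z1, dtype=complex)
off = ~np.eye(M, dtype=bool)
a = 2j * np.pi * d[off] / s
K[off] = (np.exp(-a * z1) - np.exp(-a * z2)) / a
mu, V = np.linalg.eigh(K)
keep = mu > 1e-2 * mu.max()                              # significant modes only
mu, V = mu[keep], V[:, keep]

zt = np.linspace(z1, z2, 4001)
dz = zt[1] - zt[0]
wq = np.full(zt.size, dz)                                # trapezoid weights
wq[0] = wq[-1] = dz / 2
phase = np.exp(2j * np.pi * zt[:, None] * rho_x * xm[None, :] / s)
U = ((s - zt[:, None]) / s) ** 2 * (phase @ V) / np.sqrt(mu)
w = s**4 / (s - zt) ** 4                                 # object-space weight

F = np.exp(-((zt + 60.0) / 8.0) ** 2).astype(complex)    # synthetic F(rho, z~)
A_coef = U.conj().T @ (w * wq * F)                       # A_j = int dz~ F U_j* w
F_meas = U @ A_coef                                      # measurement component
A_again = U.conj().T @ (w * wq * F_meas)                 # project a second time
```

Because the U_j are orthonormal under the weight s⁴/(s − z̃)⁴, re-projecting F_meas returns the same coefficients, which makes the idempotence of the measurement-component operation easy to verify numerically.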