## Abstract

The singular value decomposition (SVD) of an imaging system is a computationally intensive calculation for tomographic imaging systems due to the large dimensionality of the system matrix. The computation often involves memory and storage requirements beyond those available to most end users. We have developed a method that reduces the dimension of the SVD problem towards the goal of making the calculation tractable for a standard desktop computer. In the presence of discrete rotational symmetry we show that the dimension of the SVD computation can be reduced by a factor equal to the number of collection angles for the tomographic system. In this paper we present the mathematical theory for our method, validate that our method produces the same results as standard SVD analysis, and finally apply our technique to the sensitivity matrix for a clinical CT system. The ability to compute the full singular value spectra and singular vectors could augment future work in system characterization, image-quality assessment and reconstruction techniques for tomographic imaging systems.

© 2010 Optical Society of America

## 1. Introduction

Medical imaging systems can be characterized by a linear system operator ℋ that maps a continuous object distribution through the system onto an image plane. The singular value decomposition (SVD) of ℋ provides an orthonormal set of basis functions that have numerous applications toward system characterization and image-quality assessment. For example, SVD data can be used to compute the null functions for an imaging system [1,2]. There have also been numerous publications that discuss the use of SVD in reconstruction methods [3–6]. Additionally there is ongoing work investigating the use of singular vectors as channels for mathematical observer models [7, 8].

The forward model for an imaging system is often approximated by a sensitivity matrix **H** that maps voxels in object space to detector pixels in image space. To minimize the error in this approximation, it is desirable to have a large number of voxels to sample object space causing the column dimension *N* of the sensitivity matrix to be large. The row dimension of the sensitivity matrix is especially large in tomographic systems where *M* is the product of the number detector elements *P* and the number of collection angles *J*. The consequent large dimensions of **H** for tomographic medical imaging systems are problematic when using conventional SVD algorithms because they require adequate computer memory to compute and store **HH**^{†} or **H**^{†}**H**.

In this paper, we present a method to reduce the dimension of the SVD computation. We show that for tomographic systems with discrete rotational symmetry the dimension of the problem can be reduced by a factor equal to the number of collection angles. The theory and mathematics of our method are presented in Sections 2 – 5. In Section 6, we present results from a proof-of-concept experiment. These results verify that our reduced dimension algorithm produced the same data set as conventional SVD analysis. Finally, we show the singular value spectra and singular vector data that we obtained by applying the reduced dimension SVD to the sensitivity matrix for a simulated 3rd generation cone beam x-ray CT system.

## 2. The system operator and the symmetry operators

The symmetry operators we will be considering arise from rotations about an axis of the imaging system. If an object is described by a function *f*(**r**) of the three-dimensional location vector **r**, and **R**_{j} is the matrix that describes a rotation by the angle $\frac{2\pi j}{J}$ about the symmetry axis, then an operator ${\mathcal{T}}_{j}$ acting in object space is given by ${\mathcal{T}}_{j}f(\mathbf{r})=f({\mathbf{R}}_{j}^{-1}\mathbf{r})$. The symmetry group in this case is *Z*_{J}, the cyclic group of order *J*. This group is represented in ℝ^{3} by the set of matrices $\{\mathbf{I},{\mathbf{R}}_{1},\ldots,{\mathbf{R}}_{J-1}\}$, which satisfy ${\mathbf{R}}_{j}{\mathbf{R}}_{k}={\mathbf{R}}_{j+k}$ and ${\mathbf{R}}_{1}^{J}=\mathbf{I}$. These matrices can all be expressed in terms of **R**_{1} by ${\mathbf{R}}_{j}={\mathbf{R}}_{1}^{j}$. Similarly, this cyclic group is represented in object space by the set of operators $\{\mathcal{I},{\mathcal{T}}_{1},\ldots,{\mathcal{T}}_{J-1}\}$, which satisfy the same multiplication rules as the rotation matrices.
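As a quick numerical illustration (a sketch with an assumed *J* = 8, not tied to any system in the paper), the rotation matrices and their cyclic multiplication rules can be checked directly:

```python
import numpy as np

# Minimal sketch (assumed J = 8): the 2-D rotation matrices R_j = R_1^j
# form a representation of the cyclic group Z_J.
J = 8
theta = 2 * np.pi / J
R1 = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])
R = [np.linalg.matrix_power(R1, j) for j in range(J)]

# Multiplication rule R_j R_k = R_{j+k} (indices mod J), and R_1^J = I
assert np.allclose(R[3] @ R[6], R[(3 + 6) % J])
assert np.allclose(np.linalg.matrix_power(R1, J), np.eye(2))
```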

Components of the data vector will be indexed by a vector **m** given by **m** = (**p**, *j*) = (*p*_{x}, *p*_{y}, *j*). The vector **p** identifies the location of a detector element in a given detector array, and the index *j* identifies the detector array. These *J* identical detector arrays are assumed to be spaced around the object at angles $\frac{2\pi j}{J}$ and to have identical apertures in front of them. If *h*_{**p**}(**r**) is the detector sensitivity function for the detector located at position **p** on the detector array with *j* = 0, then the detector sensitivity function for detector **m** is given by ${h}_{\mathbf{m}}(\mathbf{r})={\mathcal{T}}_{j}{h}_{\mathbf{p}}(\mathbf{r})$. The mean data vector **ḡ** has components given by **ḡ** = ℋ*f*. If the number of detector elements in one detector array is *P*, then **ḡ** is a concatenation of *J* *P*-dimensional vectors **ḡ**_{j}: $\bar{\mathbf{g}}=({\bar{\mathbf{g}}}_{0},{\bar{\mathbf{g}}}_{1},\ldots,{\bar{\mathbf{g}}}_{J-1})$. The system operator thus maps object space *U* to data space *V* = ℝ^{JP}. We represent this symbolically by ℋ : *U* → *V*. The decomposition of **ḡ** given above corresponds to the decomposition of *V* into an orthogonal direct sum of subspaces *V*_{j}, all of which are *P*-dimensional: $V={V}_{0}\oplus {V}_{1}\oplus \cdots \oplus {V}_{J-1}$. A different decomposition of *V* into subspaces arises from symmetry considerations and will be described below.

The components of the subvectors **ḡ**_{j} can be thought of as the result of an operator ℋ_{j} acting on the object function: ${\bar{\mathbf{g}}}_{j}={\mathscr{H}}_{j}f$. We define ℋ_{j} to be an operator that takes an object function and gives a vector in the subspace *V*_{j} of the full data space. In other words, the result of applying this operator is the vector that would be obtained by covering all of the apertures except the *j*^{th} one. We will assume that the range of ℋ_{j} is all of *V*_{j}. We may then write the action of the system operator ℋ on an object function as $\mathscr{H}f={\sum}_{j=0}^{J-1}{\mathscr{H}}_{j}f$, with ℋ_{j} : *U* → *V* and *range*(ℋ_{j}) = *V*_{j}.

Corresponding to the rotations of the object there are rotations of the data given by matrices **S**_{j}. The matrix **S**_{1} cyclically permutes the *J* *P*-dimensional subvectors of a data vector, moving each subvector to the position of the adjacent detector array, and the matrices **S**_{j} satisfy ${\mathbf{S}}_{j}={\mathbf{S}}_{1}^{j}$. We may assume that the detectors are numbered in the clockwise direction so that ${\mathscr{H}}_{j}={\mathbf{S}}_{j}{\mathscr{H}}_{0}{\mathcal{T}}_{-j}$. This equation says that we can get the projection on the *j*^{th} detector array by rotating the object by an angle of $-\frac{2\pi j}{J}$, getting the projection on the 0^{th} detector array, and then moving this *P*-dimensional data vector back to the correct location in the *JP*-dimensional vector **ḡ**.

The operator ℋ satisfies the symmetry condition ${\mathbf{S}}_{k}\mathscr{H}=\mathscr{H}{\mathcal{T}}_{k}$. In mathematical terms, we say the operator ℋ intertwines the group representation $\{\mathcal{I},{\mathcal{T}}_{1},\ldots,{\mathcal{T}}_{J-1}\}$ with the representation $\{\mathbf{I},{\mathbf{S}}_{1},\ldots,{\mathbf{S}}_{J-1}\}$. Intuitively, the symmetry condition tells us that rotating the object by $\frac{2\pi j}{J}$ around the system axis gives the same result as rotating the ring of detectors through the same angle in the opposite direction. This condition places restrictions on the form of the system operator. These restrictions can be used to reduce the dimensionality of the SVD calculation for ℋ by at least a factor of *J*. Before we show this, we will introduce the inner products that we will use for object space and data space.
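The intertwining condition can be verified on a small discrete model. The following sketch (our own toy construction with assumed dimensions, not the paper's CT model) puts *N* object samples on a ring, builds a system matrix whose *j*^{th} block views the object rotated by $-\frac{2\pi j}{J}$, and checks that **S**_{k}**H** = **H****T**_{k} with **S**_{k} a cyclic permutation of the view blocks:

```python
import numpy as np

# Toy model (assumed construction): N object samples on a ring; the
# rotation T_j is a cyclic shift by j*N/J samples, and view j of the
# system matrix is the single-view matrix H0 applied to the rotated object.
J, P, N = 6, 4, 24
shift = N // J
rng = np.random.default_rng(1)
H0 = rng.standard_normal((P, N))

def T(j):                       # object-space rotation (permutation matrix)
    return np.roll(np.eye(N), -j * shift, axis=1)

def S(k):                       # data-space rotation: shift the view blocks
    return np.roll(np.eye(J * P), k * P, axis=0)

H = np.vstack([H0 @ T(-j) for j in range(J)])   # full JP x N system matrix

# Symmetry (intertwining) condition: S_k H = H T_k for every k
for k in range(J):
    assert np.allclose(S(k) @ H, H @ T(k))
```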

## 3. Inner products and adjoint operators

The SVD of any operator involves the notion of an adjoint operator. To define the adjoint of an operator, we need an inner product in the object space and in the data space. The inner product in object space will be defined by ${(f,{f}^{\prime})}_{U}={\int}_{S}{f}^{\ast}(\mathbf{r}){f}^{\prime}(\mathbf{r})\,{d}^{3}r$, where *S* is a support region for all object functions and is invariant under the rotations in the symmetry group. For example, *S* could be a circular cylinder whose axis is the axis of the rotations in the group. In data space, we have the standard inner product ${(\mathbf{g},{\mathbf{g}}^{\prime})}_{V}={\sum}_{\mathbf{m}}{g}_{\mathbf{m}}^{\ast}{g}_{\mathbf{m}}^{\prime}$.

We define adjoint operators ${\mathcal{T}}_{j}^{\dagger}$ and ${\mathbf{S}}_{j}^{\dagger}$ by the equations ${\left({\mathcal{T}}_{j}f,{f}^{\prime}\right)}_{U}={\left(f,{\mathcal{T}}_{j}^{\dagger}{f}^{\prime}\right)}_{U}$ and ${\left({\mathbf{S}}_{j}\mathbf{g},{\mathbf{g}}^{\prime}\right)}_{V}={\left(\mathbf{g},{\mathbf{S}}_{j}^{\dagger}{\mathbf{g}}^{\prime}\right)}_{V}$. The symmetry operators are unitary: ${\mathcal{T}}_{j}^{\dagger}={\mathcal{T}}_{j}^{-1}={\mathcal{T}}_{-j}$ and ${\mathbf{S}}_{j}^{\dagger}={\mathbf{S}}_{j}^{-1}={\mathbf{S}}_{-j}$. The first equality in each case is the unitary property, while the second follows from the definitions of the symmetry operators.

We can also define the adjoint of the system operator, ℋ^{†}, by the equation ${(\mathscr{H}f,\mathbf{g})}_{V}={(f,{\mathscr{H}}^{\dagger}\mathbf{g})}_{U}$. This operator is a map from data space to object space: ℋ^{†} : *V* → *U*. In terms of the detector sensitivity functions, this adjoint operator is given by ${\mathscr{H}}^{\dagger}\mathbf{g}(\mathbf{r})={\sum}_{\mathbf{m}}{g}_{\mathbf{m}}{h}_{\mathbf{m}}(\mathbf{r})$. The adjoints of the operators ℋ_{j} vanish on vectors orthogonal to the subspace *V*_{j}. Symbolically, we write ${\mathscr{H}}_{j}^{\dagger}$ : *V* → *U* and *nullspace*$\left({\mathscr{H}}_{j}^{\dagger}\right)={V}_{j}^{\perp}$. Next, we will consider a different decomposition of data space that arises from the symmetry group of the system.
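For a discrete operator with the standard inner products, the adjoint definition reduces to the matrix transpose (conjugate transpose in the complex case). A minimal numerical check, with assumed stand-in dimensions:

```python
import numpy as np

# Sketch: for finite-dimensional spaces with the standard inner products,
# the adjoint defined by (Hf, g)_V = (f, H^dag g)_U is the (conjugate)
# transpose of the matrix H.
rng = np.random.default_rng(4)
H = rng.standard_normal((12, 8))     # stand-in discrete system operator
f = rng.standard_normal(8)           # object-space vector
g = rng.standard_normal(12)          # data-space vector

lhs = np.dot(H @ f, g)               # (Hf, g)_V
rhs = np.dot(f, H.T @ g)             # (f, H^dag g)_U
assert np.isclose(lhs, rhs)
```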

## 4. Projection operators

In order to reduce the dimensionality of the SVD problem for ℋ, we need to decompose object functions into components that are eigenfunctions of the ${\mathcal{T}}_{j}$ operators and data vectors into components that are eigenvectors of the **S**_{j} matrices. This process is essentially a Fourier decomposition with respect to the group of rotations. To this end, define operators ${\mathcal{Q}}_{k}$ and **P**_{k}, for *k* = 0, 1,..., *J* – 1, via the equations [9]

$${\mathcal{Q}}_{k}=\frac{1}{J}\sum_{j=0}^{J-1}\mathrm{exp}\left(\frac{2\pi ijk}{J}\right){\mathcal{T}}_{j},\qquad {\mathbf{P}}_{k}=\frac{1}{J}\sum_{j=0}^{J-1}\mathrm{exp}\left(\frac{2\pi ijk}{J}\right){\mathbf{S}}_{j}.$$

These operators are idempotent: ${\mathcal{Q}}_{k}{\mathcal{Q}}_{k}={\mathcal{Q}}_{k}$ and ${\mathbf{P}}_{k}{\mathbf{P}}_{k}={\mathbf{P}}_{k}$. They are also Hermitian: ${\mathcal{Q}}_{k}^{\dagger}={\mathcal{Q}}_{k}$ and ${\mathbf{P}}_{k}^{\dagger}={\mathbf{P}}_{k}$. These two properties imply that these operators are orthogonal projections onto their ranges. If we apply a symmetry operator after a projection operator, the result is a phase factor times the projection operator: ${\mathcal{T}}_{j}{\mathcal{Q}}_{k}=\mathrm{exp}\left(-\frac{2\pi ijk}{J}\right){\mathcal{Q}}_{k}$ and ${\mathbf{S}}_{j}{\mathbf{P}}_{k}=\mathrm{exp}\left(-\frac{2\pi ijk}{J}\right){\mathbf{P}}_{k}$. If *Ũ*_{k} is the range of ${\mathcal{Q}}_{k}$, and *Ṽ*_{k} is the range of **P**_{k}, then we have the corresponding orthogonal decompositions of object space and data space: $U={\tilde{U}}_{0}\oplus \cdots \oplus {\tilde{U}}_{J-1}$ and $V={\tilde{V}}_{0}\oplus \cdots \oplus {\tilde{V}}_{J-1}$. This suggests that we define the operators ℋ̃_{k} by ${\tilde{\mathscr{H}}}_{k}=\mathscr{H}{\mathcal{Q}}_{k}={\mathbf{P}}_{k}\mathscr{H}$, where the second equality follows from the symmetry property of ℋ. These operators are maps from object space to data space with the following ranges and null spaces: *range*(ℋ̃_{k}) = *Ṽ*_{k} and *nullspace*$\left({\tilde{\mathscr{H}}}_{k}\right)={\tilde{U}}_{k}^{\perp}$. The system operator can be expressed as a sum of these operators: $\mathscr{H}={\sum}_{k=0}^{J-1}{\tilde{\mathscr{H}}}_{k}$. In this case, the decomposition of ℋ is in block diagonal form in the sense that ℋ̃_{k} maps *Ũ*_{k} to *Ṽ*_{k} and vanishes on *Ũ*_{j} for *j* ≠ *k*.
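The projection-operator algebra can be checked directly for the cyclic shift on ℂ^{J} (a minimal sketch with an assumed *J*; the paper's **S**_{j} act on blocks of detector data, but the algebra is identical):

```python
import numpy as np

# P_k = (1/J) sum_m exp(2*pi*i*m*k/J) S_m for the cyclic shift S_1 on C^J.
J = 6
S1 = np.roll(np.eye(J), 1, axis=0)                    # cyclic shift by one
S = [np.linalg.matrix_power(S1, m) for m in range(J)]

P = [sum(np.exp(2j * np.pi * m * k / J) * S[m] for m in range(J)) / J
     for k in range(J)]

for k in range(J):
    assert np.allclose(P[k] @ P[k], P[k])             # idempotent
    assert np.allclose(P[k].conj().T, P[k])           # Hermitian
assert np.allclose(sum(P), np.eye(J))                 # resolution of identity

# A symmetry operator applied after a projection gives only a phase factor
m, k = 2, 1
assert np.allclose(S[m] @ P[k], np.exp(-2j * np.pi * m * k / J) * P[k])
```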

The adjoint of ℋ̃_{k} maps data space to object space and has the following properties: *nullspace*$\left({\tilde{\mathscr{H}}}_{k}^{\dagger}\right)={\tilde{V}}_{k}^{\perp}$ and *range*$\left({\tilde{\mathscr{H}}}_{k}^{\dagger}\right)={\tilde{U}}_{k}$. The adjoint system operator also has block diagonal form: ${\mathscr{H}}^{\dagger}={\sum}_{k=0}^{J-1}{\tilde{\mathscr{H}}}_{k}^{\dagger}$. The *JP* × *JP* matrix ℋℋ^{†} has block diagonal form as well: $\mathscr{H}{\mathscr{H}}^{\dagger}={\sum}_{j=0}^{J-1}{\tilde{\mathscr{H}}}_{j}{\tilde{\mathscr{H}}}_{j}^{\dagger}$. Each operator ${\tilde{\mathscr{H}}}_{j}{\tilde{\mathscr{H}}}_{j}^{\dagger}$ maps *Ṽ*_{j} to *Ṽ*_{j} and vanishes on *Ṽ*_{k} for *k* ≠ *j*. This fact, together with the orthogonality of the decompositions of object and data space, implies that the SVD problem for the system operator reduces to finding the eigenvalues and eigenvectors of the *J* operators ${\tilde{\mathscr{H}}}_{j}{\tilde{\mathscr{H}}}_{j}^{\dagger}$. These operators each map a *P*-dimensional space to itself, and therefore the corresponding eigenvalue problems reduce to finding the eigenvalues of *P* × *P* matrices. The original *JP* × *JP* eigenvalue problem for the SVD of ℋ has now been reduced to a set of *J* eigenvalue problems for *P* × *P* matrices, which is more computationally tractable, especially for large values of *J*.
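The block-diagonal claim can be spot-checked on a toy block-circulant system (our own assumed ring construction, not the paper's CT model): ℋℋ^{†} commutes with each **P**_{k}, and summing the projected blocks **P**_{k}ℋℋ^{†}**P**_{k} recovers ℋℋ^{†}:

```python
import numpy as np

# Toy ring model (assumed construction): object rotation = cyclic shift
# of the N samples; view j applies the single-view matrix H0 to the
# rotated object.
J, P, N = 6, 4, 24
shift = N // J
rng = np.random.default_rng(5)
H0 = rng.standard_normal((P, N))

T = lambda j: np.roll(np.eye(N), -j * shift, axis=1)    # object rotation
S = lambda j: np.roll(np.eye(J * P), j * P, axis=0)     # view-block shift
H = np.vstack([H0 @ T(-j) for j in range(J)])

Pk = lambda k: sum(np.exp(2j * np.pi * m * k / J) * S(m)
                   for m in range(J)) / J

C = H @ H.T                                  # the JP x JP matrix H H^dag
for k in range(J):
    assert np.allclose(Pk(k) @ C, C @ Pk(k))             # C commutes with P_k
assert np.allclose(sum(Pk(k) @ C @ Pk(k) for k in range(J)), C)
```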

## 5. Reduction of dimension for SVD

We now want to find the eigenvalues and eigenvectors of the linear operator ${\tilde{\mathscr{H}}}_{k}{\tilde{\mathscr{H}}}_{k}^{\dagger}$. This operator is given by ${\mathbf{P}}_{k}\mathscr{H}{\mathcal{Q}}_{k}{\mathscr{H}}^{\dagger}={\mathbf{P}}_{k}\mathscr{H}{\mathscr{H}}^{\dagger}$, where we have used the symmetry property of ℋ and the orthogonal projection properties of **P**_{k}. We may write this last form out as a triple sum over the rotation indices; substituting *q* = *p* – *l* and re-indexing the sum over *l*, and then substituting *r* = *j* + *p* – *q* and re-indexing the sum over *j*, reduces the expression to one involving only the single-detector operators ℋ_{0} and ${\mathscr{H}}_{0}^{\dagger}$.

An eigenvector of this operator lies in *Ṽ*_{k}, so we will write such a vector as **P**_{k}**g** in the eigenvalue equation: ${\tilde{\mathscr{H}}}_{k}{\tilde{\mathscr{H}}}_{k}^{\dagger}{\mathbf{P}}_{k}\mathbf{g}=\lambda {\mathbf{P}}_{k}\mathbf{g}$. Since any vector in *Ṽ*_{k} may be reproduced from its values on the first detector, we may assume that **g** has nonzero components only on the first detector array: $\mathbf{g}=({\mathbf{g}}_{0},\mathbf{0},\ldots,\mathbf{0})$. Now we look at the eigenvalue equation ${J}^{2}{\mathbf{P}}_{k}{\mathscr{H}}_{0}{\mathcal{Q}}_{k}{\mathscr{H}}_{0}^{\dagger}{\mathbf{P}}_{k}\mathbf{g}=\lambda {\mathbf{P}}_{k}\mathbf{g}$ and notice that, since ${\mathscr{H}}_{0}^{\dagger}$ vanishes on ${V}_{0}^{\perp}$, we must have $J{\mathbf{P}}_{k}{\mathscr{H}}_{0}{\mathcal{Q}}_{k}{\mathscr{H}}_{0}^{\dagger}\mathbf{g}=\lambda {\mathbf{P}}_{k}\mathbf{g}$. In this equation, *J*^{2} is replaced with *J* because **P**_{k} has a 1/*J* factor. These last two vectors are equal iff $J{\mathscr{H}}_{0}{\mathcal{Q}}_{k}{\mathscr{H}}_{0}^{\dagger}\mathbf{g}=\lambda \mathbf{g}$. This last equation can be regarded as a *P* × *P* linear system, since the operator maps *V*_{0} to *V*_{0}.
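The reduction can be verified end-to-end on a toy block-circulant system (our own assumed ring construction, mirroring the sketches above): the singular values of the full *JP* × *N* matrix should equal the union, over *k*, of the square roots of the eigenvalues of the *P* × *P* matrices $J{\mathscr{H}}_{0}{\mathcal{Q}}_{k}{\mathscr{H}}_{0}^{\dagger}$:

```python
import numpy as np

# Toy system with discrete rotational symmetry (assumed construction):
# N object samples on a ring, rotation = cyclic shift by N/J samples.
J, P, N = 6, 4, 24
shift = N // J
rng = np.random.default_rng(0)
H0 = rng.standard_normal((P, N))                 # single-view matrix

def T(j):
    return np.roll(np.eye(N), -j * shift, axis=1)

H = np.vstack([H0 @ T(-j) for j in range(J)])    # full JP x N matrix

# Standard route: singular values of the full matrix
sv_full = np.sort(np.linalg.svd(H, compute_uv=False))[::-1]

# Reduced route: eigenvalues of the J Hermitian P x P matrices
# M_k = J * H0 Q_k H0^dag, with Q_k = (1/J) sum_m exp(2*pi*i*m*k/J) T_m
sv_reduced = []
for k in range(J):
    Qk = sum(np.exp(2j * np.pi * m * k / J) * T(m) for m in range(J)) / J
    Mk = J * H0 @ Qk @ H0.T
    lam = np.linalg.eigvalsh(Mk)
    sv_reduced.extend(np.sqrt(np.clip(lam, 0.0, None)))
sv_reduced = np.sort(sv_reduced)[::-1]

# The spectra agree: one JP x JP problem -> J problems of size P x P
assert np.allclose(sv_full, sv_reduced[:sv_full.size])
```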

The result can also be formulated in terms of a lower dimensional SVD problem. We start with $J{\mathscr{H}}_{0}{\mathcal{Q}}_{k}{\mathscr{H}}_{0}^{\dagger}\mathbf{g}=\lambda \mathbf{g}$ and write ${\mathbf{g}}_{0}=\mathbf{Z}\mathbf{g}$, so that **Z** is a *P* × *M* matrix. With **g** given as above, we also have $\mathbf{g}={\mathbf{Z}}^{\dagger}{\mathbf{g}}_{0}$. Our eigenvalue equation is now $J{\mathscr{H}}_{0}{\mathcal{Q}}_{k}{\mathscr{H}}_{0}^{\dagger}{\mathbf{Z}}^{\dagger}{\mathbf{g}}_{0}=\lambda {\mathbf{Z}}^{\dagger}{\mathbf{g}}_{0}$. Multiplying both sides by **Z**, we get $J\mathbf{Z}{\mathscr{H}}_{0}{\mathcal{Q}}_{k}{\mathscr{H}}_{0}^{\dagger}{\mathbf{Z}}^{\dagger}{\mathbf{g}}_{0}=\lambda {\mathbf{g}}_{0}$. Using the orthogonal projection properties of ${\mathcal{Q}}_{k}$, we can write this as $J(\mathbf{Z}{\mathscr{H}}_{0}{\mathcal{Q}}_{k}){(\mathbf{Z}{\mathscr{H}}_{0}{\mathcal{Q}}_{k})}^{\dagger}{\mathbf{g}}_{0}=\lambda {\mathbf{g}}_{0}$. Therefore, the eigenvalue problem for each *k* is equivalent to the SVD of the operator $\mathbf{Z}{\mathscr{H}}_{0}{\mathcal{Q}}_{k}$. Finally, note that $\mathbf{Z}{\mathscr{H}}_{0}{\mathcal{Q}}_{k}=\mathbf{Z}\mathscr{H}{\mathcal{Q}}_{k}=\mathbf{Z}{\mathbf{P}}_{k}\mathscr{H}$, so that we could also do the SVD for these last two operators.

To make this calculation explicit, we define the operator ${\mathcal{A}}_{k}$ by ${\mathcal{A}}_{k}=\sqrt{J}\mathbf{Z}{\mathscr{H}}_{0}{\mathcal{Q}}_{k}$. Then we have the SVD equations for this operator in the data space for a single detector: ${\mathcal{A}}_{k}{\mathcal{A}}_{k}^{\dagger}{\mathbf{g}}_{0kl}={\lambda}_{kl}{\mathbf{g}}_{0kl}$. This is an eigenvector equation for a *P* × *P* matrix, and the eigenvalues are real and non-negative. To get the singular vectors in the full data space, we use ${\mathbf{g}}_{kl}=\sqrt{J}{\mathbf{P}}_{k}{\mathbf{Z}}^{\dagger}{\mathbf{g}}_{0kl}$; the $\sqrt{J}$ factor is there to preserve normalization. The singular functions *f*_{kl} in object space are then given by $\sqrt{{\lambda}_{kl}}{f}_{kl}={\mathscr{H}}^{\dagger}{\mathbf{g}}_{kl}$. This expression can be expanded in terms of the single-detector system operator by using ${\mathscr{H}}^{\dagger}{\mathbf{g}}_{kl}=\sqrt{J}{\mathscr{H}}^{\dagger}{\mathbf{P}}_{k}{\mathbf{Z}}^{\dagger}{\mathbf{g}}_{0kl}=\sqrt{J}{\mathcal{Q}}_{k}{\mathscr{H}}^{\dagger}{\mathbf{Z}}^{\dagger}{\mathbf{g}}_{0kl}=\sqrt{J}{\mathcal{Q}}_{k}{\mathscr{H}}_{0}^{\dagger}{\mathbf{Z}}^{\dagger}{\mathbf{g}}_{0kl}$. This expression shows that we can generate the singular object functions by backprojecting a single-detector singular vector with ${\mathscr{H}}_{0}^{\dagger}$ and then applying the operator ${\mathcal{Q}}_{k}$. This operator rotates the function through each of the *J* angles, multiplies each rotated function by a phase factor determined by *k*, and then sums. The symmetry properties of the resulting singular functions are determined by *k*.
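Continuing the toy ring model (an assumed construction, not the paper's CT system), the lift from a single-detector eigenvector to the full data space via ${\mathbf{g}}_{kl}=\sqrt{J}{\mathbf{P}}_{k}{\mathbf{Z}}^{\dagger}{\mathbf{g}}_{0kl}$ can be checked numerically: the lifted vector stays normalized and is an eigenvector of ℋℋ^{†}:

```python
import numpy as np

# Toy ring model (assumed construction), as in the earlier sketches.
J, P, N = 6, 4, 24
shift = N // J
rng = np.random.default_rng(2)
H0 = rng.standard_normal((P, N))

def T(j):
    return np.roll(np.eye(N), -j * shift, axis=1)

def S(j):
    return np.roll(np.eye(J * P), j * P, axis=0)

H = np.vstack([H0 @ T(-j) for j in range(J)])

Qk = lambda k: sum(np.exp(2j * np.pi * m * k / J) * T(m)
                   for m in range(J)) / J
Pk = lambda k: sum(np.exp(2j * np.pi * m * k / J) * S(m)
                   for m in range(J)) / J
Z = np.hstack([np.eye(P), np.zeros((P, (J - 1) * P))])   # picks view 0

k = 2
A = np.sqrt(J) * Z @ H @ Qk(k)              # A_k = sqrt(J) Z H Q_k
lam, U = np.linalg.eigh(A @ A.conj().T)     # eigenpairs of A_k A_k^dag
g0 = U[:, -1]                               # top single-detector eigenvector
g = np.sqrt(J) * Pk(k) @ Z.T @ g0           # lift to the full data space

assert np.isclose(np.linalg.norm(g), 1.0)          # sqrt(J) keeps the norm
assert np.allclose(H @ H.T @ g, lam[-1] * g)       # eigenvector of H H^dag
```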

## 6. Examples

A proof-of-concept experiment was conducted to compare the singular data from the reduced dimension SVD to the singular data from a standard dimension SVD computation. Each technique was used to calculate the SVD of a simulated 2-D parallel beam source x-ray CT system. In the absence of scattering, the system operator for an x-ray CT system can be linearized by making the approximations discussed in Barrett and Myers [2]. The discrete representation of this linear operator is the sensitivity matrix for the modeled system.

The sensitivity matrix, **H**, for the simulated x-ray CT system was computed using a discrete-to-discrete forward model to map from object space **U** to image space **V**. The object space was discretized into *N* = 256 × 256 voxels with an imposed circular field of view. The image space was measured with a *P* = 32 detector element camera collecting at *J* = 90 equally spaced angles over 360 degrees of rotation about the center of the field of view. As discussed in Section 5, the reduced dimension SVD therefore led to *J* SVD computations of the *P* × *N* matrices
${\mathbf{A}}_{k}=\sqrt{J}\mathbf{Z}{\mathbf{P}}_{k}\mathbf{H}$. In comparison, the standard dimension SVD consisted of a single SVD computation of the *JP* × *N* matrix **H**. In both cases, the algorithm used to compute singular data for either the reduced dimension or the standard dimension matrix was the SVD function in the MATLAB software package [10]. Computations for the proof-of-concept experiment were performed on an iMac computer equipped with a 3.2 GHz Intel Core i3 processor and 8GB of DDR3 memory. Calculation of all singular values and associated singular vectors up to the rank of **H** on this machine required 15.86 minutes using the reduced dimension SVD algorithm and 473.55 minutes using the standard dimension SVD algorithm. Our reduced dimension algorithm was therefore approximately 30 times faster than the standard dimension algorithm at computing the full SVD of the simulated sensitivity matrix. The speed of our algorithm could be further improved by trivially distributing the SVD computation of the **A _{k}** matrices onto the cores of a multi-core machine such as a Mac Pro computer.

Figure 1 shows the largest 16 singular values computed using the reduced dimension SVD and the standard dimension SVD. A slice through the central plane of the object space vectors for the 10 largest singular values is presented in Fig. 2.

In Fig. 1 we can see that some singular vectors occur in pairs with the same singular value (doublets), while others occur by themselves (singlets). The existence of this type of pattern would have been predicted from an analysis that made use of the full symmetry group of the system, which would include reflections as well as rotations [11–13]. By replacing the cyclic group of rotations used in this work with the dihedral group of rotations and reflections, we can show that the spaces spanned by the singular vectors that share a singular value are predicted to be either one-dimensional or two-dimensional. This prediction follows from the fact that the irreducible representations of the dihedral group are either one-dimensional or two-dimensional [9].

If we look at the standard dimension SVD data in column (a) of Fig. 2, we can see that singular vectors in the doublets are related to each other by a reflection through an axis of symmetry of the system. On the other hand, singular vectors in the singlets are invariant under reflections. This structure is also apparent in the data from the reduced dimension SVD data in columns (b) and (c) of Fig. 2. Since the operator **HH**^{†} is a real matrix, the real and imaginary parts of the singular vectors that the symmetry analysis produced are separately real eigenvectors of this matrix with the same eigenvalue. Since doublet eigenvectors share an eigenvalue, each of the four real eigenvectors corresponding to a given doublet eigenvalue in columns (b) and (c) is only required to be a linear combination of the two eigenvectors for that doublet in column (a). This extra degree of freedom for the doublets explains why the vectors in the column (a) do not exactly match up with any of the vectors in columns (b) and (c) for the doublet eigenvalues.

In order to check the outputs from the reduced dimension algorithm, we tested the orthonormality of the computed singular vectors. Orthonormality of singular vectors *u*_{n} can be expressed mathematically in terms of the inner product and the Kronecker delta as [2] ${({u}_{n},{u}_{m})}_{U}={\delta}_{nm}$. Small departures from orthonormality can be traced to the way we computed **H** for the proof-of-concept system. In our forward model, we used subvoxels and Monte Carlo techniques when quantifying the values of **H** at each collection angle. Consequently, our forward model does not strictly exhibit the property of discrete rotational symmetry discussed in Section 2. To further assess how this modeling error manifested itself in the SVD data, we compared singular vectors produced by the reduced dimension and standard dimension techniques. For this analysis we chose the singlets *u*_{1}, *u*_{6} and *u*_{15}. We subtracted the singlet output by the reduced dimension SVD from the singlet output by the standard SVD, resulting in an image of the error for each singlet. The results of this subtraction are shown in Fig. 3. The structure of these images shows that there are contributions from additional singular vectors within each singlet that are a consequence of not preserving discrete rotational symmetry in our forward model. Note that the magnitude of the inner product between two singular vectors of differing indices that came from a reduced dimension SVD computation of the same **A**_{k} matrix (*e.g.*, *u*_{1} and *u*_{6}) is in fact zero to within the numerical precision of the computer.
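The orthonormality test can be expressed compactly as a Gram-matrix check. A minimal sketch, with an assumed random matrix standing in for the system matrix:

```python
import numpy as np

# Sketch: singular vectors from an SVD satisfy (u_n, u_m) = delta_nm,
# so their Gram matrix should be the identity (to numerical precision).
rng = np.random.default_rng(3)
H = rng.standard_normal((30, 20))            # stand-in for a system matrix
U, s, Vt = np.linalg.svd(H, full_matrices=False)

G = U.conj().T @ U                           # Gram matrix of the u_n
assert np.allclose(G, np.eye(U.shape[1]))
```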

The reduced dimension SVD was next used to calculate singular data for the sensitivity matrix of a simulated clinical x-ray CT system. The specifications for the simulated system were chosen to roughly approximate a 3rd generation Siemens Sensation 64 scanner (Siemens Healthcare) [14]. In order to shorten the computation time, the number of slices for our modeled system was changed from 64 to 12. The model consisted of a cone beam source with a source-to-center-of-rotation distance of *R*_{F} = 600 mm and a center-of-rotation-to-detector-array distance of *R*_{D} = 450 mm. The slice thickness for each of the 12 detector columns was 0.5 mm, and each detector row had 672 channels of 0.9 mm width. This detector geometry was modeled as a rectangular array with *P* = 8064 detector elements. The detector array collected data at *J* = 180 equally spaced angles over 360 degrees. Object space was centered at the center of rotation and discretized using *N* = 512 × 512 × 5 square voxels of dimension 0.6741 mm. This choice of voxel parameters ensured that a ray traveling from the source through the edge of object space was observed by the detector array. A circular field of view was also imposed within each 512 × 512 slice through object space to preserve rotational symmetry. An illustration of the geometry for the modeled system is shown in Fig. 4.

Using a geometric forward projector, the sensitivity matrix for the simulated Siemens scanner was calculated. Applying the reduced dimension algorithm resulted in *J* SVD computations of *P* × *N* matrices. The singular data for a given *P* × *N* matrix were computed using a power method algorithm [3, 15]. Figure 5 shows the singular value spectrum computed for the simulated Siemens x-ray CT system. Figure 6 shows the central 512 × 512 slice through the object space singular vectors associated with singular value indices 1 – 10. Similarly, Fig. 7 and Fig. 8 show the central slices through the singular vectors associated with indices 88 – 92 and 175 – 179, respectively.

## 7. Conclusion

In this paper, we have shown that in the presence of discrete rotational symmetry, the dimension of the SVD computation for the imaging operator can be reduced by a factor equal to the number of collection angles for the system geometry. The results from our proof-of-concept experiment confirm that the reduced dimension SVD algorithm produces a decomposition equivalent to that produced by a standard dimension SVD algorithm. However, data from the reduced SVD algorithm will be corrupted by approximations and estimates in the forward model that break the discrete rotational symmetry of the system operator. The proof-of-concept experiment also showed that the reduced dimension algorithm can significantly shorten the time required to compute an SVD. Using the same computing resources, the reduced SVD algorithm calculated the singular value spectrum and associated singular vectors up to the rank of our tested system matrix in 15.86 minutes, compared to 473.55 minutes for a standard dimension SVD computation, a speedup of roughly a factor of 30.

Our algorithm is especially useful for tomographic medical imaging systems in which the system matrix has dimensions large enough to cause memory problems for a standard desktop computer running an SVD computation. As we have demonstrated, we were able to apply the reduced dimension SVD technique to the simulated system matrix of a 3rd generation clinical x-ray CT system, obtaining the singular values and singular vectors for 180 indices of the singular value spectrum. If we were to run our algorithm for a longer time, we could compute more terms of the spectrum, yielding a data set that could be used to quantify image-quality assessment measures such as the null space and measurement space for the imaging system [2]. Our reduced dimension SVD algorithm can therefore be used for tomographic medical imaging systems to compute the higher order terms of the singular value spectrum that would require extensive computing resources using standard SVD.

## Acknowledgments

The authors thank Dr. Harrison Barrett and Dr. Lana Volokh for their helpful discussions on the topic of singular value decomposition. This research was supported by NIBIB/NIH grants RC1-EB010974 and P41-EB002035.

## References and links

**1. **H. H. Barrett, J. N. Aarsvold, and T. J. Roney, “Null functions and eigenfunctions: tools for the analysis of imaging systems,” Prog. Clin. Biol. Res. **363**, 211–226 (1991). [PubMed]

**2. **H. Barrett and K. Myers, *Foundations of Image Science* (John Wiley and Sons, 2004).

**3. **A. K. Jorgensen and G. L. Zeng, “SVD-Based evaluation of multiplexing in multipinhole SPECT systems,” Int. J. Biomed. Imaging **2008**, 769195 (2008). [CrossRef]

**4. **Y. Hsieh, G. Zeng, and G. Gullberg, “Projection space image reconstruction using strip functions to calculate pixels more natural for modeling the geometric response of the SPECT collimator,” IEEE Trans. Med. Imaging **17**(1), 24–44 (1998). [CrossRef] [PubMed]

**5. **G. Zeng and G. Gullberg, “An SVD study of truncated transmission data in SPECT,” IEEE Trans. Nucl. Sci. **44**(1), 107–111 (1997). [CrossRef]

**6. **G. Gullberg and G. Zeng, “A reconstruction algorithm using singular value decomposition of a discrete representation of the exponential radon transform using natural pixels,” IEEE Trans. Nucl. Sci. **41**(6), 2812–2819 (1994). [CrossRef]

**7. **S. Park and E. Clarkson, “Efficient estimation of ideal-observer performance in classification tasks involving high-dimensional complex backgrounds,” J. Opt. Soc. Am. A **26**(11), 59–71 (2009). [CrossRef]

**8. **S. Park, J. Witten, and K. Myers, “Singular vectors of a linear imaging system as efficient channels for the Bayesian ideal observer,” IEEE Trans. Med. Imaging **28**(5), 657–668 (2009). [CrossRef] [PubMed]

**9. **M. Hamermesh, *Group Theory and its Application to Physical Problems* (Dover Publications, 1989).

**10. **E. Anderson, Z. Bai, and C. Bischof, *LAPACK Users’ Guide* (Society for Industrial Mathematics, 1999). [CrossRef]

**11. **J. Aarsvold, “Multiple-pinhole transaxial tomography: a model and analysis,” Ph. D. Dissertation (University of Arizona, 1993).

**12. **J. Aarsvold and H. Barrett, “Symmetries of single-slice multiple-pinhole tomographs,” in *Conference Record of the 1996 IEEE NSS/MIC* (IEEE, 1997), vol. 3, pp. 1673–1677. [CrossRef]

**13. **P. Varatharajah, B. Tankersley, and J. Aarsvold, “Discrete models and singular-value decompositions of single-slice imagers with orthogonal detectors,” in *Conference Record of the 1998 IEEE NSS/MIC* (IEEE, 1999), vol. 2, pp. 1184–1188.

**14. **S. Steckmann, M. Knaup, and M. Kachelrieß, “High performance cone-beam spiral backprojection with voxel-specific weighting,” Phys. Med. Biol. **54**(12), 3691–3708 (2009). [CrossRef] [PubMed]

**15. **E. Isaacson and H. Keller, *Analysis of Numerical Methods* (Dover Publications, 1994).