## Abstract

High-throughput processing of parallel-beam X-ray tomography at synchrotron facilities lacks a reliable and robust method to determine the center of rotation in an automated fashion, i.e. without the need for a human scorer. Well-known techniques based on center-of-mass calculation, image registration, or reconstruction evaluation work well under favourable conditions, but they fail in cases where samples are larger than the field of view, when the projections show a low signal-to-noise ratio, or when optical defects dominate the contrast. Here we propose an alternative technique based on Fourier analysis of the sinogram. Our technique shows excellent performance, particularly on challenging data.

© 2014 Optical Society of America

## 1. Introduction

Parallel-beam X-ray tomography is an imaging technique by which the internal 3D structure of a sample can be reconstructed from 2D projections formed by the penetration of parallel X-rays through the sample at a series of different angles in the range [0; π] (see Fig. 1). Due to the parallelism of the penetrating X-rays, the obtained 2D projections can be separated into independent 1D projection rows. The sequence of these rows throughout the angular projection range forms a sinogram, one 2D image per detector row. Applying a reconstruction method such as filtered back-projection (FBP) [1] to an individual sinogram yields a reconstructed 2D slice of the sample. Combining all slices creates the 3D image of the sample.

The position of the rotation axis in reference to the incoming beam direction and camera orientation is given by the orientation of the mechanical rotation stage which spins the sample. In a well-aligned tomography setup, the rotation axis is perpendicular to the incoming beam direction and to the rows of the camera sensor. It is essential for the quality of the reconstruction to determine accurately the location of the projected rotation axis, or the center of rotation (CoR). An error of 1 pixel in the calculation of the CoR can return visible artifacts in the reconstructed image, as shown in Fig. 2.

There are two main families of methods for the detection of the CoR. The first category uses digital alignment or image registration of two opposing projections at 0° and 180°. Their performance therefore depends on the reliability of the alignment algorithms [2,3], and they use only the limited amount of projection-space information provided by these two projections. These methods are not feasible when projections have low contrast or when they are disturbed by fixed defects of the optics system, as exemplified in Fig. 3.

The second category of methods evaluates image metrics in reconstruction space to find the best reconstructed image [4,5]. These techniques are promising and make use of all the available information, but they are very time-consuming due to the reconstruction process. Moreover, they are not applicable to reconstructed images with strong ring artifacts.

Another method that has been used in the past seeks to measure the motion of the overall centre of mass of the rotating slice [6]. This should follow a sinusoidal path, provided the sample is completely contained in the field of view and there are no diffraction, refraction, or nonlinear effects. However, we found that this method did not yield the correct value even for good-quality experimental data.

Even with advanced correction for systematic defects (flat-field correction), certain types of defects cannot be perfectly corrected, arising from processes such as non-linear response, temporal fluctuation, and completely saturated or missing detector areas. All of the above methods are sensitive to such flawed information, which can bias them and offset the detected CoR.

In this report, we propose an alternative approach which uses the Fourier space information of the sinogram. This technique is easy to implement, has low computational cost, and is very robust to challenging tomographic data. The technique is presented in Section 2 along with a brief review of the existing techniques. The performance of all the techniques is evaluated and compared in Section 3 using experimental data. Finally, in Section 4 we present the discussion and the conclusion.

## 2. Method

#### 2.1 Proposed method

In parallel-beam tomography, a 2D object, mathematically represented by *f(x,y)*, is reconstructed from a set of its 1D projections, *P*, at different angles *θ*, which is defined by [1]

$$P(\theta,t) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x,y)\,\delta(x\cos\theta + y\sin\theta - t)\,dx\,dy, \tag{1}$$

where *δ* is the Dirac delta function and *t* is the projection co-ordinate in the detector plane. The set of projections over all angles is commonly known as a sinogram. Figure 4 illustrates a computationally generated phantom (Fig. 4(a)) and the corresponding sinogram (Fig. 4(b)). The origin of *t* (*t = 0*) coincides with the location of the CoR projected onto the detector plane.

The Fourier transform of the sinogram, $\widehat{P}$, is given by

$$\widehat{P}(u,\nu) = \int_{-\infty}^{\infty}\int_{0}^{2\pi} P(\theta,t)\, e^{-i(2\pi ut + \nu\theta)}\,d\theta\,dt, \tag{2}$$

where *u* is the spatial frequency and *ν* is the angular harmonic number [7]. As presented in [7], the significant Fourier coefficients of the sinogram are constrained inside a double-wedge region (Fig. 4(e)) defined by

$$\widehat{P}(u,\nu) \cong 0 \quad \text{for} \quad |\nu| > 2\pi r|u|, \tag{3}$$

where *r* is the outermost radius of the object. The origin of the double wedge may be understood by considering the origin of the intensity in the Fourier transform at a given point *(u,ν)* in Fourier space. The value arises as the correlation of the sinogram with a plane wave whose crests lie at a slope defined by this point in frequency space. Where a feature in the sinogram runs parallel to this slope, the correlation yields a significant value. In a perfect sinogram, the sinusoidal trajectory corresponding to a feature in the object has a minimum slope defined by the distance of that feature from the rotation center. The smallest slopes occurring in the complete sinogram are therefore defined by the outermost radius of the whole object, and so points in the Fourier transform corresponding to lesser slopes do not have significant intensity. A full mathematical derivation is presented in [7].
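This property can be checked numerically with a small NumPy sketch (our construction, not code from the paper): we synthesize a full-revolution sinogram of a few point-like features and measure how much Fourier energy falls outside a double wedge of radius *r*. All function and parameter names here are our own.

```python
import numpy as np

def synthetic_sinogram(n_theta=360, n_t=256,
                       features=((60.0, 0.5), (30.0, 2.0)), sigma=2.0):
    """Full-revolution sinogram of point-like features.

    Each feature (R, phi) traces t = R*cos(theta - phi); a Gaussian of
    width sigma in t stands in for a feature of finite size.
    """
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    t = np.arange(n_t) - n_t / 2.0
    sino = np.zeros((n_theta, n_t))
    for R, phi in features:
        centre = R * np.cos(theta - phi)
        sino += np.exp(-((t[None, :] - centre[:, None]) ** 2) / (2.0 * sigma ** 2))
    return sino

def outside_wedge_fraction(sino, r):
    """Fraction of Fourier energy outside the wedge |nu| <= 2*pi*r*|u|."""
    F = np.fft.fftshift(np.fft.fft2(sino))
    n_theta, n_t = sino.shape
    nu = np.fft.fftshift(np.fft.fftfreq(n_theta, d=1.0 / n_theta))  # integer harmonics
    u = np.fft.fftshift(np.fft.fftfreq(n_t))                        # cycles per pixel
    outside = np.abs(nu)[:, None] > 2.0 * np.pi * r * np.abs(u)[None, :]
    energy = np.abs(F) ** 2
    return energy[outside].sum() / energy.sum()

sino = synthetic_sinogram()
frac = outside_wedge_fraction(sino, r=100.0)  # r comfortably above the largest radius
print(frac)  # a small fraction: almost all energy lies inside the double wedge
```

Shrinking *r* below the outermost feature radius makes the wedge too narrow and the outside-energy fraction rises sharply, which is what the CoR metric below exploits.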

In parallel-beam tomography, projections are only collected in the range [0; π], as there is no additional information in the angular range [π; 2π]. The measured sinogram covering the angles [0; π] can be completed to an estimated full revolution [0; 2π] by extending it with a copy of itself reflected through *t = 0* to cover the angles [π; 2π] (see Fig. 4(b)). As the location of the origin of *t* is yet to be determined, it is not possible to calculate a correct full-revolution sinogram from *a priori* knowledge. Instead, an approximate mirror axis is assumed, as shown in Figs. 4(c) and 4(d), at some arbitrary location, e.g. 5 or 10 pixels away from *t = 0*. In practice, one may use the horizontal centre of the image (HCoI). It is here that the properties of a projection sinogram in Fourier space become crucially important: any estimated full-revolution sinogram that has been calculated with a mirror axis displaced from the origin will show non-negligible Fourier coefficients outside the double-wedge region described in Eq. (3) (see Figs. 4(f) and 4(g)).

From these findings follows a practical procedure to calculate the CoR:

- 1- Create a reflection of the [0; π] sinogram about the HCoI axis. Shift the copy horizontally by *s* in the search range [*s _{min}*; *s _{max}*]. Then append the shifted copy vertically to the original to form an estimated [0; 2π] sinogram.
- 2- Apply the Fourier transform to the estimated sinogram, multiply the result with a binary mask, *M(u,ν)*, which is 1 outside the double-wedge region and 0 elsewhere, then calculate the average of the remaining coefficients, which we call the sinogram Fourier metric *Q _{SF}*:

  $$Q_{SF}(s) = \frac{1}{N}\sum_{u,\nu}\left| \widehat{P}(u,\nu) \right| M(u,\nu), \tag{4}$$

  where

  $$N = \sum_{u,\nu} M(u,\nu) \tag{5}$$

  is the number of non-zero elements of the mask.
- 3- Steps 1 and 2 are repeated to get a list of *Q _{SF}* values over the range [*s _{min}*; *s _{max}*]. The global minimum of *Q _{SF}* indicates the best alignment of both sinogram halves. The displacement *s _{0}* at the minimum value of the metric is then used to calculate the optimum CoR:

  $$\mathrm{CoR} = \mathrm{HCoI} + s_{0}/2. \tag{6}$$

The only free parameter of the technique is the object radius *r*, and so the method may be fully automated. As can be seen in Figs. 4(f) and 4(g), the non-negligible Fourier coefficients caused by misalignment mainly distribute around the vertical line *ν = 0*; thus we can safely choose *r* much larger than the radius of the object. In particular, in our work we use *r* equal to the width of the image, corresponding to an object of diameter twice the width of the detector.
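The three-step procedure above can be sketched in a few lines of NumPy (our illustrative implementation, not the authors' Mathematica code); integer shifts via `np.roll` with circular wrap, and all function names, are our simplifying assumptions:

```python
import numpy as np

def wedge_mask(n_theta, n_t, r):
    """Binary mask M(u, nu): 1 outside the double-wedge region, 0 inside."""
    nu = np.fft.fftshift(np.fft.fftfreq(n_theta, d=1.0 / n_theta))
    u = np.fft.fftshift(np.fft.fftfreq(n_t))
    return (np.abs(nu)[:, None] > 2.0 * np.pi * r * np.abs(u)[None, :]).astype(float)

def sinogram_fourier_metric(half_sino, s, r):
    """Q_SF for one candidate shift s (steps 1 and 2 of the procedure)."""
    flipped = np.fliplr(half_sino)
    shifted = np.roll(flipped, s, axis=1)   # integer shift; interpolate for sub-pixel
    full = np.vstack([half_sino, shifted])  # estimated [0; 2*pi] sinogram
    F = np.fft.fftshift(np.fft.fft2(full))
    M = wedge_mask(*full.shape, r)
    return (np.abs(F) * M).sum() / M.sum()

def find_cor(half_sino, r, s_min=-20, s_max=20):
    """Step 3: scan the shift range and convert the best shift to a CoR."""
    shifts = np.arange(s_min, s_max + 1)
    q = [sinogram_fourier_metric(half_sino, s, r) for s in shifts]
    s0 = shifts[int(np.argmin(q))]
    hcoi = (half_sino.shape[1] - 1) / 2.0   # horizontal centre of the image
    return hcoi + s0 / 2.0
```

On a synthetic half-sinogram whose features trace sinusoids around a known off-centre axis, the global minimum of the metric lands at the shift that reflects the second half through the true CoR.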

#### 2.2 Prior methods

To evaluate our technique we compare its performance with established techniques: four methods based on image registration and two methods based on evaluating the reconstructed image. The center-of-mass method is sensitive to noise, as shown in [5], so we omit it from the comparison.

#### 2.2.1 Image registration methods

For calculating the CoR by image registration techniques, one horizontally reflects the projection at 180° and shifts it horizontally a certain amount, *s,* in the search range *[s _{min}; s_{max}]* to find the position giving the best correlation with the projection at 0°. Then, the CoR is calculated by Eq. (6). Following are the definitions of two different correlation coefficients:

The Pearson correlation coefficient [8] between two images is defined by

$$Q_{P} = \frac{\sum_{ij}\left( P_{ij} - \overline{P} \right)\left( P_{ij}^{s} - {\overline{P}}^{s} \right)}{\sqrt{\sum_{ij}\left( P_{ij} - \overline{P} \right)^{2}\sum_{ij}\left( P_{ij}^{s} - {\overline{P}}^{s} \right)^{2}}}, \tag{7}$$

where *P _{ij}* and *P _{ij}^{s}* are the numerical descriptions of the projection at 0° and the flipped and shifted projection at 180°, respectively. $\overline{P}$ and ${\overline{P}}^{s}$ are their respective average values.
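As an illustration, the registration approach with the Pearson coefficient can be sketched as follows (our NumPy sketch; the circular `np.roll` shift and the default search bounds are simplifying assumptions):

```python
import numpy as np

def cor_by_pearson(proj0, proj180, s_min=-20, s_max=20):
    """Register the flipped 180-degree projection against the 0-degree one
    and convert the best shift into a center of rotation."""
    flipped = np.fliplr(proj180)
    shifts = np.arange(s_min, s_max + 1)
    scores = []
    for s in shifts:
        shifted = np.roll(flipped, s, axis=1)
        # Pearson correlation between the two images, flattened to 1D
        scores.append(np.corrcoef(proj0.ravel(), shifted.ravel())[0, 1])
    s0 = shifts[int(np.argmax(scores))]
    return (proj0.shape[1] - 1) / 2.0 + s0 / 2.0
```

The conversion in the last line mirrors the sinogram case: the mirror axis of the best-matching reflection sits half the optimum shift away from the horizontal centre of the image.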

The Spearman rank correlation coefficient [9] uses ranks of image intensities instead and is defined by

$$Q_{S} = 1 - \frac{6\sum_{k}\left\lbrack R\left( P_{k} \right) - R\left( P_{k}^{s} \right) \right\rbrack^{2}}{K\left( K^{2} - 1 \right)}, \tag{8}$$

where *P _{k}* is the intensity of a 1D array formed by ordering the intensities of *P _{ij}* from smallest to largest, *K* is the total number of elements, and the function *R* returns the rank of an intensity in the array.
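A compact NumPy sketch of this rank-based coefficient (ours; it assumes no tied intensities, in which case the rank formula above applies directly):

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation: the rank-difference formula, valid
    when all intensities are distinct (no ties)."""
    ra = np.argsort(np.argsort(a.ravel()))   # rank of each element
    rb = np.argsort(np.argsort(b.ravel()))
    d = (ra - rb).astype(float)
    k = ra.size
    return 1.0 - 6.0 * np.sum(d ** 2) / (k * (k ** 2 - 1))
```

Because only ranks enter the formula, the coefficient is invariant under any monotonic intensity transform, which is what makes it less sensitive to contrast differences than the Pearson coefficient.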

The translation between two images giving the best correlation can be calculated by techniques using the fast Fourier transform (FFT), denoted by *F*.

In the phase correlation method, the phase correlation between two images [10] is given by

$$P_{pc} = F^{-1}\left\{ \frac{F(P)\,F^{\ast}(P^{s})}{\left| F(P)\,F^{\ast}(P^{s}) \right|} \right\}, \tag{9}$$

where *F*^{*} denotes the complex conjugate of the Fourier transform. The shift *s* is determined by locating the maximum value of *P _{pc}*. The gradient correlation method uses the same formula as Eq. (9) except that the input is the complex image defined by [11]

$$G = \frac{\partial P}{\partial x} + i\frac{\partial P}{\partial y}. \tag{10}$$
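A minimal NumPy sketch of phase correlation for a horizontal shift (our code; the small epsilon guarding the normalization is an implementation detail we added):

```python
import numpy as np

def phase_correlation_shift(img_a, img_b):
    """Return s such that img_a == np.roll(img_b, s, axis=1) for a pure
    horizontal (circular) shift between the two images."""
    Fa = np.fft.fft2(img_a)
    Fb = np.fft.fft2(img_b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12          # keep phase information only
    corr = np.fft.ifft2(cross).real
    _, dx = np.unravel_index(int(np.argmax(corr)), corr.shape)
    if dx > img_a.shape[1] // 2:            # map wrapped index to a signed shift
        dx -= img_a.shape[1]
    return dx
```

For CoR detection one would feed in the 0° projection and the horizontally flipped 180° projection, then convert the detected shift to a CoR exactly as in the correlation methods above.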

#### 2.2.2 Reconstruction based methods

These techniques compare a set of images reconstructed using a range of estimated CoRs via image metrics analogous to automatic focusing techniques. In this report we use two image metrics [5]:

The summation of the absolute values of a reconstructed image *w* is given by

$$Q_{SA} = \sum_{ij}\left| w_{ij} \right|. \tag{11}$$

The second metric, *Q _{SN}*, is defined in [5]. The CoR is given by searching for the global minimum of *Q _{SA}* and *Q _{SN}* over the range of estimated CoRs.
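The metric-scan idea can be sketched generically (our code; `reconstruct` stands for any reconstruction routine, e.g. FBP with a candidate CoR, and is an assumption here):

```python
import numpy as np

def q_sa(w):
    """Summation of the absolute values of a reconstructed slice w."""
    return np.abs(w).sum()

def cor_by_metric(candidates, reconstruct, metric=q_sa):
    """Reconstruct one slice per candidate CoR and keep the metric minimum."""
    scores = [metric(reconstruct(c)) for c in candidates]
    return candidates[int(np.argmin(scores))]
```

The cost of this family of methods is visible in the structure: one full reconstruction per candidate CoR, which is why they are far slower than registration-based or sinogram-based approaches.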

## 3. Results

To test the performance and robustness of our method in comparison with the prior methods, we applied each of the methods to two different experimental data sets. First, we used a very good data set, with good contrast between features, little noise, and a sample that fits entirely within the field of view. Second, we used a more realistic data set from an advanced research project, with low but significant feature contrast, much more noise, and a sample larger than the field of view. Lastly, we investigated the behaviour of our method when sub-pixel alignment is required.

#### 3.1 Performance on a good data set

The sample consisted of several polymer spheres, and the tomographic data were obtained at Beamline I12 of the Diamond Light Source. The sample was rotated through 180° and 1800 projections were collected using 53 keV X-rays, a CdWO_{4} scintillator, and a PCO.4000 camera with an optical system giving a pixel size of 5 µm. The sample was positioned 53 m from the source, and the sample-detector distance was 1 m to take advantage of edge enhancement from refraction [12] for visual inspection (Fig. 2). As shown in Fig. 5, this is a good data set, with the sample completely inside the field of view and little noise. The correct CoR value given by a human scorer from visual inspection of the reconstructed images is 1210 in the 2451 pixel wide image.

The results of the different techniques discussed in the previous section are shown in Fig. 6. The phase correlation and gradient correlation methods have been omitted from the plot as they each return only a single value. To be able to display the various image metrics together, we normalize each metric list by its minimum value. In this test, all techniques give the correct value of 1210.

#### 3.2 Performance on a challenging data set

The second data set is taken from a Magnesium Phosphate cement sample at a source distance of 97 m. 1500 projections over 180° were collected at an X-ray energy of 100 keV, using a LuAG:Ce scintillator and the PCO.4000 camera yielding a pixel size of 2 µm, positioned 1 m behind the sample. This data set exhibits the most challenging features (see Figs. 7(a) and 7(b)): the sample is larger than the field of view; the images suffer from a low signal-to-noise ratio; and there are strong artifacts arising from scintillator defects even after flat-field correction. Visual inspection (Fig. 7(c)) gives a CoR value of 1850 over the 3701 pixel wide sinogram image.

For the following CoR calculations, a Gaussian smoothing filter with a sigma of 3 pixels is used in order to reduce stochastic noise. The CoR values obtained from the phase correlation and gradient correlation techniques are 1812 and 1849.5, respectively. Figure 8 shows the results of the other techniques. Both the Pearson correlation and the Spearman rank correlation technique give the same result as the gradient correlation method. The methods using the image metrics *Q _{SA}* and *Q _{SN}* give CoR values of 1851.5 and 1852.5, respectively. However, our sinogram Fourier metric technique returns a result identical to that of the human scorer.

#### 3.3 Sub-pixel performance

If we now wish to align the discrete images with sub-pixel accuracy, the impact of any scheme for estimating values between pixels should be understood. To study this, we have assessed the performance of our method using two different interpolation techniques, linear and cubic [13–15].

Figure 9 shows the tomographic data of a second sample made from a collection of polymer spheres. The data were collected with 53 keV X-rays, 600 projections, and a CdWO_{4} scintillator optically coupled to the PCO.4000 camera, giving a pixel size of 12 µm. The sample-detector distance was 2.22 m. As indicated in Figs. 9(c) and 9(d), single-pixel accuracy is not sufficient for finding the correct CoR.

The results (Fig. 10) show that cubic interpolation gives a smooth transition of *Q _{SF}* values, while linear interpolation shows fluctuations. However, both interpolators return the same CoR value of 357.8 over the 713 pixel image width, for a search range of [-2; 6] and a step size of 0.2. This CoR value is confirmed by visual inspection of the border between the two half-sinograms (Fig. 9(e)). Although the correct value can be found with a linear interpolation scheme, the smoother approach to the minimum provided by cubic interpolation could lead to a more robust automatic calculation.
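For reference, a fractional shift of a sinogram row by linear interpolation can be written in pure NumPy (our sketch; cubic interpolation would typically come from a library such as SciPy rather than be hand-rolled):

```python
import numpy as np

def shift_row_linear(row, s):
    """Shift a 1D profile by a fractional amount s using linear interpolation.
    Positions shifted in from outside the row repeat the edge samples."""
    x = np.arange(row.size, dtype=float)
    return np.interp(x - s, x, row)
```

Applying such a fractional shift to the flipped half-sinogram before stacking is all that is needed to evaluate *Q _{SF}* on a sub-pixel grid of candidate shifts.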

## 4. Discussion and conclusion

Previously available methods for calculating the CoR have proven not to be sufficiently robust for fully automated tomographic reconstruction. They all work equally well on a good-quality, well-behaved data set, but either return wrong values or fail completely on noisy data sets such as those often obtained by researchers pushing the limits of what is possible in order to make new observations. Image registration and reconstruction metrics are easily dominated by overlapping high-contrast artifacts such as flaws in the detection system or optical effects like refraction. These can produce extremes in the metric at incorrect alignment positions. We therefore base our alignment method on a particular feature of the sinogram in Fourier space: the absence of significant Fourier coefficients outside the double-wedge area described in Eq. (3), which takes into account the spatial trajectory of features expected in a projection tomography sinogram. The variation in this particular feature allows us to find the CoR with high reliability. The sinogram Fourier metric, *Q _{SF}*, exhibits a global minimum at the correct alignment, allowing robust automatic location of this value. For practical automatic use, when the object size *r* is not known to the system, an over-estimate of *r* may be used.

In summary, we propose a reliable technique to accurately calculate the center of rotation in tomography via an automated procedure. Our technique works well under challenging conditions, when the sinogram is impaired by strong artifacts or noise or when the sample is larger than the field of view, situations often encountered in the most challenging and novel tomography experiments. The Mathematica implementation used in this work is available for download from Zenodo (http://dx.doi.org/10.5281/zenodo.10946).

## Acknowledgments

This work was carried out with the support of the Diamond Light Source. We thank James Marrow and David Nowell, University of Oxford, for permission to present their tomographic data of the Magnesium Phosphate cement sample from the EPSRC-funded (Grant number: EP/J019992/1) QUBE project (QUBE: QUasi-Brittle fracture: a 3D experimentally-validated approach). The data were collected under visit number EE9036.

## References and links

**1. **A. C. Kak and M. Slaney, *Principles of Computerized Tomographic Imaging* (IEEE Press, 1988).

**2. **B. Zitová and J. Flusser, “Image registration methods: a survey,” Image Vis. Comput. **21**(11), 977–1000 (2003). [CrossRef]

**3. **M. Yang, H. Gao, X. Li, F. Meng, and D. Wei, “A new method to determine the center of rotation shift in 2D-CT scanning system using image cross correlation,” NDT Int. **46**, 48–54 (2012). [CrossRef]

**4. **A. Brunetti and F. de Carlo, “A robust procedure for determination of center of rotation in tomography,” Proc. SPIE **5535**, 652–659 (2004). [CrossRef]

**5. **T. Donath, F. Beckmann, and A. Schreyer, “Automated determination of the center of rotation in tomography data,” J. Opt. Soc. Am. A **23**(5), 1048–1057 (2006). [CrossRef] [PubMed]

**6. **S. G. Azevedo, D. J. Schneberk, J. P. Fitch, and H. E. Martz, “Calculation of the rotational centers in computed tomography sinograms,” IEEE Trans. Nucl. Sci. **37**(4), 1525–1540 (1990). [CrossRef]

**7. **P. R. Edholm, R. M. Lewitt, and B. Lindholm, “Novel properties of the Fourier decomposition of the sinogram,” Proc. SPIE **671**, 8–18 (1986). [CrossRef]

**8. **K. Pearson, “Contributions to the mathematical theory of evolution, III, Regression, Heredity, and Panmixia,” Philos. Trans. R. Soc. Lond. Ser. A **187**, 253–318 (1896). [CrossRef]

**9. **C. Spearman, “The proof and measurement of association between two things,” Am. J. Psychol. **15**(1), 72–101 (1904). [CrossRef] [PubMed]

**10. **C. D. Kuglin and D. C. Hines, “The phase correlation image alignment method,” in *IEEE 1975 Conference on Cybernetics and Society*, New York (1975), pp. 163–165.

**11. **V. Argyriou and T. Vlachos, “Estimation of sub-pixel motion using gradient cross-correlation,” Electron. Lett. **39**(13), 980–982 (2003). [CrossRef]

**12. **A. Snigirev, I. Snigireva, V. Kohn, S. Kuznetsov, and I. Schelokov, “On the possibilities of x-ray phase contrast microimaging by coherent high-energy synchrotron radiation,” Rev. Sci. Instrum. **66**(12), 5486–5492 (1995). [CrossRef]

**13. **T. M. Lehmann, C. Gönner, and K. Spitzer, “Survey: Interpolation methods in medical image processing,” IEEE Trans. Med. Imaging **18**(11), 1049–1075 (1999). [CrossRef] [PubMed]

**14. **Wolfram Research, Inc., Mathematica, Version 9.0, Champaign, IL, 2013.

**15. **R. C. Gonzalez and R. E. Woods, *Digital Image Processing*, 2nd ed. (Prentice-Hall, 2002).