
Carbon nanotube-based multiple source C-arm CT system: feasibility study with prototype system

Open Access

Abstract

To extend the field of view while reducing dimensions of the C-arm, we propose a carbon nanotube (CNT)-based C-arm computed tomography (CT) system with multiple X-ray sources. A prototype system was developed using three CNT X-ray sources, enabling a feasibility study. Geometry calibration and image reconstruction were performed to improve the quality of image acquisition. However, the geometry of the prototype system led to projection truncation for each source and an overlap region of object area covered by each source in the two-dimensional Radon space, necessitating specific corrective measures. We addressed these problems by implementing truncation correction and applying weighting techniques to the overlap region during the image reconstruction phase. Furthermore, to enable image reconstruction with a scan angle less than 360°, we designed a weighting function to solve data redundancy caused by the short scan angle. The accuracy of the geometry calibration method was evaluated via computer simulations. We also quantified the improvements in reconstructed image quality using mean-squared error and structural similarity. Moreover, detector lag correction was applied to address the afterglow observed in the experimental data obtained from the prototype system. Our evaluation of image quality involved comparing reconstructed images obtained with and without incorporating the geometry calibration results and images with and without lag correction. The outcomes of our simulation study and experimental investigation demonstrated the efficacy of our proposed geometry calibration, image reconstruction method, and lag correction in reducing image artifacts.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Cone beam computed tomography (CBCT) presents an advanced imaging technique that employs a cone-shaped X-ray beam detected by a two-dimensional (2D) detector [1]. This method enables the acquisition of three-dimensional (3D) radiographic images with a single rotation of the X-ray source, providing high-precision volumetric data. Notably, CBCT benefits from an isotropic voxel resolution, enhancing image accuracy [2]. The versatility of CBCT systems is notable, as they can be designed in various gantry sizes, catering to diverse applications.

C-arm CT, an integration of CBCT with a C-arm system, finds essential utilization within the operating room environment due to its dynamic gantry movement capability for targeted body part scanning. Intraoperative C-arm CT primarily serves two crucial functions through real-time monitoring, namely, the provision of surgical guidance and the detection of potential abnormalities that may arise during surgical intervention. This is achieved by performing a 3D-3D registration process, aligning the real-time volume image acquired from C-arm CT with the pre-acquired high-precision image used for surgical planning. This alignment enables the precise tracking of the trajectory of the surgical tool, ensuring the requisite precision for the surgical procedure [3]. Furthermore, C-arm CT significantly enhances patient safety by enabling the early detection of abnormalities, such as hematomas, which may occur during the surgical procedure [4]. Consequently, patient recovery can be expedited, and the incidence of post-surgery complications minimized. These pronounced advantages ultimately culminate in reduced surgical time [5], leading to the recent integration of C-arm CT in robotic surgical operations [6,7].

However, intraoperative C-arm CT is constrained by its dimensions despite its smaller size than the multi-slice detector CT (MDCT). For instance, a C-arm designed for CT imaging has physical dimensions of 220 cm (length) $\times$ 84 cm (width) $\times$ 181 cm (height), reconstructing a 3D volume with dimensions limited to only 16 cm $\times$ 16 cm $\times$ 16 cm [8]. When a C-arm of this size is positioned around a surgical site for real-time monitoring, it can inadvertently obstruct the operator’s direct view of the surgical site. Additionally, when surgery is performed with the C-arm CT on the surgical site, it can restrict the operator’s standing position or posture [9]. Consequently, frequent repositioning of the C-arm during the operation becomes necessary to ensure adequate operating space, leading to undesirable delays in the surgical procedure. One potential approach to address this challenge involves using a smaller-sized C-arm. However, downsizing the C-arm reduces the dimensions of the flat-panel detector, subsequently diminishing the field of view (FOV). This reduction in the FOV can significantly limit the ability of the C-arm to cover a large surgical site. This limitation obstructs the registration process between the volume image acquired by the C-arm CT and the surgical planning image encompassing the surgical site. Ultimately, this hampers the immediate intraoperative assessment [10]. Hence, developing a C-arm CT system that can either extend or maintain the FOV size while simultaneously reducing the overall dimensions of the equipment is necessary.

To overcome the limitations of conventional C-arm CT, we propose a multiple-source C-arm CT system. In the proposed system, multiple carbon-nanotube (CNT) X-ray source modules were strategically positioned on the C-arm in the transverse direction of the detector to extend the in-plane FOV. The CNT X-ray source can be rapidly switched on and off using a digital signal with a minimum pulse width of 500 ns [11]. Therefore, separate data for each source can be obtained with only a single system rotation by operating the sources sequentially. Moreover, this capability further permits on-the-fly dose adjustment during imaging, effectively mitigating potential health concerns arising from continuous exposure to the patient or surgeon [12]. Furthermore, the proposed system can be engineered to have a smaller physical footprint than conventional single-source C-arm CT systems while maintaining the same FOV size. This size reduction is achieved by employing a compact detector with multiple sources. The miniaturization of the system increases mobility and convenience in the operating room and facilitates convergence with a surgical robot platform. Additionally, the scatter and detector cost can be reduced due to the small detector size. The CNT X-ray source also contributes to lowering the cost of the system because it is more economical than the hot cathode X-ray source used in conventional C-arm CT.

Numerous prior studies have explored various CT system designs focusing on the extension of FOV using multiple source-based technology and its respective image reconstruction method [13–19]. Notably, the inverse geometry CT (IGCT) approach is a representative case, sharing similarities with our proposed system by extending the in-plane FOV [20,21]. IGCT aims to acquire a sufficient data set of thick volume to mitigate cone-beam artifacts and achieve a signal-to-noise ratio (SNR) on par with conventional MDCT for diagnostic purposes [20]. Therefore, IGCT employs source modules featuring focal spots positioned at short intervals through an array of collimator holes, ensuring an adequate sampling rate. However, our proposed system focuses on extending FOV while maintaining the key attributes of intraoperative C-arm CT. Therefore, we placed multiple X-ray sources along the in-plane direction, strategically leaving sufficient distances between each source module to significantly extend the FOV. This design aims to augment the coverage area while preserving the characteristics that make intraoperative C-arm CT valuable.

This study presents a prototype system employing three CNT X-ray sources, purpose-built to conduct a feasibility assessment of the proposed system. The image acquisition procedure was designed to enhance image quality. Initially, we executed geometry calibration using a dedicated calibration phantom to rectify geometry misalignment. Subsequently, we reconstructed the image of each X-ray source, incorporating the calibration outcome via the filtered backprojection (FBP) algorithm [22], which was then synthesized to obtain the final image. The image reconstruction process involved performing truncation correction for the projection of each source and applying weighting to consider the overlap region in the 2D Radon space, stemming from the object area covered by each source. Moreover, to facilitate image reconstruction in the short scan mode (scan angle < 360$^{\circ }$), we designed a weighting function to solve data redundancy caused by the short scan mode [23]. A comprehensive simulation study verified the proposed geometry calibration and image reconstruction. In the experimental phase, we identified an afterglow issue using the prototype system resulting from detector lag, leading to artifacts in the reconstructed image. Therefore, we corrected the detector lag by referring to detector lag correction methods for single source CBCT [24–26]. Finally, we verified the effectiveness of our proposed methods by comparing the image quality of the reconstructed image with and without incorporating the calibration results and with and without applying detector lag correction.

2. System description and data acquisition

The feasibility assessment of the proposed multiple-source C-arm CT system was conducted by meticulously designing and fabricating a prototype using three CNT X-ray sources coupled with a 2D flat-panel detector (Fig. 1). Sources 1 and 3 were strategically positioned on either side of the central source (source 2) and placed 11.25 mm closer to the detector than source 2. The interval between sources in the lateral direction of the detector was 100 mm. The detector was configured with a resolution of 2024 $\times$ 2024 pixels, each pixel measuring 0.145 mm $\times$ 0.145 mm, employing 2 $\times$ 2 binning for efficient data handling. The nominal geometry parameters of the prototype system are summarized in Table 1. In this nominal setup, the in-plane small FOV (the FOV with only source 2) had a diameter of 134 mm, while the in-plane extended FOV (the FOV with three sources) achieved a diameter of 240 mm. To obtain the maximum extended FOV with three sources, it is possible to design the prototype system in a way that minimizes the overlap region of the object area covered by each source in Radon space. However, in the prototype system, the interval between sources was set to accommodate an intentional overlap region. This decision was made with the forward-looking goal of accommodating the potential future extension of the FOV, potentially encompassing other body regions, such as the spine, while still targeting the initial FOV size appropriate for imaging the human head.


Fig. 1. Structure of the prototype system.


Table 1. Nominal geometry parameters of the prototype system

A sequential operation of the three sources was employed to capture data for each source, following the order of sources 1, 2, and 3, while the system gantry was rotated by 1/3$^{\circ }$ increments. Consequently, the angular sampling rate of each source amounted to 1 view per degree. The prototype system was initially designed to allow a complete 360$^{\circ }$ gantry rotation. However, we acquired projection data in short scan mode, using a scan angle of 230$^{\circ }$, to verify the functionality of the image acquisition procedure even at scan angles of less than 360$^{\circ }$.
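The interleaved firing scheme described above can be sketched as a simple frame-to-source mapping. The function below is illustrative (not part of the system software) and assumes only the stated 1-2-3 firing order and 1/3$^{\circ }$ gantry step:

```python
def frame_schedule(n_frames, sources=(1, 2, 3), step_deg=1.0 / 3.0):
    """Map each acquisition frame to its firing source and gantry angle.

    Sources fire in the order 1, 2, 3 while the gantry advances by
    `step_deg` between frames, so each source collects one view per
    degree (three frames per 1-degree arc).
    """
    return [(sources[i % len(sources)], i * step_deg) for i in range(n_frames)]

# Views of source 1 over the first 9 frames land 1 degree apart.
views_src1 = [angle for src, angle in frame_schedule(9) if src == 1]
```

With a 230$^{\circ }$ short scan, this schedule yields 690 frames in total, i.e., 230 views per source.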

3. Geometry calibration

For geometry calibration, it is essential to establish calibration parameters that accurately characterize the prototype system. To achieve this, we made three key assumptions. First, we assumed that both the detector and the three sources remain stable during the system gantry rotation, preventing distortion of the geometry. Second, we assumed a constant rotation speed of the system gantry during data acquisition. Third, we relied on precise knowledge of the positions of the ball bearings (BBs) in the calibration phantom used for data acquisition during the calibration process.

We defined a coordinate system to represent the calibration parameters. First, we divided the prototype system into three subsystems, each comprising a single-source CBCT with one source and the detector. One of the three subsystems served as the calibration reference so that the calibration parameters of all subsystems could be expressed in a single common coordinate system. Accordingly, we defined the $xyz$ coordinate system (i.e., the global coordinate system) using the subsystem containing source 2 as the calibration reference. The $x$-axis was defined by the line connecting the iso-center and source 2, and the $z$-axis was aligned with the rotation axis. The origin of the $xyz$ coordinate system was set at the iso-center, lying in the same $z$-plane as source 2.

Under these foundational assumptions, we categorized the calibration parameters into two groups: intrinsic parameters, describing the system, and extrinsic parameters, characterizing the calibration phantom outside the system (Table 2). The intrinsic parameters were quantified as numerical values representing the deviations of the system geometry from the reference point of nominal geometry. Given that each source and the detector exhibited independent geometry misalignment, intrinsic parameters were defined and estimated for each module within the $xyz$ coordinate system. According to the definition of the $xyz$ coordinate system, source 2 was associated with an offset in the $x$ direction, whereas the other two sources had offsets in the $x$, $y$, and $z$ directions. The detector featured offsets in the $x$, $y$, and $z$ directions, an in-plane rotation based on the $x$-axis, and two out-of-plane rotations based on the $y$ = 0 and $z$ = 0 axes on the detector plane of $x = DSD - DSO$. Consequently, a total of 13 intrinsic parameters were defined. Fig. 2 illustrates the intrinsic parameters defined for the sources and detector. Additionally, we defined a set of six extrinsic parameters to specify the position of the calibration phantom. These parameters encompassed the $xyz$ position of the phantom and rotations around the $x$, $y$, and $z$-axes, indicating the respective directions of the phantom.
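As a bookkeeping sketch, the two parameter groups above can be collected into containers that make the 13 + 6 parameter count explicit. The field names below are illustrative labels of our own, not the paper's notation:

```python
from dataclasses import dataclass, fields

@dataclass
class IntrinsicParams:
    """The 13 intrinsic parameters: deviations from the nominal geometry."""
    src2_dx: float = 0.0    # source 2: offset in x only (by coordinate definition)
    src1_dx: float = 0.0    # source 1: offsets in x, y, z
    src1_dy: float = 0.0
    src1_dz: float = 0.0
    src3_dx: float = 0.0    # source 3: offsets in x, y, z
    src3_dy: float = 0.0
    src3_dz: float = 0.0
    det_dx: float = 0.0     # detector offsets in x, y, z
    det_dy: float = 0.0
    det_dz: float = 0.0
    det_inplane: float = 0.0   # in-plane detector rotation
    det_tilt_y: float = 0.0    # out-of-plane rotation about y = 0 on the detector plane
    det_tilt_z: float = 0.0    # out-of-plane rotation about z = 0 on the detector plane

@dataclass
class ExtrinsicParams:
    """The 6 extrinsic parameters: pose of the calibration phantom."""
    tx: float = 0.0
    ty: float = 0.0
    tz: float = 0.0
    rx: float = 0.0
    ry: float = 0.0
    rz: float = 0.0

n_params = len(fields(IntrinsicParams)) + len(fields(ExtrinsicParams))  # 19 in total
```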


Fig. 2. Intrinsic parameters for three sources offset, detector offset, and detector rotation.


Table 2. Intrinsic and extrinsic parameters of the prototype system

We performed geometry calibration to estimate the calibration parameters using the calibration phantom (see Fig. 3). The calibration minimized the pairwise distances between the BB center coordinates obtained from the projection data of each source and the BB center coordinates derived from the calibration parameters. A detailed description of the calibration phantom and calibration algorithm is provided in Supplement 1.
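As a hedged illustration of this objective, the toy 2D example below fits two hypothetical calibration parameters (a phantom shift and a detector offset) by minimizing the distances between measured and modeled BB centers. The distances `DSO` and `DSD` and all BB positions are made-up values, not the prototype's geometry, and the real calibration estimates far more parameters:

```python
import numpy as np

# toy 2-D fan-beam analogue of the calibration; DSO/DSD are assumed values (mm)
DSO, DSD = 400.0, 600.0
bbs = np.array([[0.0, 10.0], [20.0, -15.0], [-30.0, 5.0], [10.0, 25.0]])

def project(bbs, ty, u_off):
    """Project known BB (x, y) positions onto a 1-D detector at x = DSD - DSO,
    with a phantom shift `ty` (extrinsic) and a detector offset `u_off`
    (intrinsic) as the two unknown calibration parameters."""
    mag = DSD / (DSO + bbs[:, 0])          # magnification per BB
    return mag * (bbs[:, 1] + ty) + u_off

# "measured" BB centers, generated here with the true parameters
measured = project(bbs, ty=2.5, u_off=1.7)

# calibration: least-squares fit of (ty, u_off) to the measured centers;
# this toy model is linear in both parameters, so lstsq solves it directly
A = np.column_stack([DSD / (DSO + bbs[:, 0]), np.ones(len(bbs))])
params, *_ = np.linalg.lstsq(A, measured - project(bbs, 0.0, 0.0), rcond=None)
ty_est, u_off_est = params
```

In the actual system the forward model is nonlinear in the rotation parameters, so an iterative optimizer would replace the direct linear solve.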


Fig. 3. (a) Calibration phantom. Example projections of the phantom scanned by (b) source 1, (c) source 2, and (d) source 3.


4. Image reconstruction

We applied the calibration results to the image reconstruction algorithm, facilitating the reconstruction of the images of each source. These reconstructed images were subsequently synthesized to generate the final image. The flowchart outlining the reconstruction algorithm for the prototype system is depicted in Fig. 4, with detailed explanations for each step, as follows.


Fig. 4. Flowchart of the image reconstruction algorithm of the prototype system.


4.1 Re-projection to solve the detector rotation problem

As shown in Fig. 4, the proposed image reconstruction algorithm includes two weight functions. We wanted to define these weight functions independently of the $z$-axis and apply them uniformly across all detector rows. However, the pixels on the same detector row have different $z$ coordinates due to detector rotation corresponding to $\phi _d$ and $\psi _d$ (see Fig. 2), so the defined weight function could not be applied. Therefore, we defined a virtual detector that is not rotated so that the pixels on the same detector row have the same $z$ coordinate. By re-projecting projection data from the real detector to the virtual detector, it is possible to apply the defined weight function uniformly across all detector rows.

To implement this re-projection, we calculated the $xyz$ coordinates of each pixel of the real detector using the calibration results. The pixel coordinates of the virtual detector were defined by applying the detector offset ($x_d$, $y_d$, $z_d$) to the detector on the nominal geometry. Finally, projection data on the real detector was transferred onto the virtual detector through the calculated detector pixel coordinates and 2D bi-linear interpolation.

However, due to the detector rotation, there were pixels near the boundary of the virtual detector, which has the same number of rows and columns as the real detector, whose values could not be estimated through 2D interpolation. Since the filtering operation of the FBP algorithm cannot be applied to projection data containing pixels with undefined values, we defined the virtual detector to be slightly smaller than the real detector so that the values of all pixels could be estimated by 2D interpolation. The larger the rotation of the real detector, the smaller the virtual detector must be. Considering the potential manufacturing errors of the prototype system, the detector rotation did not exceed 1$^{\circ }$. Therefore, we reduced the number of rows and columns in the virtual detector by one to six pixels to accommodate potential detector rotation, which reduced the FOV diameter by up to 1.8 mm. The number of reduced rows and columns was determined manually according to the calibration results.
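A minimal sketch of this re-projection step, assuming the per-pixel real-detector coordinates of every virtual pixel have already been computed from the calibration results. The identity mapping in the demo stands in for an actual detector rotation; only the resampling and boundary-margin logic are shown:

```python
import numpy as np

def bilinear(img, rows, cols):
    """Sample `img` at fractional (row, col) positions by 2-D bilinear
    interpolation (the resampling step of the re-projection)."""
    r0 = np.floor(rows).astype(int)
    c0 = np.floor(cols).astype(int)
    r1 = np.clip(r0 + 1, 0, img.shape[0] - 1)
    c1 = np.clip(c0 + 1, 0, img.shape[1] - 1)
    fr, fc = rows - r0, cols - c0
    top = img[r0, c0] * (1 - fc) + img[r0, c1] * fc
    bot = img[r1, c0] * (1 - fc) + img[r1, c1] * fc
    return top * (1 - fr) + bot * fr

def reproject_to_virtual(proj, rows, cols, margin=3):
    """Transfer projection data from the rotated real detector onto the
    unrotated virtual detector. `rows`/`cols` give, for every virtual
    pixel, its coordinate on the real detector (from the calibration
    result); `margin` shrinks the virtual detector so every remaining
    pixel interpolates from valid real-detector data."""
    virt = bilinear(proj, rows, cols)
    return virt[margin:-margin, margin:-margin]

# demo: identity mapping (no rotation), so the virtual detector is a crop
proj = np.arange(100, dtype=float).reshape(10, 10)
rr, cc = np.indices((10, 10)).astype(float)
virt = reproject_to_virtual(proj, rr, cc, margin=2)
```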

4.2 Weighting for redundant data for short scan

A weight function was developed to address redundant data so that image reconstruction could be conducted in the short scan mode with a scan angle of less than 360$^{\circ }$ [23]. To guide the design of the weight function for the prototype system, we first analyzed data redundancy in the geometry of single-source fan-beam CT.

Projection samples are characterized by two parameters: the detector angle ($\gamma$), the angle between the projection sample and the iso-ray, and the projection angle ($\beta$). In Fig. 5(a), the projection sample of $(\gamma, \beta )$ is shown by the thick solid line. When the fan-beam CT geometry is rotated to the projection angle $\beta +\pi +2\gamma$, the projection sample of $(-\gamma,\beta +\pi +2\gamma )$ is also shown with the same thick solid line. Therefore, two equal projection samples are duplicated. In a single-source fan-beam CT, when data is acquired with a full gantry rotation of $2\pi$, every projection sample has a complementary duplicated pair. The entire set of $2\pi$ projections can thus be divided into two subsets, and image reconstruction is possible using just a single subset. The minimum scan angle required to acquire a single subset is $\pi +2\gamma _{fan}$, where $\gamma _{fan}$ denotes the fan angle. However, data acquired during this short scan also includes additional projection samples beyond this single subset. These additional projection samples lead to data redundancy by duplicating some of the projection samples in the subset. In Fig. 5(b), the two gray areas represent the redundant regions of the sinogram containing duplicated projection samples. For image reconstruction, it is necessary to normalize the projection samples in the two redundant regions. The weight function $w_r(\gamma,\beta )$ designed for this purpose satisfies the following formula:

$$w_r(\gamma,\beta) + w_r(-\gamma,\beta+\pi+2\gamma)=1.$$
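For the single-source fan-beam case, the classic Parker weight is one well-known function satisfying Eq. (1). The sketch below is that textbook weight, included as a reference point; it is not the multi-source weight derived for the prototype system later in this section:

```python
import numpy as np

def parker_weight(gamma, beta, gamma_m):
    """Classic Parker weight for a fan-beam short scan over [0, pi + 2*gamma_m],
    where gamma in [-gamma_m, gamma_m] is the detector angle.

    It normalizes duplicated samples so that
    w(gamma, beta) + w(-gamma, beta + pi + 2*gamma) = 1.
    """
    if beta < 2.0 * (gamma_m - gamma):
        # rising transition at the start of the scan
        return np.sin(np.pi / 4.0 * beta / (gamma_m - gamma)) ** 2
    if beta <= np.pi - 2.0 * gamma:
        # fully weighted (non-redundant) region
        return 1.0
    # falling transition at the end of the scan
    return np.sin(np.pi / 4.0 * (np.pi + 2.0 * gamma_m - beta) / (gamma_m + gamma)) ** 2
```

Numerically checking the complementary pairs confirms that the two duplicated samples always sum to unit weight.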


Fig. 5. (a) Duplicated samples of $(\gamma, \beta )$ and $(-\gamma,\beta +\pi +2\gamma )$ in fan-beam CT. (b) Sinogram representation of redundant data in fan-beam CT.


The prototype system incorporated three sources that were not all located at the same distance from the iso-center. When these distances differ, the positions of the sources do not coincide at the same projection angle. Consequently, projection samples from each source are misaligned, even when they share the same projection angle and detector angle. In other words, an identical projection sample measured by different sources is assigned distinct projection and detector angles.

Our goal was to standardize the projection angle and detector angle of the projection samples from the three sources in reference to source 2. As depicted in Fig. 6(a), we treated the projection samples from sources 1 and 3 as if they originated from an imaginary source located at the same distance from the iso-center as source 2. The projection angle and detector angle of sources 1 and 3 were then redefined based on the position of the imaginary source, which allowed all projection samples in the prototype system to be represented with a consistent projection angle and detector angle. Consequently, the sinogram of the three sources can be represented using the horizontal and vertical axes $\gamma$ and $\beta$, respectively, as depicted in Fig. 6(b). The variable $\gamma _m$ in Fig. 6(b) denotes the smaller absolute value between the minimum detector angle of source 1 and the maximum detector angle of source 3. We note that image reconstruction in the short scan mode is limited to sinograms falling within the detector angle range between $-\gamma _m$ and $\gamma _m$. Therefore, projection samples originating from source 1 or source 3 that extend beyond this specified range were excluded from the reconstruction process. Furthermore, we defined $\phi _1$ and $\phi _2$ as the angles formed by the iso-rays of sources 1 and 3 with the iso-ray of source 2, respectively. Additionally, the variable $\theta$ represents the scan angle of the prototype system.


Fig. 6. (a) Projection sample of source 1 represented as being irradiated from an imaginary source with the same distance from the iso-center as source 2. The angle $\beta '$ between the imaginary source and source 2, centered on the iso-center, denotes the difference between the projection angles of source 1 and source 2. The angle $\gamma '$ formed between the projection sample and the iso-ray of the imaginary source was determined by the detector angle of source 1. (b) Sinogram representation of the prototype system.


Unlike fan-beam CT, whose redundant region has the characteristic linear boundary shown in Fig. 5(b), the sinogram of the prototype system, as illustrated in Fig. 6(b), features a step-like configuration. Consequently, the redundant region in the prototype system does not have linear boundaries, making the design of the weighting function more complex. To represent the step-like redundant data of our prototype system simply, we examined the sinogram under two distinct cases: the first uses samples with $\beta \geq 0$, while the second uses samples with $\beta <\theta -\phi _1-\phi _2$. In each case, the redundant data can be divided by the linear boundaries $\beta =\pi -2\gamma$ and $\beta =-2\gamma -\pi +\theta -\phi _1-\phi _2$, as demonstrated in Fig. 7. After establishing the weight function for each case, we obtained the final weight function by averaging the two weights, which allows the entire acquired sinogram to be utilized during image reconstruction.


Fig. 7. Sinogram representation of redundant data for (a) $\beta \geq 0$ and (b) $\beta <\theta -\phi _1-\phi _2$.


In Fig. 7, the blue-shaded region, including the hatched region, signifies the minimum area essential for image reconstruction, while the yellow-shaded region indicates the redundant data. The hatched portion within the blue-shaded region overlaps the yellow-shaded region. Fig. 7(a) shows that the minimum scan angle necessary for image reconstruction in the prototype system is $\pi +2\gamma _m$, which equates to 216.24$^{\circ }$ based on the nominal geometry. Two weight functions were designed to address the redundant data in $\beta \geq 0$ and $\beta < \theta - \phi _1 - \phi _2$. These weight functions adhere to Eq. (1), ensuring that the sum of the weights within the hatched and yellow-shaded regions is 1. Since the tomographic filtering operation is a derivative operator, the weight functions must be continuous and differentiable in $\gamma$ [27].

We divided the regions with redundant data to design the weight function satisfying these conditions, as shown in Fig. 8. Furthermore, we defined a distinct weight function for each region. The yellow-shaded region in Fig. 7 was divided into Areas I to V, and the hatched region was divided into Areas I′ to V′. Areas I to V contained overlapping data with Areas I′ to V′. We specifically defined Areas II and IV within Areas I to V to facilitate a seamless transition of weights between Areas I and III, as well as between Areas III and V, respectively. The boundaries of Areas II and IV, represented by $l_1(\gamma )$, $l_2(\gamma )$, $l_3(\gamma )$, and $l_4(\gamma )$, are expressed as follows:

$$\begin{aligned} & l_1(\gamma) = \left(\pi-2\gamma_{1,max}-\theta\right) \left(\gamma-\gamma_{1,max}\right)/\epsilon + \pi-2\gamma_{1,max}, \\ & l_2(\gamma) = \left(\pi-2\gamma_{2,max}-\theta+\phi_1\right) \left(\gamma-\gamma_{2,max}\right)/\epsilon + \pi-2\gamma_{2,max}, \\ & l_3(\gamma) = \left(\pi+2\gamma_{3,min}-\theta\right) \left(\gamma-\gamma_{3,min}\right)/\epsilon - 2\gamma_{3,min} -\pi+\theta-\phi_1-\phi_2, \\ & l_4(\gamma) = \left(\pi+2\gamma_{2,min}-\theta+\phi_2\right) \left(\gamma-\gamma_{2,min}\right)/\epsilon - 2\gamma_{2,min} -\pi+\theta-\phi_1-\phi_2. \end{aligned}$$

The maximum detector angles of sources 1 and 2 are represented by $\gamma _{1,max}$ and $\gamma _{2,max}$, respectively, while the minimum detector angles of sources 2 and 3 are denoted as $\gamma _{2,min}$ and $\gamma _{3,min}$, respectively. Additionally, we set the parameter $\epsilon$ to a value of 0.02. The definitions of Areas I to V are summarized in Table 3. For $\beta \geq 0$ and $\beta <\theta -\phi _1-\phi _2$, Areas I′ to V′ can be expressed by replacing $(\gamma,\beta )$ in the formulas presented in Table 3 with $(-\gamma,\beta +\pi +2\gamma )$ and $(-\gamma,\beta -\pi +2\gamma )$, respectively. The weight functions $w_1(\gamma,\beta )$ and $w_2(\gamma,\beta )$ were defined for both cases as follows:

$$w_1(\gamma,\beta) = 3\upsilon_1^2(\gamma,\beta)-2\upsilon_1^3(\gamma,\beta), \quad w_2(\gamma,\beta) = 3\upsilon_2^2(\gamma,\beta)-2\upsilon_2^3(\gamma,\beta),$$
where $\upsilon _1(\gamma,\beta )$ and $\upsilon _2(\gamma,\beta )$ correspond to functions summarized in Table 4. In Table 4, $U(\beta )$ denotes the unit step function for $\beta$. Moreover, $\sigma _1(\gamma,\beta )$ to $\sigma _4(\gamma,\beta )$ represent functions used to smoothly connect the weights on the borders on both sides of Area II or IV; they are defined as follows:
$$\begin{aligned} \sigma_1(\gamma,\beta) &= \frac{1}{2}-\frac{1}{2}\cos{\left(\frac{\pi}{\gamma_{1,max}-l_1^{{-}1}(\beta)}\left(\gamma-l_1^{{-}1}(\beta)\right)\right)}, \\ \sigma_2(\gamma,\beta) &= \frac{1}{2}-\frac{1}{2}\cos{\left(\frac{\pi}{\gamma_{2,max}-l_2^{{-}1}(\beta)}\left(\gamma-l_2^{{-}1}(\beta)\right)\right)}, \\ \sigma_3(\gamma,\beta) &= \frac{1}{2}-\frac{1}{2}\cos{\left(\frac{\pi}{\gamma_{3,min}-l_3^{{-}1}(\beta)}\left(\gamma-l_3^{{-}1}(\beta)\right)\right)}, \\ \sigma_4(\gamma,\beta) &= \frac{1}{2}-\frac{1}{2}\cos{\left(\frac{\pi}{\gamma_{2,min}-l_4^{{-}1}(\beta)}\left(\gamma-l_4^{{-}1}(\beta)\right)\right)}. \end{aligned}$$
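The two building blocks of these weight functions can be sketched directly: the cubic blend $3\upsilon ^2-2\upsilon ^3$ rises from 0 to 1 with zero slope at both ends (hence continuity and differentiability in $\gamma$), and the $\sigma _i$ are half-cosine transitions between two boundaries. The helper names below are ours, not the paper's:

```python
import numpy as np

def smoothstep(v):
    """Cubic blend 3v^2 - 2v^3 used in the weight functions: rises smoothly
    from 0 at v=0 to 1 at v=1, with zero derivative at both endpoints."""
    v = np.clip(v, 0.0, 1.0)
    return 3.0 * v ** 2 - 2.0 * v ** 3

def half_cosine(gamma, g_start, g_end):
    """Transition of the sigma form 1/2 - 1/2 cos(pi (g - g_start)/(g_end - g_start)):
    rises from 0 at g_start to 1 at g_end."""
    return 0.5 - 0.5 * np.cos(np.pi * (gamma - g_start) / (g_end - g_start))
```

A useful property for redundancy normalization is complementarity: `smoothstep(v) + smoothstep(1 - v) == 1`, so paired samples on either side of a transition always sum to unit weight.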


Fig. 8. Partitioning of the redundant data area to define the weight function in the case of (a) $\beta \geq 0$ and (b) $\beta <\theta -\phi _1-\phi _2$.


Table 3. Definition of the divided area with redundant data


Table 4. Definition of $\upsilon _1(\gamma,\beta )$ and $\upsilon _2(\gamma,\beta )$ for each divided area

A weight function $w_r(\gamma,\beta )$ that could use the entire data was obtained by averaging the weight functions $w_1(\gamma,\beta )$ and $w_2(\gamma,\beta )$, as follows:

$$w_r(\gamma,\beta) = \frac{w_1(\gamma,\beta)+w_2(\gamma,\beta)}{2}.$$

We calculated $w_r(\gamma,\beta )$ based on the geometry calibration result by considering $\gamma$ and $\beta$ for the projection sample of each source. Subsequently, we applied $w_r(\gamma,\beta )$ to the projections derived from each source.

4.3 Truncation artifact reduction

We implemented the symmetric mirroring method [28] to mitigate the truncation error stemming from the limited irradiation range of each source in the prototype system. First, we introduced temporary pixels beyond the truncated boundary of the projection. We then symmetrically flipped pixels around the truncated boundary, creating an equal number of mirrored pixels. The mirrored values of the temporary pixels were multiplied by a sine function to feather them gradually to zero, ensuring a smooth transition. Subsequently, we applied ramp filtering to the symmetrically mirrored projection, followed by the removal of the temporary pixels to obtain the filtered projection.

The symmetric mirroring method estimates projection data outside the truncated boundary in the sinogram domain. The number of temporary pixels is related to the estimated extent of the projection data outside the truncated boundary; hence, the optimal number of temporary pixels depends on the object size. However, another factor also affects this choice. The symmetric mirroring method assumes that the data outside the truncated boundary is approximately symmetric to the data inside the boundary. If the data deviates significantly from this assumption, the artifact correction performance may degrade even if the number of temporary pixels is set large to reflect a large object size. We empirically set the number of temporary pixels to 100 for the skull phantom.
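A sketch of the symmetric mirroring extension for one detector row, under the assumptions that mirroring is about the outermost measured pixel and that the sine taper spans the full extension. After ramp filtering, the `n_ext` padded pixels on each side would be discarded:

```python
import numpy as np

def mirror_extend(row, n_ext):
    """Pad one detector row on both sides with mirrored, sine-feathered
    values (symmetric mirroring for truncation-artifact reduction).

    `n_ext` temporary pixels are added per side (set empirically, e.g. 100
    for the skull phantom); after ramp filtering, they are removed with
    filtered[n_ext:-n_ext].
    """
    row = np.asarray(row, float)
    ramp_up = np.sin(np.linspace(0.0, np.pi / 2.0, n_ext))   # 0 -> 1 taper
    # left extension: mirror about row[0], feathered to 0 at the far edge
    left = row[1:n_ext + 1][::-1] * ramp_up
    # right extension: mirror about row[-1], feathered to 0 at the far edge
    right = row[-n_ext - 1:-1][::-1] * ramp_up[::-1]
    return np.concatenate([left, row, right])
```

The taper ensures the extended signal decays smoothly to zero, so the ramp filter does not see an artificial step at the truncated boundary.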

4.4 Weighting for overlap region of object area for each source

The three sources of the prototype system shared specific scan areas, leading to overlap regions within their respective object areas. Object areas represent the reconstructed voxel regions for each source. The object areas are visualized in the sinogram domain, as depicted in Fig. 6(b), where the overlap regions of the object areas correspond to the areas in which the sinograms of each source intersect. To obtain the final image by combining the reconstructed results from all sources, it is essential to perform normalization for the overlap regions using a weighting function specific to each source. Therefore, we applied a weighting function to the filtered projection in order to address the overlap regions. This weighting function was applied during the backprojection process after filtering the truncation-corrected projection. Specifically, we established the weight functions for sources 1, 2, and 3 as $w_{o,1}(\gamma )$, $w_{o,2}(\gamma )$, and $w_{o,3}(\gamma )$, respectively, ensuring that the sum of their weight functions in the overlap region equaled 1 and exhibited continuity with respect to $\gamma$, as follows:

$$w_{o,1}(\gamma) = \begin{cases}\! \begin{aligned}[b] \frac{1}{2}\left(1+\cos{\left(\frac{\pi(\gamma-\gamma_{2,min})}{\gamma_{1,max}-\gamma_{2,min}}\right)}\right), \end{aligned} & (\gamma,\beta) \in S_1 \cap S_2, \\ 1, & \text{otherwise}. \end{cases}$$
$$w_{o,3}(\gamma) = \begin{cases}\! \begin{aligned}[b] \frac{1}{2}\left(1+\cos{\left(\frac{\pi(\gamma-\gamma_{2,max})}{\gamma_{2,max}-\gamma_{3,min}}\right)}\right), \end{aligned} & (\gamma,\beta) \in S_2 \cap S_3, \\ 1, & \text{otherwise}. \end{cases}$$
$$w_{o,2}(\gamma) = \begin{cases} 1-w_{o,1}(\gamma), & (\gamma,\beta) \in S_1 \cap S_2 \cap \overline{S_3}, \\ 1-w_{o,1}(\gamma)-w_{o,3}(\gamma), & (\gamma,\beta) \in S_1 \cap S_2 \cap S_3, \\ 1-w_{o,3}(\gamma), & (\gamma,\beta) \in \overline{S_1} \cap S_2 \cap S_3,\\ 1, & \text{otherwise}. \end{cases}$$
where $S_1$, $S_2$, and $S_3$ represent the sinogram ranges of sources 1, 2, and 3, respectively.
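The three weight functions above can be sketched as follows. The fan-angle boundaries are hypothetical placeholders for the calibrated sinogram ranges; in the real system they come from the geometry calibration.

```python
import numpy as np

# Hypothetical fan-angle boundaries (radians) of the sinogram ranges
# S1 = [g1_min, g1_max], S2 = [g2_min, g2_max], S3 = [g3_min, g3_max].
g1_min, g1_max = -0.30, -0.05
g2_min, g2_max = -0.10,  0.10
g3_min, g3_max =  0.05,  0.30

def w1(g):
    if g2_min <= g <= g1_max:    # overlap S1 ∩ S2: fade source 1 out
        return 0.5 * (1 + np.cos(np.pi * (g - g2_min) / (g1_max - g2_min)))
    return 1.0 if g1_min <= g <= g1_max else 0.0

def w3(g):
    if g3_min <= g <= g2_max:    # overlap S2 ∩ S3: fade source 3 in
        return 0.5 * (1 + np.cos(np.pi * (g - g2_max) / (g2_max - g3_min)))
    return 1.0 if g3_min <= g <= g3_max else 0.0

def w2(g):
    if not (g2_min <= g <= g2_max):
        return 0.0
    out = 1.0
    if g <= g1_max:              # inside S1 ∩ S2
        out -= w1(g)
    if g >= g3_min:              # inside S2 ∩ S3
        out -= w3(g)
    return out
```

At every $\gamma$ inside the union of the three ranges, the weights sum to 1, and each transition is a smooth cosine taper, satisfying the normalization and continuity conditions stated above.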

After applying the weight function to the filtered projection, we performed the backprojection to generate a reconstructed image for each source. The culmination of this process yielded the final reconstructed image, denoted as the calibrated image, obtained by summing the reconstructed images from all three sources. Additionally, we obtained a reconstructed image without incorporating the calibration result, denoted as the uncalibrated image, to evaluate the improvement in image quality resulting from the geometry calibration.

The reconstruction matrix size of the prototype system was set to 900 $\times$ 900 $\times$ 300, with an isotropic voxel size of 0.2684 mm. Although the nominal geometry permitted reconstruction with a voxel size as small as 0.1342 mm, we doubled the voxel size, thereby reducing the reconstruction matrix size, to limit the computational load and memory footprint.
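A quick estimate illustrates why the voxel size was doubled. The float32 storage assumption and the 1800 $\times$ 1800 $\times$ 600 nominal grid (the same physical extent at half the voxel size) are ours, not stated in the paper.

```python
# Memory footprint of a float32 reconstruction volume.
nominal = (1800, 1800, 600)   # assumed grid at the 0.1342 mm nominal voxel size
used    = (900, 900, 300)     # grid actually reconstructed (0.2684 mm voxels)

def gib(shape, bytes_per_voxel=4):
    """Volume size in GiB for the given grid shape (float32 by default)."""
    n = shape[0] * shape[1] * shape[2]
    return n * bytes_per_voxel / 2**30

print(f"nominal grid: {gib(nominal):.2f} GiB, used grid: {gib(used):.2f} GiB")
```

Halving the voxel size in all three dimensions multiplies the memory (and the backprojection work per source) by eight, which motivates the coarser grid on the prototype.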

5. Simulation study for geometry calibration

The simulation study aimed to assess the accuracy of the proposed geometry calibration method by quantifying the calibration error, which denotes the disparity between the estimated and real values of the calibration parameters. The simulation study adopted the same nominal geometry and prior information associated with the calibration phantom to replicate the experimental conditions of the prototype system.

The initial values of intrinsic parameters were set to 0, representing the nominal geometry values. Correspondingly, the initial values of extrinsic parameters were also set to 0. Subsequently, the real value of each parameter was randomly set within the range specified in Table 5. The random range of the intrinsic parameters considered the potential manufacturing errors in the prototype system, while that of the extrinsic parameter encompassed the possible errors inherent in the phantom placement.


Table 5. Random ranges of the calibration parameters used to set their real values

Given that the prototype system comprises three single-source subsystems, we employed a forward-projection simulator designed for single-source CBCT to generate BB projection data using the real parameter values and prior BB position information. The cylinder in the calibration phantom was disregarded due to its negligible effect on measuring the BB center coordinates. The attenuation coefficient of BB was set to 0.3/mm. Moreover, Poisson noise with a blank scan flux of 48,000 was added to the noiseless projection data to replicate experimental conditions. Ten projection data sets were generated, each employing different real parameter values. Subsequently, we conducted the geometry calibration on each projection data set to compute the calibration error of the parameters. Quantitative evaluation in the image domain was performed by comparing the mean-square error (MSE) and structural similarity (SSIM) [29] of the uncalibrated and calibrated images. The reference image for the MSE and SSIM calculations was reconstructed based on the real parameter values.
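The noise model and MSE metric used in this evaluation can be sketched as follows, assuming the stated blank scan flux of 48,000 and a monochromatic Beer-Lambert forward model; SSIM [29] would typically come from a library implementation (e.g., scikit-image) and is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_poisson_noise(line_integrals, blank_flux=48_000):
    """Add Poisson noise to noiseless attenuation line integrals.

    Detected counts are modeled as Poisson(blank_flux * exp(-line_integral));
    the noisy counts are then log-converted back to line integrals.
    """
    counts = rng.poisson(blank_flux * np.exp(-line_integrals))
    counts = np.maximum(counts, 1)             # guard against log(0)
    return -np.log(counts / blank_flux)

def mse(img, ref):
    """Mean-squared error between a reconstructed image and the reference."""
    return np.mean((img - ref) ** 2)
```

The reference image for MSE (and SSIM) is reconstructed from the real parameter values, so lower MSE for the calibrated image directly measures how much the estimated geometry recovers the true one.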

6. Experiment

6.1 Experimental data acquisition and geometry calibration

We acquired experimental data for the calibration phantom using the prototype system. The data were obtained by operating the three CNT X-ray sources with an 80 kVp anode voltage and a 20 mA anode current. Each source followed a sequential pattern of 25 ms on-time and 35 ms off-time. The geometry calibration was then executed using the acquired experimental data. The initial values of the intrinsic parameters were set to 0, mirroring the setup in the simulations, whereas the initial values of the extrinsic parameters were set manually based on an estimate from an uncalibrated image.

We then acquired experimental data using the skull phantom depicted in Fig. 9. The skull phantom closely resembles the actual subject of the proposed system and served as a means to assess the enhancements in image quality resulting from the calibration process. The settings of the CNT X-ray source for acquiring skull phantom data remained identical to those used for the calibration phantom data.

Fig. 9. Skull phantom.

6.2 Detector lag correction

The experimental data (Fig. 10) revealed afterglow in the projections of each source caused by the projections of the other sources. This afterglow primarily resulted from detector lag, a phenomenon arising from charge trapping in the amorphous silicon (a-Si) flat-panel detector employed in the prototype system, as previously reported [26]. Although the afterglow did not significantly impede the measurement of the BB center coordinates, it did lead to artifacts in the reconstructed image. Therefore, the experimental skull phantom data had to be corrected for detector lag before assessing image quality in the image domain. We modeled the detector lag caused by each source and estimated the lag in the projection data based on the modeling results.

Fig. 10. Example projections of skull phantom scanned by (a) source 1, (b) source 2, and (c) source 3.

The detector lag modeling involved measuring the step response for 50 frames at 60 ms intervals after turning the source on for 25 ms and off for 35 ms, following the same data acquisition process as in the experiments. The step response used for lag modeling was designated as the lag data. Since each source has a different incident angle to the detector, we expected the lag characteristics to differ among the sources and therefore obtained lag data separately for each source. As reported in a previous study [26], the lag characteristics are expected to depend on the exposure level. Hence, we measured the lag data with eight anode currents, varying in 2 mA increments from 6 mA to 20 mA while maintaining an 80 kVp anode voltage. The obtained lag data experimentally verified that the detector lag of the prototype system was non-uniform across detector pixels, although the local lag variation in each detector pixel was not pronounced. The measurement noise was minimized by applying 11 $\times$ 11 binning to the lag data. Consequently, we reduced the size of the lag data from 1012 $\times$ 1012 to 92 $\times$ 92, alleviating the computational load of the model fitting process.

We modeled the acquired lag data as an exponential signal, following conventional detector lag correction methods [24–26]. We verified that the detector lag at a 20 mA anode current diminished to less than 0.1% of the exposure intensity after five frames (Fig. 11). Therefore, we assumed that the detector lag after five frames had minimal impact on the afterglow of the projection and used only the first five frames of the lag data for model fitting. The lag data from the eight anode currents were normalized relative to the first frame, representing the exposure intensity. Subsequently, coefficients $a$, $b$, and $c$ were estimated through exponential fitting of the normalized lag data, as shown in the following equation:

$$h(t/\Delta t) = a+be^{{-}ct/\Delta t}, \quad t=0, \Delta t, 2\Delta t, 3\Delta t, 4\Delta t,$$
where $t$ denotes a discrete time variable for an X-ray frame and $\Delta t$ represents the time interval between frames. We then fitted these coefficients with a linear function of exposure intensity because the estimated coefficients for each source varied with the anode current. The primary lag in the experimental data occurred at the exposure intensity of the 20 mA anode current. Hence, we linearly fitted the coefficients $a$, $b$, and $c$ of each source based on the exponential fitting results obtained from the 20 mA lag data and estimated the coefficients $m_1$, $m_2$, and $m_3$, representing the gain with respect to exposure intensity, as follows:
$$\begin{aligned} h_i(p,t/\Delta t) &= a_i(p)+b_i(p)e^{{-}c_i(p)t/\Delta t}, \quad t=0, \Delta t, 2\Delta t, 3\Delta t, 4\Delta t, \\ a_i(p) &= m_1(p-p_{i,{20 {\textrm{mA}}}})+a_{i,{20 {\textrm{mA}}}}, \\ b_i(p) &= m_2(p-p_{i,{20 {\textrm{mA}}}})+b_{i,{20 {\textrm{mA}}}}, \\ c_i(p) &= m_3(p-p_{i,{20 {\textrm{mA}}}})+c_{i,{20 {\textrm{mA}}}}, \end{aligned}$$
where $p$ corresponds to the exposure intensity and $h_i(p,t/\Delta t)$ represents the normalized lag data from source $i$. The value $p_{i,{20 {\textrm{mA}}}}$ represents the exposure intensity with 20 mA anode current for source $i$, whereas $a_{i,{20 {\textrm{mA}}}}$, $b_{i,{20 {\textrm{mA}}}}$, and $c_{i,{20 {\textrm{mA}}}}$ denote the exponential fitting results obtained from the 20 mA lag data of source $i$. Finally, we estimated the ideal output without lag by subtracting the estimated lag within four frames from each raw data frame, which inherently included the lag. Given that sources 1, 2, and 3 operated sequentially, we calculated and estimated the ideal output for each source chronologically, as follows:
$$p_1(k) = \begin{cases}\! \begin{aligned}[b] & q_1(1), \end{aligned} & k=1, \\ \begin{aligned}[b] & q_1(2)-p_3(1)h_3(p_3(1),1)-p_2(1)h_2(p_2(1),2)-p_1(1)h_1(p_1(1),3), \end{aligned} & k=2, \\ \begin{aligned}[b] & q_1(k)-p_3(k-1)h_3(p_3(k-1),1)-p_2(k-1)h_2(p_2(k-1),2)-\\ & p_1(k-1)h_1(p_1(k-1),3)-p_3(k-2)h_3(p_3(k-2),4), \end{aligned} & k\geq3, \\ \end{cases}$$
$$p_2(k) = \begin{cases}\! \begin{aligned}[b] & q_2(1)-p_1(1)h_1(p_1(1),1), \end{aligned} & k=1, \\ \begin{aligned}[b] & q_2(k)-p_1(k)h_1(p_1(k),1)-p_3(k-1)h_3(p_3(k-1),2)-\\ & p_2(k-1)h_2(p_2(k-1),3)-p_1(k-1)h_1(p_1(k-1),4), \end{aligned} & k\geq2, \\ \end{cases}$$
$$p_3(k) = \begin{cases}\! \begin{aligned}[b] & q_3(1)-p_2(1)h_2(p_2(1),1)-p_1(1)h_1(p_1(1),2), \\ \end{aligned} & k=1, \\ \begin{aligned}[b] & q_3(k)-p_2(k)h_2(p_2(k),1)-p_1(k)h_1(p_1(k),2)-\\ & p_3(k-1)h_3(p_3(k-1),3)-p_2(k-1)h_2(p_2(k-1),4), \end{aligned} & k\geq2, \\ \end{cases}$$
where $q_i(k)$ represents the $k$th frame of source $i$, including the lag, and $p_i(k)$ corresponds to the estimated ideal output for the $k$th frame of source $i$. In this way, we performed lag correction on the experimental skull phantom data.
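The three case equations above reduce to a single look-back loop over the interleaved frame sequence, since every case subtracts the lag of the (up to four) immediately preceding frames, using ideal outputs already estimated for those frames. The `toy_h` lag model below is a simplified stand-in for the fitted $h_i(p, t/\Delta t)$.

```python
import numpy as np

def correct_lag(q_frames, h):
    """Sequentially remove detector lag from interleaved multi-source frames.

    q_frames: list of (source_index, frame_array) in chronological order
              (source 1, 2, 3, 1, 2, 3, ...), raw frames including lag.
    h:        callable h(i, p, n) -> normalized lag of source i, n frames
              after an exposure of intensity p (n = 1..4).
    Returns the estimated lag-free frames in the same order.
    """
    p_frames = []
    for k, (src, q) in enumerate(q_frames):
        est = np.asarray(q, dtype=float).copy()
        for n in range(1, 5):               # lag persists for at most four frames
            if k - n >= 0:
                prev_src = q_frames[k - n][0]
                prev_p = p_frames[k - n]    # already-corrected earlier frame
                est -= prev_p * h(prev_src, prev_p, n)
        p_frames.append(est)
    return p_frames

# Toy lag model: 5% initial lag decaying exponentially, identical for all
# sources and independent of intensity (unlike the fitted per-source model).
def toy_h(i, p, n):
    return 0.05 * np.exp(-0.9 * (n - 1))
```

Because the sources fire strictly in the order 1, 2, 3, correcting frames chronologically guarantees that every lag term is evaluated with an already-estimated ideal output, exactly as in the equations above.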

Fig. 11. Normalized lag of detector center pixel measured with 20 mA anode current of source 2.

To verify the operation of the weight functions used to handle the redundant data of the short scan and the overlap regions of the object areas of each source in the proposed reconstruction algorithm, we reconstructed the uncalibrated and calibrated images of the calibration and skull phantoms. Additionally, the performance of the proposed lag correction was verified by comparing reconstructed images of the skull phantom before and after applying the detector lag correction.

7. Results

7.1 Simulation results

We calculated the calibration error of each parameter to quantitatively evaluate the calibration results using simulation data. Table 6 summarizes the mean and standard deviation of parameter errors for calibration performed with simulation data generated from ten parameter sets. The mean and standard deviation of parameter errors, which were close to zero, indicated that the parameters were accurately estimated.


Table 6. Mean and standard deviation values of the calibration errors for each parameter

Regarding the intrinsic parameters, the errors in source offset, detector offset, and detector rotation were all well within the thresholds of 0.02 mm for the offsets and 0.01$^{\circ }$ for the rotation. Intrinsic parameter errors of this level cannot produce displacements exceeding 0.1342 mm, the minimum voxel size of the reconstructed image that can be set based on the nominal geometry. Additionally, the $xyz$ center position errors and rotation errors of the calibration phantom were less than 0.002 mm and 0.001$^{\circ }$, respectively. The extrinsic parameter errors were thus confined to a remarkably low level, approximately 1.5% of the minimum voxel size of 0.1342 mm. These results validate the efficacy of the proposed calibration method, which meets the target precision of one voxel of the reconstructed image.

Fig. 12 presents the reference, uncalibrated, and calibrated images for one of the ten projection data sets used in the quantitative evaluation. The reconstructed results display 50 $\times$ 50 axial and coronal slice images, including the centers of four BBs at different $z$-coordinates. Specifically, these centers correspond to the 36th, 66th, 147th, and 226th slices within the reconstructed image, which comprises 300 slices. The axial and coronal slices of the uncalibrated image showed noticeable artifacts, attributable to the inaccurate combination of the reconstructed results from the three sources caused by geometry misalignment. These artifacts were reduced in the calibrated image. We set 50 $\times$ 50 axial slice images for all BBs as regions of interest (ROIs). Table 7 summarizes the minimum and maximum MSE and SSIM values for the ROIs of the calibrated and uncalibrated images. The calibrated image was nearly identical to the reference image for all ROIs, with lower MSE and higher SSIM values than the uncalibrated image. This result demonstrates the effectiveness of the proposed calibration method for the proposed multiple-source C-arm CT system.

Fig. 12. Reconstructed images of the simulation results. (a) Reference images, (b) uncalibrated image, and (c) calibrated image. The rows represent BBs centered on the 36th, 66th, 147th, and 226th axial slices, respectively. The display window of all image slices is [0 0.3] (unit: 1/mm).


Table 7. MSE and SSIM for ROIs of calibrated and uncalibrated images

7.2 Experiment results

Fig. 13 illustrates the uncalibrated and calibrated images of the calibration phantom. The reconstructed images consisted of 900 $\times$ 900 axial slices, each centered on the same BB. We set as ROIs a 100 $\times$ 100 axial slice including the cylinder edge and 100 $\times$ 100 axial and coronal slices centered on the BB. The uncalibrated image exhibited edge blur and distortion around the cylinder edge and the BB due to geometry misalignment; these artifacts were reduced in the calibrated image. Both the uncalibrated and calibrated images showed a broad shading artifact around the BB, a beam-hardening artifact caused by the metallic BB.

Fig. 13. Uncalibrated images and calibrated images of the calibration phantom. The display window of all image slices is [0 0.1] (unit: 1/mm).

Fig. 14 displays the first and second frames of the 20 mA lag data captured by source 2. We used this data to probe the spatial characteristics of the detector lag. We randomly selected two detector pixels and compared their exposure intensities in the first frame and the subsequent lag in the second frame. Pixel 1 and pixel 2 (yellow and red markers in Fig. 14, respectively) exhibited exposure intensities of 57,818 and 57,990 detector counts in the first frame and lags of 1,413 and 1,106 detector counts in the second frame, respectively. Thus, the lag of pixel 2 was only 78.3% of that of pixel 1, despite nearly identical exposure intensities (pixel 1 received 99.7% of the exposure of pixel 2). We confirmed through flat-field correction that the first frame of the lag data had uniform exposure intensity across detector pixels. However, the detector lag in the second frame was not uniform; it was spatially dependent.

Fig. 14. (a) First frame and (b) second frame of 20 mA lag data captured by source 2. The display window of each frame is [57012 58734] and [672 2880] (unit: detector counts), respectively.

We normalized the lag data obtained from the eight anode currents to the first frame, representing the exposure intensity. Subsequently, we employed exponential fitting (Eq. (9)). Fig. 15 provides a concrete illustration of the exponential fitting results for the normalized lag measured from source 2. The markers in each graph represent the normalized lag at the detector pixel (210, 210), selected from the range of pixels spanning from the top left (1, 1) to the bottom right (1012, 1012) of the detector. The blue curve represents the results of the exponential fitting. We confirmed that the fitting error for the first to fifth frames was less than 0.003. The lag coefficients, denoted as $a$, $b$, and $c$, were derived from the exponential fitting. The linear function fitting for these coefficients concerning the exposure intensity is displayed in Fig. 16. The markers in Fig. 16 represent the exposure intensity and coefficients $a$, $b$, and $c$ for the eight anode currents. The blue line represents the linear fitting results of these coefficients. Coefficients $a$ and $b$ had values of 0 and 1, respectively, and exhibited negligible variation across different exposure intensities. Conversely, coefficient $c$ increased with higher exposure intensity. This observation signifies that the normalized lag decreased faster as the anode current and exposure intensity increased.
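The two-stage fitting described above (a per-pixel exponential fit, then a line anchored at the 20 mA point) can be sketched as follows. The coefficient values are synthetic, and `anchored_slope` is our illustrative helper for estimating the gains $m_1$, $m_2$, $m_3$ under the anchored-line form of Eq. (10).

```python
import numpy as np
from scipy.optimize import curve_fit

def lag_model(n, a, b, c):
    """Normalized lag h(n) = a + b * exp(-c * n) over frames n = 0..4."""
    return a + b * np.exp(-c * n)

def anchored_slope(p, y, p0, y0):
    """Least-squares slope m of y(p) = m * (p - p0) + y0 through the anchor."""
    dp, dy = np.asarray(p, float) - p0, np.asarray(y, float) - y0
    return dp @ dy / (dp @ dp)

# Stage 1: fit a, b, c to (synthetic, noiseless) normalized lag at one current.
n = np.arange(5.0)
coeffs, _ = curve_fit(lag_model, n, lag_model(n, 0.0, 1.0, 2.0), p0=(0.0, 1.0, 1.0))
```

Stage 2 then fits each coefficient across the eight currents with a line forced through the 20 mA fitting result, yielding the exposure-dependent lag model used for correction.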

Fig. 15. Exponential fitting result of normalized lag for 8 anode currents of source 2.

Fig. 16. Linear fitting results of lag coefficients $a$, $b$, and $c$.

We estimated the detector lag of the skull phantom data using the fitted lag coefficients and performed lag correction by removing the estimated lag from the projections. Fig. 17 compares lag-uncorrected and lag-corrected projections. The afterglow caused by the detector lag of the other sources, visible in the lag-uncorrected projections, was reduced in the lag-corrected projections. In the third row of Fig. 17, the difference between the lag-corrected and uncorrected projections represents the afterglow attributed to the estimated lag. Fig. 18 compares the uncalibrated image, lag-uncorrected calibrated image, and lag-corrected calibrated image of the skull phantom. The reconstructed images consisted of 900 $\times$ 900 axial slices. We enlarged 100 $\times$ 100 ROI images including the edge of the phantom and also displayed 580 $\times$ 300 coronal images along the dotted line of the axial image. Artifacts stemming from geometry misalignment were observed in the uncalibrated image but were reduced in the calibrated image; the red arrows in the uncalibrated image indicate edge blur. Additionally, complex shading artifacts appeared in the lag-uncorrected image; the yellow arrows in the uncalibrated and lag-uncorrected calibrated images mark artifacts due to detector lag, which were reduced in the lag-corrected calibrated image. Hence, the proposed calibration method effectively reduced artifacts during image reconstruction, even in experiments using the prototype system.

Fig. 17. Lag-uncorrected projections, lag-corrected projections, and the difference images. The display window of lag-uncorrected and lag-corrected projections is [0 4.2]. The display window of the difference images is [0, 0.76].

Fig. 18. Uncalibrated images, lag-uncorrected calibrated image, and lag-corrected calibrated images of skull phantom. Each row represents the axial, enlarged ROI, and coronal images, respectively. The display window of all image slices is [0 0.1] (unit: 1/mm).

8. Discussion

We have outlined the process of obtaining artifact-corrected images using our prototype multiple-source C-arm CT system, which incorporates three CNT X-ray sources. The geometry calibration was performed using the BB center coordinates of the calibration phantom. The calibration results were used to reconstruct images for each source by applying truncation correction and weighting for both data redundancy and the overlap regions of the object areas of each source. The final image was obtained by summing the reconstructed images of each source. We validated the calibration results through simulation studies and experiments. Because we observed afterglow due to detector lag in the experimental data, we also performed lag correction.

The prototype system used three sources to verify the feasibility of the multiple-source C-arm CT system, adding six calibration parameters compared to a single-source CBCT. We estimated 19 calibration parameters using the BB center coordinates commonly used in conventional single-source CBCT calibration techniques. We considered employing additional sources to achieve a wider FOV with a smaller detector. However, increasing the number of calibration parameters reduces the accuracy of the parameters estimated by the existing calibration method and phantom. Alternative calibration methods may become necessary if the target accuracy of the estimated parameters cannot be attained, even with a phantom containing a greater number of BBs.

A potential approach involves the independent calibration of each subsystem instead of simultaneously calibrating all subsystems. Each subsystem would share six intrinsic parameters for the detector and six extrinsic parameters for the phantom. Consequently, if the calibration of one subsystem can estimate 13 parameters corresponding to a single source CBCT system, the remaining subsystems would solely need to estimate the three intrinsic parameters for the source, excluding the shared parameters. This strategy effectively alleviates the challenge posed by augmented parameter count. However, it relies heavily on the first subsystem to estimate the shared parameters. Therefore, sufficient BBs must be placed within the narrow scan area of the first subsystem to ensure accurate calibration.

Another strategy involves conducting calibration using bead projections, employing a phantom that features larger beads instead of BBs, as demonstrated in a previous study [30]. The bead projection-based method is time-consuming because it requires generating projections for each source during the optimization of parameter estimation. Furthermore, it exhibits heightened sensitivity to projection noise compared to calculating BB center coordinates. However, the bead projection-based method provides a larger dataset than the BB center coordinates, resulting in higher calibration accuracy. Therefore, when employing additional sources, the bead projection-based calibration method can be considered in conjunction with the independent calibration of each subsystem.

We used symmetric mirroring in the image reconstruction algorithm to perform truncation correction. However, we observed that despite this correction, values near the truncated boundary of the filtered projection remained higher than the normal values, as in a previous study [28]. The residual truncation artifact observed in the reconstructed image was reduced by introducing weighting applied to the overlap region of the object area for each source in the filtered projection, not in the final image. Weighting in the final image assigns an equal contribution to each source in the overlap region, whereas weighting in the projection domain allows for a reduced contribution to the voxels closer to the projection boundary in the reconstructed image of each source. Note that the vicinity of the projection boundary, where residual truncation artifacts occur, encompasses the overlap region of the object area for each source. Therefore, the proposed weight function for the overlap region can also aid in reducing residual truncation artifacts.

In the prototype system, a single-frame projection was acquired by activating the CNT X-ray source for 25 ms and deactivating it for 35 ms, resulting in a total acquisition time of 60 ms per frame. In the short scan mode, characterized by a scan angle of 230$^{\circ }$, a total of 230 frames were obtained for each source, resulting in a scan duration of 41.4 seconds. The following two potential approaches can be considered to reduce the scan time and enable real-time image acquisition during surgery: shortening the acquisition time of each frame projection or increasing the interval between projection angles. However, such modifications deteriorate the quality of the reconstructed image. Therefore, in the future, we plan to optimize the projection acquisition time and the interval between projection angles while ensuring that any potential degradation in image quality remains clinically acceptable.
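The stated scan duration follows directly from the frame timing; a quick check, assuming the three sources each acquire 230 frames at the 60 ms frame period:

```python
# Back-of-the-envelope check of the reported 41.4 s short-scan duration.
frame_on_ms, frame_off_ms = 25, 35        # per-frame source timing
frames_per_source = 230                   # short scan over a 230-degree arc
n_sources = 3

frame_ms = frame_on_ms + frame_off_ms     # 60 ms per acquired frame
scan_s = frames_per_source * n_sources * frame_ms / 1000
print(scan_s)  # 41.4
```

The same arithmetic shows the trade-off discussed above: halving either the frame period or the number of projection angles halves the scan time, at the cost of photon statistics or angular sampling.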

Although the prototype system demonstrated less detector lag than the single-source CBCT system used in the previous study [26], we observed artifacts in the reconstructed images attributable to detector lag. This disparity arises due to the differential impact of detector lag between the proposed multiple-source C-arm system and the conventional single-source CBCT system. Afterglow in projection is primarily attributed to the blurring of the lag from the previous fully illuminated frames outside the object into frames with object shadows [26]. In single-source CBCT, temporally adjacent frames are consecutive, resulting in insignificant changes in intensity. Consequently, the detector lag does not adequately blur within the object shadow of consecutive frames to produce discernible afterglow. However, the operational sequence of the sources in the proposed system introduces distinctive dynamics. The temporal adjacency of frames is disrupted due to the sequential operation of sources, causing the intensity variations in temporally adjacent frames. Hence, afterglow in projection is highly dependent on the detector lag generated in adjacent frames.

In this study, we utilized lag data from eight different exposure intensities for lag correction. In consonance with established lag correction methods for the single source CBCT, we employed exponential fitting to estimate the lag coefficient. We simplified the representation of the lag coefficient into a linear model concerning exposure intensity to capture the exposure-dependent lag characteristics. We successfully obtained images with reduced artifacts stemming from detector lag by subtracting the estimated lag from the projection data acquired using the prototype system. However, we expect to achieve an even more accurate equation for the detector lag by expanding the utilization of lag data to encompass a wider range of exposure intensities and employing a polynomial model instead of a linear model to fit lag coefficients. This enhancement in modeling the relationship between lag and exposure intensity will further improve the performance of lag correction.

The simulation studies were used to verify that the proposed geometry calibration method attained a reconstruction accuracy of less than one voxel. However, the experimental results revealed that the calibrated image of the calibration phantom did not exhibit a perfectly spherical shape for the BB. Hence, we plan to track the position of each module in the prototype system using infrared cameras to verify if this discrepancy is caused by geometry distortion during gantry rotation. If any tilting or vibration of the sources and detector module is detected, we will reinforce the joints between each module and the gantry to prevent such motion. Additionally, if the C-arm experiences slight bending due to the weight of the sources and detector resulting from insufficient rigidity of the C-arm, we will reinforce the C-arm to enhance its rigidity.

Our primary objective was to enhance the FOV compared to traditional C-arm CT systems, a goal we addressed in the design and evaluation of the prototype system. As a reference, we used a mobile C-arm CT with detector dimensions of 295 mm $\times$ 295 mm and an effective detector pixel pitch of 0.302 mm (2 $\times$ 2 binning), which closely resembled our prototype system’s detector specifications [31]. Notably, this reference system featured a larger gantry with a source-to-detector distance of 1164 mm, offering a FOV of 160 mm in diameter and a voxel size of 0.313 mm. In contrast, our prototype system delivered a FOV of 240 mm in diameter with a voxel size of 0.2684 mm, surpassing the reference system in both FOV size and spatial resolution. However, since the FOV depends on various factors such as system size, geometry, and detector specifications, it alone may not provide a comprehensive evaluation of our system’s performance relative to traditional systems. We are fully aware of the significance of other imaging capabilities, including the noise power spectrum, modulation transfer function, and radiation dose, which deserve in-depth evaluation. We plan to thoroughly explore these aspects in our future work to gain a deeper understanding of our system’s overall performance and its applicability in various imaging scenarios.

9. Conclusion

We propose a prototype CNT-based multiple-source C-arm CT system. We applied the geometry calibration method, image reconstruction algorithm, and detector lag correction method required for the prototype system. The results confirm that image artifacts caused by geometry misalignment and detector lag were effectively reduced. We manufactured the prototype system with a 24 cm diameter FOV for brain surgery imaging. The system can be applied to other human body surgeries, such as spine surgery, by increasing the distance between sources or using more sources to enlarge the FOV. In future work, we plan to optimize the operation speed of the CNT X-ray sources and the interval between projection angles to reduce the scan time.

Funding

National Research Foundation of Korea (RS-2022-00144336, RS-2023-00240135).

Acknowledgments

This work was supported in part by the National Research Foundation of Korea (NRF) Grant funded by the Korean Government through the Ministry of Science and ICT (MSIT) under Grant RS-2022-00144336, and RS-2023-00240135, and Institute of Information & Communications Technology Planning & Evaluation (IITP) Grant funded by MSIT (No. 2020-0-01361, Artificial Intelligence Graduate School Program (Yonsei University)).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. G. L. Machado, “CBCT imaging – a boon to orthodontics,” Saudi Dent. J. 27(1), 12–21 (2015).

2. W. C. Scarfe, A. G. Farman, P. Sukovic, et al., “Clinical applications of cone-beam computed tomography in dental practice,” J. Can. Dent. Assoc. 72(1), 75–80 (2006).

3. S. Schafer, S. Nithiananthan, D. Mirota, et al., “Mobile C-arm cone-beam CT for guidance of spine surgery: image quality, radiation dose, and integration with interventional guidance,” Med. Phys. 38(8), 4563–4574 (2011).

4. H. Arakawa, M. Marks, H. Do, et al., “Experimental study of intracranial hematoma detection with flat panel detector C-arm CT,” Am. J. Neuroradiol. 29(4), 766–772 (2008).

5. J. S. Hott, V. R. Deshmukh, J. D. Klopfenstein, et al., “Intraoperative Iso-C C-arm navigation in craniospinal surgery: the first 60 cases,” Neurosurgery 54(5), 1131–1137 (2004).

6. J. Tonetti, M. Boudissa, G. Kerschbaumer, et al., “Role of 3D intraoperative imaging in orthopedic and trauma surgery,” Orthop. Traumatol. Surg. Res. 106(1S), S19–S25 (2020).

7. W. P. Liu, Y. Otake, M. Azizian, et al., “2D–3D radiograph to cone-beam computed tomography (CBCT) registration for C-arm image-guided robotic surgery,” Int. J. Comput. Assisted Radiol. Surg. 10(8), 1239–1252 (2015).

8. N. Foster, C. Shaffrey, A. Buchholz, et al., “Image quality and dose comparison of 3 mobile intraoperative three-dimensional imaging systems in spine surgery,” World Neurosurg. 160, e142–e151 (2022).

9. H. Kageyama, S. Yoshimura, K. Uchida, et al., “Advantages and disadvantages of multi-axis intraoperative angiography unit for percutaneous pedicle screw placement in the lumbar spine,” Neurol. Med. Chir. (Tokyo) 57(9), 481–488 (2017).

10. T. Reynolds, S. Hatamikia, Y. Q. Ma, et al., “Extended longitudinal and lateral 3D imaging with a continuous dual-isocenter CBCT scan,” Med. Phys. 50(4), 2372–2379 (2023).

11. J.-W. Lee, J.-W. Jeong, S. Park, et al., “Adaptively variable frame-rate fluoroscopy with an ultra-fast digital X-ray tube based on carbon nanotube field electron emitters,” Proc. SPIE 11312, 113123F (2020).

12. D. Mendelsohn, J. Strelzow, N. Dea, et al., “Patient and surgeon radiation exposure during spinal instrumentation using intraoperative computed tomography-based navigation,” The Spine J. 16(3), 343–354 (2016).

13. V. B. Neculaes, P. M. Edic, M. Frontera, et al., “Multisource X-ray and CT: lessons learned and future outlook,” IEEE Access 2, 1568–1585 (2014).

14. H. Yu, L. Li, C. Tan, et al., “X-ray source translation based computed tomography (STCT),” Opt. Express 29(13), 19743–19758 (2021).

15. Z. Wang, H. Yu, Y. Huang, et al., “BPF algorithms for multiple source-translation computed tomography reconstruction,” arXiv, arXiv:2305.18878 (2023).

16. Z. Wang, J. Cui, Y. Liu, et al., “SHCT: segmented helical computed tomography based on multiple slant source-translation,” Opt. Express 31(17), 27223–27238 (2023).

17. Z. Wang, J. Cui, X. Bian, et al., “Single-slice rebinning reconstruction method for segmented helical computed tomography,” Opt. Express 31(19), 30514–30528 (2023).

18. Z. Wang, Z. Deng, F. Liu, et al., “OSNet & MNetO: two types of general reconstruction architectures for linear computed tomography in multi-scenarios,” arXiv, arXiv:2309.11858 (2023).

19. Z. Wang, Y. Liu, S. Wang, et al., “Analytical reconstructions of full-scan multiple source-translation computed tomography under large field of views,” J. X-Ray Sci. Technol., 1–18 (2023).

20. T. G. Schmidt, R. Fahrig, N. J. Pelc, et al., “An inverse-geometry volumetric CT system with a large-area scanned source: a feasibility study,” Med. Phys. 31(9), 2623–2627 (2004).

21. J. Baek, B. De Man, J. Uribe, et al., “A multi-source inverse-geometry CT system: initial results with an 8 spot X-ray source array,” Phys. Med. Biol. 59(5), 1189–1202 (2014).

22. L. A. Feldkamp, L. C. Davis, and J. W. Kress, “Practical cone-beam algorithm,” J. Opt. Soc. Am. A 1(6), 612–619 (1984).

23. D. L. Parker, “Optimal short scan convolution reconstruction for fan beam CT,” Med. Phys. 9(2), 254–257 (1982).

24. J. Hsieh, O. Gurmen, and K. F. King, “Recursive correction algorithm for detector decay characteristics in CT,” in Medical Imaging 2000: Physics of Medical Imaging, vol. 3977 (SPIE, 2000), pp. 298–305.

25. N. Mail, D. Moseley, J. Siewerdsen, et al., “An empirical method for lag correction in cone-beam CT,” Med. Phys. 35(11), 5187–5196 (2008).

26. J. Starman, J. Star-Lack, G. Virshup, et al., “Investigation into the optimal linear time-invariant lag correction for radar artifact removal,” Med. Phys. 38(5), 2398–2411 (2011).

27. J. Hsieh, Computed Tomography: Principles, Design, Artifacts, and Recent Advances (SPIE Press, 2003).

28. B. Ohnesorge, T. Flohr, K. Schwarz, et al., “Efficient correction for CT image artifacts caused by objects extending outside the scan field of view,” Med. Phys. 27(1), 39–46 (2000).

29. Z. Wang, A. C. Bovik, H. R. Sheikh, et al., “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004).

30. S. Moon, S. Choi, H. Jang, et al., “Geometry calibration and image reconstruction for carbon-nanotube-based multisource and multidetector CT,” Phys. Med. Biol. 66(16), 165005 (2021).

31. N. Sheth, T. De Silva, A. Uneri, et al., “A mobile isocentric C-arm for intraoperative cone-beam CT: technical assessment of dose and 3D imaging performance,” Med. Phys. 47, 958–974 (2020).

Supplementary Material (1)

Supplement 1: Detailed description of the calibration phantom and calibration algorithm


Figures (18)

Fig. 1. Structure of the prototype system.

Fig. 2. Intrinsic parameters for three sources offset, detector offset, and detector rotation.

Fig. 3. (a) Calibration phantom. Example projections of the phantom scanned by (b) source 1, (c) source 2, and (d) source 3.

Fig. 4. Flowchart of the image reconstruction algorithm of the prototype system.

Fig. 5. (a) Duplicated samples of $(\gamma, \beta)$ and $(-\gamma,\beta+\pi+2\gamma)$ in fan-beam CT. (b) Sinogram representation of redundant data in fan-beam CT.

Fig. 6. (a) Projection sample of source 1 represented as being irradiated from an imaginary source with the same distance from the iso-center as source 2. The angle $\beta'$ between the imaginary source and source 2, centered on the iso-center, denotes the difference between the projection angles of source 1 and source 2. The angle $\gamma'$ formed between the projection sample and the iso-ray of the imaginary source was determined by the detector angle of source 1. (b) Sinogram representation of the prototype system.

Fig. 7. Sinogram representation of redundant data for (a) $\beta \geq 0$ and (b) $\beta < \theta-\phi_1-\phi_2$.

Fig. 8. Partitioning of the redundant data area to define the weight function in the case of (a) $\beta \geq 0$ and (b) $\beta < \theta-\phi_1-\phi_2$.

Fig. 9. Skull phantom.

Fig. 10. Example projections of skull phantom scanned by (a) source 1, (b) source 2, and (c) source 3.

Fig. 11. Normalized lag of detector center pixel measured with 20 mA anode current of source 2.

Fig. 12. Reconstructed images of the simulation results. (a) Reference images, (b) uncalibrated image, and (c) calibrated image. The rows represent BBs centered on the 36th, 66th, 147th, and 226th axial slices, respectively. The display window of all image slices is [0 0.3] (unit: 1/mm).

Fig. 13. Uncalibrated images and calibrated images of the calibration phantom. The display window of all image slices is [0 0.1] (unit: 1/mm).

Fig. 14. (a) First frame and (b) second frame of 20 mA lag data captured by source 2. The display window of each frame is [57012 58734] and [672 2880] (unit: detector counts), respectively.

Fig. 15. Exponential fitting result of normalized lag for 8 anode currents of source 2.

Fig. 16. Linear fitting results of lag coefficients $a$, $b$, and $c$.

Fig. 17. Lag-uncorrected projections, lag-corrected projections, and the difference images. The display window of lag-uncorrected and lag-corrected projections is [0 4.2]. The display window of the difference images is [0 0.76].

Fig. 18. Uncalibrated images, lag-uncorrected calibrated image, and lag-corrected calibrated images of skull phantom. Each row represents the axial, enlarged ROI, and coronal images, respectively. The display window of all image slices is [0 0.1] (unit: 1/mm).

Tables (7)

Table 1. Nominal geometry parameters of the prototype system

Table 2. Intrinsic and extrinsic parameters of the prototype system

Table 3. Definition of the divided area with redundant data

Table 4. Definition of $\upsilon_1(\gamma,\beta)$ and $\upsilon_2(\gamma,\beta)$ for each divided area

Table 5. Random range of calibration parameters used to set its real values

Table 6. Mean and standard deviation values of the calibration errors for each parameter

Table 7. MSE and SSIM for ROIs of calibrated and uncalibrated images

Equations (13)


$$w_r(\gamma,\beta) + w_r(-\gamma,\beta+\pi+2\gamma) = 1.$$
$$\begin{aligned}
l_1(\gamma) &= (\pi - 2\gamma_{1,\max} - \theta)(\gamma-\gamma_{1,\max})/\epsilon + \pi - 2\gamma_{1,\max},\\
l_2(\gamma) &= (\pi - 2\gamma_{2,\max} - \theta + \phi_1)(\gamma-\gamma_{2,\max})/\epsilon + \pi - 2\gamma_{2,\max},\\
l_3(\gamma) &= (\pi + 2\gamma_{3,\min} - \theta)(\gamma-\gamma_{3,\min})/\epsilon - 2\gamma_{3,\min} - \pi + \theta - \phi_1 - \phi_2,\\
l_4(\gamma) &= (\pi + 2\gamma_{2,\min} - \theta + \phi_2)(\gamma-\gamma_{2,\min})/\epsilon - 2\gamma_{2,\min} - \pi + \theta - \phi_1 - \phi_2.
\end{aligned}$$
$$w_1(\gamma,\beta) = 3\upsilon_1^2(\gamma,\beta) - 2\upsilon_1^3(\gamma,\beta), \quad w_2(\gamma,\beta) = 3\upsilon_2^2(\gamma,\beta) - 2\upsilon_2^3(\gamma,\beta),$$
$$\begin{aligned}
\sigma_1(\gamma,\beta) &= \frac{1}{2} - \frac{1}{2}\cos\!\left(\frac{\pi}{\gamma_{1,\max} - l_1^{-1}(\beta)}\left(\gamma - l_1^{-1}(\beta)\right)\right),\\
\sigma_2(\gamma,\beta) &= \frac{1}{2} - \frac{1}{2}\cos\!\left(\frac{\pi}{\gamma_{2,\max} - l_2^{-1}(\beta)}\left(\gamma - l_2^{-1}(\beta)\right)\right),\\
\sigma_3(\gamma,\beta) &= \frac{1}{2} - \frac{1}{2}\cos\!\left(\frac{\pi}{\gamma_{3,\min} - l_3^{-1}(\beta)}\left(\gamma - l_3^{-1}(\beta)\right)\right),\\
\sigma_4(\gamma,\beta) &= \frac{1}{2} - \frac{1}{2}\cos\!\left(\frac{\pi}{\gamma_{4,\min} - l_4^{-1}(\beta)}\left(\gamma - l_4^{-1}(\beta)\right)\right).
\end{aligned}$$
$$w_r(\gamma,\beta) = \frac{w_1(\gamma,\beta) + w_2(\gamma,\beta)}{2}.$$
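To make the cubic smoothstep construction of the redundancy weight concrete, the two pieces can be sketched numerically as below. This is an illustrative sketch only: the function names are ours, and the normalized arguments $\upsilon_1$ and $\upsilon_2$ are assumed to have been precomputed from the partitioning in Table 4.

```python
import numpy as np

def smoothstep(v):
    """Cubic smoothstep 3v^2 - 2v^3, clipped to [0, 1].

    Maps a normalized distance v to a weight rising smoothly from 0
    to 1 with zero slope at both ends, as in the w_1, w_2 equations.
    """
    v = np.clip(v, 0.0, 1.0)
    return 3.0 * v**2 - 2.0 * v**3

def redundancy_weight(v1, v2):
    """Average of the two smoothstep weights, w_r = (w_1 + w_2) / 2."""
    return 0.5 * (smoothstep(v1) + smoothstep(v2))
```

Because the cubic smoothstep satisfies $f(\upsilon) + f(1-\upsilon) = 1$, the averaged weight meets the normalization condition on conjugate rays whenever their $\upsilon$ values are complementary.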
$$w_{o,1}(\gamma) = \begin{cases}
\dfrac{1}{2}\left(1 + \cos\!\left(\dfrac{\pi(\gamma - \gamma_{2,\min})}{\gamma_{1,\max} - \gamma_{2,\min}}\right)\right), & (\gamma,\beta) \in S_1 \cap S_2,\\
1, & \text{otherwise}.
\end{cases}$$
$$w_{o,3}(\gamma) = \begin{cases}
\dfrac{1}{2}\left(1 + \cos\!\left(\dfrac{\pi(\gamma - \gamma_{2,\max})}{\gamma_{2,\max} - \gamma_{3,\min}}\right)\right), & (\gamma,\beta) \in S_2 \cap S_3,\\
1, & \text{otherwise}.
\end{cases}$$
$$w_{o,2}(\gamma) = \begin{cases}
1 - w_{o,1}(\gamma), & (\gamma,\beta) \in S_1 \cap S_2 \cap \overline{S_3},\\
1 - w_{o,1}(\gamma) - w_{o,3}(\gamma), & (\gamma,\beta) \in S_1 \cap S_2 \cap S_3,\\
1 - w_{o,3}(\gamma), & (\gamma,\beta) \in \overline{S_1} \cap S_2 \cap S_3,\\
1, & \text{otherwise}.
\end{cases}$$
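The half-cosine tapers used for the overlap weights can be written as one helper that falls from 1 to 0 across an overlap interval. This is a minimal sketch with hypothetical names: `g_lo` and `g_hi` stand for the overlap boundaries such as $\gamma_{2,\min}$ and $\gamma_{1,\max}$.

```python
import numpy as np

def feather(gamma, g_lo, g_hi):
    """Half-cosine taper: 1 at g_lo, 0 at g_hi.

    Outside [g_lo, g_hi] the normalized coordinate is clipped, so
    the weight saturates at 1 before the overlap and 0 past it.
    """
    u = np.clip((gamma - g_lo) / (g_hi - g_lo), 0.0, 1.0)
    return 0.5 * (1.0 + np.cos(np.pi * u))
```

Inside the overlap, the complementary weight `1 - feather(...)` is handed to the neighboring source, mirroring the case structure of the $w_{o,2}$ definition, so the per-ray source weights always sum to 1.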
$$h(t/\Delta t) = a + b\, e^{-c\, t/\Delta t}, \quad t = 0, \Delta t, 2\Delta t, 3\Delta t, 4\Delta t,$$
$$\begin{aligned}
h_i(p, t/\Delta t) &= a_i(p) + b_i(p)\, e^{-c_i(p)\, t/\Delta t}, \quad t = 0, \Delta t, 2\Delta t, 3\Delta t, 4\Delta t,\\
a_i(p) &= m_1(p - p_{i,20\,\mathrm{mA}}) + a_{i,20\,\mathrm{mA}},\\
b_i(p) &= m_2(p - p_{i,20\,\mathrm{mA}}) + b_{i,20\,\mathrm{mA}},\\
c_i(p) &= m_3(p - p_{i,20\,\mathrm{mA}}) + c_{i,20\,\mathrm{mA}},
\end{aligned}$$
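Fitting the three-parameter exponential lag model $h = a + b\,e^{-c\,t/\Delta t}$ to a measured falling-edge lag curve is a small nonlinear least-squares problem; one way to sketch it uses `scipy.optimize.curve_fit`. This is illustrative only: the sample values below are synthetic, not the measured lag data, and all variable names are ours.

```python
import numpy as np
from scipy.optimize import curve_fit

def lag_model(n, a, b, c):
    """Normalized lag h(n) = a + b * exp(-c * n), with n = t / dt."""
    return a + b * np.exp(-c * n)

# Frames n = 0..4 after the exposure and synthetic normalized-lag
# samples (hypothetical values chosen only to exercise the fit).
n = np.arange(5, dtype=float)
h_meas = lag_model(n, 0.002, 0.03, 1.1)

# Recover (a, b, c) from the samples; p0 is a rough initial guess.
(a_fit, b_fit, c_fit), _ = curve_fit(lag_model, n, h_meas, p0=(0.0, 0.05, 1.0))
```

In the prototype, one such coefficient triplet would be fitted per anode current and then interpolated linearly in the pixel value $p$, as in the $a_i(p), b_i(p), c_i(p)$ relations above.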
$$p_1(k) = \begin{cases}
q_1(1), & k = 1,\\
q_1(2) - p_3(1)h_3(p_3(1),1) - p_2(1)h_2(p_2(1),2) - p_1(1)h_1(p_1(1),3), & k = 2,\\
q_1(k) - p_3(k-1)h_3(p_3(k-1),1) - p_2(k-1)h_2(p_2(k-1),2)\\
\quad - p_1(k-1)h_1(p_1(k-1),3) - p_3(k-2)h_3(p_3(k-2),4), & k \geq 3,
\end{cases}$$
$$p_2(k) = \begin{cases}
q_2(1) - p_1(1)h_1(p_1(1),1), & k = 1,\\
q_2(k) - p_1(k)h_1(p_1(k),1) - p_3(k-1)h_3(p_3(k-1),2)\\
\quad - p_2(k-1)h_2(p_2(k-1),3) - p_1(k-1)h_1(p_1(k-1),4), & k \geq 2,
\end{cases}$$
$$p_3(k) = \begin{cases}
q_3(1) - p_2(1)h_2(p_2(1),1) - p_1(1)h_1(p_1(1),2), & k = 1,\\
q_3(k) - p_2(k)h_2(p_2(k),1) - p_1(k)h_1(p_1(k),2)\\
\quad - p_3(k-1)h_3(p_3(k-1),3) - p_2(k-1)h_2(p_2(k-1),4), & k \geq 2,
\end{cases}$$
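The recursion above interleaves the three sources; its structure is easier to see in a simplified single-source analogue, where each corrected frame subtracts the residual signal left by up to four previous frames. The sketch below is written under that simplifying assumption: `h` is any callable implementing the exponential lag model, and all names are ours, not the authors' code.

```python
import numpy as np

def correct_lag(q, h, depth=4):
    """Simplified single-source analogue of the three-source recursion.

    q     : measured frames q(k), k = 0..K-1
    h     : h(p, j), fraction of a frame of intensity p remaining
            j frames after its exposure
    depth : how many previous frames contribute residual signal
    """
    q = np.asarray(q, dtype=float)
    p = np.zeros_like(q)
    for k in range(len(q)):
        est = q[k]
        # Subtract the lag carried over from already-corrected frames.
        for j in range(1, min(depth, k) + 1):
            est -= p[k - j] * h(p[k - j], j)
        p[k] = est
    return p
```

As in the full three-source scheme, corrected (rather than measured) frame values feed the lag terms, so the subtraction does not double-count residual signal that was itself lag.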