Optica Publishing Group

X-ray Compton backscattering imaging via structured light

Open Access

Abstract

Compton backscattering imaging (CBI) is a technique that uses ionizing radiation to detect the presence of low atomic number materials on a given target. Unlike transmission x-ray imaging, the source and sensor are located on the same side, such that the photons of interest are scattered back after the radiation impinges on the body. Rather than scanning the target pixel by pixel with a pencil-beam, this paper proposes the use of cone-beam coded illumination to create the compressive x-ray Compton backscattering imager (CXBI). The concept was developed and tested using Monte Carlo simulations through the Geant4 Application for Tomographic Emission (GATE), with conditions close to those encountered in experiments, and subsequently a test-bed implementation was mounted in the laboratory. The CXBI was evaluated under several conditions and with different materials as targets. Reconstructions were run using denoising-prior-based inverse problem algorithms. Finally, a preliminary dose analysis was performed to evaluate the viability of CXBI for human scanning.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Compton backscattering imaging (CBI) is a technique that uses ionizing radiation to obtain information about the molecular composition of an object. It captures the scattered photons after the radiation collides with the target; it is therefore considered a single-sided technique, since both the source and the detector are located on the same side of the setup. CBI has been used to identify organic materials commonly encountered in explosives manufacturing, drugs, and contraband in general. This concept has been successfully applied to, for example, under-vehicle and airport luggage inspection, as well as the screening of buildings, among others [1–4]. Moreover, since the 9/11 terrorist attacks, CBI scanners have been applied to passenger screening at airports and are used on a daily basis [5]. CBI can be broadly classified into two groups: pencil-beam and full-field illumination based [6]. Pencil-beam techniques rely on a highly collimated x-ray source impinging on the body and one or two scintillators that capture the scattered photons; in this architecture, the resolution limits are constrained by the physical dimensions of the beam, while the acquisition time can be as fast as $15\,\mu s$ per scanned pixel [5,7]. Most pencil-beam scanners follow the flying-spot scheme [8,9], which combines a fan-beam collimator with a rotating chopper wheel. To capture a full scene, either the body under inspection must move horizontally, or the source and detector must translate together in one direction while a set of measurements is taken. Figure 1-left shows the operating principle of pencil-beam CBI scanning. In full-field illumination techniques, the body is fully bathed by an x-ray cone-beam and the pixels are registered in parallel. Naturally, recovering the desired information is more challenging in this scenario.
The captured photons are focused onto a two-dimensional sensor using either a coded aperture [10,11], x-ray optics (lobster eye [12]), or spatially selective filters [13].


Fig. 1. Left: Flying spot architecture. An x-ray cone-beam source passes through a fan-beam collimator and then through a chopper wheel, which acts as a rotating pencil-beam collimator; this allows the body to be vertically scanned. In order to fully scan the body, it must translate horizontally. Right: The proposed compressive x-ray Compton backscattering imaging (CXBI). The structured light arrives at the body under inspection, conducting a random sampling over the field of view, while the coded aperture continuously moves. The system can also be conceived as a static coded aperture with a moving body, which is more practical.


The performance of a CBI architecture depends on different extrinsic and intrinsic factors. It has been shown that the energy of the incident photons (in keV) directly influences the final contrast of the image [14]. At the same time, the number of scattered photons is proportional to the thickness of the target until it reaches the so-called saturation thickness, as explained in detail by Hosamani and Badiger [2]. On the other hand, the type and size of the scintillating material used to convert high-energy photons to visible light must be chosen in accordance with the expected scattered energy and intensity. Commercial scanners deployed at security checkpoints use sodium iodide (NaI) crystals [5]; other scintillators in the literature have been designed to enhance the final optical transmittance of the system [15]. This paper proposes a new architecture that realizes Compton backscattering imaging using structured illumination under the theoretical principles of compressive sensing [16]; the architecture is coined the compressive x-ray Compton backscattering imager (CXBI). CXBI emulates the simultaneous acquisition of different pixels by means of a coded aperture and a cone-beam x-ray source. The set of coded illumination measurements is then used to recover the information by solving an optimization inverse problem. This framework has been extensively applied to other contexts and physical phenomena, such as hyperspectral imaging [17–20], conventional and spectral computed tomography [21–23], and video acquisition [24], among others. The functionality of CXBI is tested in this paper with Monte Carlo simulations using the Geant4 Application for Tomographic Emission (GATE), and an experimental test-bed is then mounted in the laboratory.
As will be shown here, the CXBI resolution limits are defined by the coded aperture pixel pitch, while the spatial quality is determined by the level of radiation impinging on the body under inspection.

2. Compton phenomenon

Compton scattering occurs when a photon collides with a charged particle (an electron at rest). This phenomenon is the predominant effect at low keV, occurring together with the photoelectric effect, which is the main contributor to the attenuation of the photon flux [25]. Assume a photon has a frequency $\nu _p$ and energy $E_p=h\nu _p$, where $h$ is Planck's constant; if the photon hits an electron at rest with energy $E_o=m_oc^2$, with $m_o$ being the electron rest mass and $c$ the speed of light, the photon will be scattered with energy $E_s$ given by the following expression [26]

$$E_s=\frac{E_p}{\frac{E_p}{E_o}\left(1-\cos\theta\right)+1},$$
where $\theta$ is the scattering angle of the photon. The expression in (1) is a direct result of the laws of conservation of energy and momentum. According to the literature, a photon is said to be backscattered if $\theta >\pi /2$ [14], and when $\theta =\pi$ the photon is expelled with approximately $70\%$ of the incident energy. The number of Compton-scattered photons captured by a detector is directly proportional to the differential cross-section involved in the interaction, which is defined by the Klein-Nishina formula [27]:
$$\frac{d\sigma}{d\Omega}=\frac{r_e^2}{2}\left( \frac{1}{1+\alpha\left(1-\cos\theta\right)}\right)^2\left( 1+\cos^2\theta+\frac{\alpha^2(1-\cos\theta)^2 }{1+\alpha(1-\cos\theta)}\right),$$
where $\alpha =E_p/E_o$, $r_e=2.818\times 10^{-15}\,m$ is the classical electron radius, and $d\Omega$ is an infinitesimal solid angle element subtended by the detector at the point of interest in the body. Although $d\sigma /d\Omega$ represents an area, this quantity can be taken as a unitless value proportional to the probability of Compton scattering [5]. Figure 2 graphically explains Eq. (2); notice that $d\sigma /d\Omega$ reaches a maximum at $\theta =\pi$ within the backscatter regime $(\theta >\pi /2)$. This expression is only valid for unpolarized photons; for a generalized differential cross-section formula, please refer to [28].
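Equation (2) is straightforward to evaluate numerically. The following sketch (assuming energies in keV and an electron rest energy of $511\,keV$) reproduces the behavior shown in Fig. 2, where the backscatter value at $\theta=\pi$ exceeds the one at $\theta=\pi/2$:

```python
import numpy as np

R_E = 2.818e-15  # classical electron radius (m)

def klein_nishina(theta, E_p_keV):
    """Unpolarized Klein-Nishina differential cross-section (Eq. 2).

    theta: scattering angle in radians; E_p_keV: incident photon energy in keV.
    """
    alpha = E_p_keV / 511.0                              # alpha = E_p / E_o
    k = 1.0 / (1.0 + alpha * (1.0 - np.cos(theta)))      # Compton factor
    return (R_E**2 / 2.0) * k**2 * (
        1.0 + np.cos(theta)**2
        + alpha**2 * (1.0 - np.cos(theta))**2 * k
    )
```

At $\theta=0$ the expression reduces to the classical Thomson value $r_e^2$, which serves as a quick sanity check of the implementation.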


Fig. 2. Left: Cross section involved in the Compton scattering phenomenon. Right: $\frac {d\sigma }{d\Omega }$ as a function of $\theta$, for three different incident photon energies.


When collisions occur not at the surface but at a sub-surface level, both the incoming and the scattered photons must travel back and forth through the body, which attenuates their intensity according to the Beer-Lambert law, $I=I_o e^{-\mu x}$, where $I$ is the intensity after the beam has traveled a distance $x$ in the material, $I_o$ is the incident intensity, and $\mu$ is the linear attenuation coefficient, which combines the photoelectric effect and Compton scattering.
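As a minimal numerical illustration of the Beer-Lambert factor (the attenuation coefficient below is a rough, assumed value for water near $50\,keV$, not a calibrated one):

```python
import math

def attenuate(I_0, mu, x):
    """Beer-Lambert law: intensity after traveling x cm through a material
    with linear attenuation coefficient mu (1/cm)."""
    return I_0 * math.exp(-mu * x)

# Illustrative, assumed value: mu ~ 0.227 /cm for water near 50 keV.
I_surface = attenuate(1.0, 0.227, 0.0)   # no material traversed
I_deep = attenuate(1.0, 0.227, 2.0)      # after 2 cm of material
```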

3. Inverse problems in CBI

The recovery of information from incomplete data is a problem of broad interest that has been successfully addressed in medical and hyperspectral imaging [17,22], electromagnetics [29], and acoustics [30], among other fields. The inverse problem in CBI can be formulated as $\mathbf {y}=\mathcal {N}(\mathbf {A}(\mathbf {x}))$, where $\mathbf {x} \in \mathbb {R}^n$ is the n-pixel image to be recovered (vector-wise), $\mathbf {y}\in \mathbb {R}^m$ $(m<n)$ is the vector of noisy captured measurements, $\mathbf {A}$ is the forward measurement operator, and $\mathcal {N}$ represents a signal-dependent noise distribution. If the noise is additive and signal-independent, the last expression reduces to $\mathbf {y}=\mathbf {A}(\mathbf {x})+\epsilon$, where $\epsilon$ is additive Gaussian noise. Assuming that the forward model matrix $\mathbf {A}$ and the measurement vector $\mathbf {y}$ are fully known, one can recover $\mathbf {x}$ by using the maximum a posteriori (MAP) estimator:

$$\hat{\mathbf{x}}=\arg\min_{\mathbf{x}}\left(||\mathbf{y}-\mathbf{A}\mathbf{x}||_2^2+\lambda R(\mathbf{x})\right),$$
where $||\cdot ||_2$ is the $\ell _2$ norm, $R(\mathbf {x})$ is a regularization term, and $\lambda$ is a parameter that tunes the influence of $R(\mathbf {x})$ on the final solution. Several approaches have been used to arrive at a suitable solution of Eq. (3). The regularization term is often taken to be the $\ell _1$ norm, under the assumption that the data to be recovered is sparse in a certain domain [31]; other approaches consider the total variation norm as the canonical regularizer [32]. Recently, the alternating direction method of multipliers (ADMM) [33] has been used to solve Eq. (3) efficiently by partitioning the optimization into two independent subproblems, such that a denoiser acts as the image prior. Equation (3) can be iteratively solved using the ADMM as follows:
$$\begin{aligned} \mathbf{x}^{k+1} &= \min_{x}\frac{1}{2}||\mathbf{A}\mathbf{x}-\mathbf{y}||_2^2+\frac{\rho}{2}||\mathbf{x}-\mathbf{z}^{k}+\mathbf{u}^{k}||_2^2\\ \mathbf{z}^{k+1} &= \mathcal{D}(\mathbf{x}^{k+1}+\mathbf{u}^{k},\sigma^2=\lambda/\rho),\\ \mathbf{u}^{k+1} &= \mathbf{u}^{k}+\mathbf{x}^{k+1}-\mathbf{z}^{k+1}, \end{aligned}$$
where $\mathbf {z}$ and $\mathbf {u}$ are intermediate variables coming from the reformulation of the problem using the augmented Lagrangian, and $\mathcal {D}$ is the denoiser; when the noise level is unknown, the variance must be tuned in order to reach optimal reconstructions. The solution of the first line in Eq. (4) is easily calculated, as it is a second-order expression with a defined global minimum. Recently, with the advent of deep learning, reconstruction techniques have been adapted to take advantage of the available data and processing capacity. Unfolding algorithms [34], deep convolutional autoencoders [35], recurrent neural networks [36], and generative adversarial networks [37] have been proposed to efficiently solve inverse problems in imaging. In the context of Compton scattering, the inverse-problem framework has already been applied to recover a three-dimensional electron density map by means of a mono-energetic source, an array of spectral detectors, and a signal-dependent (Poisson) noise assumption [38,39]. Given that the forward model is non-linear and not easily invertible, iterative methods have been used to find the electron density map. Moreover, the incompleteness of the data inherently arises from the energy resolution of the system and the limited angular view.
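The iterations in Eq. (4) can be sketched on a toy problem as follows. Here a simple soft-thresholding step stands in for the BM3D/FFDNET denoisers used later in the paper, with the denoiser noise level folded into the threshold; this is an illustrative sketch, not the paper's implementation:

```python
import numpy as np

def pnp_admm(A, y, denoiser, rho=1.0, iters=200):
    """Plug-and-play ADMM for y = A x (Eq. 4): the x-update is a ridge
    least-squares solve; the z-update applies an arbitrary denoiser."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)   # initialized at zero
    lhs = A.T @ A + rho * np.eye(n)
    Aty = A.T @ y
    for _ in range(iters):
        x = np.linalg.solve(lhs, Aty + rho * (z - u))   # first line of Eq. (4)
        z = denoiser(x + u)                             # denoising step
        u = u + x - z                                   # dual update
    return z

# Toy compressive problem: 4-sparse signal, random Gaussian sensing matrix.
rng = np.random.default_rng(0)
n, m = 64, 32
x_true = np.zeros(n); x_true[rng.choice(n, 4, replace=False)] = 1.0
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true
soft = lambda v: np.sign(v) * np.maximum(np.abs(v) - 0.05, 0.0)  # stand-in denoiser
x_hat = pnp_admm(A, y, soft)
```

With the soft-thresholding "denoiser," these iterations reduce to ADMM for the $\ell_1$-regularized problem, so the toy recovery is expected to be close to the true sparse signal.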

4. Compressive x-ray Compton backscattering imaging

In CXBI, a random parallel acquisition of the scene pixels is performed by means of coded illumination or structured light. A similar approach was already proposed in different contexts, such as x-ray diffraction tomography [40,41]. The proposed architecture can be seen in Fig. 1-right. A cone-beam illuminates a predefined portion of the coding pattern. After that, the structured light hits the body under inspection, and the photons scattered via the Compton effect are registered by two scintillating plates. The cone-beam origin is assumed to have a small focal spot, so that the projected pattern preserves its shape and the penumbra effect is small [42]. As for the energy characteristics of the source, this paper tests CXBI under a poly-energetic x-ray source, given that mono-energetic sources are too costly for real applications. A well-detailed experimental study of CBI under poly-energetic sources is developed in [15]. After the structured light hits the body and the backscattered photons are captured, the pattern moves one column so that the coded aperture in the field of view (red square) changes, and the captured pixels are, overall, different. The system can also be conceived as a static coded aperture with a moving body. This sensing principle was already implemented in the visible light regime through the well-known single pixel camera [35–37,43].

4.1 Discrete model

4.1.1 Mono-energetic assumption

Let $B_{i,j}$ be the photon flux (number of photons per area per time) impinging on the region or pixel $(i,j)$ of the body under inspection. The number of photons scattered by the Compton phenomenon can be calculated as follows:

$$P_{in}=\left(B_{i,j}N_1^2t\right)e^{-\mu x}\frac{d\sigma_T}{d\Omega},$$
where $N_1\times N_1$ is the size of the pixel $(i,j)$, $t$ is the exposure time, $x$ is the penetration depth of the photons inside the body, $e^{-\mu x}$ is the attenuation experienced by the photon flux in accordance with the Beer-Lambert law, and $\frac {d\sigma _T}{d\Omega }$ is the total cross-section involved in the scattering process, which is defined as:
$$\frac{d\sigma_T}{d\Omega}=N_1^2dx\frac{d\sigma}{d\Omega} \mathcal{P}Z,$$
where $N_1 \times N_1 \times dx$ are the dimensions of the region where the Compton scattering occurs, $Z$ is the effective atomic number, $\mathcal {P}$ is the number of atoms per unit volume, and $\frac {d\sigma }{d\Omega }$ is given by Eq. (2). The variable $dx$ in Eq. (6) is the differential version of the variable $x$ in Eq. (5). Notice that $\mathcal {P}Z$ is an indicator of the electron density of the material involved in the interaction, a quantity that Compton scattering tomography aims to recover [44]. As expected, $\frac {d\sigma _T}{d\Omega }$ is inherently material dependent. The total number of scattered photons can be defined as:
$$P_s=\int_x \left(B_{i,j}N_1^2t\right)e^{-\mu x}N_1^2\frac{d\sigma}{d\Omega}\mathcal{P}Z dx.$$
The variable $x$ must be integrated from zero, where the body begins, up to a depth where $95\%$ of the Compton scattering occurs. Once the photons are scattered, they travel back towards the detector, passing again through the body and following a path of length $\ell _s$ with linear attenuation coefficient $\mu _s$; therefore, the intensity is attenuated by $e^{-\mu _s\ell _s}$. The variable $\ell _s$ can be written as a function of the scattering angle $\theta$ as $\ell _s=\frac {x}{\cos(\pi -\theta )}$. The number of photons captured by the two plates can be written as follows:
$$\mathbf{F}_{i,j}=\int_{\psi}\int_{\omega_{i,j}}^{\pi}\int_x \left(B_{i,j}N_1^2t\right)e^{-\mu x}N_1^2\frac{d\sigma}{d\Omega}\mathcal{P}Z e^{-\mu_s\frac{x}{\cos(\pi-\theta)}} dx d\theta d\psi,$$
where $\omega _{i,j}$ is the minimum angle of the scattered photons captured by the detectors, which depends on the distance between the body under inspection and the detectors and on the dimensions of the scintillating plates; the variable $\psi$ accounts for the different positions of the photons once they hit the detectors (see Fig. 3 for an illustration of the mentioned variables). A similar analysis for the case of the pencil-beam scanner was developed by Cao [5].
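Equation (8) can be approximated numerically with a simple Riemann sum. The sketch below uses hypothetical, illustrative parameter values (roughly water-like), folds the $\psi$ integral into a unit geometric factor, and ignores multiple scattering; it only illustrates the structure of the model:

```python
import numpy as np

# Illustrative, assumed parameters (not the paper's calibration).
E_p_keV = 50.0   # incident photon energy (keV)
B = 1e6          # photon flux on the pixel (photons / cm^2 / s)
N1 = 0.2         # pixel side (cm)
t = 15.625e-3    # exposure time (s)
mu = 0.2         # attenuation on the way in (1/cm)
mu_s = 0.25      # attenuation of the scattered photons (1/cm)
PZ = 3.34e23     # electrons per cm^3 (roughly water)
omega = 2.0      # minimum captured scattering angle (rad)
depth = 2.0      # integration depth (cm)

alpha = E_p_keV / 511.0
r_e = 2.818e-13  # classical electron radius (cm)

def dsigma(theta):
    """Klein-Nishina differential cross-section (Eq. 2), in cm^2/sr."""
    k = 1.0 / (1.0 + alpha * (1.0 - np.cos(theta)))
    return (r_e**2 / 2) * k**2 * (1 + np.cos(theta)**2
                                  + alpha**2 * (1 - np.cos(theta))**2 * k)

# Riemann sum over depth x and scattering angle theta (Eq. 8).
xs = np.linspace(1e-3, depth, 400)
thetas = np.linspace(omega, np.pi - 1e-3, 400)
dx = xs[1] - xs[0]; dth = thetas[1] - thetas[0]
X, TH = np.meshgrid(xs, thetas, indexing="ij")
integrand = (B * N1**2 * t) * np.exp(-mu * X) * N1**2 * dsigma(TH) * PZ \
            * np.exp(-mu_s * X / np.cos(np.pi - TH))
F = integrand.sum() * dx * dth   # expected captured photons for this pixel
```

The result is necessarily a small fraction of the $B N_1^2 t$ photons that reach the pixel, consistent with the small differential cross-section.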


Fig. 3. Graphical description of the discretization model. Left: 3D view of the CXBI and the scattering region of pixel (5,7). The region is a cube with dimensions $N_1\times N_1\times k$, where $k$ is the depth up to which $95\%$ of the single scattering events occur. Middle: Top view of the CXBI; the photon (multi-color arrow) arrives and is scattered at angle $\theta$ (red arrow). The minimum scattering angle is $\omega _{5,7}$ (green dashed line) and the maximum scattering angle is $\pi$; the backscattered photons travel a path of length $\ell _s=x/\cos(\pi -\theta )$. Right: Front view of one of the detectors in CXBI. $\psi$ is the angle between the middle point between the detectors, $P$, and a spatial position (x,y) on such detectors.


4.1.2 Poly-energetic assumption

As stated by Huang et al. [15], when a poly-energetic source is used, the theoretical analysis of the Compton backscattering sensing process is non-trivial due to multi-scattering phenomena. Hence, to extend Eq. (8) to the poly-energetic scenario, the approach proposed in [5] is adopted. Let $B^{E_p}_{i,j}$ be the photon flux for a given incident energy (or energy bin) at the region $(i,j)$. The number of scattered photons captured by the detectors can be written as follows:

$$\mathbf{F}_{i,j}=\sum_{E_p}\tau(E_p)\int_{\psi}\int_{\omega_{i,j}}^{\pi}\int_x \left(B^{E_p}_{i,j}N_1^2t\right)e^{-\mu x}N_1^2\frac{d\sigma}{d\Omega}\mathcal{P}Z e^{-\mu_s\frac{x}{\cos(\pi-\theta)}} dx d\theta d\psi,$$
where $\tau (E_p)$ is the estimated contribution of each energy bin, in percentage, to the final number of captured backscattered photons. For example, for a $50\,kV$ source filtered with a 1 mm aluminum window, $17\%$ of the scattered photons $(\tau =0.17)$ belong to the energy region $0$–$10\,keV$, $41.3\%$ $(\tau =0.413)$ to $10$–$20\,keV$, $31\%$ $(\tau =0.31)$ to $30$–$40\,keV$, and $10.7\%$ $(\tau =0.107)$ to the region $40$–$50\,keV$ [45]. Notice that $\tau (E_{p})$ is an empirical parameter defined by a proper experimental analysis. Although not explicit, several variables in the argument of the last integral depend on $E_p$, such as the scattering angle (see Eq. (1)), the path $\ell _s$ followed by the scattered photons, and consequently its linear attenuation coefficient $\mu _s$.
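As a small illustration of how the empirical weights enter the poly-energetic model of Eq. (9), the sketch below combines the quoted $\tau$ values with the Compton shift of Eq. (1) at $\theta=\pi$; using bin-center energies as keys is a simplifying assumption of this sketch:

```python
# Empirical weights quoted above for a 50 kV, 1 mm Al-filtered source;
# the bin-center energies (keV) used as keys are a simplifying assumption.
tau = {5.0: 0.17, 15.0: 0.413, 35.0: 0.31, 45.0: 0.107}

E_O = 511.0  # electron rest energy (keV)

def backscatter_energy(E_p):
    """Scattered energy at theta = pi, from Eq. (1): E_s = E_p / (2 E_p/E_o + 1)."""
    return E_p / (2.0 * E_p / E_O + 1.0)

# Weighted mean backscattered energy over the spectrum.
E_mean = sum(w * backscatter_energy(E) for E, w in tau.items())
```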

4.2 Forward model

The CXBI sensing process will now be described using matrix-vector notation. Let $\mathbf {T}=\left [\mathbf {t}_1,\mathbf {t}_2,\ldots, \mathbf {t}_{N+Q-1}\right ]$ be an $N\times (N+Q-1)$ matrix containing the pattern to be used during the capturing process in CXBI, where $\mathbf {t}_{m}$ is the $m^{th}$ column. Here, $N \times N$ is the size of the projected pattern in every shot, and $Q$ is the number of captured measurements (the analysis can be extended to a non-square scene without loss of generality). The pattern corresponding to the $q^{th}$ snapshot can be isolated as follows:

$$\mathbf{C}_q=\mathbf{T}\cdot\mathbf{B}_q,$$
where $q\in \left [0,Q-1 \right ]$ and $\mathbf {B}_q$ is defined as follows:
$$\mathbf{B}_q=\left[ \boldsymbol{0}_{q\times N}; \mathbf{I}_{N\times N}; \boldsymbol{0}_{(Q-1-q)\times N} \right],$$
where $\boldsymbol {0}_{q\times N}$ is a matrix of zeros with dimensions $q\times N$, $\mathbf {I}_{N\times N}$ is the identity matrix, and $\boldsymbol {0}_{(Q-1-q)\times N}$ is a matrix of zeros with dimensions $(Q-1-q)\times N$. The forward matrix $\mathbf {A}$, with dimensions $Q\times N^2$, can be defined as:
$$\mathbf{A}=\left[ \vec{\mathbf{C}}_0, \vec{\mathbf{C}}_1,\ldots \vec{\mathbf{C}}_{Q-1} \right]^T$$
where $\vec {\mathbf {C}}_q$ is the vector-wise version of $\mathbf {C}_q$. The captured information is therefore given by $\mathbf {y}=\mathbf {A}\cdot \vec {\mathbf {F}}$, where $\vec {\mathbf {F}}$ is the vector-wise version of $\mathbf {F}$ given in Eq. (9), with length $N^2$, and $\mathbf {y}$ is a column vector of length $Q$. Note that the proposed discretization model does not consider multiple Compton scattering, the Rayleigh phenomenon, or unwanted scattering from air and other bodies. Therefore, the real measurements, corrupted by noise, can be written as $\mathbf {y}=\mathbf {A}\cdot \vec {\mathbf {F}}+\epsilon$. In this paper, the source of noise is assumed to be independent of the acquired signal; this will be properly justified in the simulations. A graphical description of the assembly of the matrix $\mathbf {A}$ can be seen in Fig. 4-top. An example of the sensing matrix for 10 captured snapshots and a $16 \times 16$ scene can be seen in Fig. 4-bottom; in this case, $\mathbf {A}$ has dimensions $10 \times 256$. Notice in the zoomed portion how the patterns relate to each other by a shift that represents the physical translation of the mask or, equivalently, of the body.
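The assembly of $\mathbf {A}$ described by Eqs. (10)–(12) amounts to sliding an $N$-column window across $\mathbf {T}$ and flattening each windowed pattern into a row; a minimal sketch:

```python
import numpy as np

def build_sensing_matrix(T, Q):
    """Assemble A (Q x N^2) from the sliding pattern T (N x (N+Q-1)).

    Snapshot q sees columns q .. q+N-1 of T (equivalent to T @ B_q in
    Eq. (10)); each windowed pattern is flattened row-wise into one row of A.
    """
    N = T.shape[0]
    assert T.shape[1] == N + Q - 1
    A = np.zeros((Q, N * N))
    for q in range(Q):
        C_q = T[:, q:q + N]        # pattern seen in snapshot q
        A[q] = C_q.reshape(-1)     # vector-wise version of C_q
    return A

# Example matching Fig. 4-bottom: 10 snapshots of a 16 x 16 scene,
# with a 10%-transmittance random binary pattern.
rng = np.random.default_rng(1)
N, Q = 16, 10
T = (rng.random((N, N + Q - 1)) < 0.1).astype(float)
A = build_sensing_matrix(T, Q)
```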


Fig. 4. Top: CXBI forward model for two captured snapshots. Each row in $\mathbf {A}$ represents the row-wise pattern for a captured snapshot. Bottom: Sensing matrix for 10 captured snapshots and a $16 \times 16$ scene. The zoomed portion highlights the movement of the row-wise patterns, which is related to the physical translation of the mask.


5. GATE simulation conditions and results

The proposed CXBI was mounted and tested using the Geant4 Application for Tomographic Emission (GATE), a user-friendly software package developed by Strul et al. [46] that allows the simulation of high-energy physics experiments under conditions close to real life. All the implemented software is available for replication of the results, as we show in Code 1 (Ref. [47]).

5.1 General conditions

A poly-energetic source was simulated using the Matlab tool SPEKTR [48]. The x-ray tube was operated at 120 kV and inherently filtered by a 1.5 mm aluminum window, in accordance with the standards of commercial scanners [5,7]. The source beam expansion is 17.5 degrees, and the source is located at position (0,0,−15) cm. The activity of the source is set to $6.4\times 10^8\,Bq$, where $1\,Bq$ is defined as the generation of one photon per second. Multiplying $6.4\times 10^8$ by $1.6\times 10^{-19}$ (the electron charge) and dividing by $0.02$ (the fraction of electrons in an electron beam that generate x-rays [49,50]) yields an equivalent current of $5.12\,nA$.
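The activity-to-current conversion above can be checked in one line:

```python
# Sanity check of the activity-to-equivalent-current conversion.
activity_Bq = 6.4e8     # photons generated per second
e_charge = 1.6e-19      # electron charge (C)
xray_yield = 0.02       # fraction of beam electrons that generate x-rays

current_A = activity_Bq * e_charge / xray_yield   # expected: 5.12 nA
```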

The coded aperture is conceived as a $32 \times 32$, 10% transmittance random mask, with a pitch of $2\,mm \times 2\,mm$ and a depth of $2\,mm$. The transmittance is defined as the number of non-blocking pixels over the total number of pixels. The coded aperture is made of tungsten (W), whose high atomic number makes it suitable for projecting the pattern. The coded aperture is surrounded by a shielding wall (collimator) to prevent unwanted photons from the source from colliding with the target, and it is positioned at the origin (0,0,0).

Two detectors were placed between the target and the coded aperture in order to capture the backscattered photons after they collide with the body under inspection. The dimensions of the detectors are $30\,cm \times 50\,cm \times 5\,cm$, and the scintillating material is sodium iodide (NaI). The area was chosen such that the setup can be replicated in the laboratory, and the thickness was chosen to obtain a high conversion ratio [15]. The detectors were located at (25,0,20.5) cm and (−25,0,20.5) cm; this location ensures that the plates are as close as possible to the target without being directly affected by the radiation of the source. In GATE, the output parameters of the simulations are known as singles. One single represents the aggregation of photon hits occurring at the detector layer; this aggregation emulates the integration performed by electronic hardware. For more details regarding the hit-adding policies, please refer to the GATE documentation [51].

The body under inspection is placed such that its front face is located at (0,0,50) cm. The $2\,mm \times 2\,mm$ pixel of the mask is seen as a $0.867\,cm \times 0.867\,cm$ square at the body; this means that the spatial size of the recovered scene is limited to $27.7\,cm \times 27.7\,cm$ under proper conditions.
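These numbers follow from simple cone-beam magnification (source at $-15$ cm, mask at the origin, body at $50$ cm):

```python
# Cone-beam magnification of the coded-aperture pixel onto the body.
source_to_mask = 15.0    # cm: source at (0,0,-15), mask at the origin
source_to_body = 65.0    # cm: body front face at (0,0,50)
pitch = 0.2              # cm: 2 mm coded-aperture pitch

magnification = source_to_body / source_to_mask   # ~4.33
pixel_at_body = pitch * magnification             # ~0.867 cm
scene_size = 32 * pixel_at_body                   # ~27.7 cm for 32 pixels
```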

5.2 Measurement process

The capturing process is done snapshot by snapshot, and each snapshot takes $15.625\,ms$. The coded aperture translates at a uniform speed of $128\,mm/s$, such that every $15.625\,ms$ a new column enters the field of view and another exits. The number of singles per snapshot is extracted to assemble the measurement vector $\mathbf {y}$. A total of 512 shots were captured, which corresponds to 50% of the total number of pixels. There is no need to add artificial noise to $\mathbf {y}$, given that GATE emulates real experimental conditions. Only photons arriving at the detector with an energy greater than $10\,keV$ are taken into account when assembling $\mathbf {y}$. The matrix $\mathbf {A}$ is constructed according to the patterns in the snapshots, as depicted in Fig. 4. Finally, the reconstruction algorithms are run to find $\mathbf {F}$. Given that the photons interact with air and other bodies in the surroundings, unwanted scattering effects add noise to the captured data. To reduce these effects in the recovered image, the capturing process is also run without the target, and the resulting measurements are subtracted from the ones with the body.

5.3 Ground-truth generation

The CXBI is a new architecture; thus, no image databases are available that fit the current needs of this research, and the ground-truth images must be self-captured using GATE. The ground-truth was captured by moving a pencil-beam source pixel by pixel over the scene and then registering the backscattered photons. The pencil-beam was emulated by reducing the beam expansion of the same x-ray source to 0.4 degrees and locating it at $(x,y,25)\,cm$, where $(x,y)$ represents the spatial location of a given pixel over the target. The activity of the source was reduced to $5000$ photons per second; this number was estimated as the mean number of particles that pass through a $2\,mm \times 2\,mm$ window during $15.625\,ms$ when the activity is set to $6.4\times 10^8\,Bq$.
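The $5000$ figure is consistent with a simple flat-field estimate, assuming (in this sketch only) that the 17.5-degree beam expansion is the cone half-angle and that the flux is uniform over the illuminated area:

```python
import math

# Rough check: fraction of the cone-beam activity passing through one
# 2 mm x 2 mm mask pixel during one 15.625 ms snapshot.
activity = 6.4e8                      # Bq (photons per second)
half_angle = math.radians(17.5)       # assumed cone half-angle
dist = 15.0                           # cm, source-to-mask distance
snapshot = 15.625e-3                  # s, snapshot duration

cone_radius = dist * math.tan(half_angle)
cone_area = math.pi * cone_radius**2  # illuminated area at the mask plane
window = 0.2 * 0.2                    # cm^2, one mask pixel
photons = activity * (window / cone_area) * snapshot   # roughly 5000-6000
```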

5.4 Results

Three different targets were implemented in the simulations. The first target was composed of four cubes of size $7 \times 7 \times 1\,cm^3$, equidistantly located around the origin. Two of them were made of caffeine ($C_8H_{10}N_4O_2$, density 1.23 $g/cm^3$), and the remaining two were made of HMX-Octogen ($C_4H_{8}N_8O_8$, density 1.91 $g/cm^3$); HMX-Octogen is mainly used in plastic explosives [14]. The second target was composed of the letters U and D, both made of aluminum (Z=13, density 2.7 $g/cm^3$). The last target was a human hand with a gold ring (Z=79, density 19.3 $g/cm^3$) on the middle finger; the density and composition of the hand are assumed to be close to those of water. Figure 5 shows the ground-truth image and the images reconstructed using compression ratios of $12.5\%$ and $25\%$; the compression ratio is defined as the ratio between the number of captured measurements (the length of $\mathbf {y}$) and the number of pixels in the scene $(N^2)$. The chosen reconstruction algorithm was ADMM (Eq. (4)) with BM3D [52] and FFDNET (pre-trained) [53] in the denoising stage, with variables initialized at zero. The BM3D denoising technique is a collaborative filtering approach based on grouping similar fragments of the image into a 3-dimensional array, such that higher-dimensional filtering techniques that exploit these similarities can be applied. The FFDNET denoising technique is a convolutional neural network that runs the denoising operation on down-sampled images and then upsamples to the original size. Other commonly used denoisers could be used in this stage as well [54]. In order to decrease the correlation between adjacent masks, the patterns were chosen equidistantly over the originally captured 512 shots; that is, for a compression ratio of $12.5\%$, every 4th pattern was chosen $(50/12.5=4)$.
The standard deviation of the noise in BM3D and FFDNET was chosen such that the best reconstruction is reached (the noise level is unknown). Two criteria of comparison with the ground-truth were used: the structural similarity index (SSIM) and the peak signal-to-noise ratio (PSNR). The SSIM and PSNR obtained for the different scenarios are reported in Fig. 5 (bottom-right and top-right, respectively).


Fig. 5. Ground-truth and reconstructed $32 \times 32$ images using $12.5\%$ and $25\%$ of the information. The ADMM was implemented using BM3D and FFDNET in the denoising stage. The transmittance per snapshot was set to 10%. The SSIM and PSNR for every scenario can be observed in the bottom-right and top-right of the figures respectively.


5.5 Noise analysis

In CBI, the capturing process is usually assumed to be corrupted by signal-dependent Poisson noise that models the inter-arrival time among photons. This is relevant when the number of photons arriving at the sensor in a given time is less than 100 [23]; below this level, an accurate value of the irradiance (radiant flux per unit area) at the detectors cannot be estimated. The nature of the proposed CXBI, where the pixels are acquired in parallel, allows one to safely assume that the level of Poisson noise is negligible. To test this, the mean number of photons arriving at the detectors was measured and reported in Table 1 for the simulation conditions explained before. As shown, the number of photons arriving at the detector per captured snapshot is large enough to safely neglect Poisson noise. Other sources of noise that might affect the performance of scintillator-based detectors are dark and readout noise [55].


Table 1. Mean number of photons arriving at the detectors for one single measurement for the three different targets in Fig. 5.

6. Dose and energy deposited analysis

One of the key advantages of Compton backscattering imaging is the low radiation dose impinging on the body under inspection; a short acquisition time makes the radiation exposure almost negligible [13]. For example, the ZBV (Z Backscatter Van) [3], a mobile system that scans vehicles using a pencil-beam collimated mono-energetic source of 225 keV, requires a radiation dose of 0.07 $\mu Sv$ (sievert, Sv) per inspection [13]. The absorbed dose for a given pixel $(i,j)$ is defined as follows:

$$dose=\sum_{E_p}\frac{E_p B_{i,j}^{E_p} N_1^2 t\,\mu_{en,m}}{N_1^2},$$
where $E_pB_{i,j}^{E_p}N_1^2t$ is the total impinging energy, in joules, on the pixel $(i,j)$, and $\mu _{en,m}$ is the mass energy absorption coefficient given in $cm^2/kg$. The final units of the absorbed dose are grays $(Gy=\frac {J}{kg})$. Table 2 shows the estimated absorbed dose per pixel of the body in one CXBI shot on a muscle-skeleton phantom. The coded aperture pitch was $2\,mm\times 2\,mm$, while the equivalent current of the source was $5.12\,nA$. The curve representing $\mu _{en,m}$ for a muscle-skeleton phantom can be seen in Fig. 6-left [56]. The number of photons per pixel of the body and the area of such a pixel are also reported in Table 2. Figure 6 shows the original and reconstructed human hands using 3 different levels of absorbed dose, $0.0658\times 10^{-4}\,Gy$, $0.68\times 10^{-4}\,Gy$, and $3.42\times 10^{-4}\,Gy$ (top of each figure), and their respective SSIM (bottom-left) and PSNR (bottom-right); the compression ratio remains fixed at $25\%$, and the dose is controlled with the transmittance per captured snapshot. Notice that, in all cases, the absorbed dose was less than the one needed to capture the ground truth; the main reason is that a highly collimated pencil-beam was used to run the pixel-by-pixel scanning (see Section 5.3).
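For a single energy bin, the dose expression above collapses to the product of photon energy, fluence rate, exposure time, and $\mu_{en,m}$, since the pixel area $N_1^2$ cancels. The values below are hypothetical, for illustration only:

```python
# Per-pixel absorbed-dose sketch for one energy bin; all values are
# illustrative assumptions, not the paper's calibration.
E_p_J = 50e3 * 1.602e-19   # 50 keV converted to joules
flux = 1e6                 # photons / cm^2 / s on the pixel
t = 15.625e-3              # exposure time (s)
mu_en_m = 0.4              # mass energy absorption coefficient (cm^2/kg)

# The pixel area N1^2 cancels: dose = E_p * B * t * mu_en,m (in Gy = J/kg).
dose_Gy = E_p_J * flux * t * mu_en_m
```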


Fig. 6. Left: Mass absorption coefficient $\mu _{en,m}$ in $cm^2/kg$ as a function of incoming photons energies in $MeV$ [56]. Right: Ground-truth (top-left) and reconstructed scenes for different levels of absorbed dose (top of each figure). The compression ratio remains fixed at $25\%$, and the dose is controlled with the transmittance per captured snapshot. The SSIM and PSNR for each scenario can be seen in the bottom left and right respectively. The BM3D is used in the denoising stage.


Table 2. Absorbed dose per pixel in one snapshot, number of photons arriving at a pixel in one snapshot, and area of the pixel, for $2mm \times 2mm$ coded aperture pitch size, and a source activity of $5.12nA$. The mass absorption coefficient values can be seen in Fig. 6.

7. CXBI for target inspection in humans

The proposed architecture can potentially be implemented to improve the scanning speed at critical checkpoints such as airports and homeland security borders. To test this, a 5.4 ft tall human male carrying a suspected gun was scanned using the conventional pencil-beam with an activity of 3000 becquerel, and the CXBI measurements vector was then generated synthetically. The scanner dimensions were modified to $30cm \times 176cm$. To test the CXBI in a noisier environment, artificial Gaussian noise with variance $\sigma ^2=10000$ was added. The body composition and density were assumed to be close to those of water ($\rho =1 g/cm^3$, $11.2\% \ H, 88.8\% \ O$), and the material for the gun was assumed to be red brass, an alloy commonly used in firearm manufacturing ($\rho =8.44 g/cm^3$, $88\% \ Cu, 10\% \ Sn, 2\% \ Zn$). Figure 7-top shows the SSIM and PSNR curves for different code transmittances per shot, 0.1%, 1%, and 10%, as a function of the compression ratio; here the BM3D denoiser was used, as its performance is similar to that of FFDNET with less simulation time. Even for a $0.1\%$ code transmittance, the algorithm shows good performance when the compression ratio is sufficient. Figure 7-bottom shows the SSIM and PSNR for three different compression ratios, 3.125%, 6.25%, and 12.5%, as a function of the code transmittance per shot. It is evident that, after a certain point, increasing the transmittance does not necessarily yield a better reconstruction in terms of SSIM and PSNR.
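A minimal plug-and-play sketch of the kind of denoising-prior ADMM reconstruction used here (not the authors' implementation) is shown below; the 16×16 toy scene, the random binary code, the noise level, and the 3×3 box-blur denoiser standing in for BM3D are all assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 16x16 scene sensed with a random binary code (25% compression,
# ~10% transmittance) plus additive Gaussian noise.
n = 16
x_true = np.zeros((n, n)); x_true[5:11, 5:11] = 1.0
A = (rng.random((n * n // 4, n * n)) < 0.1).astype(float)
y = A @ x_true.ravel() + rng.normal(0.0, 0.5, size=A.shape[0])

def denoise(v):
    """3x3 box blur as a crude stand-in for the BM3D/FFDNET z-update."""
    p = np.pad(v.reshape(n, n), 1, mode='edge')
    out = sum(p[i:i + n, j:j + n] for i in range(3) for j in range(3)) / 9.0
    return out.ravel()

# Plug-and-play ADMM: regularized least-squares x-update, denoiser z-update,
# scaled dual variable u.
rho = 1.0
x = np.zeros(n * n); z = np.zeros(n * n); u = np.zeros(n * n)
AtA_rhoI = A.T @ A + rho * np.eye(n * n)
Aty = A.T @ y
for _ in range(30):
    x = np.linalg.solve(AtA_rhoI, Aty + rho * (z - u))  # data-fidelity step
    z = denoise(x + u)                                  # denoising prior
    u = u + x - z                                       # dual update
x_hat = z.reshape(n, n)
```

Replacing `denoise` with BM3D or FFDNET recovers the priors reported in the paper.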


Fig. 7. Top: SSIM and PSNR for different code transmittance per shot, as a function of the compression ratio. From left to right: 0.1%, 1%, and 10% transmittance. Bottom: SSIM and PSNR for different compression ratios, as a function of the code transmittance. From left to right: 3.125%, 6.25%, and 12.5% compression ratio. The BM3D was implemented as denoiser.


The ground-truth and reconstructed scenes for three different coded transmittances, 1%, 2.5%, and 5%, and a compression ratio of $12.5\%$ can be seen in Fig. 8. As shown, the firearm located on the body can be easily identified by simple visual inspection in all cases. As stated in [5], in order to quantify the detectability of the suspected gun, a contrast estimation must be done. The contrast against the background is defined as $C_{TR}=\left |\frac {N_{str}-N_{bkg}}{N_{bkg}}\right |$, where $N_{str}$ and $N_{bkg}$ are the mean pixel values over the target area and the background, respectively. Figure 8 shows a $22 \times 22$ pixel zoom over the area where the firearm is placed, as well as the $C_{TR}$ for each reconstruction (bottom-right). The background is assumed to be the human silhouette. The $C_{TR}$ as a function of the code transmittance per snapshot can also be seen in Fig. 8. Notice how $C_{TR}$ reaches a maximum when the transmittance is 10%; nevertheless, the $C_{TR}$ of the reconstructions does not reach the one given by the ground truth.
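The contrast metric is straightforward to compute. The sketch below uses an invented 8×8 toy image and masks (in the paper, the target area is the firearm and the background is the human silhouette):

```python
import numpy as np

def contrast(image, target_mask, background_mask):
    """C_TR = |(N_str - N_bkg) / N_bkg|, with N_* the mean pixel values
    over the target area and the background."""
    n_str = image[target_mask].mean()
    n_bkg = image[background_mask].mean()
    return abs((n_str - n_bkg) / n_bkg)

# Toy example: a bright 2x2 "target" on a dimmer uniform background.
img = np.full((8, 8), 2.0)
img[3:5, 3:5] = 6.0
tgt = np.zeros((8, 8), dtype=bool); tgt[3:5, 3:5] = True
c_tr = contrast(img, tgt, ~tgt)  # |(6 - 2) / 2| = 2.0
```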


Fig. 8. Left: Contrast against the background $C_{TR}$ of the firearm as a function of the transmittance per snapshot for a fixed compression ratio of 12.5%. Top-right: Ground-truth and reconstructed scenes for three different transmittances per snapshot, with a fixed compression ratio of 12.5%. Bottom-right: Zoomed portion of the firearm and its $C_{TR}$. The BM3D was implemented as denoiser.


8. Experimental measurements

The CXBI was mounted in the Computational X-rays Imaging Laboratory at the University of Delaware using off-the-shelf hardware components. Figure 9 shows the test-bed implementation. A ThermoFisher micro-CT x-ray source (PXS10) is powered at 130kV with a current of 110$\mu A$. The radiation then passes through two collimators so that unwanted scattering is reduced to a minimum. The first collimator (2b in Fig. 9) is a $3.2cm \times 3.2cm$ lead collimator and the second (2c in Fig. 9) is a $1.6cm \times 1.6cm$ lead collimator; both have a thickness of $1/32$ in. In addition, a lead shield (2a in Fig. 9) is located near the source to keep unused radiation out of the experiment. The collimated radiation hits a tungsten-based mask with a pixel pitch size of $0.5mm\times 0.5mm$ and $50\%$ transmittance, and the coded radiation then travels towards the water target. Finally, an x-ray detector (CareView 560-RF-DE) is used to capture the backscattered radiation. To assemble the sensing matrix $\mathbf {A}$, the target is first removed and the detector is placed in the target's position. Then, as the coded aperture moves (using the automated translation stage Thorlabs LTS300 at a speed of 0.8mm/s), each pattern that will be used in the reconstruction is registered.


Fig. 9. CXBI test-bed implementation. Seven different hardware components can be observed. 1: Micro-CT x-ray source. 2a: Lead shield. 2b: Collimator with a $3.2cm \times 3.2cm$ window. 2c: Collimator with a $1.6cm \times 1.6cm$ window. 3: Tungsten-based coded aperture with pitch size of $0.5mm \times 0.5mm$. 4: Water cube target. 5: Dual-energy detector.


After all the patterns are registered (384 in total), the target is placed back in its original spot and the detector moves backwards. Then, as the coded aperture moves, the backscattered radiation for each of its patterns is captured. After that, the target is removed and the capturing process is repeated; subtracting this last capture removes unwanted radiation from the measurements (see Section 5.2).

To assemble the sensing matrix, the coded aperture patterns need to be binarized. The detector pixels are grouped into $7 \times 7$ blocks, since one coded aperture pitch occupies a region of $7 \times 7$ pixels at the detector during the calibration process. The mean value of each $7 \times 7$ region is then computed; if that value surpasses a threshold, the corresponding pixel is set to 1, and otherwise to 0. The threshold is set to be the mean value over all the grouped $7 \times 7$ regions. Figure 10 shows the original and binarized coded aperture for a given snapshot.
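The grouping-and-thresholding step described above can be sketched as follows; the count levels used in the toy pattern are invented:

```python
import numpy as np

def binarize_pattern(capture, block=7):
    """Average each block x block cell of the captured pattern and threshold
    at the global mean of the cell averages, as described above."""
    h, w = capture.shape
    h, w = h - h % block, w - w % block            # drop partial border cells
    cells = capture[:h, :w].reshape(h // block, block, w // block, block)
    means = cells.mean(axis=(1, 3))                # one mean per 7x7 region
    return (means > means.mean()).astype(np.uint8)

# Toy 14x14 capture: open apertures read ~1000 counts, blocked ones ~50.
pat = np.kron(np.array([[1, 0], [0, 1]]), np.ones((7, 7))) * 950 + 50
code = binarize_pattern(pat)  # -> [[1, 0], [0, 1]]
```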


Fig. 10. Left: Original captured pattern for a given snapshot. Middle: Grouping of $7\times 7$ pixels in the detector. Right: Binarized coded aperture used to assemble the sensing matrix.


On the other hand, each element of the measurements vector $\mathbf {y}$ is defined by adding up all the pixel values of the detector measurement; this is because the CareView 560-RF-DE is a two-dimensional detector, whereas CXBI is conceived for a single-pixel scintillator plate. Next, the measurements with no target are subtracted from the measurements with target, as was done in the simulations, and the reconstruction algorithm is run. To prevent $\mathbf {y}$ from having large values (the sensor quantizes to 16 bits), this vector is divided by $1\times 10^3$. Results for a cube-shaped water target can be seen in Fig. 11; as depicted, the field of view of the system only includes a portion of the object (see dashed red square). A transmission projection of the target, captured without the mask with the detector located in front, is also shown in this figure for reference purposes.
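The assembly of one entry of $\mathbf {y}$ (sum all detector pixels, subtract the no-target capture, scale down) can be sketched as below; the frame sizes and count levels are invented:

```python
import numpy as np

def measurement_entry(frame_target, frame_no_target, scale=1e3):
    """One element of y: collapse the 2D detector to a single value
    (single-pixel principle), remove the no-target background, and divide
    by `scale` to keep values small for the 16-bit sensor readings."""
    return (frame_target.sum() - frame_no_target.sum()) / scale

# Toy frames: backscatter adds ~200 counts per pixel over the baseline.
baseline = np.full((10, 10), 500.0)
with_target = baseline + 200.0
y0 = measurement_entry(with_target, baseline)  # (70000 - 50000) / 1000 = 20.0
```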


Fig. 11. Left: Original cube-shaped water target and field of view of the system (dashed red square). Middle: Transmission projection of the target; the deep-gray area represents the water. Right: Reconstructed scene of the target using the CXBI experimental test-bed implementation. The spatial size of the recovered image with CXBI is $31 \times 35$ pixels.


9. Discussion

The experiments developed in GATE and in the laboratory show that CXBI can be implemented to conduct Compton backscattering imaging. GATE simulation results shown in Fig. 5 confirm that the recognition of dissimilar materials can be achieved through CXBI (see the caffeine and HMX-octogen recovered in the image). The PSNR and SSIM obtained in simulations are good; these can be improved further by tuning controllable parameters of the experiment (e.g., the radiation level) or by implementing a more robust data-dependent reconstruction algorithm. When used for human scanning, the results in Fig. 7 show that the quality of the reconstructions is more sensitive to the compression ratio than to the transmittance per snapshot. Moreover, as depicted in Fig. 8, the transmittance per snapshot gives the best $C_{TR}$ results at about 10%; increasing the transmittance further does not necessarily improve the performance and image quality. This has already been observed in other imaging areas such as compressive spectral imaging, where the optimal coding structure for a multiple-shot scenario exhibits a transmittance inversely proportional to the number of captured snapshots [19,20]. The experimental results are an initial proof of concept with hardware that was readily available in the laboratory and not ideal for Compton backscattering sensing. Some of the factors found to affect the capturing process in the laboratory are: the number of aluminum-based bodies in the surroundings, including the translation stage; the type of detector used; the dependency of the capturing process on the frame rate of the detector; the susceptibility of the reconstructions to the calibration process; and the inaccessibility of one side when capturing the backscattered radiation (see Fig. 9, where the detector is located on one side only). When compared to commercially available scanners, the required acquisition time is comparable (2 s using a compression ratio of $12.5\%$ against 3 s for conventional scanners [4]). In terms of absorbed dose, CXBI requires less than the simulated pencil-beam scanner (see Fig. 6), but more than optimized commercial scanners ($0.7\mu Gy$ for a 120kV source [5,7]).

10. Conclusions

This paper proposes the compressive x-ray Compton backscattering imager (CXBI) as an alternative to state-of-the-art scanning techniques. A discrete measurements model in accordance with a relativistic physics framework was proposed; this model assumes that the captured photons coming from the Compton effect are singly scattered. A forward measurements model, based on the single-pixel imaging principle, was developed; this model considers the movement of the coded aperture patterns, or equivalently the movement of the body while the coded aperture remains fixed. A functional CXBI experiment in GATE was programmed and tested for different conditions and bodies (see Fig. 5); a method to create the ground-truth images was also proposed and implemented. The viability of CXBI for human screening and the expected dose per pixel were analyzed for several scenarios, obtaining a good contrast estimate of the suspected object against the human silhouette. Finally, preliminary experimental results using off-the-shelf hardware components were shown. Reducing the dose needed to obtain accurate results, reevaluating the noise model as signal-dependent at low radiation levels, and improving the experimental results remain as future work.

Funding

National Science Foundation (CIF 1717578); University of Delaware (University Doctoral Fellowship).

Disclosures

Computational X-rays Imaging Laboratory, University of Delaware. Some of the STL files implemented in GATE were downloaded from the free-access web pages www.cgtrader.com and www.cults3d.com; they were used merely for research purposes with no commercial intentions. Please contact the first authors for more details. The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are available in Ref. [47].

References

1. A. Chalmers, “Applications of backscatter x-ray imaging sensors for homeland defense,” Proc. SPIE 5071, 388–396 (2003). [CrossRef]  

2. M. Hosamani and N. Badiger, “Determination of effective atomic number of composite materials using backscattered gamma photons–a novel method,” Chem. Phys. Lett. 695, 94–98 (2018). [CrossRef]  

3. A. Chalmers, “Rapid inspection of cargos at portals using drive-through transmission and backscatter x-ray imaging,” Proc. SPIE 5403, 644–648 (2004). [CrossRef]  

4. D. J. Brenner, “Are x-ray backscatter scanners safe for airport passenger screening? for most individuals, probably yes, but a billion scans per year raises long-term public health concerns,” Radiology 259(1), 6–10 (2011). [CrossRef]  

5. Z. Cao, “Optimization for the tradeoff of detection efficiency and absorbed dose in x-ray backscatter imaging,” J. Transp. Secur. 6(1), 59–76 (2013). [CrossRef]  

6. D. Shedlock, X-ray backscatter imaging for radiography by selective detection and snapshot: Evolution, development, and optimization (University of Florida, 2007).

7. P. Rez, R. L. Metzger, and K. L. Mossman, “The dose from compton backscatter screening,” Radiat. Prot. Dosim. 145(1), 75–81 (2011). [CrossRef]  

8. M. D. Herr, J. J. McInerney, D. G. Lamser, and G. L. Copenhaver, “A flying spot x-ray system for compton backscatter imaging,” IEEE Trans. Med. Imaging 13(3), 461–469 (1994). [CrossRef]  

9. B. Yang, X. Wang, H. Shen, J. Xu, K. Xiong, and B. Mu, “Design of x-ray backscatter imaging system for vehicle detection,” Proc. SPIE 11542, 115420L (2020). [CrossRef]  

10. A. A. Faust, R. E. Rothschild, P. Leblanc, and J. E. McFee, “Development of a coded aperture x-ray backscatter imager for explosive device detection,” IEEE Trans. Nucl. Sci. 56(1), 299–307 (2009). [CrossRef]  

11. T. Shimura, T. Hosoi, and H. Watanabe, “Backscattering x-ray imaging using fresnel zone aperture,” Appl. Phys. Express 14(7), 072002 (2021). [CrossRef]  

12. J. Xu, X. Wang, Q. Zhan, S. Huang, Y. Chen, and B. Mu, “A novel lobster-eye imaging system based on schmidt-type objective for x-ray-backscattering inspection,” Rev. Sci. Instrum. 87(7), 073103 (2016). [CrossRef]  

13. D.-C. Dinca, J. R. Schubert, and J. Callerame, “X-ray backscatter imaging,” Proc. SPIE 6945, 694516 (2008). [CrossRef]  

14. S. Huang, X. Wang, Y. Chen, J. Xu, and B. Mu, “Simulation on x-rays backscatter imaging based on montecarlo methods for security inspection,” Proc. SPIE 10802, 1080203 (2018). [CrossRef]  

15. S. Huang, X. Wang, Y. Chen, J. Xu, T. Tang, and B. Mu, “Modeling and quantitative analysis of x-ray transmission and backscatter imaging aimed at security inspection,” Opt. Express 27(2), 337–349 (2019). [CrossRef]  

16. D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006). [CrossRef]  

17. G. R. Arce, D. J. Brady, L. Carin, H. Arguello, and D. S. Kittle, “Compressive coded aperture spectral imaging: An introduction,” IEEE Signal Process. Mag. 31(1), 105–115 (2014). [CrossRef]  

18. J. Tan, Y. Ma, H. Rueda, D. Baron, and G. R. Arce, “Compressive hyperspectral imaging via approximate message passing,” IEEE J. Sel. Top. Signal Process. 10(2), 389–401 (2016). [CrossRef]  

19. E. Salazar, A. Parada-Mayorga, and G. R. Arce, “Spectral zooming and resolution limits of spatial spectral compressive spectral imagers,” IEEE Trans. Comput. Imaging 5(2), 165–179 (2019). [CrossRef]  

20. C. V. Correa, H. Arguello, and G. R. Arce, “Spatiotemporal blue noise coded aperture design for multi-shot compressive spectral imaging,” J. Opt. Soc. Am. A 33(12), 2312–2322 (2016). [CrossRef]  

21. A. P. Cuadros and G. R. Arce, “Coded aperture optimization in compressive x-ray tomography: a gradient descent approach,” Opt. Express 25(20), 23833–23849 (2017). [CrossRef]  

22. A. P. Cuadros, X. Ma, and G. R. Arce, “Compressive spectral x-ray tomography based on spatial and spectral coded illumination,” Opt. Express 27(8), 10745–10764 (2019). [CrossRef]  

23. T. Mao, A. P. Cuadros, X. Ma, W. He, Q. Chen, and G. R. Arce, “Coded aperture optimization in x-ray tomography via sparse principal component analysis,” IEEE Trans. Comput. Imaging 6, 73–86 (2020). [CrossRef]  

24. X. Ma, X. Yuan, C. Fu, and G. R. Arce, “Led-based compressive spectral-temporal imaging,” Opt. Express 29(7), 10698–10715 (2021). [CrossRef]  

25. J. E. Parks, “The compton effect-compton scattering and gamma ray spectroscopy,” Department of Physics and Astronomy, The University of Tennessee, Knoxville, TN 37996-1200 (2015).

26. A. H. Compton, “A quantum theory of the scattering of x-rays by light elements,” Phys. Rev. 21(5), 483–502 (1923). [CrossRef]  

27. J. H. Hubbell, W. J. Veigele, E. Briggs, R. Brown, D. Cromer, and R. J. Howerton, “Atomic form factors, incoherent scattering functions, and photon scattering cross sections,” J. Phys. Chem. Ref. Data 4(3), 471–538 (1975). [CrossRef]  

28. M. A. Stroscio, “Generalization of the klein-nishina scattering amplitude for an electromagnetic field of general polarization,” Phys. Rev. A 29(4), 1691–1694 (1984). [CrossRef]  

29. N. V. Korovkin, V. L. Chechurin, and M. Hayakawa, Inverse problems in electric circuits and electromagnetics (Springer Science & Business Media, 2007).

30. M. I. Taroudakis and G. Makrakis, Inverse problems in underwater acoustics (Springer Science & Business Media, 2013).

31. M. A. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems,” IEEE J. Sel. Top. Signal Process. 1(4), 586–597 (2007). [CrossRef]  

32. X. Yuan, “Generalized alternating projection based total variation minimization for compressive sensing,” in Proceedings of IEEE International Conference on Image Processing (ICIP), (IEEE, 2016), pp. 2539–2543.

33. S. Boyd, N. Parikh, and E. Chu, Distributed optimization and statistical learning via the alternating direction method of multipliers (Now Publishers Inc, 2011).

34. V. Monga, Y. Li, and Y. C. Eldar, “Algorithm unrolling: Interpretable, efficient deep learning for signal and image processing,” IEEE Signal Process. Mag. 38(2), 18–44 (2021). [CrossRef]  

35. C. F. Higham, R. Murray-Smith, M. J. Padgett, and M. P. Edgar, “Deep learning for real-time single-pixel video,” Sci. Rep. 8(1), 2369 (2018). [CrossRef]  

36. I. Hoshi, T. Shimobaba, T. Kakue, and T. Ito, “Single-pixel imaging using a recurrent neural network combined with convolutional layers,” Opt. Express 28(23), 34069–34078 (2020). [CrossRef]  

37. N. Karim and N. Rahnavard, “Spi-gan: Towards single-pixel imaging through generative adversarial network,” arXiv preprint arXiv:2107.01330 (2021).

38. G. Rigaud and B. Hahn, “Reconstruction algorithm for 3d compton scattering imaging with incomplete data,” Inverse Probl. Sci. Eng. 29(7), 967–989 (2021). [CrossRef]  

39. J. W. Webber and W. R. Lionheart, “Three dimensional compton scattering tomography,” Inverse Probl. 34(8), 084001 (2018). [CrossRef]  

40. J. A. Greenberg, M. Hassan, K. Krishnamurthy, and D. Brady, “Structured illumination for tomographic x-ray diffraction imaging,” Analyst 139(4), 709–713 (2014). [CrossRef]  

41. Z. Zhu, R. A. Ellis, and S. Pang, “Coded cone-beam x-ray diffraction tomography with a low-brilliance tabletop source,” Optica 5(6), 733–738 (2018). [CrossRef]  

42. A. Kueh, J. M. Warnett, G. J. Gibbons, J. Brettschneider, T. E. Nichols, M. A. Williams, and W. S. Kendall, “Modelling the penumbra in computed tomography 1,” J. X-Ray Sci. Technol. 24(4), 583–597 (2016). [CrossRef]  

43. M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25(2), 83–91 (2008). [CrossRef]  

44. S. J. Norton, “Compton scattering tomography,” J. Appl. Phys. 76(4), 2007–2015 (1994). [CrossRef]  

45. O. W. Linton and F. A. Mettler Jr., National Council on Radiation Protection and Measurements, “National conference on dose reduction in CT, with an emphasis on pediatric patients,” AJR Am. J. Roentgenol. 181(2), 321–329 (2003).

46. D. Strul, G. Santin, D. Lazaro, V. Breton, and C. Morel, “Gate (geant4 application for tomographic emission): a pet/spect general-purpose simulation platform,” Nucl. Phys. B-Proceedings Suppl. 125, 75–79 (2003). [CrossRef]  

47. E. Salazar, X. Liu, and G. Arce, “Compressive x-rays compton backscattering imaging software,” figshare (2022), https://doi.org/10.6084/m9.figshare.19229052.

48. J. Punnoose, J. Xu, A. Sisniega, W. Zbijewski, and J. Siewerdsen, “spektr 3.0, a computational tool for x-ray spectrum modeling and analysis,” Med. Phys. 43(8Part1), 4711–4717 (2016). [CrossRef]  

49. J. C. Hsu, L. M. Nieves, O. Betzer, T. Sadan, P. B. Noël, R. Popovtzer, and D. P. Cormode, “Nanoparticle contrast agents for x-ray imaging applications,” Wiley Interdiscip. Rev.: Nanomed. Nanobiotechnol. 12(6), e1642 (2020). [CrossRef]  

50. R. Behling, Modern diagnostic x-ray sources: technology, manufacturing, reliability (CRC Press, 2021).

51. “Gate Documentation,” https://opengate.readthedocs.io. Accessed: 2021-07-16.

52. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-d transform-domain collaborative filtering,” IEEE Trans. on Image Process. 16(8), 2080–2095 (2007). [CrossRef]  

53. K. Zhang, W. Zuo, and L. Zhang, “Ffdnet: Toward a fast and flexible solution for cnn-based image denoising,” IEEE Trans. on Image Process. 27(9), 4608–4622 (2018). [CrossRef]  

54. M. P. McLoughlin and G. R. Arce, “Deterministic properties of the recursive separable median filter,” IEEE Trans. Acoust., Speech, Signal Process. 35(1), 98–106 (1987). [CrossRef]  

55. F. Lacroix, A. S. Beddar, M. Guillot, L. Beaulieu, and L. Gingras, “A design methodology using signal-to-noise ratio for plastic scintillation detectors design and performance optimization,” Med. Phys. 36(11), 5214–5220 (2009). [CrossRef]  

56. J. Hubbell and S. Seltzer, “Nist standard reference database 126,” National Institute of Standards and Technology, Gaithersburg, MD (1996).

Supplementary Material (1)

Code 1 — This file contains all the software needed to replicate the results of the paper “X-ray Compton Backscattering Imaging via Structured Light.” Please see the attached PDF for a detailed description of the files.

