
Total Variation Constrained Graph Manifold Learning Strategy for Cerenkov Luminescence Tomography

Open Access

Abstract

Harnessing the power and flexibility of radiolabeled molecules, Cerenkov luminescence tomography (CLT) provides a novel technique for non-invasive visualisation and quantification of viable tumour cells in a living organism. However, owing to the photon scattering effect and the ill-posed inverse problem, CLT still suffers from insufficient spatial resolution and shape recovery in various preclinical applications. In this study, we proposed a total variation constrained graph manifold learning (TV-GML) strategy for achieving accurate spatial location, dual-source resolution, and tumour morphology. TV-GML integrates the isotropic total variation term and a dynamic graph Laplacian constraint to make a trade-off between edge preservation and piecewise smooth region reconstruction. Meanwhile, a tetrahedral mesh-Cartesian grid pair method based on the k-nearest neighbour algorithm, and an adaptive and composite Barzilai–Borwein method, were proposed to ensure global super-linear convergence of the solution of TV-GML. The comparison results of both simulation experiments and in vivo experiments further indicated that TV-GML achieved superior reconstruction performance in terms of location accuracy, dual-source resolution, shape recovery capability, robustness, and in vivo practicability. Significance: We believe that this novel method will be beneficial to the application of CLT for quantitative analysis and morphological observation in various preclinical applications and will facilitate the development of the theory of solving the inverse problem.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Cerenkov luminescence (CL) is the optical radiation emitted in a medium by fast charged particles originating as secondary decay products of PET radionuclides and radiotherapy beams [1–3]. Thus, Cerenkov luminescence imaging (CLI) is a straightforward and low-cost optical imaging modality that harnesses the power and flexibility of small, specific, radiolabeled molecules [4]. CLI has been evaluated in intraoperative imaging of radionuclides [5], surgical resection [6], intraoperative assessment of tumour resection margins [7], and radio-guided brain tumour resection surgery [8]. However, because optical transport from an internal light source through an irregular torso is highly susceptible to scattering, surface-emission CLI cannot be used reliably to infer the internal source location in many biological applications. To account for the attenuation of Cerenkov light in deep tissue, three-dimensional Cerenkov luminescence tomography (CLT) has demonstrated the capability to reconstruct the interior distribution of radioactive sources from measurements on the tissue surface [9]. Theoretically, CLT can obtain precise 3D recordings of tumour shape, size, and position, with a very rewarding application prospect. However, owing to the ill-posed inverse problem, as in other optical tomography modalities (bioluminescence and fluorescence tomography), CLT suffers from insufficient spatial resolution and shape recovery [10].

To alleviate this problem, anatomical prior information and regularisation approaches are usually required for accurate source localisation. In addition to the anatomical information provided by Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) [11], other a priori information, such as the topography of the surface light power distribution, heterogeneous optical properties, and a Cerenkov source distribution prior guided by Single-Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) [12], is required to accurately model light propagation and infer the permissible source region for more accurate CLT reconstruction. Furthermore, many regularisation techniques, such as $L_2$-norm regularisation (Tikhonov) [13], $L_1$-norm regularisation (Lasso) [14,15], $L_{2,1}$-norm optimisation [16], and $L_p$-norm ($0<p<1$) regularisation (a non-convex method) [17], are often used to alleviate the ill-posedness and obtain an approximate pseudo-solution of the inverse problem. Meanwhile, many studies have shown that the over-smoothing of $L_2$-norm regularisation and the over-sparsity of $L_1$/$L_p$-norm regularisation may decrease the reconstruction accuracy and shape recovery capability in CLT [14]. Therefore, to overcome this drawback, reconstruction algorithms based on joint regularisation (elastic net regularisation, the nonconvex Laplacian manifold joint method) have been developed for optical tomography [18,19]; these utilise the advantages of different types of regularisation terms to improve the reconstruction performance in spatial localisation.

However, for CLI in radiation therapy and image-guided tumour surgery, the boundaries of tumours, organs, and other structures are of primary importance. For this reason, tumour boundary information and morphological features are necessary for a satisfactory reconstruction. Total variation (TV), as a powerful regularisation method, has been used in various medical image reconstruction problems [20]. The underlying assumption of TV is that images consist of piecewise-constant regions. This is a reasonable assumption for tumours, where tumours with approximately constant high density are often embedded in lower-density biological tissue. Thus, the TV norm is promising for providing high-resolution images of tumours in CLT, in which edges and discontinuities are properly preserved.

In general image reconstruction problems, TV minimisation is carried out on a Cartesian grid, where each element represents a pixel in 2D or a voxel in 3D. The differential operators resulting from minimising the TV regularisation are discretised directly using the finite difference method [21]. However, numerical techniques based on the finite element method are widely used for CLT reconstruction, in which the tissue volume is discretised into a series of disjoint tetrahedral meshes. It is difficult to calculate the differential operators of TV-associated problems owing to the unstructured discretisation of complex geometries, the nonlinearity of the data-fitting and regularisation terms, and the non-differentiability of the regularisation term. To address this problem in 2D simulation, the TV norm is simply converted to $\|X\|_{TV}=\sum_{k=1}^{N_e} L_k \left| X^{k,l}-X^{k,r} \right|$, where $L_{k}$ denotes the length of the $k$-th edge, and $X^{k,l}$ and $X^{k,r}$ represent the elements to the left and right of the $k$-th edge, respectively [22]. Although TV regularisation has previously been studied in optical tomography, detailed information about the discretisation schemes has rarely been provided for 3D tomographic reconstruction.

In addition, the unwanted staircase effect of TV regularisation transforms smooth ramp regions into piecewise-constant stair regions and creates false edges in various inverse problems. To overcome this limitation, there is growing interest in regularising the TV norm by coupling it with Laplacian priors. In triangulated mesh denoising, a variational model combining TV and Laplacian regularisation terms was proposed to filter the normal vector field, which has a crucial advantage in preserving sharp features and smooth regions and in substantially suppressing staircase effects [23]. A TV image restoration method was also developed using a hyper-Laplacian prior on the image gradient, which strikes a good balance between alleviating staircase effects and preserving image details [24].

In this study, a total variation constrained graph manifold learning (TV-GML) strategy was proposed to improve the CLT reconstruction accuracy in terms of spatial location, dual-source resolution, and shape recovery capability. In the TV-GML strategy, the isotropic total variation term and a dynamic graph Laplacian constraint were incorporated to make a trade-off between edge preservation and piecewise smooth region reconstruction. Considering the characteristics of the internal source distribution and the dynamic estimation process of the internal source, a dynamic graph Laplacian matrix was introduced for tumour structure recovery based on organ-level anatomical prior information and a self-loop learning matrix updated during the iterations. In addition, to calculate the gradient of the 3D TV norm, a tetrahedral mesh-Cartesian grid pair method based on dual system discrete theory was proposed for tumour edge preservation using the $k$-nearest neighbour algorithm and a cosine criterion. Thus, the inverse problem of CLT is transformed into a series of differentiable optimisation problems. According to the non-negativity of the Cerenkov source distribution, the problem is then solved efficiently by gradient projection with an adaptive and composite Barzilai–Borwein (BB) step-size selection scheme. To our knowledge, this study is the first to combine the total variation term with a dynamic graph Laplacian constraint to alleviate the staircase effect of TV regularisation and improve 3D CLT reconstruction accuracy. Numerical simulations and in vivo experiments were conducted to validate the performance of the TV-GML method.

2. Method

2.1 Photon propagation model

The CL spectrum is a blue-weighted broad spectrum (approximately $300$–$900$ nm, peaking at $400$ nm) with high absorption in biological tissues [25]. For CLT studies, the third-order spherical harmonics ($SP_3$) model coupled with the Robin boundary condition is well suited for describing light propagation in tissues [26]. Thus, under the finite element framework, a linear relationship between the CL source and the exiting partial luminescent flux on the object surface can be depicted as

$${A_{S{P_3}}}X = {J^ + }$$
where $J^ +$ denotes the exiting partial luminescent flux on the surface, $A_{S{P_3}}$ is the system matrix, and $X$ is the unknown distribution of the CL source. In CLT research, $X$ is a function of the spatial position $p(k,m,n)$, that is, ${X_{p(k,m,n)}} = f(p(k,m,n))$, where $(k,m,n)$ is the coordinate of the point.

2.2 Reconstruction based on total variation constrained graph

2.2.1 Total variation constrained graph model

Inversely solving Eq. 1 is a well-known ill-posed problem. To reduce its ill-posedness, we proposed the TV-GML strategy, which incorporates the isotropic total variation term and a graph manifold to exploit the intrinsic geometric and structural information in CLT. The TV-GML optimisation function is given by:

$$\widehat X = \mathop {\arg \min }_X \frac{1}{2}\left\| {{A_{S{P_3}}}X - {J^ + }} \right\|_2^2 + \lambda \left\| {LX} \right\|_2^2 + \gamma {\left\| X \right\|_{TV}}$$
where $\lambda$ and $\gamma$ are penalty weighting factors, and $L$ is the dynamic graph Laplacian matrix, defined as $L=D-W+V$.

Considering that the internal CL source follows a Gaussian distribution, $W$ was designed as a Gaussian weighted matrix based on the correlation of adjacent elements, defined as:

$$\begin{aligned} &{W_{ij}} = {W_{ji}} = \left\{ \begin{array}{l} 0 \qquad \qquad {i = j} \\ exp( - \frac{{d_{i,j}^2}}{{4{R^2}}})/{\rho _{{s_k}}}, \qquad {i,j \in {s_k},i \ne j} \\ 0 \qquad {otherwise} \end{array} \right.\\ &{{\rho _{{s_k}}} = \sum_{\forall g,l \in {s_k},g \ne l} {exp( - \frac{{d_{g,l}^2}}{{4{R^2}}})} } \end{aligned}$$
where $s_{k}$ is the vertex set of the $k$-th organ, $R$ is the radius of the Gaussian kernel function, $d_{i,j}$ is the Euclidean distance between the $i$-th and $j$-th vertices, and ${\rho _{{s_k}}}$ is the region mollifier.

$D$ is a diagonal matrix whose entries are the column sums of $W$: $D=diag(\sum \limits_{j = 1}^n {{W_{ij}}})$. $V$ is a self-loop matrix, defined as the normalised density of the CL source estimate $\hat X$ from the previous iteration: $V=diag(\frac {{\hat X}_i}{max{(\hat X)}})$. Thus, the graph Laplacian matrix $L$ is defined as [19]:

$${{L_{i,j}}} = \left\{ \begin{array}{l} 1 + ({\widehat X_i})/max(\widehat X) \qquad {i = j} \\ - exp( - \frac{{d_{i,j}^2}}{{4{R^2}}})/{\rho _{{s_k}}} \qquad {i,j \in {S_k},i \ne j} \\ 0 \qquad {otherwise}\end{array}\right.$$
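As an illustration, the sketch below (Python/NumPy) shows how the dynamic graph Laplacian of Eqs. 3 and 4 could be assembled; the names `nodes`, `organ_labels`, `X_hat`, and `R` are our own assumptions and not taken from the original implementation.

```python
import numpy as np

def dynamic_graph_laplacian(nodes, organ_labels, X_hat, R):
    """Sketch of L = D - W + V (Eqs. 3-4).

    nodes        : (n, 3) mesh-vertex coordinates
    organ_labels : (n,) organ index of each vertex (organ-level prior)
    X_hat        : (n,) previous CL source estimate (self-loop term V)
    R            : radius of the Gaussian kernel
    """
    n = len(nodes)
    W = np.zeros((n, n))
    for organ in np.unique(organ_labels):
        idx = np.where(organ_labels == organ)[0]
        # Pairwise squared Euclidean distances inside organ s_k
        d2 = np.sum((nodes[idx, None, :] - nodes[None, idx, :]) ** 2, axis=-1)
        K = np.exp(-d2 / (4.0 * R ** 2))
        np.fill_diagonal(K, 0.0)               # W_ii = 0
        rho = K.sum() or 1.0                   # region mollifier rho_{s_k} (guard for single-vertex organs)
        W[np.ix_(idx, idx)] = K / rho
    D = np.diag(W.sum(axis=1))                  # column sums of the symmetric W
    V = np.diag(X_hat / X_hat.max())            # self-loop matrix from the previous estimate
    return D - W + V
```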

In our research, the isotropic total variation is defined as:

$${\left\| X \right\|_{TV}} = \int_\Omega {\left| {\nabla X} \right|} dxdydz$$
where ${\left \|. \right \|_{TV}}$ denotes the TV norm, with the discrete form defined as:
$$\begin{aligned} &{\left\| {{X_{p(k,m,n)}}} \right\|_{TV}} = \sum_{k,m,n} {\left| {\nabla {X_{p(k,m,n)}}} \right|} = \sum_{k,m,n} {u(k,m,n)}\\ &= \sum_{k,m,n}^{} {\sqrt {{{(\frac{{\partial f(k,m,n)}}{{\partial x}})}^2} + {{(\frac{{\partial f(k,m,n)}}{{\partial y}})}^2} + {{(\frac{{\partial f(k,m,n)}}{{\partial z}})}^2}} } \end{aligned}$$
where $u(k,m,n)$ represents the voxel value of the gradient image at point $p$. In traditional 3D image processing, for regular image pixels or voxels in a Cartesian grid,
$$\left\{ \begin{array}{l} \frac{{\partial f(k,m,n)}}{{\partial x}} = \frac{{{X_{p(k,m,n)}} - {X_{p(k - 1,m,n)}}}}{1} \\ \frac{{\partial f(k,m,n)}}{{\partial y}} = \frac{{{X_{p(k,m,n)}} - {X_{p(k,m - 1,n)}}}}{1} \\ \frac{{\partial f(k,m,n)}}{{\partial z}} = \frac{{{X_{p(k,m,n)}} - {X_{p(k,m,n - 1)}}}}{1} \\ \end{array} \right.$$
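For reference, a compact sketch of the discrete isotropic TV of Eqs. 6 and 7 on a regular voxel grid is shown below (NumPy, backward differences); the periodic boundary handling implied by `np.roll` is a simplifying assumption of this sketch.

```python
import numpy as np

def tv_norm(X):
    """Discrete isotropic TV of a 3D volume X (Eqs. 6-7), using
    backward differences; np.roll implies periodic boundaries."""
    dx = X - np.roll(X, 1, axis=0)   # X_{k,m,n} - X_{k-1,m,n}
    dy = X - np.roll(X, 1, axis=1)   # X_{k,m,n} - X_{k,m-1,n}
    dz = X - np.roll(X, 1, axis=2)   # X_{k,m,n} - X_{k,m,n-1}
    return np.sum(np.sqrt(dx ** 2 + dy ** 2 + dz ** 2))
```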

2.2.2 Gradient of the objective function

In solving Eq. 2, the major steps include the calculation of the gradient of the objective function and the step size in each iteration. The gradient of the objective function is derived as follows:

$$G = A_{S{P_3}}^T({A_{S{P_3}}}X - {J^ + }) + 2 \cdot \lambda {L^T}LX + \gamma \cdot \nabla {\left\| X \right\|_{TV}}$$
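A minimal sketch of Eq. 8 is given below; `A` stands for the assembled $SP_3$ system matrix, `L` for the graph Laplacian, and `grad_tv` for a routine returning $\nabla \left\| X \right\|_{TV}$ (for example the Cartesian sketch after Eq. 9 below, or its tetrahedral counterpart of Eq. 12). These names are our assumptions.

```python
import numpy as np

def objective_gradient(A, X, J_plus, L, lam, gamma, grad_tv):
    """Gradient of Eq. 2: data-fit term + graph-Laplacian term + TV term (Eq. 8)."""
    data_term = A.T @ (A @ X - J_plus)
    laplacian_term = 2.0 * lam * (L.T @ (L @ X))
    tv_term = gamma * grad_tv(X)
    return data_term + laplacian_term + tv_term
```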

In traditional 3D image processing, for regular image pixels or voxels $p(k,m,n)$, the derivative of the TV term is approximated in a similar form as follows [27]:

$$\begin{aligned} &\nabla {\left\| {{X_{p(k,m,n)}}} \right\|_{TV}} = \frac{{3{X_{p(k,m,n)}} - {X_{p(k - 1,m,n)}} - {X_{p(k,m - 1,n)}} - {X_{p(k,m,n - 1)}}}}{{u(k,m,n)}}\\ & - \frac{{{X_{p(k + 1,m,n)}} - {X_{p(k,m,n)}}}}{{u(k + 1,m,n)}} - \frac{{{X_{p(k,m + 1,n)}} - {X_{p(k,m,n)}}}}{{u(k,m + 1,n)}} - \frac{{{X_{p(k,m,n + 1)}} - {X_{p(k,m,n)}}}}{{u(k,m,n + 1)}}\\ &= \frac{{3{X_{p(k,m,n)}} - {X_{p(k - 1,m,n)}} - {X_{p(k,m - 1,n)}} - {X_{p(k,m,n - 1)}}}}{{\sqrt {{{({X_{p(k,m,n)}} - {X_{p(k - 1,m,n)}})}^2} + {{({X_{p(k,m,n)}} - {X_{p(k,m - 1,n)}})}^2} + {{({X_{p(k,m,n)}} - {X_{p(k,m,n - 1)}})}^2} + \delta } }}\\ &- \frac{{{X_{p(k + 1,m,n)}} - {X_{p(k,m,n)}}}}{{\sqrt {{{({X_{p(k + 1,m,n)}} - {X_{p(k,m,n)}})}^2} + {{({X_{p(k + 1,m,n)}} - {X_{p(k + 1,m - 1,n)}})}^2} + {{({X_{p(k + 1,m,n)}} - {X_{p(k + 1,m,n - 1)}})}^2} + \delta } }}\\ &- \frac{{{X_{p(k,m + 1,n)}} - {X_{p(k,m,n)}}}}{{\sqrt {{{({X_{p(k,m + 1,n)}} - {X_{p(k - 1,m + 1,n)}})}^2} + {{({X_{p(k,m + 1,n)}} - {X_{p(k,m,n)}})}^2} + {{({X_{p(k,m + 1,n)}} - {X_{p(k,m + 1,n - 1)}})}^2} + \delta } }}\\ &- \frac{{{X_{p(k,m,n + 1)}} - {X_{p(k,m,n)}}}}{{\sqrt {{{({X_{p(k,m,n + 1)}} - {X_{p(k - 1,m,n + 1)}})}^2} + {{({X_{p(k,m,n + 1)}} - {X_{p(k,m - 1,n + 1)}})}^2} + {{({X_{p(k,m,n + 1)}} - {X_{p(k,m,n)}})}^2} + \delta } }} \end{aligned}$$
where $\delta$ is a small positive number that avoids singularities in the derivative calculation and is set to $1\times10^{-10}$ in the algorithm. For CLT reconstruction, however, the finite element mesh used for tomography is not as regular as image pixels, which brings heavy computation and poses a great challenge to optimising the corresponding $\nabla {\left \| X \right \|_{TV}}$. Thus, for a 3D tetrahedral mesh, we proposed a tetrahedral mesh-Cartesian grid pair method to calculate the gradient of the TV norm based on dual system discrete theory.
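On a regular Cartesian grid, Eq. 9 can be written compactly with shifted arrays. The sketch below (NumPy) again assumes periodic boundary handling via `np.roll` for brevity; the finite-element case is addressed in the next subsection.

```python
import numpy as np

def grad_tv_cartesian(X, delta=1e-10):
    """Gradient of the smoothed isotropic TV norm (Eq. 9) on a voxel grid."""
    dx = X - np.roll(X, 1, axis=0)           # backward differences
    dy = X - np.roll(X, 1, axis=1)
    dz = X - np.roll(X, 1, axis=2)
    u = np.sqrt(dx ** 2 + dy ** 2 + dz ** 2 + delta)
    g = (dx + dy + dz) / u                   # contribution of u(k,m,n)
    g -= np.roll(dx / u, -1, axis=0)         # contribution of u(k+1,m,n)
    g -= np.roll(dy / u, -1, axis=1)         # contribution of u(k,m+1,n)
    g -= np.roll(dz / u, -1, axis=2)         # contribution of u(k,m,n+1)
    return g
```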

2.2.3 Tetrahedral mesh-Cartesian grid pair method for calculating $\nabla {\left \| X \right \|_{TV}}$

As shown in Fig. 1(a), a 3D spatial region can be split into a uniform Cartesian grid system. In the same space, the heart is divided into tetrahedral meshes, as shown in Fig. 1(b). The tetrahedral mesh consists of two different types of data: geometrical data describing the information associated with each individual vertex or tetrahedron, i.e. the tetrahedral nodes (shown as red dots), and topological data specifying the connectivity of the mesh (shown as grey connection lines). In Fig. 1(c), the tetrahedral mesh is plotted over a uniform Cartesian grid system. Taking point $p(k,m,n)$ as an example, an enlarged view of the sub-region is shown in Fig. 1(d). According to Eq. 9, 12 other points in particular directions from the centre point $p(k,m,n)$ are needed to calculate the corresponding $\nabla {\left \|X_{p(k,m,n)} \right \|_{TV}}$. In the Cartesian grid system, along the $x$, $y$, and $z$ axes, these 12 points can be easily determined, as shown by the black dots $p(k,m,n + 1),{\ldots },p(k,m,n - 1)$ in Fig. 1(e). In our research, a criterion was established in the tetrahedral mesh grid to determine the corresponding 12 points.

Fig. 1. Diagram of the tetrahedral mesh-Cartesian grid mapping process.

Criterion: In the tetrahedral mesh system, if the direction of the vector $\overrightarrow a$ from point $p'(k - 1,m,n)$ to $p'(k,m,n)$ is angularly closest to the direction of the vector $\overrightarrow b$ from $p(k - 1,m,n)$ to $p(k,m,n)$ in the Cartesian grid system, then $p'(k - 1,m,n)$ is the most suitable substitute for $p(k - 1,m,n)$ in calculating the gradient.

According to the included-angle formula, this criterion can be expressed as:

$$\left\{ \begin{array}{l} p'(k,m,n + 1) = arg\;\mathop{\min}\limits_p (1 - \left| {\cos < \overrightarrow a, \overrightarrow b > } \right|) \\ where \qquad p \in \Omega, p \ne p(k,m,n) \end{array} \right.$$
where $\Omega$ is the sub-region formed by the $k$ nearest elements of the centre point $p(k,m,n)$. In our study, the $k$-nearest neighbour algorithm was used to identify these points. According to this criterion, the corresponding 12 points in the tetrahedral mesh grid were determined, as shown by the pink dots in Fig. 1(f). For convenience, the point corresponding to $p(k - 1,m,n)$ in the tetrahedral mesh grid is recorded as $p'(k - 1,m,n)$ and its energy as ${X_{p'(k - 1,m,n)}}$. Note that, in an irregular tetrahedral mesh system, Eq. 7 should be modified as:
$$\left\{ \begin{array}{l} \frac{{\partial f(k,m,n)}}{{\partial x}} = \frac{{{X_{p(k,m,n)}} - {X_{p'(k - 1,m,n)}}}}{{\left\| {p(k,m,n) - p'(k - 1,m,n)} \right\|}} \\ \frac{{\partial f(k,m,n)}}{{\partial y}} = \frac{{{X_{p(k,m,n)}} - {X_{p'(k,m - 1,n)}}}}{{\left\| {p(k,m,n) - p'(k,m - 1,n)} \right\|}} \\ \frac{{\partial f(k,m,n)}}{{\partial z}} = \frac{{{X_{p(k,m,n)}} - {X_{p'(k,m,n - 1)}}}}{{\left\| {p(k,m,n) - p'(k,m,n - 1)} \right\|}} \end{array} \right.$$
where $\left \|. \right \|$ is the Euclidean distance between the two points. Thus, the derivative of the TV term in the tetrahedral mesh grid can be rewritten as:
$$\begin{aligned} &\nabla {\left\| {{X_{p(k,m,n)}}} \right\|_{TV}} =\\ &\frac{{3{X_{p(k,m,n)}} - {X_{p'(k - 1,m,n)}} - {X_{p'(k,m - 1,n)}} - {X_{p'(k,m,n - 1)}}}}{{\sqrt {{{(\frac{{{X_{p'(k - 1,m,n)}} - {X_{p(k,m,n)}}}}{{\left\| {p'(k - 1,m,n) - p(k,m,n)} \right\|}})}^2} + {{(\frac{{{X_{p'(k,m - 1,n)}} - {X_{p(k,m,n)}}}}{{\left\| {p'(k,m - 1,n) - p(k,m,n)} \right\|}})}^2} + {{(\frac{{{X_{p'(k,m,n - 1)}} - {X_{p(k,m,n)}}}}{{\left\| {p'(k,m,n - 1) - p(k,m,n)} \right\|}})}^2} + \delta } }}\\ &- \frac{{{X_{p'(k + 1,m,n)}} - {X_{p(k,m,n)}}}}{{\sqrt {{{(\frac{{{X_{p'(k + 1,m,n)}} - {X_{p(k,m,n)}}}}{{\left\| {p'{\text{(}}k + 1,m,n{\text{)}} - p{\text{(}}k,m,n{\text{)}}} \right\|}})}^2} + {{(\frac{{{X_{p(k + 1,m - 1,n)}} - {X_{p'(k + 1,m,n)}}}}{{\left\| {p'{\text{(}}k + 1,m - 1,n{\text{)}} - p{\text{'(}}k + 1,m,n{\text{)}}} \right\|}})}^2} + {{(\frac{{{X_{k + 1,m,n - 1}} - {X_{k + 1,m,n}}}}{{\left\| {p'{\text{(}}k + 1,m,n - 1{\text{)}} - p{\text{'(}}k + 1,m,n{\text{)}}} \right\|}})}^2} + \delta } }}\\ &- \frac{{{X_{p'(k,m + 1,n)}} - {X_{p(k,m,n)}}}}{{\sqrt {{{(\frac{{{X_{p'(k - 1,m + 1,n)}} - {X_{p'(k,m + 1,n)}}}}{{\left\| {p'{\text{(}}k - 1,m + 1,n{\text{)}} - p{\text{(}}k,m,n{\text{)}}} \right\|}})}^2} + {{(\frac{{{X_{p(k,m,n)}} - {X_{p'(k,m + 1,n)}}}}{{\left\| {p{\text{(}}k,m,n{\text{)}} - p'{\text{(}}k,m + 1,n{\text{)}}} \right\|}})}^2} + {{(\frac{{{X_{p'(k,m + 1,n - 1)}} - {X_{p'(k,m + 1,n)}}}}{{\left\| {p'{\text{(}}k,m + 1,n - 1{\text{)}} - p{\text{(}}k,m + 1,n{\text{)}}} \right\|}})}^2} + \delta } }}\\ &- \frac{{{X_{p'(k,m,n + 1)}} - {X_{pk,m,n}}}}{{\sqrt {{{(\frac{{{X_{p'(k - 1,m,n + 1)}} - {X_{p'(k,m,n + 1)}}}}{{\left\| {p'{\text{(}}k + 1,m,n{\text{)}} - p'{\text{(}}k,m,n + 1{\text{)}}} \right\|}})}^2} + {{(\frac{{{X_{p'(k,m - 1,n + 1)}} - {X_{p'(k,m,n + 1)}}}}{{\left\| {p'{\text{(}}k,m - 1,n + 1{\text{)}} - p{\text{(}}k,m,n + 1{\text{)}}} \right\|}})}^2} + {{(\frac{{{X_{p(k,m,n)}} - {X_{p'(k,m,n + 1)}}}}{{\left\| {p{\text{(}}k,m,n{\text{)}} - p{\text{(}}k,m,n + 1{\text{)}}} \right\|}})}^2} + \delta } }} \end{aligned}$$
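The neighbour-selection step of the tetrahedral mesh-Cartesian grid pair method can be sketched as follows (Python, using `scipy.spatial.cKDTree` for the $k$-nearest-neighbour search). The function and variable names are ours, and the choice `k=20` is an assumption rather than a value reported in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

# The six Cartesian axis directions (vectors b of Eq. 10)
CARTESIAN_DIRS = np.array([[ 1, 0, 0], [-1, 0, 0],
                           [ 0, 1, 0], [ 0, -1, 0],
                           [ 0, 0, 1], [ 0, 0, -1]], dtype=float)

def grid_pairs(nodes, k=20):
    """For each mesh vertex, pick the k-NN neighbour whose direction is
    angularly closest (Eq. 10) to each Cartesian axis direction."""
    tree = cKDTree(nodes)
    _, knn = tree.query(nodes, k=k + 1)     # k+1: the first hit is the vertex itself
    pairs = np.empty((len(nodes), 6), dtype=int)
    for i, neigh in enumerate(knn):
        neigh = neigh[neigh != i]                       # drop the vertex itself
        a = nodes[neigh] - nodes[i]                     # candidate vectors a
        a /= np.linalg.norm(a, axis=1, keepdims=True)
        cos = np.abs(a @ CARTESIAN_DIRS.T)              # |cos<a, b>| per axis
        pairs[i] = neigh[np.argmax(cos, axis=0)]        # minimise 1 - |cos|
    return pairs
```

The distances between a vertex and its selected pairs then provide the denominators of Eq. 11.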

2.2.4 Updating scheme in iterations by adaptive BB method

To solve Eq. 2, according to the non-negativity of the Cerenkov source distribution, the gradient projection (GP) algorithm was used to update $\widehat X$ at the $(n+1)$-th iteration as:

$${\widehat X_{n + 1}} = \max ({\widehat X_n} - {\alpha _n}{\rho _n},0)$$
where ${\alpha _n}$ denotes the step size at iteration $n$ computed by a line search algorithm. ${\rho _n}$ is the projected gradient, which is calculated as:
$${\rho _n}(l) = \left\{ \begin{array}{l} {{G_n}(l)} \;\;\; if \;\;\; {{G_n}(l) \leqslant 0,} \;\;\; or \;\;\; {{{\widehat X}_n}(l) > 0} \\ 0 \quad{otherwise} \end{array} \right.$$
Here, $l$ is the voxel position index.
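A minimal sketch of the non-negative gradient-projection update of Eqs. 13 and 14 (NumPy; function names are ours):

```python
import numpy as np

def projected_gradient(G, X):
    """Eq. 14: keep the gradient only where a step can change X
    without violating non-negativity."""
    return np.where((G <= 0) | (X > 0), G, 0.0)

def gp_step(X, G, alpha):
    """Eq. 13: projected gradient step with non-negativity constraint."""
    return np.maximum(X - alpha * projected_gradient(G, X), 0.0)
```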

Generally, for the large-scale and ill-conditioned optimisation problem in CLT, calculating the exact step length ${\alpha _n}$ by line search is very slow. Motivated by the super-linear convergence behaviour of BB methods [28], in our study ${\alpha _n}$ was selected by an efficient adaptive and composite BB method.

Following the BB method, $\alpha _n^{BB1}$ and $\alpha _n^{BB2}$ are the optimal solutions of the optimisation problems:

$$\left\{ \begin{array}{l} \alpha _n^{BB1}{\text{ = }}\mathop {\min }\limits_\alpha {\left\| {{\alpha ^{ - 1}}{s_{n - 1}} - {y_{n - 1}}} \right\|^2} \\ \alpha _n^{BB2}{\text{ = }}\mathop {\min }\limits_\alpha {\left\| {\alpha {y_{n - 1}} - {s_{n - 1}}} \right\|^2} \end{array} \right.$$
where ${s_{n - 1}} = {\widehat X_n} - {\widehat X_{n - 1}}$ and ${y_{n - 1}} = {G_n} - {G_{n - 1}}$. Thus, $\alpha _n^{BB1}$ and $\alpha _n^{BB2}$ are calculated as:
$$\left\{ \begin{array}{l} \alpha _n^{BB1} = \frac{{{{({{\widehat X}_n} - {{\widehat X}_{n - 1}})}^T}({{\widehat X}_n} - {{\widehat X}_{n - 1}})}}{{{{({{\widehat X}_n} - {{\widehat X}_{n - 1}})}^T}({G_n} - {G_{n - 1}})}} \\ \alpha _n^{BB2} = \frac{{{{({{\widehat X}_n} - {{\widehat X}_{n - 1}})}^T}({G_n} - {G_{n - 1}})}}{{{{({G_n} - {G_{n - 1}})}^T}({G_n} - {G_{n - 1}})}} \end{array} \right.$$
Notably, when compared with each other, the potential inferiority of $\alpha _n^{BB1}$ and $\alpha _n^{BB2}$ can be observed from:
$$\left\{ \begin{array}{l} {\left\| {\alpha _n^{{\text{BB}}1}{y_{n - 1}} - {s_{n - 1}}} \right\|^2} \geqslant {\left\| {\alpha _n^{{\text{BB}}2}{y_{n - 1}} - {s_{n - 1}}} \right\|^2} \\ {\left\| {{{\left( {\alpha _n^{{\text{BB}}2}} \right)}^{ - 1}}{s_{n - 1}} - {y_{n - 1}}} \right\|^2} \geqslant {\left\| {{{\left( {\alpha _n^{{\text{BB}}1}} \right)}^{ - 1}}{s_{n - 1}} - {y_{n - 1}}} \right\|^2} \end{array} \right.$$
Therefore, to integrate the advantages of the two BB step sizes, the composite step size updates its weights at each iteration according to the quality of the two BB step sizes:
$$\alpha _n^{{\text{CBB}}} = \frac{{{R_2}}}{{{R_1} + {R_2}}}\alpha _n^{{\text{BB}}1} + \frac{{{R_1}}}{{{R_1} + {R_2}}}\alpha _n^{{\text{BB}}2}$$
where
$$\left\{ \begin{array}{l} {R_1} = {\left\| {\alpha _n^{{\text{BB}}1}{y_{n - 1}} - {s_{n - 1}}} \right\|^2}\\ {R_2} = {\left\| {{{\left( {\alpha _n^{{\text{BB}}2}} \right)}^{ - 1}}{s_{n - 1}} - {y_{n - 1}}} \right\|^2} \end{array} \right.$$
Denoting $\kappa = {R_2}/\left ( {{R_1} + {R_2}} \right )$, Eq. 18 can be written as:
$$\alpha _n^{{\text{CBB}}} = \kappa \alpha _n^{{\text{BB}}1} + \left( {1 - \kappa } \right)\alpha _n^{{\text{BB}}2}$$
where $0 < \kappa < 1$. Observe that $\alpha _n^{{\text {CBB}}}$ is the weighted mean of $\alpha _n^{BB1}$ and $\alpha _n^{BB2}$, and $\kappa$ is updated adaptively at each iteration.
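The adaptive composite BB step of Eqs. 16–20 can be sketched as follows (NumPy); the small `eps` guard against division by zero is our own addition.

```python
import numpy as np

def composite_bb_step(X, X_prev, G, G_prev, eps=1e-12):
    """Adaptive composite BB step size (Eqs. 16-20)."""
    s = X - X_prev                              # s_{n-1}
    y = G - G_prev                              # y_{n-1}
    bb1 = (s @ s) / (s @ y + eps)               # alpha_n^{BB1}
    bb2 = (s @ y) / (y @ y + eps)               # alpha_n^{BB2}
    R1 = np.sum((bb1 * y - s) ** 2)             # Eq. 19
    R2 = np.sum((s / bb2 - y) ** 2)
    kappa = R2 / (R1 + R2 + eps)                # weight of Eq. 20
    return kappa * bb1 + (1.0 - kappa) * bb2
```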

In summary, the flowchart of the proposed TV-GML method is presented, as shown in Fig. 2.
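Putting the pieces together, the overall TV-GML iteration summarised in Fig. 2 might look like the following sketch. It reuses the hypothetical helpers defined above; the initialisation, stopping criterion, and default parameter values are our assumptions, not the authors' settings.

```python
import numpy as np

def tv_gml_reconstruct(A, J_plus, nodes, organ_labels, grad_tv,
                       lam=1e-3, gamma=1e-3, R=1.0,
                       max_iter=1000, tol=1e-6):
    """Sketch of the TV-GML loop: dynamic Laplacian update, gradient,
    non-negative projection, and adaptive composite BB step."""
    X = np.ones(A.shape[1])                 # non-negative initial guess
    X_prev = X.copy()
    G_prev = np.zeros_like(X)
    alpha = 1.0                             # step size for the first iteration
    for it in range(max_iter):
        # Self-loop matrix V changes with the current estimate, hence "dynamic"
        L = dynamic_graph_laplacian(nodes, organ_labels, X, R)
        G = objective_gradient(A, X, J_plus, L, lam, gamma, grad_tv)
        if it > 0:
            alpha = composite_bb_step(X, X_prev, G, G_prev)
        X_prev, G_prev = X, G
        X = gp_step(X, G, alpha)
        if np.linalg.norm(X - X_prev) / (np.linalg.norm(X_prev) + 1e-12) < tol:
            break
    return X
```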

Fig. 2. The flowchart of the proposed TV-GML method.

3. Experiments

To evaluate the performance of the TV-GML method, simulation experiments based on a digital mouse and in vivo experiments were conducted. Three regularisation algorithms were employed for comparison: the incomplete variables truncated conjugate gradient (IVTCG) algorithm based on the $\ell _{1}$-norm [15], the Gaussian weighted Laplace prior (GWLP) algorithm based on the $\ell _{2}$-norm, and the two-step iterative shrinkage/thresholding (TwIST) algorithm based on the anisotropic TV norm [29]. To quantify the performance of each algorithm, the location error (LE), DICE similarity coefficient (DICE), contrast-to-noise ratio (CNR), root mean square error (RMSE), relative volume error (RVE), and relative intensity error (RIE) were calculated, as defined in section 1 of the supplementary materials, and compared. All programs were executed on a computer with an Intel Core i7-6700 CPU (3.40 GHz) and 32 GB RAM.
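The exact metric definitions are given in the supplementary materials; for orientation only, the two most frequently quoted ones can be sketched in their standard forms (an assumption on our part, not the supplement's definitions):

```python
import numpy as np

def location_error(recon_centre, true_centre):
    """LE: Euclidean distance between the reconstructed and true source centres."""
    return np.linalg.norm(np.asarray(recon_centre) - np.asarray(true_centre))

def dice_coefficient(recon_mask, true_mask):
    """DICE: overlap between the reconstructed and true source regions."""
    recon_mask = np.asarray(recon_mask, dtype=bool)
    true_mask = np.asarray(true_mask, dtype=bool)
    inter = np.logical_and(recon_mask, true_mask).sum()
    return 2.0 * inter / (recon_mask.sum() + true_mask.sum())
```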

3.1 Numerical simulation experiments

In this part of the study, the torso section of the commonly used digital mouse [30], with a height of $35$ mm, was employed, including the muscle, heart, liver, lung, stomach, and kidney, with empirical optical parameters at $650$ nm [14]. To verify the size recovery capability of TV-GML, two single-source simulations based on spherical sources with different radii were conducted, as shown in Fig. 3(a)-3(b). Furthermore, to verify the multi-source reconstruction performance, two dual-source simulations were conducted, as shown in Fig. 3(c)-3(d). The detailed source information is listed in Table S1 of the supplementary materials. In all simulations, the molecular optical simulation environment (MOSE) platform [31] based on the Monte Carlo method was employed to simulate the surface flux distribution, as shown in Fig. 3(e)-3(h). To mimic measurement uncertainty and inevitable noise, $10\%$ additive Gaussian noise was added to the surface-simulated measurements. For the inverse reconstruction, the digital mouse model was discretised into a uniform tetrahedral mesh with $7,433$ nodes and $38,277$ tetrahedral elements.
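The noise model can be reproduced, for example, as additive Gaussian noise whose standard deviation is 10% of each surface measurement; this is our interpretation of "10% additive Gaussian noise", not a detail confirmed by the paper.

```python
import numpy as np

def add_measurement_noise(J_plus, level=0.10, seed=0):
    """Additive Gaussian noise with std = `level` x signal on the
    surface-simulated measurements (an assumed noise model)."""
    rng = np.random.default_rng(seed)
    return J_plus + level * J_plus * rng.standard_normal(np.shape(J_plus))
```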

Fig. 3. Digital tumour mouse model and surface-simulated measurements used in the numerical simulation experiments. (a)-(b) two spherical sources with different radii used in the single-source simulations; (c)-(d) two dual-source simulation models; (e)-(h) the corresponding 3D CLIs (colormap, $650$ nm).

3.2 In vivo experiments

In the in vivo experiments, three groups of female BALB/c nude mice ($6$–$8$ weeks old) were used to test the algorithm performance in different situations. All animal experimental protocols were approved by the Animal Ethics Committee of the Northwest University of China (No. NWU-AWC-20210901M). All procedures were performed in strict accordance with the appropriate institutional guidelines for animal research. To minimise suffering, the data acquisition procedure was performed under isoflurane gas anaesthesia ($3\%$ isoflurane-air mixture).

For the first model, a small transparent plastic catheter (diameter $1$ mm, length $5$ mm) filled with approximately $5\ \mu L$ of [$^{18}$F] fluoro-2-deoxy-D-glucose ($^{18}$F-FDG) ($40 \pm 50\ \mu Ci$) was sewn into the liver of the mouse to mimic an early deep-seated micro-tumour. For the second model, a normal mouse was used to investigate the accumulation of $^{18}$F-FDG in the bladder. In this experiment, the mice were injected with $800 \pm 50\ \mu Ci$ of $^{18}$F-FDG through the tail vein. After $40$ min, when the $^{18}$F-FDG had circulated to the bladder, surface CLI was acquired. For the third model, a $10\ \mu L$ volume containing $1 \times 10^6$ 4T1 tumour cells was injected subcutaneously into the back of each mouse, with the injection points separated by approximately $5$ mm. Five days after injecting the tumour cells, the mice were injected with approximately $200 \pm 50\ \mu Ci$ of $^{18}$F-FDG to collect the CLI signal. The time point used for all in vivo imaging was the same as in Ref. [11]. Detailed data acquisition procedures and true source information are given in section 3 of the supplementary materials.

4. Results

4.1 Numerical simulation experiments

4.1.1 Single-source reconstruction results

Figure 4 shows the results of the four different methods for single-source reconstruction, where Fig. 4(a)–4(d) are the reconstruction results of $\ell _2$-norm regularisation, $\ell _1$-norm regularisation, TV regularisation, and our proposed TV-GML method, respectively, and Fig. 4(a1)–4(d1) and Fig. 4(a2)–4(d2) correspond to the reconstructed results of the sources with different sizes. The reconstructed Cerenkov source is represented by a red area in the 3D view. In the 2D image of the cross-section through the real source centre ($Z=19$ mm), the white circle represents the actual edge of the real source. For better statistics and comparison, Table 1 lists the quantitative analysis metrics of the four algorithms. For convenience, the best and worst values are highlighted in blue and red, respectively.

Fig. 4. Reconstruction of the single source. The 3D rendering and 2D tomographic slices are shown for comparison. The white outline indicates the real source location and shape. (a)-(d) show the results of $\ell _{2}$-norm regularisation, $\ell _{1}$-norm regularisation, TV regularisation, and the proposed TV-GML method, respectively; (a1)-(d1) and (a2)-(d2) correspond to the reconstructed results of the source with different sizes, respectively.


Table 1. Quantitative results of single source reconstruction.

From Fig. 4(a1)–4(d1), it can be seen that the $\ell _2$-norm regularisation algorithm is limited by over-smoothness, and there are some artefacts in the reconstructed image. Therefore, for the reconstruction of the small light source with a radius of 1 mm, its errors in LE, RVE, RMSE, and RIE are the largest, and its CNR and DICE coefficients are the smallest. Meanwhile, comparing Fig. 4(a1) and Fig. 4(a2), the reconstruction quantification metrics of the large source with a radius of $1.5$ mm are significantly improved compared with those of the small source. To some extent, this shows that the $\ell _2$-norm regularisation algorithm is more suitable for the reconstruction of large light sources. In contrast, the $\ell _1$-norm based regularisation algorithm has smaller positioning and energy errors for the reconstruction of a light source with a small volume. However, for a light source with a large volume, its reconstruction results show a certain over-sparsity: the reconstructed light source is concentrated at the source centre, and the volume and energy errors are the largest, as shown in Fig. 4(b2). The TV-norm and TV-GML reconstruction algorithms perform well in terms of source distribution and intensity recovery regardless of the source size. However, the morphology of the spherical source recovered by the TV norm suffers some loss owing to the staircase effect, resulting in a slightly larger positioning error than that of the TV-GML algorithm. In conclusion, the proposed method best locates the light source and reconstructs its shape and volume, with the smallest reconstruction errors ($LE<0.2$ mm, $RIE<0.1$, $RVE<0.02$, $RMSE<0.041$) and the highest CNR and DICE similarity.

4.1.2 Dual-source reconstruction results

Figure 5(a1)–5(d1) correspond to the reconstructed results of the first simulation model, with an edge-to-edge distance of $2$ mm. In the 3D view, the $\ell _2$-norm offered the worst and most scattered reconstruction with insufficient convergence; it lost much spatial resolution and incorrectly merged the two sources into one owing to its over-smooth characteristic. The $\ell _1$-norm, TV-norm, and our proposed TV-GML algorithms successfully restored two clearly distinguishable sources in the 3D view, but the $\ell _1$-norm and TV-norm based algorithms still exhibited over-sparsity and irregular artefacts. In both the 2D and 3D views, the two sources can be restored distinctly by our proposed method, which shows the best performance in positioning accuracy, artefact suppression, and shape recovery.

Fig. 5. Dual-source reconstruction comparison results. The reconstructed sources are displayed in 3D view and 2D axial view. The white outline indicates the real source location and shape. (a)-(d) show the results of $\ell _{2}$-norm regularisation, $\ell _{1}$-norm regularisation, TV regularisation, and our proposed TV-GML method, respectively; (a1)-(d1) and (a2)-(d2) correspond to the reconstructed results of the sources with different sizes, respectively.

To make the CLT reconstruction more challenging, two sources with different sizes were reconstructed, as shown in Fig. 5(a2)–5(d2). Similar to the previous single-source experiment, the $\ell _1$-norm algorithm was limited by over-sparsity, and the recovered source has the largest size deviation, especially for the source with a radius of $1.5$ mm, whereas the $\ell _2$-norm was compromised by over-smoothness with a large barycentre offset. Although the TV-norm based algorithm performs well in terms of location and dual-source resolution in the 2D view, some sporadic artefacts occur in the 3D view. Compared with the three other algorithms, TV-GML still provided the best reconstruction result, showing the smallest position errors (LE: $0.22$ mm and $0.20$ mm) and the largest overlap with the two real sources (DICE $84\%$). The overall quantitative comparison in Table 2 revealed that TV-GML was superior to all the other methods, with the best performance in terms of spatial resolution and shape recovery capability; the second-best method was the TV-norm based algorithm.


Table 2. Quantitative results of dual source reconstruction.

4.2 In vivo experiments

For the in vivo experiments, it should be explained that, because the surface-detected grayscale images and the internal Cerenkov source are not calibrated against the actual number of photons, all evaluation metrics related to source intensity (i.e. RIE and RMSE) were unavailable. Furthermore, to obtain the ground truth, CT/PET image data were fused using a feature-based 3D registration method to segment the actual Cerenkov source region, and the actual Cerenkov source was overlaid with the reconstructed results to validate the in vivo CLT reconstructions. The reconstruction information and quantitative results are presented in Table 3.


Table 3. Quantitative results of in vivo CLT reconstruction.

4.2.1 Results of implanted source experiment

Figure 6(a)–6(d) show the results of the implanted source experiment using the four different methods. The full view and a partially enlarged image are presented in the 3D view. In addition, through the true source centre, the coronal view (C), sagittal view (S), and transversal view (T) are also displayed in the subfigures. From the 3D view and the transversal view, owing to the over-smoothness of the $\ell _{2}$-norm based algorithm, there was a lack of convergence, resulting in a scattered reconstruction (the smallest CNR, $0.075$) and a significant deviation in geometric volume (RVE $>0.17$). Although TV can overcome the over-smoothing problem of the $\ell _{2}$-norm, it still shows a deviation in spatial position (LE $0.47$ mm) and the smallest overlap with the actual light source (DICE $50\%$). For the $\ell _{1}$-norm and TV-GML, the reconstructed areas were relatively similar, with large coverage of the actual tube cavity. The quantitative analysis in Table 3 further reveals that TV-GML achieved the best performance among the compared methods in terms of positioning (LE $0.3$ mm), shape similarity (DICE $81\%$), and image quality (CNR $0.093$). However, in terms of volume recovery, the $\ell _{1}$-norm was more suitable for this small-source reconstruction, with the smallest RVE of $0.034$; TV-GML was the second best, with an RVE of $0.042$.

Fig. 6. Implanted source reconstruction results. (a)-(d) show the results of $\ell _{2}$-norm regularisation, $\ell _{1}$-norm regularisation, TV regularisation, and our proposed TV-GML method, respectively. The reconstruction results are displayed in 3D view, coronal view (C), sagittal view (S), and transversal view (T).

4.2.2 Results of bladder reconstruction experiment

We further compared the performance of the different methods in expressing the accumulation of $^{18}$F-FDG in the bladder. Figure 7 shows the reconstruction results of the four methods. In Fig. 7(b), the reconstructed high-signal region only appeared in a few isolated tetrahedra (RVE $>0.4$), causing a "shrunken" reconstruction that is markedly smaller than the bladder area defined by the CT atlas (shown schematically as the semi-transparent irregular ball). These results confirm that the $\ell _{1}$-norm still provided over-sparse reconstructions. The distribution of $^{18}$F-FDG given by the other three methods avoided the over-sparse problem and showed a certain accumulation effect. However, the transversal view in Fig. 7(a) shows that the source area reconstructed by the $\ell _{2}$-norm deviated slightly outward from the actual source centre and attached to the object surface (LE $1.1$ mm). Compared with the true bladder image, the Cerenkov source reconstructed by the TV norm was still slightly over-smooth, with the second largest RVE of $0.255$, leading to some artefacts and a significant overestimation of the $^{18}$F-FDG agglomeration in the 3D view, as shown in Fig. 7(c). TV-GML successfully overcame both the over-smoothing and over-sparsity problems, obtaining the best reconstruction image (CNR $0.093$). Meanwhile, the reconstructed $^{18}$F-FDG agglomeration area overlapped well with the actual bladder (largest DICE $79\%$), offering significant consistency in shape (smallest RVE $0.021$) and position recovery (smallest LE $0.37$ mm) in the 3D and 2D views.

Fig. 7. Bladder reconstruction results. (a)-(d) show the results of $\ell _{2}$-norm regularisation, $\ell _{1}$-norm regularisation, TV regularisation, and our proposed TV-GML method, respectively. The reconstruction results are displayed in 3D view, coronal view (C), sagittal view (S), and transversal view (T).

4.2.3 Results of double subcutaneous tumour experiment

To evaluate the resolving power for multiple sources, an in vivo imaging experiment based on the 4T1 breast tumour mouse model was performed to identify two tiny subcutaneous tumours. Based on PET/CT/CLT registration, the double subcutaneous tumours (translucent spheres) and the reconstructed Cerenkov luminescence signals were fused in the CT coordinate system, as shown in Fig. 8, where S1-T1 and S2-T2 are the 2D cross-sectional views through the centres of the two different sources. The comparison results are as follows: 1) All four compared algorithms showed relatively consistent performance and good dual-source resolution. 2) The results of the $\ell _{2}$-norm algorithm showed evident over-smoothing and suffered from many image artefacts, as shown in the transversal view of Fig. 8(a), leading to significant deviations in shape (RVE $>0.3$) and positioning (LE $>0.53$ mm). 3) Although the $\ell _{1}$-norm was able to restrict reconstruction artefacts to some degree, the reconstructed areas were still smaller than the corresponding tumour regions owing to over-sparsity, as shown in the enlarged 3D view in Fig. 8(b). 4) Benefiting from the dynamic learning strategy and the 3D gradient information, our proposed algorithm performs better than the TV norm in artefact suppression and 3D morphological restoration. 5) For all methods except TV-GML, the reconstructed area of the left tumour was biased towards the surface of the biological tissue (LE $>0.7$ mm), and the two reconstructed sources did not achieve the same level of accuracy; this suggests that TV-GML can establish global convergence for CLT. 6) For the TV-GML method, the reconstructed tumours overlapped closely with the PET-defined tumour regions, providing the best accuracy in all four quantitative metrics.

Fig. 8. Double subcutaneous tumour reconstruction results in 3D and cross-sectional view. (a)-(d) show the results of $\ell _{2}$-norm regularisation, $\ell _{1}$-norm regularisation, TV regularisation, and our proposed TV-GML method, respectively. S1-T1 and S2-T2 are the two-dimensional coronal view (C), sagittal view (S), and transversal view (T) of the centres of two different sources.

5. Discussion and conclusion

In this study, a novel TV-GML strategy was proposed to improve the spatial location, dual-source resolution, and shape-recovery capability in preclinical CLT applications. The TV-GML strategy has the following four distinctive innovations and characteristics: 1) graph manifold learning based on a dynamic graph Laplacian matrix, which makes full use of the organ-level anatomical information, the Gaussian distribution characteristic, and the self-loop learning matrix to overcome the unwanted staircase effect of TV regularisation and guarantee the sparsity and morphology of the internal source distribution. 2) Based on the included-angle formula and the $k$-nearest neighbour algorithm, tetrahedral mesh-Cartesian grid pairs were determined to calculate the differential operators of the 3D finite element-based TV norm. 3) To effectively solve the ill-posed and large-scale optimisation problem, the adaptive and composite BB method was adopted to ensure global super-linear convergence of the solution. 4) Compared with $\ell _{1}$-norm, $\ell _{2}$-norm, and TV-norm regularisation, the TV-GML strategy can better control the sparsity, morphology, and edge preservation of the Cerenkov source based on the prior structure and the source distribution information generated in the previous iteration. Thus, the TV-GML strategy can not only alleviate the ill-posed inverse problem but also make a trade-off between edge preservation and piecewise smooth region reconstruction.

The TV-GML strategy was validated intensively in two groups of numerical simulations and three in vivo experiments. To ensure sufficient convergence, the number of iterations and the tolerance of each algorithm were empirically set to 1000 and $10^{-6}$, respectively. Meanwhile, to guarantee their best performance and a fair comparison, the regularisation parameters of each method were determined using the generalised cross-validation method [19]. In the three in vivo experiments, to avoid the inherent error caused by finite element discretisation, the reconstructed CLT was interpolated into the CT image for a more comprehensive evaluation of algorithm performance. Meanwhile, the tumour PET image, as the ultimate gold standard, was fused to describe the actual morphological information. Because the in vivo CLT reconstructions were performed from a single view, the reconstruction accuracy decreased slightly compared with the numerical simulations. Nevertheless, the three different types of in vivo experiments further proved the superiority of the TV-GML strategy over its competitors, providing the best reconstruction results with the smallest position and volume errors (LE $<0.4$ mm, RVE $<0.05$) and the largest overlap with the real sources (DICE $>78\%$). It should be noted that, in order to evaluate the algorithm performance alone, no additional optimisation strategy was used in our experiments. With the help of effective reconstruction strategies, such as a permissible source region based on PET and CT multi-modal soft priors, the performance of our proposed method could be further improved.

It should be added that the proposed TV regularisation is isotropic. In theory, only the isotropic TV is rotationally invariant, whereas the anisotropic TV is not, because it penalises each coordinate direction separately. Consequently, the rotational variance of the anisotropic TV can lead to geometric distortions in the recovered images; in particular, boundaries in the images tend to align with the coordinate axes. Our experiments also indicate that anisotropic TV regularisation yields some irregular artefacts, whereas isotropic TV regularisation recovers the source distribution with clear edges and high image contrast. However, compared with anisotropic TV regularisation, isotropic TV regularisation is not easy to implement, especially within a finite element reconstruction framework, and the proposed tetrahedral mesh-Cartesian grid pair method provides a new way to build and solve isotropic TV regularisation.

Although the TV-GML strategy performed well in CLT reconstruction, it still has some drawbacks. First, to calculate the gradient of the TV norm, $12$ points in the tetrahedral mesh grid must be determined from a sub-region based on the $k$-nearest neighbour algorithm. Inevitably, the choice of $k$ influences the selection of these $12$ points and thus restricts the computational accuracy of the TV-norm gradient: a larger $k$ yields a more accurate gradient but requires more time. Thus, further research is needed on how to automatically determine the value of $k$ for different finite element meshes. Furthermore, in the in vivo experiments, the reconstructed source intensity does not indicate the actual number of Cerenkov photons, the concentration of $^{18}$F-FDG, or quantitative information about the tumour cells. As far as we know, quantitative analysis of tumour cells is still a challenging problem in optical molecular tomography, and optical calibration [32] and biometric methods [33] should be adopted to reflect the number of photons or tumour cells in quantitative research on CLT reconstruction. Finally, the TV-GML strategy will be further verified in various preclinical applications of X-ray luminescence computed tomography, Cerenkov-excited luminescence tomography [34], 3D NIR-II imaging [35], and other novel optical tomography techniques.

In conclusion, the proposed TV-GML strategy can improve the CLT reconstruction accuracy of the spatial location, dual-source resolution, and shape-recovery capability. It inherited the benefits of both the dynamic Laplacian graph model and the TV-norm regularisation and also overcame the major limitations of the existing $\ell _{2}-$norm, $\ell _{1}-$norm, and TV-norm. We believe that this novel method will benefit the application of CLT for quantitative analysis and morphological observation of various preclinical applications and facilitate the development of the theory of solving the inverse problem.

Funding

National Natural Science Foundation of China (11871321, 61901374, 61906154, 61971350); Natural Science Foundation of Shaanxi Province (2019JQ-724); Scientific and Technological projects of Xi’an (201805060ZD11CG44); Key Research and Development Projects of Shaanxi Province (2020SF-036); Xi’an Science and Technology Project (2019218214GXRC018CG019-GXYD18.3).

Acknowledgment

The authors would like to thank the CAS Key Laboratory of Molecular Imaging for providing part of the in vivo experimental data.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the in vivo results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. G. S. Mitchell, R. K. Gill, D. L. Boucher, C. Li, and S. R. Cherry, “In vivo cerenkov luminescence imaging: a new tool for molecular imaging,” Phil. Trans. R. Soc. A. 369(1955), 4605–4619 (2011). [CrossRef]  

2. J. Axelsson, S. C. Davis, D. J. Gladstone, and B. W. Pogue, “Cerenkov emission induced by external beam radiation stimulates molecular fluorescence,” Med. Phys. 38(7), 4127–4132 (2011). [CrossRef]  

3. R. Robertson, M. S. Germanos, C. Li, G. S. Mitchell, S. R. Cherry, and M. D. Silva, “Optical imaging of cerenkov light generation from positron-emitting radiotracers,” Phys. Med. Biol. 54(16), N355–N365 (2009). [CrossRef]  

4. J. S. Klein, G. S. Mitchell, and S. R. Cherry, “Quantitative assessment of cerenkov luminescence for radioguided brain tumor resection surgery,” Phys. Med. Biol. 62(10), 4183–4201 (2017). [CrossRef]  

5. P. Mascagni, F. Longo, M. Barberio, B. Seeliger, V. Agnus, P. Saccomandi, A. Hostettler, J. Marescaux, and M. Diana, “New intraoperative imaging technologies: Innovating the surgeon’s eye toward surgical precision,” J. Surg. Oncol. 118(2), 265–282 (2018). [CrossRef]  

6. D. Y. Lewis, R. Mair, A. Wright, K. Allinson, S. K. Lyons, T. Booth, J. Jones, R. Bielik, D. Soloviev, and K. M. Brindle, “[18f] fluoroethyltyrosine-induced cerenkov luminescence improves image-guided surgical resection of glioma,” Theranostics 8(14), 3991–4002 (2018). [CrossRef]  

7. M. R. Grootendorst, M. Cariati, S. E. Pinder, A. Kothari, M. Douek, T. Kovacs, H. Hamed, A. Pawa, F. Nimmo, J. Owen, V. Ramalingam, S. Sethi, S. Mistry, K. Vyas, D. S. Tuch, A. Britten, M. Van Hemelrijck, G. J. Cook, C. Sibley-Allen, S. Allen, and A. Purushotham, “Intraoperative assessment of tumor resection margins in breast-conserving surgery using 18f-fdg cerenkov luminescence imaging: a first-in-human feasibility study,” J. Nucl. Med. 58(6), 891–898 (2017). [CrossRef]  

8. E. Ciarrocchi, C. Vanhove, B. Descamps, S. De Lombaerde, S. Vandenberghe, and N. Belcari, “Performance evaluation of the lightpath imaging system for intra-operative cerenkov luminescence imaging,” Phys. Medica 52, 122–128 (2018). [CrossRef]  

9. Z. Hu, J. Liang, W. Yang, W. Fan, C. Li, X. Ma, X. Chen, X. Ma, X. Li, X. Qu, J. Wang, F. Cao, and J. Tian, “Experimental cerenkov luminescence tomography of the mouse model with spect imaging validation,” Opt. Express 18(24), 24441–24450 (2010). [CrossRef]  

10. Y. Xu, H. Liu, and Z. Cheng, “Harnessing the power of radionuclides for optical imaging: Cerenkov luminescence imaging,” J. Nucl. Med. 52(12), 2009–2018 (2011). [CrossRef]  

11. M. Liu, H. Guo, H. Liu, Z. Zhang, C. Chi, H. Hui, D. Dong, Z. Hu, and J. Tian, “In vivo pentamodal tomographic imaging for small animals,” Biomed. Opt. Express 8(3), 1356–1371 (2017). [CrossRef]  

12. Z. Hu, X. Chen, J. Liang, X. Qu, D. Chen, W. Yang, J. Wang, F. Cao, and J. Tian, “Single photon emission computed tomography-guided cerenkov luminescence tomography,” J. Appl. Phys. 112(2), 024703 (2012). [CrossRef]  

13. C. Li, G. S. Mitchell, and S. R. Cherry, “Cerenkov luminescence tomography for small-animal imaging,” Opt. Lett. 35(7), 1109–1111 (2010). [CrossRef]  

14. M. Cai, Z. Zhang, X. Shi, J. Yang, Z. Hu, and J. Tian, “Non-negative iterative convex refinement approach for accurate and robust reconstruction in cerenkov luminescence tomography,” IEEE Trans. Med. Imaging 39(10), 3207–3217 (2020). [CrossRef]  

15. H. Guo, X. He, M. Liu, Z. Zhang, Z. Hu, and J. Tian, “Weight multispectral reconstruction strategy for enhanced reconstruction accuracy and stability with cerenkov luminescence tomography,” IEEE Trans. Med. Imaging 36(6), 1337–1346 (2017). [CrossRef]  

16. S. Jiang, J. Liu, Y. An, G. Zhang, J. Ye, Y. Mao, K. He, C. Chi, and J. Tian, “Novel l 2, 1-norm optimization method for fluorescence molecular tomography reconstruction,” Biomed. Opt. Express 7(6), 2342–2359 (2016). [CrossRef]  

17. H. Guo, Z. Hu, X. He, X. Zhang, M. Liu, Z. Zhang, X. Shi, S. Zheng, and J. Tian, “Non-convex sparse regularization approach framework for high multiple-source resolution in cerenkov luminescence tomography,” Opt. Express 25(23), 28068–28085 (2017). [CrossRef]  

18. H. Wang, C. Bian, L. Kong, Y. An, Y. Du, and J. Tian, “A novel adaptive parameter search elastic net method for fluorescent molecular tomography,” IEEE Trans. Med. Imaging 40(5), 1484–1498 (2021). [CrossRef]  

19. H. Guo, L. Gao, J. Yu, X. He, H. Wang, J. Zheng, and X. Yang, “Sparse-graph manifold learning method for bioluminescence tomography,” J. Biophotonics 13(4), e201960218 (2020). [CrossRef]  

20. M. Hintermüller, M. Holler, and K. Papafitsoros, “A function space framework for structural total variation regularization with applications in inverse problems,” Inverse Probl. 34(6), 064002 (2018). [CrossRef]  

21. W. Lu, J. Duan, D. Orive-Miguel, L. Herve, and I. B. Styles, “Graph-and finite element-based total variation models for the inverse problem in diffuse optical tomography,” Biomed. Opt. Express 10(6), 2684–2707 (2019). [CrossRef]  

22. H. Gao and H. Zhao, “Multilevel bioluminescence tomography based on radiative transfer equation part 2: total variation and l1 data fidelity,” Opt. Express 18(3), 2894–2912 (2010). [CrossRef]  

23. S. Zhong, Z. Xie, W. Wang, Z. Liu, and L. Liu, “Mesh denoising via total variation and weighted laplacian regularizations,” Comput. Animat. Virtual Worlds 29(3-4), e1827 (2018). [CrossRef]  

24. K. Jon, Y. Sun, Q. Li, J. Liu, X. Wang, and W. Zhu, “Image restoration using overlapping group sparsity on hyper-laplacian prior of image gradient,” Neurocomputing 420, 57–69 (2021). [CrossRef]  

25. T. M. Shaffer, E. C. Pratt, and J. Grimm, “Utilizing the power of cerenkov light with nanotechnology,” Nat. Nanotechnol. 12(2), 106–117 (2017). [CrossRef]  

26. J. Zhong, J. Tian, X. Yang, and C. Qin, “Whole-body cerenkov luminescence tomography with the finite element sp 3 method,” Ann. Biomed. Eng. 39(6), 1728–1735 (2011). [CrossRef]  

27. E. Y. Sidky, C.-M. Kao, and X. Pan, “Accurate image reconstruction from few-views and limited-angle data in divergent-beam ct,” J. X-Ray Sci. Technol. 14, 119–139 (2006).

28. B. Iannazzo and M. Porcelli, “The riemannian barzilai–borwein method with nonmonotone line search and the matrix geometric mean computation,” IMA J. Numer. Analysis 38(1), 495–517 (2018). [CrossRef]  

29. H. Xie, H. Wang, L. Wang, N. Wang, J. Liang, Y. Zhan, and X. Chen, “Comparative studies of total-variation-regularized sparse reconstruction algorithms in projection tomography,” AIP Adv. 9(8), 085122 (2019). [CrossRef]  

30. M. Dhenain, S. W. Ruffins, and R. E. Jacobs, “Three-dimensional digital mouse atlas using high-resolution mri,” Dev. Biol. 232(2), 458–470 (2001). [CrossRef]  

31. J. Tian, J. Liang, X. Chen, and X. Qu, “Molecular optical simulation environment,” in Molecular Imaging, (Springer, 2013), pp. 15–46.

32. Y. Yang, K. K.-H. Wang, S. Eslami, I. I. Iordachita, M. S. Patterson, and J. W. Wong, “Systematic calibration of an integrated x-ray and optical tomography system for preclinical radiation research,” Med. Phys. 42(4), 1710–1720 (2015). [CrossRef]  

33. A. Moiyadi, P. Shetty, E. Sridhar, V. Gota, M. Gurjar, G. Saicharan, V. Singh, and S. Srivastava, “Objective assessment of intraoperative tumor fluorescence reveals biological heterogeneity within glioblastomas: a biometric study,” J. Neuro-Oncol. 146(3), 477–488 (2020). [CrossRef]  

34. Z. Hu, Y. Qu, K. Wang, X. Zhang, J. Zha, T. Song, C. Bao, H. Liu, Z. Wang, J. Wang, Z. Liu, H. Liu, and J. Tian, “In vivo nanoparticle-mediated radiopharmaceutical-excited fluorescence molecular imaging,” Nat. Commun. 6(1), 7560 (2015). [CrossRef]  

35. M. Cai, Z. Zhang, X. Shi, Z. Hu, and J. Tian, “Nir-ii/nir-i fluorescence molecular tomography of heterogeneous mice based on gaussian weighted neighborhood fused lasso method,” IEEE Trans. Med. Imaging 39(6), 2213–2222 (2020). [CrossRef]  
