
Correntropy-induced metric with Laplacian kernel for robust fluorescence molecular tomography


Abstract

Fluorescence molecular tomography (FMT), which is used to visualize the three-dimensional distribution of fluorescence probes in small animals via a reconstruction method, has become a promising imaging technique in preclinical research. However, the classical reconstruction criterion is formulated based on the squared l2-norm distance metric, leaving it prone to being influenced by outliers. In this study, we propose a robust distance based on the correntropy-induced metric with a Laplacian kernel (CIML). The proposed metric satisfies the conditions of a distance metric and contains first- and higher-order moments of the samples. Moreover, we demonstrate important properties of the proposed metric, such as nonnegativity, nonconvexity, and boundedness, and analyze its robustness from the perspective of M-estimation. The proposed metric includes and extends traditional metrics such as the l0-norm and l1-norm metrics by setting an appropriate parameter. We show that, in reconstruction, the metric is a sparsity-promoting penalty. To reduce the negative effects of noise and outliers, a novel robust reconstruction framework is presented with the proposed correntropy-based metric. The proposed CIML model retains the advantages of the traditional model and promotes robustness. However, the nonconvexity of the proposed metric renders the CIML model difficult to optimize. We therefore design an effective iterative algorithm for the CIML model and present a theoretical analysis of its convergence. Numerical simulation and in vivo mouse experiments were conducted to evaluate the CIML method’s performance. The experimental results show that the proposed method achieved more accurate fluorescent target reconstruction than the state-of-the-art methods in most cases, which illustrates the feasibility and robustness of the CIML method.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

As an important optical molecular imaging modality, fluorescence molecular tomography (FMT) is a non-invasive imaging technology that can observe imaging targets quantitatively at the molecular level [1]. Thus, FMT can visualize biological and physiological processes three-dimensionally (3D) and is widely used in tumor diagnosis and drug discovery [2,3]. FMT visualizes the 3D distribution of fluorescent probes in tissue by solving the linear system of equations relating a system matrix to the measured values of surface photons. However, owing to the strong scattering of biological tissues and the limited, noisy boundary measurements, the reconstruction problem in FMT is severely ill-posed. To solve the FMT reconstruction problem, efforts have been made from different aspects, e.g., improvement of forward modeling [4], different configurations of the imaging hardware systems [5], various regularization methods [6,7], and multimodal strategies combining CT or MRI [8,9].

Compressed sensing (CS) has been widely used in the signal and image processing fields. Based on CS theory, a sparse signal can be reconstructed with far fewer measurements than Nyquist’s sampling theorem requires [10]. Since the fluorescent source is usually small and sparse in practical applications, it is reasonable to apply CS theory to the reconstruction of fluorescent targets. The classical CS reconstruction model consists of two parts: the error term and the regularization term. The error term measures the empirical loss, and the regularization term encourages sparsity of the signal. Traditional CS reconstruction models often utilize the $l_{2}$-norm error term, which is most appropriate for normally distributed data. However, the Gaussian assumption cannot always be ascertained in many real applications due to impulsive disturbances or substantial outliers. Besides, the detected surface optical signal is inevitably corrupted by various types of noise during acquisition and transmission, and the commonly used squaring operation greatly exacerbates the noise effect. In contrast, the $l_{1}$-norm is less sensitive to noise than the $l_{2}$-norm metric, and the derivative of the $l_{1}$-norm is bounded (except at the origin), which makes the $l_{1}$-norm error term a more competitive option in robust CS reconstruction [11]. In feature extraction [12], dimensionality reduction [13], and classification [14], the $l_{1}$-norm has proven capable of mitigating the effect of outliers well. However, the $l_{1}$-norm may not be effective enough to manage a large number of outliers owing to its unboundedness.

In early tumor detection, the tumor is small compared with the entire reconstruction region, so it can be treated as a sparse signal. Therefore, sparse regularization methods are adopted to reconstruct the tumor, which can effectively suppress spurious background signals and retain more details. Generally, sparsity problems can be transformed into an $l_{0}$ regularization problem. Lee et al. proposed a noniterative exact reconstruction algorithm based on joint sparsity that achieves the optimal $l_{0}$ solution when the number of measurements is sufficient for the unknown sparsity [15]. Although the $l_{0}$-norm characterizes the sparsity of a signal, the $l_{0}$ optimization problem is NP-hard, and its sensitivity to noise also makes it difficult to solve. Thus, relaxation methods were proposed that replace the $l_{0}$-norm with the continuous sparsity-promoting $l_{1}$-norm. Several excellent theoretical works [16,17] have shown that $l_{1}$-norm minimization achieves exact recovery under certain conditions, such as the restricted isometry property (RIP) [16]. Researchers have combined CS theory with $l_{1}$-norm optimization algorithms for FMT reconstruction [18,19], including the fast iterative shrinkage thresholding algorithm (FISTA) [20], $l_{1}$-Homotopy [21], the incomplete variables truncated conjugate gradient (IVTCG) method [22], and greedy algorithms based on the matching pursuit (MP) framework [23]. Furthermore, Dutta et al. proposed a joint $l_{1}$ and total variation regularization method that improves the smoothness of the image while maintaining sparsity [24]. Jiang et al. proposed a joint group sparse and LASSO method under the convex optimization framework [18].

However, the $l_{1}$-norm is not always a sound choice for numerical optimization because it is non-differentiable at zero. Compared with $l_{0}$-norm methods, $l_{1}$-norm reconstruction methods have insufficient positional accuracy and robustness. For non-convex relaxation, the $l_{p}$-norm with $p\in (0,1)$ seems to be the preferred choice. Zhu and Li applied non-convex regularization to accurately reconstruct the target in FMT [25]. In [26], Peng et al. demonstrated that for every underdetermined linear system $Ax = b$, there is a corresponding constant $p^{\ast }(A,b)>0$ such that every solution to the $l_{p}$-norm minimization problem also solves the $l_{0}$-norm minimization problem whenever $0<p<p^{\ast }(A,b)$. Presently, there are two main approaches to $l_{p}$-norm minimization for $0<p<1$: the iteratively reweighted least squares algorithm [27], and iterative thresholding algorithms for $p=\frac {1}{2}, \frac {2}{3}$ [28,29].

In statistical studies, correntropy is a local similarity measure based on information theoretic learning. It can be regarded as a surrogate for the mean square error criterion in the kernel space for measuring the similarity between two random variables in a neighborhood controlled by the kernel bandwidth [30]. Because the kernel embeds higher-order statistics of the data, correntropy effectively reduces the effect of large outliers and is widely used in robust learning. Correntropy matching pursuit (CMP) [31] has been applied to develop a robust sparse representation. The CMP method adaptively assigns small weights to corrupted data entries and greater weights to clean ones, thereby reducing the effect of large noise. He et al. [32] designed a maximum correntropy adaptation approach for robust CS. Guo et al. [33] proposed a robust subspace clustering algorithm based on the correntropy-induced metric. These algorithms confirm that the correntropy-induced metric is robust against outliers.

Inspired by studies of the correntropy-induced metric, we present a robust and efficient reconstruction algorithm based on the correntropy-induced metric with a Laplacian kernel (CIML) for FMT to reduce noise interference and produce more accurate results. Firstly, we present a correntropy-induced metric that satisfies the properties of boundedness, non-convexity, non-negativity, the triangle inequality, and the approximation behaviors described below. Compared with traditional loss functions, the metric contains first-order and higher-order moment information, which, from the perspective of M-estimation theory, guarantees its robustness [34]. Secondly, based on the correntropy-induced metric, we propose a non-convex reconstruction model with an adaptive regularization function. Thirdly, to solve the non-convex model, we decompose the objective function into the difference of two proper convex functions and solve it with the difference-of-convex algorithm (DCA) [35,36]. In addition, the convergence of the CIML algorithm is proven. To validate the performance of the proposed model and algorithm for FMT reconstruction, numerical simulation experiments and in vivo mouse experiments were conducted.

The remainder of the paper is organized as follows. In section 2, we give a brief review of the FMT reconstruction model, the concept and properties of correntropy, and the DCA. The reconstruction algorithm based on correntropy is proposed and its convergence is analyzed in section 3. Numerical simulation and in vivo experiments were conducted to validate the performance of the proposed model and algorithm; the evaluation indices and experimental results are presented in section 4. The discussion and conclusion are given in section 5.

2. Background

In this section, we formulate the problem of recovering the fluorescence target, derive the definition of correntropy, and briefly introduce its properties. Finally, we present the standard form of the DC algorithm and the properties of its solution.

2.1 Reconstruction problem

FMT reconstruction relies on a model of diffuse light propagation in biological tissue. Based on finite element analysis, unobservable measurements are removed and a linear relationship between the unknown fluorescent source within the tissue and the surface photon density is established. The resulting matrix equation is [37]:

$${A}{x}={Y}$$
where $A\in R^{M\times N} (M \ll N)$ is a weight matrix used to map the unknown fluorescence source distribution to known measurements, $Y\in R^{M}$ is the measured light flux at the boundary of the biological tissue and $x\in R^{N}$ represents the distribution of the unknown source in biological tissues.

FMT reconstruction aims to solve the inverse problem of Eq. 1. Generally, given the noise in the FMT measurements and the ill-posedness associated with the weight matrix $A$, it is impractical to solve for $x$ directly. To obtain an acceptable approximate solution, a sparsity term is usually added to the objective function. The traditional sparse reconstruction model with $l_{0}$ regularization is as follows:

$$\arg \min_{x}\frac{1}{2}\parallel {A}{x}-{Y}\parallel_{2}^{2}+{\lambda}\parallel {x}\parallel_{0}$$
where ${\lambda }$ is the regularization parameter used to balance the loss term and the penalty term. In this study, a correntropy-induced metric based on the Laplacian kernel function is proposed to replace the $l_{2}$-norm loss term, and a non-convex sparsity-promoting penalty function is used to approximate the $l_{0}$-norm. The details of the non-convex reconstruction model are described in section 3.

2.2 Definition and property of correntropy

Cross correntropy between two random variables ${U}$ and ${Z}$ is defined as [30]:

$$V_{\sigma}(U,Z)=E[\kappa_{\sigma}(U-Z)]$$
where ${\kappa _{\sigma }}$ is a kernel function with bandwidth parameter ${\sigma }$ and ${E}$ denotes the mathematical expectation. The Gaussian kernel is the preferred kernel in correntropy, given by:
$$\kappa_{\sigma}(u_{i}, z_{i})= \frac{1}{\sqrt{2{\pi}}{\sigma}}\exp\left(-\frac{(u_{i}-z_{i})^{2}}{2\sigma^{2}}\right)$$
In practice, however, the joint probability density function (PDF) is unknown and only a limited amount of data $\{(u_{i},z_{i}),\ \ i=1,\ldots ,m\}$ is available, so the empirical correntropy is:
$$V_{M,\sigma}(U,Z)=\frac{1}{m}\sum_{i=1}^{m}\kappa_{\sigma}(u_{i}-z_{i})$$
We now introduce a property of correntropy proved in [30]. Assume the joint PDF of the samples $\{(u_{i}, z_{i})\}_{i=1}^{m}$ is $f_{U,Z}(u, z)$, and define the error random variable $E=Z-U$. Let $f_{E; \sigma }(e)$ be the Parzen estimate of the error PDF from the data $\{(e_{i}=u_{i}-z_{i})\}_{i=1}^{m}$. Then $V_{\sigma }(U,Z)$ equals $f_{E;\sigma }(e)$ evaluated at $e=0$:
$$V_{\sigma}(U,Z)=f_{E;\sigma}(0)$$
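As a concrete illustration (not part of the original study), the following Python/NumPy sketch evaluates the empirical correntropy of two sample vectors with the Gaussian kernel; the function name and the toy data are our own.

```python
import numpy as np

def empirical_correntropy(u, z, sigma):
    """Sample estimate of correntropy: the mean Gaussian kernel of the errors e_i = u_i - z_i."""
    e = np.asarray(u, dtype=float) - np.asarray(z, dtype=float)
    k = np.exp(-e**2 / (2.0 * sigma**2)) / (np.sqrt(2.0 * np.pi) * sigma)
    return k.mean()

rng = np.random.default_rng(0)
z = rng.normal(size=1000)
u = z + 0.1 * rng.normal(size=1000)   # small perturbations of z
u[:10] += 50.0                        # a few gross outliers
# Outliers contribute kernel values near zero, so they barely lower the estimate.
print(empirical_correntropy(u, z, sigma=1.0))
```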

2.3 DC algorithm

DC programming and the DCA (DC Algorithm), which constitute the backbone of global optimization and non-convex programming, were introduced by Pham Dinh Tao in 1985. The DCA minimizes a function $f$ that is the difference of two convex functions, either over the entire space $R^{n}$ or on a convex set $C\subset R^{n}$ [35,36]. A standard DC program is structured as:

$${\alpha}=\inf\{f(x)=g(x)-h(x):x\in R^{n}\}\ \ \ \ (P_{dc})$$
where $g(x)$, $h(x)$ are lower semi-continuous proper convex functions defined on $R^{n}$.

The DCA is an iterative method: at iteration $l$, with current point $x^{l}$, the second component $h(x)$ of the DC program is replaced by its affine minorization $h_{l}(x)$, defined by:

$$h_{l}(x)=h(x^{l})+{<x-x^{l},y^{l}>}, \ \ y^{l}\in\partial{h(x^{l})}$$
The minorization produces the convex program formed as:
$$\inf\{{g(x)-h_{l}(x):x\in R^{n}}\} \Leftrightarrow\inf\{{g(x)-{<x,y^{l}>}:x\in R^{n}}\}$$
the optimal solution of which is taken as $x^{l+1}$.

The DCA is a descent method without line search but with global convergence. It has the following properties (for simplicity, the dual parts of these properties are omitted) [36]:

(1) The DCA linearly converges for general DC programming;

(2) If ${h(x)}$ is differentiable, the subdifferential of ${h(x)}$ at the point ${x^{l}}$ reduces to a singleton, and ${x^{l+1}}$ is then the solution of the following convex program:

$$\min\{g(x)-(h(x^{l})+\nabla h(x^{l})^{T}(x-x^{l}))\}$$
The DCA is an effective algorithm for solving non-convex models. The above properties are used in the following sections.
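To make the iteration scheme concrete on a problem unrelated to FMT, the toy Python sketch below (our own example) applies the DCA to $f(w)=w^{4}-4w^{2}$ with the DC split $g(w)=w^{4}$ and $h(w)=4w^{2}$; each linearized subproblem has a closed-form minimizer.

```python
import numpy as np

# Toy DC program: f(w) = w**4 - 4*w**2, split as g(w) = w**4 and h(w) = 4*w**2 (both convex).
# DCA step: linearize h at w_l and minimize g(w) - h'(w_l)*w; setting the derivative
# 4*w**3 - h'(w_l) = 0 gives the closed-form update w = cbrt(h'(w_l) / 4) = cbrt(2*w_l).
w = 0.5                        # starting point
for _ in range(50):
    grad_h = 8.0 * w           # h'(w_l)
    w = np.cbrt(grad_h / 4.0)  # minimizer of the convex subproblem
print(w)                       # approaches sqrt(2) ~ 1.4142, a critical point of f
```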

3. Methods

This section proposes a robust method for solving the reconstruction problem. First, the properties of the robust metric are introduced. Then, we apply a DC decomposition to the inverse problem based on this metric and propose the reconstruction algorithm. Finally, the convergence of the algorithm is proven.

3.1 Metric of correntropy induced by the Laplacian kernel

In this study, we used the Laplacian kernel as the correntropy kernel function, given by:

$$k_{\sigma}(w_{1},w_{2})=e^{-\sigma|w_{1}-w_{2}|}$$
where $k_{\sigma }$ is a kernel function with size parameter $\sigma (\sigma >0)$, $w_{1}$ and $w_{2}$ are two random variables. Based on the above, correntropy induces a robust metric,
$$e_{\sigma}(w)=1-e^{-\sigma|w|}, \ \ \forall w\in R$$
By using correntropy as an error metric, the required vector $x$ is estimated iteratively by maximizing the formula:
$${J_{MCC}}=\frac{1}{M}\sum_{i=1}^{M}e^{-\sigma|y_{i}-A_{i}{x}|}$$
where $A_{i}$ represents the ${i}$-th row of the weight matrix $A$. The above formula is called the maximum correntropy criterion (MCC). The properties of the robust metric are given below.
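The short Python/NumPy sketch below (helper names are our own) evaluates the Laplacian-kernel metric $e_{\sigma }$ and the MCC objective $J_{MCC}$ for a given estimate $x$, and contrasts the bounded CIML loss with the squared loss on a large residual.

```python
import numpy as np

def ciml_metric(w, sigma):
    """Correntropy-induced metric with a Laplacian kernel: e_sigma(w) = 1 - exp(-sigma*|w|)."""
    return 1.0 - np.exp(-sigma * np.abs(w))

def mcc_objective(A, x, y, sigma):
    """Maximum correntropy criterion J_MCC: mean Laplacian kernel of the residuals y - Ax."""
    r = y - A @ x
    return np.mean(np.exp(-sigma * np.abs(r)))

# A large residual is capped near 1 by the metric, while the squared loss grows without bound.
print(ciml_metric(100.0, sigma=1.0))   # ~1.0
print(0.5 * 100.0**2)                  # 5000.0
```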

Property 1 $e_{\sigma }(w)$ defines a metric in the sample space; that is, it satisfies the following conditions:

(1) Non-negativity. $e_{\sigma }(w_{1},w_{2})=1-e^{-\sigma |w_{1}-w_{2}|}\geq 0$.

(2) Identity of indiscernibles. $e_{\sigma }(w_{1},w_{2})= 0$ if and only if $w_{1}=w_{2}$.

(3) Symmetry. $e_{\sigma }(w_{1},w_{2})= e_{\sigma }(w_{2},w_{1})$.

(4) Triangle inequality. For ${\sigma }\geq 0$ and any positive $w_{1}\in R$ and $w_{2}\in R$, the following triangle inequality is satisfied:

$$e_{\sigma}(w_{1}+w_{2})\leq e_{\sigma}(w_{1})+e_{\sigma}(w_{2})$$

Proof: See Supplement 1.

Property 2 The metric $e_{\sigma }(w_{1},w_{2})$ is nonnegative and bounded, i.e., $0\leq e_{\sigma }(w_{1},w_{2})<1$. It attains its minimum if and only if $w_{1}=w_{2}$. When $w_{1}$ and $w_{2}$ are sufficiently distant, $e_{\sigma }(w_{1},w_{2})$ tends to, but never reaches, 1 (as shown in Fig. 1).


Fig. 1. $l_{1}$ norm and $e_{\sigma }(w)$ with different $\sigma$ values.


Property 3 The proposed metric $e_{\sigma }(w_{1},w_{2})$ contains the first and higher order moments of signal information.

Proof: See Supplement 1.

Property 4 The metric $e_{\sigma }(w)$ is nonnegative, symmetric, and bounded. From the viewpoint of robust statistics, such a metric is insensitive to outliers. It attains its minimum if and only if $w=0$, and $\lim e_{\sigma }(w)=1$ as $w\rightarrow \infty$. Its derivative satisfies:

$$\nabla e_{\sigma}(w)=\left\{ \begin{array}{ll} \sigma e^{-\sigma w}, & w>0\\ -\sigma e^{\sigma w}, & w<0 \end{array} \right.$$
We have
$$\lim_{w\rightarrow +\infty}(\sigma e^{-\sigma w})=0, \lim_{w\rightarrow -\infty}(-\sigma e^{\sigma w})=0$$
The proposed metric $e_{\sigma }(w)$ is differentiable everywhere except at the point $w=0$, and its derivative is bounded. According to M-estimation theory [34], the metric $e_{\sigma }(w)$ is therefore insensitive to noise and outliers. In this sense, large error terms are attenuated, much as by a weighting function, so that outliers have reduced influence on the adaptation.

Based on the nature of sparse recovery, the $l_{0}$ penalty is the target penalty, whereas other penalties may also be capable of recovering the target. As mentioned before, the $l_{0}$ penalty is undesirable from the computational perspective due to its discontinuity and discreteness. It is well known that the $l_{2}$ penalty is analytically tractable but generally produces non-sparse solutions. Such difficulties motivated the use of penalties that are computationally tractable approximations or relaxations of the $l_{0}$ penalty. Among all proposals, the $l_{1}$ penalty has attracted much research attention in sparse recovery. It has been recognized, however, that the $l_{1}$ penalty does not infallibly point to the true underlying sparse model. Fan and Li [38] advocated classes of penalty functions with three desired properties: unbiasedness, sparsity, and continuity. Following this, Fan and Lv [39] proposed conditions characterizing the unbiasedness and sparsity-promoting properties. A proper penalty function satisfies the following conditions, as defined in [39].

Definition 1 The function $\rho (t)$ is called a proper penalty function if it satisfies:

C1: $\rho (t)$ increases and is concave in $t\in [0,\infty )$;

C2: $\rho ^{'}(t)$ is continuous with $\rho {'} (0+)\in (0,\infty )$;

C3: if $\rho (t)$ depends on a positive parameter $\lambda$, then $\rho ^{'} (t; \lambda )$ increases in $\lambda \in (0,\infty )$ and $\rho ^{'} (0+)$ is independent of $\lambda$.

It follows that $e_{\sigma }^{'}(w)$ is positive and decreasing, and $e^{'}_{\sigma } (0+)$ is the upper bound of $e_{\sigma }^{'}(w)$. It is shown in [38] that penalties satisfying these conditions together with $\lim _{w\rightarrow \infty } e_{\sigma }^{'}(w)=0$ enjoy both unbiasedness and sparsity.

Property 5 For any $W=(w_{1},w_{2},\ldots ,w_{n})^{T}\in R^{n}$, the following formula is established,

$$e_{\sigma}(w_{i})=1-e^{-\sigma|w_{i}|}=\left\{ \begin{array}{l} 0 \ \ \ w_{i}=0 \\ 1 \ \ \ w_{i}\neq 0, \sigma\rightarrow +\infty \\ \end{array} \right.$$
Thus, in the limit $\sigma \rightarrow +\infty$, the robust metric induced by the Laplacian kernel proposed in this paper is equivalent to the $l_{0}$-norm:
$$e_{\sigma}(W)=\sum_{i=1}^{n}(1-e^{-\sigma|w_{i}|})=\|W\|_{0}$$
Let $\sigma$ be a fixed value. If the $w_{i}$ tend to zero, a Taylor expansion gives $\sum _{i=1}^{n}(1-e^{-\sigma |w_{i}|})\approx \sum _{i=1}^{n}\sigma |w_{i}|$, so $e_{\sigma }(W)$ is less sensitive to large noise than the Euclidean distance, which renders the metric more robust (for $\sigma =1$, $\sum _{i=1}^{n}\sigma |w_{i}|=\|W\|_{1}$). Figure 1 shows the curves of $e_{\sigma }(w)$ for different $\sigma$ together with the $l_{1}$-norm curve. Property 5 indicates that $e_{\sigma }(W)$ smoothly interpolates between the $l_{0}$-norm and the $l_{1}$-norm through the non-negative parameter $\sigma$.
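A quick numerical check of this interpolation on a toy vector (Python/NumPy, our own example):

```python
import numpy as np

W = np.array([0.0, 0.5, -2.0, 1.5])                # ||W||_0 = 3, ||W||_1 = 4
e_sigma = lambda W, s: np.sum(1.0 - np.exp(-s * np.abs(W)))

print(e_sigma(W, 100.0))                           # ~3.0: approaches ||W||_0 for large sigma
print(e_sigma(W, 1e-3), 1e-3 * np.abs(W).sum())    # ~0.004 each: ~sigma*||W||_1 when sigma*|w_i| is small
```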

3.2 DC algorithm based on a robust metric

We use the metric induced by the Laplacian kernel to measure the similarity between the measured surface photon density $y$ and the estimated value $Ax$. Therefore, the correntropy-based reconstruction rewrites Eq. 2 as the following optimization problem:

$$\min_{x}\sum_{i=1}^{M}(1-e^{-\sigma |y_{i}-A_{i}{x}|})+\lambda\|x\|_{0}$$
Since the $l_{0}$-norm is non-differentiable, a common way to overcome this difficulty is to approximate $\|x\|_{0}$ by a non-convex surrogate,
$$\|x\|_{0}\approx\sum_{k=1}^{N}{(1-{e}^{-\beta|{x_{k}}|})}$$
where $\beta$ is a free parameter and $x_{k}$ is the $k$-th element of the vector $x$, which is to be reconstructed. Therefore, the vector $x$ can be iteratively estimated by minimizing the following function:
$$\min_{x}\sum_{i=1}^{M}{(1-{e}^{-\sigma |y_{i}-A_{i}{x}|})}+{\lambda}\sum_{k=1}^{N}{(1-{e}^{-\beta|{x_{k}}|})}$$
The objective function is non-convex and difficult to solve directly. In this paper, we use the DCA to solve the problem by decomposing the non-convex function into the difference of two lower semi-continuous proper convex functions. The metric $e_{\sigma }(w)=1-e^{-\sigma |w|}$ can be expressed as a DC function:
$$e_{\sigma}(w)=g(w)-h(w)$$
where
$$g(w)=\sigma|w|, \ \ h(w)=\sigma|w|-(1-e^{-\sigma|w|})$$
are convex functions. Through this DC decomposition, the optimization problem of Eq. 20 is equivalent to the optimization problem of the DC function,
$$\min_{x}[{E({x})-{F({x})}}]$$
where
$${E(x)}=\sum_{i=1}^{M}\sigma |{y_{i}-{A_{i}x}}|+{\lambda}\sum_{k=1}^{N}{\beta{|{x_{k}}|}}$$
$${F(x)}=\sum_{i=1}^{M}(\sigma |{y_{i}-{A_{i}x}}|-(1-{e}^{-\sigma|{y_{i}-{A}_{i}{x}}|}))+{\lambda}\sum_{k=1}^{N}({\beta{|{x_{k}}|}}-(1-{e}^{-{\beta{|{x}_{k}|}}}))$$
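As a sanity check of this decomposition (a sketch with our own function names, in Python/NumPy), the code below evaluates the CIML objective of Eq. 20 together with $E(x)$ and $F(x)$ and confirms numerically that $E-F$ reproduces the objective.

```python
import numpy as np

def ciml_objective(A, x, y, sigma, beta, lam):
    """CIML objective: Laplacian-correntropy data term plus the smoothed l0 penalty."""
    r = y - A @ x
    return np.sum(1.0 - np.exp(-sigma * np.abs(r))) + lam * np.sum(1.0 - np.exp(-beta * np.abs(x)))

def E_term(A, x, y, sigma, beta, lam):
    """Convex component E(x): weighted l1 data-fit plus weighted l1 penalty."""
    return sigma * np.sum(np.abs(y - A @ x)) + lam * beta * np.sum(np.abs(x))

def F_term(A, x, y, sigma, beta, lam):
    """Convex component F(x): the gap between E(x) and the CIML objective."""
    r = np.abs(y - A @ x)
    ax = np.abs(x)
    return (np.sum(sigma * r - (1.0 - np.exp(-sigma * r)))
            + lam * np.sum(beta * ax - (1.0 - np.exp(-beta * ax))))

rng = np.random.default_rng(1)
A, x, y = rng.normal(size=(5, 8)), rng.normal(size=8), rng.normal(size=5)
lhs = ciml_objective(A, x, y, 1.0, 10.0, 0.1)
rhs = E_term(A, x, y, 1.0, 10.0, 0.1) - F_term(A, x, y, 1.0, 10.0, 0.1)
print(np.isclose(lhs, rhs))  # True: E - F equals the CIML objective
```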
We use the DC algorithm to optimize the DC function (Eq. 23), where $E(x)$ and $F(x)$ are lower semi-continuous convex functions on $R^{n}$,
$$\min f({x})={E({x})-{F({x})}}, \ \ {x\in{R^{n}}}$$
At each iteration, the function $F(x)$ is replaced by its affine minorization at the current iterate $x^{l}$:
$${F_{l}({x})}={F({x^{l}})}+{<}{x-{x}^{l},{v}^{l}}>,\ \ {v}^{l}\in\partial{F({x^{l}})}$$
Therefore, optimizing the DC function is equivalent to solving Eq. 28 at each iteration:
$$min[{E({x})}-{F_{l}({x})}] \Rightarrow\min[{E({x})-{<}{x},{v}^{l}>}],\ \ {x}\in{R}^{n}$$
where:
$$\begin{aligned} {v}^{l}=\sum_{i=1}^{M}[-\sigma{sign}({y_{i}-{A_{i}{x^{l}}}})A_{i}&+\sigma{e}^{-\sigma{|{y_{i}-{A_{i}{x^{l}}}}|}}{sign}({y_{i}-{A_{i}{x^{l}}}})A_{i}]+\\ &{\lambda}\sum_{k=1}^{N}[{\beta{sign}({x_{k}^{l}})}-{e}^{-{\beta{|{x_{k}^{l}}|}}}{sign}({x_{k}^{l}}){\beta}] \end{aligned}$$
Each iteration solves a sub-optimization problem,
$$x^{l+1}=\arg \min_{x}\sum_{i=1}^{M} \sigma |y_{i}-A_{i}x|+\lambda\sum_{k=1}^{N}\beta|x_{k}|-{<}x,v^{l}>$$
Since Eq. 30 contains absolute value operations, we introduce the variables $t_{1}$ and $t_{2}$ with $|y_{i}-A_{i}x|\leq (t_{1})_{i},\ i=1,2,\ldots ,M$ and $|x_{k}|\leq (t_{2})_{k},\ k=1,2,\ldots ,N$, so that the above problem is reformulated as the linear program:
$$\begin{aligned} \min_{t_{1},t_{2},x}&\sum_{i=1}^{M}\sigma (t_{1})_{i} +\lambda\sum_{k=1}^{N}\beta (t_{2})_{k}-{<}x,v^{l}>\\ &s.t.\ \ |y_{i}-A_{i}x|\leq(t_{1})_{i},\ \ i=1,2,\ldots,M\\ &\ \ \ \ \ \ \ |x_{k}|\leq(t_{2})_{k},\ \ k=1,2,\ldots,N \end{aligned}$$
where $t_{1}$ and $t_{2}$ are $M$-dimensional and $N$-dimensional column vectors, respectively. The DCA is an iterative algorithm and, within a finite number of iterations, converges to a critical point satisfying the necessary optimality condition. The resulting iterative procedure for solving the optimization problem is depicted in Algorithm 1.

[Algorithm 1: iterative DCA-based reconstruction for the CIML model]
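The following sketch (Python with NumPy and SciPy; function names, parameter values, and the toy data are our own, and the paper’s implementation was in MATLAB) outlines one way to realize the iteration of Algorithm 1: at each outer step it forms the subgradient $v^{l}$ of $F$ at the current iterate and solves the resulting linear program over $(x, t_{1}, t_{2})$ with scipy.optimize.linprog.

```python
import numpy as np
from scipy.optimize import linprog

def grad_F(A, x, y, sigma, beta, lam):
    """Subgradient v^l of F at x (np.sign(0) = 0 picks one valid subgradient)."""
    r = y - A @ x
    c = sigma * np.sign(r) * (np.exp(-sigma * np.abs(r)) - 1.0)
    return A.T @ c + lam * beta * np.sign(x) * (1.0 - np.exp(-beta * np.abs(x)))

def ciml_dca(A, y, sigma=1.0, beta=10.0, lam=1e-2, n_iter=20, tol=1e-6):
    """DCA loop for the CIML model: each step solves the linearized subproblem as an LP."""
    M, N = A.shape
    I_M, I_N = np.eye(M), np.eye(N)
    Z_MN, Z_NM = np.zeros((M, N)), np.zeros((N, M))
    # Inequalities encoding |y - Ax| <= t1 and |x| <= t2 for the stacked variable z = [x, t1, t2].
    A_ub = np.block([[-A, -I_M, Z_MN],
                     [ A, -I_M, Z_MN],
                     [ I_N, Z_NM, -I_N],
                     [-I_N, Z_NM, -I_N]])
    b_ub = np.concatenate([-y, y, np.zeros(N), np.zeros(N)])
    bounds = [(None, None)] * N + [(0, None)] * (M + N)
    x = np.zeros(N)
    for _ in range(n_iter):
        v = grad_F(A, x, y, sigma, beta, lam)
        # LP objective: sum_i sigma*(t1)_i + lam*sum_k beta*(t2)_k - <x, v>
        c = np.concatenate([-v, sigma * np.ones(M), lam * beta * np.ones(N)])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        x_new = res.x[:N]
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
            x = x_new
            break
        x = x_new
    return x

# Toy usage: a small underdetermined system (M << N), a sparse source, one corrupted reading.
rng = np.random.default_rng(0)
M, N = 30, 100
A = rng.normal(size=(M, N))
x_true = np.zeros(N)
x_true[[7, 42]] = 1.0
y = A @ x_true + 0.01 * rng.normal(size=M)
y[0] += 5.0                              # an outlier in the measurements
x_rec = ciml_dca(A, y)
print(np.argsort(-np.abs(x_rec))[:2])    # indices of the two largest recovered entries
```

In practice the kernel bandwidth $\sigma$, the approximation parameter $\beta$, and the regularization parameter $\lambda$ would be tuned for the actual FMT system matrix; the values above are placeholders for the toy data only.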

3.3 Convergence analysis

Theorem 1. The sequence $\{x^{i}\}$ generated by the DC algorithm for the non-convex optimization problem in Eq. 20 converges.

Proof. Because $x^{i+1}$ is an optimal solution of the convex subproblem in Eq. 30, the following inequality is satisfied:

$$E(x^{i+1})-\nabla F(x^{i})^{T}x^{i+1}\leq E(x^{i})-\nabla F(x^{i})^{T}x^{i}$$
it can be written as:
$$\nabla F(x^{i})^{T}(x^{i}-x^{i+1})\leq E(x^{i})-E(x^{i+1})$$
Due to the convexity of $F(\cdot )$, we have:
$$F(x^{i+1})-F(x^{i})\geq \nabla F(x^{i})^{T}(x^{i+1}-x^{i})$$
By combining the above inequalities, we have:
$$E(x^{i+1})-F(x^{i+1})\leq E(x^{i})-F(x^{i})$$
Therefore, the objective value of problem 20 decreases monotonically with each iteration. Moreover, the objective is bounded below by 0. Thus, the convergence of the sequence $\{x^{i}\}$ follows.

4. Experiments and results

In this section, numerical simulation experiments and in vivo experiments were conducted to validate the performance of the CIML method, which was compared with the FISTA, Homotopy, and IVTCG methods in terms of accuracy, efficiency, and robustness. All the reconstruction algorithms were implemented in MATLAB and run on a desktop computer with 4 GB RAM and a 3.60 GHz Intel Core i7 CPU.

4.1 Evaluation index

To quantitatively evaluate the accuracy of the proposed reconstruction method for both source location and shape recovery, several indices were calculated in this study, including the location error (LE), contrast-to-noise ratio (CNR), normalized mean square error (NMSE), reconstructed fluorescent yield (RFY), Dice coefficient, and reconstruction time (Time).

The LE measures the distance between the center of the reconstructed region and that of the real fluorescent region. It is defined as:

$${LE}={\|{L_{r}-{L_{a}}}\|_{2}}$$
where $L_{r}$ and $L_{a}$ are the center coordinates of the reconstructed region and the actual fluorescent source, respectively. $\|\cdot \|_{2}$ is the Euclidean distance operator.

The CNR, which is mainly affected by artifacts and noise in reconstructed images, is used to evaluate image contrast. The higher the CNR value, the easier it is to distinguish the fluorescent source from the background. The CNR is defined as follows:

$${CNR}=\frac{|{\mu_{ROI}}-{\mu_{ROB}}|}{({\omega_{ROI}{\sigma_{ROI}^{2}}}+{\omega_{ROB}{\sigma_{ROB}^{2}}})^{\frac{1}{2}}}$$
where the subscripts $ROI$ and $ROB$ denote the real source region in the 3D tetrahedral mesh and the non-source region of the entire 3D reconstruction domain, respectively. $\mu$, $\sigma ^{2}$, and $\omega$ are the mean intensity value, the variance, and the weighting factor calculated from the relative volume of the corresponding region, respectively.

The NMSE denotes the relative error between the reconstructed and the true fluorescence distributions,

$${NMSE}=\frac{\|{x}_{r}-{x}_{t}\|_{2}^{2}}{\|{x_{t}}\|_{2}^{2}}$$
where $x_{r}$ and $x_{t}$ are the reconstructed and true fluorescence distributions, respectively.

The RFY is used to evaluate the average reconstructed fluorescence yield of the real area and is defined as follows:

$${RFY}=\frac{\sum Y}{N}$$
where $Y$ denotes the fluorescence yields of the nodes in the 3D tetrahedral mesh of the reconstructed source, and $N$ is the number of nodes in that mesh. The larger the RFY, the better the reconstruction performance.

The Dice coefficient is employed to evaluate the spatial structure similarity of the reconstructed source and the true source,

$${Dice}=\frac{2|X\cap Y|}{|X|+|Y|}$$
where X and Y are the reconstructed region and the actual fluorescent region, respectively. A greater Dice coefficient indicates better morphological reconstruction.
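For reference, the sketch below (Python/NumPy; function names are our own) implements the simpler of these indices, namely LE, NMSE, RFY, and the Dice coefficient over node index sets, as they could be computed from a reconstruction on a tetrahedral mesh.

```python
import numpy as np

def location_error(center_rec, center_true):
    """LE: Euclidean distance between the reconstructed and true source centers (mm)."""
    return np.linalg.norm(np.asarray(center_rec, dtype=float) - np.asarray(center_true, dtype=float))

def nmse(x_rec, x_true):
    """NMSE: ||x_rec - x_true||^2 / ||x_true||^2."""
    return np.sum((x_rec - x_true) ** 2) / np.sum(x_true ** 2)

def rfy(node_yields):
    """RFY: average reconstructed fluorescence yield over the nodes of the real source region."""
    node_yields = np.asarray(node_yields, dtype=float)
    return node_yields.sum() / node_yields.size

def dice(nodes_rec, nodes_true):
    """Dice: 2|X ∩ Y| / (|X| + |Y|) for the reconstructed and true node index sets."""
    X, Y = set(nodes_rec), set(nodes_true)
    return 2.0 * len(X & Y) / (len(X) + len(Y))

print(location_error((18.2, 7.1, 17.4), (18.0, 7.0, 17.5)))  # ~0.24 mm
print(dice([1, 2, 3, 4], [3, 4, 5, 6]))                      # 0.5
```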

4.2 Numerical mouse simulation experiment

The experimental model was a numerical mouse simulation [40]. The numerical mouse consisted of five organs (lungs, heart, stomach, liver, and kidneys) and muscle tissue. The optical parameters of each organ and the muscle tissue are shown in Table 1 [41]. The reconstruction of a cylindrical source was used to assess the recovery performance of the proposed method. A cylindrical fluorescence source S with a diameter of 1.6 mm and a height of 1.6 mm was implanted into the liver of the digital mouse at the point (18, 7, 17.5) mm, as shown in Fig. 2(a). Four isotropic point excitation sources were placed on the Z = 17.5 mm plane. During each excitation, the optical data were acquired within a field of view (FOV) of 120° on the side opposite the excitation source. With four excitations, we obtained four measurement data sets. The digital mouse model was meshed with 13655 nodes and 74761 tetrahedral elements in the forward process, and its forward simulation result is shown in Fig. 2(b). After simple node mapping, the meshed model for the inverse process contained 8526 nodes and 45452 tetrahedrons.


Fig. 2. (a) The 3D view of the numerical mouse model; the diameter and height of the cylindrical fluorescent source in the liver are both set to 1.6 mm. (b) The forward mesh and the photon distribution on the surface for the single source.



Table 1. Optical parameters of the numerical mouse phantom.

The 2D cross-sectional views and 3D renderings of the sources reconstructed by the four methods in the numerical simulations are presented in Fig. 3. The results of the quantitative analysis of the reconstruction are listed in Table 2. In the 2D slices, the true location of the light source is indicated by the black circle and the colored area represents the reconstructed source. In the 3D views, the red cylinder and the yellow area represent the authentic fluorescent source and the reconstructed fluorescent source, respectively.


Fig. 3. The reconstruction results of the numerical mouse experiments using four methods: (a) the FISTA method, (b) the Homotopy method, (c) the IVTCG method, and (d) the CIML method. The first row illustrates the cross-sectional views in the z = 17.5 mm plane and the second row illustrates the 3-D views of the reconstruction results corresponding to (a)-(d). The black circle in the cross-sectional views indicates the real position of the fluorescent source.



Table 2. Quantitative results for the four methods used in the single-source numerical simulation.

The Homotopy method produces a distorted target shape and has the largest LE, indicating the poorest localization (Fig. 3(b) and (f)). The spatial biases produced by the CIML and IVTCG methods are much smaller than those of the Homotopy and FISTA methods. Regarding NMSE and spatial resolution, the result obtained using the FISTA method is slightly poorer than the reconstructions by the other methods (Fig. 3(a) and (e)). Conversely, the source reconstructed by the CIML method is closest to the true source.

As shown in Table 2, the CIML method obtained the most accurate reconstruction results with the smallest LE of 0.46 mm, which was 75% that of the FISTA (0.61 mm) and IVTCG (0.61 mm) methods and 69% that of the Homotopy (0.67 mm) method, and it had the smallest NMSE, indicating that it was closest to the true position. The CIML method also had the highest CNR of the four methods. Moreover, the reconstruction time of CIML was shorter than that of the FISTA method. The CIML method achieved the highest Dice coefficient of 0.7259, which means that CIML has the best reconstruction performance in both accuracy and morphology.

In addition to the most accurate reconstruction of the light source position and shape, the CIML method also achieved the highest fluorescence yield within the reconstructed light source (Table 2) compared with the other three algorithms. The RFY obtained by the CIML method was 9.87e-03, which was about 300% that of the FISTA method, about 170% that of the Homotopy method, and about 4100% that of the IVTCG method. The simulation results demonstrate that the CIML method was able to solve the FMT reconstruction problem more accurately.

4.3 Robustness

As mentioned in section 1, noise is inevitable in FMT. Therefore, the robustness of the algorithm is critical for reconstruction. Tests were performed to verify the robustness and accuracy of the CIML method under different noise types and intensities. In this experiment, the measurement data sets were artificially corrupted by 10-25% Gaussian noise and Poisson noise, respectively. The quantitative results of the four algorithms under the two kinds of noise are displayed in Fig. 4 and Fig. 5, respectively.


Fig. 4. The quantitative analysis of different methods under different Gaussian noise intensities.



Fig. 5. The quantitative analysis of different methods under different Poisson noise intensities.


As shown in Fig. 4 and Fig. 5, at the same Gaussian or Poisson noise intensity, the CIML method obtained smaller LE and NMSE and a larger Dice coefficient than the three conventional methods, which indicates that the position reconstructed by the CIML method was the closest to the real position. These advantages of the CIML method were more obvious in the Poisson noise test. The LE ranges of the proposed method and the FISTA method were 0.44-0.50 mm and 0.61-0.62 mm, respectively, over the four Gaussian noise levels (Fig. 4(a)), and the FISTA method produced the largest NMSE (Fig. 4(c)). The LE of the Homotopy method rose rapidly, indicating that its reconstruction result was affected by noise. The results showed that the CIML method provided a more robust reconstruction of the fluorescent source than the other methods as the noise intensity increased, and it performed similarly in the Poisson noise test (Fig. 5(a), (c), (e)). The RFY of the CIML method varied within 0.0088-0.0125 and 0.0089-0.0129 when the Gaussian and Poisson noise levels ranged from 10% to 25%, respectively (Fig. 4(d) and Fig. 5(d)). The CIML method provided the highest RFY within the reconstructed light source, much higher than the other methods, indicating that it can restore the fluorescence distribution effectively. In addition, the proposed method produced the highest CNR, showing that the CIML method had better reconstruction performance than the other methods under noise interference (Fig. 4(b) and Fig. 5(b)). Figure 4 and Fig. 5 also show that the reconstruction performance of the CIML method remained superior to the other methods as the noise intensity increased. The quantitative results further confirm that the CIML method was able to significantly improve the quality of reconstruction when the measurement data contained noise. The experimental results demonstrate that noise had no obvious effect on our reconstruction framework and that the CIML method had the highest accuracy and robustness in handling Gaussian and Poisson noise, surpassing the other algorithms.

According to the analysis introduced above, we can draw the following conclusions. Even when the measurement data sets were corrupted by 25% noise, the CIML method still obtained effective results compared with the other methods. This demonstrates that our proposed method is not sensitive to noise and that the algorithm is more robust. Compared with the $l_{1}$ regularization methods, the CIML method has excellent convergence, and its robustness is superior.

4.4 Stability

To evaluate the stability of the four methods, the influence of the number of nodes on reconstruction was studied. Meshes with node numbers ranging from 2460 to 10692 were adopted for reconstruction. The quantitative results for all the evaluation indices are shown in Fig. 6. As the number of nodes increased, the LEs and NMSEs of the CIML, FISTA, Homotopy, and IVTCG methods varied from 0.36 to 1.07 and 0.0298 to 0.0954, 0.50 to 1.48 and 0.0920 to 0.2908, 0.38 to 1.36 and 0.0269 to 0.1847, and 0.56 to 1.55 and 0.0842 to 0.2154, respectively. The variations of the CIML method were more gradual than those of the other three algorithms. Notably, when the number of nodes exceeded 7795, the performance remained relatively stable. Figure 6(d) shows that the RFYs of the four algorithms rose as the number of nodes increased. The RFYs of the CIML method were all above 0.005 except at 2460 nodes and far exceeded those of the FISTA and IVTCG methods. In addition, the fluorescence reconstructed by the CIML method had high contrast, much larger than that of the other three algorithms except at 10692 nodes, and it eventually flattened as the number of nodes increased. Moreover, the Dice coefficient of the CIML method increased with the number of nodes, except at 6341, and far exceeded those of the other three comparison methods. The reconstruction time of the CIML method increased with the number of nodes, but more slowly than that of the FISTA method. According to the experimental results given above, the CIML method had better stability than the other three methods as the number of nodes increased.


Fig. 6. The quantitative analysis of different methods under different numbers of nodes.


4.5 Efficiency

To further observe the influence of excitation points on the reconstruction result, the number of excitation sources was gradually increased. For reconstruction, the digital mouse was meshed with 4623 nodes and 25089 tetrahedral elements. The numerical results for six to 36 excitation points are shown in Fig. 7. Figures 7(a) and (c) show that the LE and NMSE of the CIML method decreased as the number of excitation points increased, except at 12 excitation points; the LEs of the CIML method were smaller than those of the other three algorithms for the same number of excitation points and all remained below 0.53 mm. As seen in Fig. 7(d), the RFYs of the CIML method were much higher than those of the comparison algorithms as the number of excitation points increased, except for the Homotopy method at 36 excitation points. The Dice coefficient of each method increased with the number of excitation points, but the Dice of the CIML method was larger than that of the other three methods at each setting, as shown in Fig. 7(e). Meanwhile, the CNRs of the four methods all decreased correspondingly, but the CNR of the CIML method remained above 42 and far surpassed those of the other methods, as shown in Fig. 7(b). More excitation sources certainly yield more accurate reconstruction results, but as the measurement data increase, the required computational cost also rises rapidly, especially for the FISTA and CIML methods, as shown in Fig. 7(f). Consequently, the CIML method already achieves an excellent reconstruction with six excitation points.


Fig. 7. The quantitative analysis of different methods with different numbers of excitation sources.


4.6 Double-target reconstruction

The algorithm’s capability for multiple-target reconstruction is critical. To further assess the location accuracy and robustness of the CIML method, a dual-source numerical simulation experiment was also conducted. Two cylindrical targets, S1 and S2, were implanted into the liver at (13, 7, 17) mm and (21, 7, 17) mm, respectively. The fluorescent yields of both were set to 0.05 $mm^{-1}$. The diameters and heights of the two sources were set to 2 mm and 1 mm, respectively. To reconstruct the fluorescent sources, the digital mouse was discretized into 15647 nodes and 86389 tetrahedral elements in the forward process, while the reconstruction mesh contained 7850 nodes and 43221 tetrahedrons.

The 3D rendering and cross-section views of the reconstruction results using different methods are displayed in Fig. 8. In the 3D view, the red cylinders and the yellow areas respectively denote the authentic fluorescent sources and the reconstructed fluorescent source regions. The black circles in the cross-sectional views denote the actual positions of the fluorescent targets and the color areas represent the reconstructed sources.


Fig. 8. The transverse slice views in the Z=17 mm plane for the FISTA, Homotopy, IVTCG, and CIML methods are presented in the first row, while the second row illustrates the 3-D reconstruction results corresponding to the four methods.


The cross-sectional views confirm that the CIML method has better multi-target resolution than the FISTA method, and it produced clearer reconstruction results than the Homotopy and IVTCG methods, as shown in the 3D views of Fig. 8. In morphological reconstruction, the CIML method has superior spatial overlap between the reconstructed and true sources and is better positioned near the authentic sources. These comparisons confirm that the reconstruction performance of the CIML method was superior.

The quantitative results of the four methods are summarized in Table 3, which further confirms these observations. The FISTA method was only able to accurately reconstruct S1; its reconstruction of S2 suffered from over-shrinkage. Although the Homotopy and IVTCG methods were able to recover both sources, the CIML method achieved a much smaller LE and much higher Dice coefficients (up to 0.6156 and 0.5385) than the other three methods, demonstrating that the CIML method obtained accurate source localization and morphological reconstruction. Moreover, the NMSE of the CIML method was smaller than that of the other methods, which confirms that the CIML method has the highest reconstruction accuracy. The CIML method took relatively longer than the Homotopy and IVTCG methods. The RFY of the CIML method was 17 times that of the FISTA method and 7 times that of the IVTCG method, which shows that the CIML method significantly improved the fluorescent yield. Notably, the CNR of the CIML method was much larger than those of the FISTA and IVTCG methods, demonstrating better reconstruction contrast. These numerical simulation experiments revealed that the CIML method achieved better performance for source location, morphology, and fluorescence yield. The in vivo experiment in the next section also confirmed this conclusion.


Table 3. Quantitative analysis of different methods with a double-target.

4.7 In-vivo experiment

To further investigate the practical performance of the CIML method, in vivo mouse experiments were conducted under the guidelines of the Air Force Military Medical University Guide for the Care and Use of Laboratory Animals formulated by the National Society for Medical Research. The in vivo experiment used an adult BALB/C mouse. In this experiment, the surface fluorescence image of the probe and the CT data were acquired via a dual-modality FMT/CT system. The micro-CT imaging system consists of an x-ray tube (Oxford Instruments series 5000 Apogee X-ray tube, X-ray Technology Inc., CA) with a focal spot size of 35 $\mu m$ and a high-resolution flat panel x-ray detector (Hamamatsu C7921CA-02, Hamamatsu city, Japan) with a 1032 $\times$ 1012 pixel photodiode array with a 50 $\mu m$ pixel pitch. A fluorescent bead (1.2 mm in diameter and 7.5 mm in height) containing Cy5.5 solution was implanted into the abdomen of the mouse. The extinction coefficient of the Cy5.5 solution at a concentration of 2000 $nM$ was 0.019 $mm^{-1}\mu M^{-1}$, and the quantum efficiency was 0.23. The peak excitation and emission wavelengths of the probe are 670 nm and 710 nm, respectively [42]. The detector’s FOV was 180$^{\circ }$ and four projections were used to acquire the fluorescence data. After the optical images were acquired, structural information was collected from the mouse using a micro-CT system to represent the real distribution and provide prior information for the fluorescence reconstruction. The major organs were segmented according to the CT data and then integrated into the heterogeneous mouse model. The optical parameters for the different organs are shown in Table 4 [41]. The fluorescence image was projected onto the mouse’s surface using the CT data and then registered to the standard mesh. The mouse was discretized into meshes with Amira 5.2, and the fluorescence image was mapped to these meshes to obtain the measurement data [43]. As shown in Fig. 9, the areas indicated by the white circles in the CT images represent the real target distribution, and the real position of the target was (20.3 mm, 28.8 mm, 8.8 mm). The mouse model was discretized into 3905 nodes and 18986 tetrahedral elements for reconstruction.


Fig. 9. Mouse analysis: the micro-CT result. (a) Transverse view. (b) Sagittal view. (c) Coronal view. (d) 3-D visualization.



Table 4. Optical absorption and scattering coefficient of the in vivo mouse experiment.

The 3D views and the cross-sectional views are displayed in Fig. 10. The red cylinder indicates the true position of the fluorescent bead. Figure 10 shows that the IVTCG method produced more reconstruction artifacts, and the Homotopy method could not obtain a feasible reconstruction result because of interference from the background fluorescence signal. The CIML method achieved accurate reconstruction with spatial continuity and high structural similarity in the 3D view. Compared with the IVTCG method, the CIML method introduced few artifact areas. The in vivo experiments demonstrate that the CIML method has excellent accuracy and robustness compared with the other methods, indicating superior performance in FMT reconstruction.


Fig. 10. Reconstruction results of the in vivo experiment. (a)-(d) transverse slice views of Z=8.8 mm obtained by the FISTA, the Homotopy, the IVTCG and the CIML algorithms respectively, while the second row illustrates the 3-D reconstruction results corresponding to the four methods.


The quantitative analysis also verified these results, as displayed in Table 5. Regarding the target position, the FISTA and IVTCG methods obtained relatively satisfactory reconstruction results with LEs of 1.36 mm and 1.28 mm, respectively. The Homotopy method did not accurately determine the location of the target. The LE of the CIML method was the smallest, and it reconstructed the target region more effectively. The Dice coefficient given by the CIML method was 0.5, which was significantly higher than those of the other methods. The in vivo reconstruction results showed the superior performance of CIML in recovering the morphology of the fluorescence probe distribution. The CIML method also produced higher contrast against the background. Compared with the other methods, the CIML method provided a large area of high signal intensity, which significantly improved the overall CNR of the reconstructed source. In addition, the CIML method obtained more fluorescence information, and its RFY of 2.5e-3 is one to three orders of magnitude larger than those of the other methods, proving its advantages for reconstruction. Overall, the in vivo experiment further demonstrated that the CIML method considerably improves the performance of FMT reconstruction, which means that the CIML method has excellent potential for biological applications.


Table 5. Quantitative results of the in vivo mouse experiments.

5. Discussion and conclusion

In this paper, a novel distance metric has been proposed based on correntropy with a Laplacian kernel. Several properties of the proposed metric, such as nonnegativity, nonconvexity, boundedness, and its approximation behaviors, are proved. The proposed metric has greater flexibility than the classical $l_{0}$-norm and $l_{1}$-norm. Moreover, by applying this robust metric to the accurate reconstruction of the internal fluorescent source, a novel robust learning framework (CIML) has been proposed. However, the nonconvexity of the proposed CIML model makes it difficult to optimize. We designed a simple and efficient iterative algorithm to address this problem: a continuous optimization method was developed in which the model is transformed into difference-of-convex-functions programming (DC programming), and the resulting DC optimization algorithm converges linearly. Therefore, the CIML method is a novel reconstruction algorithm for FMT that provides satisfactory reconstruction and a greater fluorescence yield.

To validate the performance of the CIML method, extensive numerical simulations and in vivo mouse experiments were designed. The performance of the CIML method was assessed via the LE, CNR, NMSE, RFY, Dice coefficient, and reconstruction time. Compared with the FISTA, Homotopy, and IVTCG methods, the qualitative and quantitative analyses showed that the CIML method achieved the best results in terms of positioning accuracy, image contrast, spatial resolution of the fluorescent source, and morphology recovery. To further evaluate the robustness of the CIML method, we conducted four groups of tests with different Gaussian and Poisson noise intensities. The experimental results are consistent with our theoretical justifications in section 3. With increasing noise, the advantages of the CIML method over the other methods became more apparent. Of note, even when 25% noise was added to the measurement data sets, the CIML method still produced satisfactory reconstruction results.

In the numerical simulations, the reconstruction results confirmed that the CIML method is less sensitive to the number of excitation points than the other three methods and produces high fluorescence yields while reconstructing accurately. Therefore, the CIML method offers better robustness, stability, and efficiency. In the dual-source simulation studies, the CIML method performed better in terms of LE, NMSE, RFY, and Dice than the other algorithms. These results demonstrate that the CIML method is capable of accurately reconstructing the fluorescent source. The reconstruction results of the in vivo mouse experiment further demonstrate the superiority of the CIML method: in terms of position and spatial overlap, the reconstructed area closely matches the real fluorescent source. The results of the in vivo experiment proved that the CIML method realizes accurate reconstruction. Owing to deviations during organ segmentation and fewer surface measurements, the in vivo results are inferior to the numerical simulation results using the same reconstruction algorithm. However, compared with the other three methods, the reconstruction performance of the CIML method remains superior. Overall, the numerical simulations and in vivo experiments both demonstrated the advantage of the CIML method in terms of location accuracy and morphological recovery.

Although the CIML method achieves encouraging reconstruction results, several challenging problems remain in FMT. The method we designed is time-consuming: the nonconvexity of the proposed model makes it difficult to optimize, and in the DC programming algorithm the nonconvex problem is transformed into a series of sub-optimization problems containing absolute-value operations, which are solved by linear programming. This leads to a larger time cost than some traditional convex optimization reconstruction algorithms. In addition, the high dimension of the system matrix in FMT reconstruction also leads to a large amount of computation time. To overcome this limitation, more efficient solvers, such as the simplex algorithm, will be introduced for the linear programs, and dimension-reduction strategies will be applied to improve the computational efficiency of FMT reconstruction in our future work. To make the results convincing, the regularization parameters of the Homotopy, FISTA, and IVTCG methods were selected using a generalized cross-validation based method [44]; hence, all the results were optimal or near-optimal. Moreover, the CIML method was used to reconstruct the fluorescence probe distribution in this study; its application and performance need to be further investigated in bioluminescence tomography [45] and Cerenkov luminescence tomography [46]. Considering the application background, only sparsity was exploited in the design of the reconstruction algorithm in this study, and the proposed algorithm yields sparser solutions than the other convex optimization algorithms. In future research, according to other application needs, we will introduce hybrid regularization strategies to simultaneously guarantee the sparsity and smoothness of the result. In summary, the proposed CIML reconstruction method is robust and effective for resolving the FMT inverse problem. It achieved better reconstruction performance in both location accuracy and robustness than the traditional methods, and it has excellent potential to improve reconstruction performance and promote the application of FMT in in vivo biological studies.

Funding

Xi'an Science and Technology Bureau (201805060ZD11CG44, 2018JQ6099, 2019JQ-724); China Postdoctoral Science Foundation (2018M643719); National Natural Science Foundation of China (11871321, 61901374, 61906154, 61971350); Key Research and Discovery Plan in Shaanxi Province in 2020 (2020SS-036); Education Department Served Local Special Projects (17JF027); Youth Innovation Team of Shaanxi Provincial Department of Education (21JP123).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the simulation results presented in this paper are available in Ref. [40]. Data underlying the in vivo results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. M. N. van Oosterom, P. Meershoek, M. M. Welling, F. Pinto, P. Matthies, H. Simon, T. Wendler, N. Navab, C. J. H. van de Velde, H. G. van der Poel, and F. W. B. van Leeuwen, “Extending the hybrid surgical guidance concept with freehand fluorescence tomography,” IEEE Trans. Med. Imaging 39(1), 226–235 (2020). [CrossRef]  

2. V. C. Torres, L. Sinha, C. Li, K. M. Tichauer, and J. G. Brankov, “Excised whole lymph node imaging for cancer staging with angular restriction dual fluorescent optical projection tomography,” in 2019 IEEE 16th International Symposium on Biomedical Imaging (IEEE, 2019), pp. 507–511.

3. S. Zhang, X. Ma, Y. Wang, M. Wu, H. Meng, W. Chai, X. Wang, S. Wei, and J. Tian, “Robust reconstruction of fluorescence molecular tomography based on sparsity adaptive correntropy matching pursuit method for stem cell distribution,” IEEE Trans. Med. Imaging 37(10), 2176–2184 (2018). [CrossRef]  

4. Q. Fang, “Mesh-based Monte Carlo method using fast ray-tracing in plucker coordinates,” Biomed. Opt. Express 1(1), 165–175 (2010). [CrossRef]  

5. C. Darne, Y. Lu, and E. M. Sevick-Muraca, “Small animal fluorescence and bioluminescence tomography: a review of approaches, algorithms and technology update,” Phys. Med. Biol. 59(1), R1–R64 (2014). [CrossRef]  

6. H. Li, J. Tian, F. Zhu, W. Cong, L. Wang, E. Hoffman, G. Wang, and J. Tian, “A mouse optical simulation environment (mose) to investigate bioluminescent phenomena in the living mouse with the monte carlo method,” Acad. Radiol. 11(9), 1029–1038 (2004). [CrossRef]  

7. W. Ren, H. Isler, M. Wolf, J. Ripoll, and M. Rudin, “Smart toolkit for fluorescence tomography: simulation, reconstruction, and validation,” IEEE Trans. Biomed. Eng. 67(1), 16–26 (2020). [CrossRef]  

8. A. Ale, V. Ermolayev, E. Herzog, C. Cohrs, M. Angelis, and V. Ntziachristos, “FMT-XCT: in vivo animal studies with hybrid fluorescence molecular tomography-x-ray computed tomography,” Nat. Methods 9(6), 615–620 (2012). [CrossRef]  

9. F. Stuker, C. Baltes, K. Dikaiou, D. Vats, L. Carrara, E. Charbon, J. Ripoll, and M. Rudin, “Hybrid small animal imaging system combining magnetic resonance imaging with fluorescence tomography using single photon avalanche diode detectors,” IEEE Trans. Med. Imaging 30(6), 1265–1273 (2011). [CrossRef]  

10. D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006). [CrossRef]  

11. R. E. Carrillo, A. B. Ramirez, G. R. Arce, K. E. Barner, and B. M. Sadler, “Robust compressive sensing of sparse signals: a review,” EURASIP J. Adv. Signal Process. 2016(1), 108 (2016). [CrossRef]  

12. D. Meng, Q. Zhao, and Z. Xu, “Improve robustness of sparse PCA by ℓ1-norm maximization,” Pattern Recognit. 45(1), 487–497 (2012). [CrossRef]  

13. H. Wang, X. Lu, Z. Hu, and W. Zheng, “Fisher discriminant analysis with ℓ1-norm,” IEEE Trans. Cybern. 44(6), 828–842 (2014). [CrossRef]  

14. H. Yan, Q. L. Ye, and D. J. Yu, “Efficient and robust TWSVM classification via a minimum ℓ1-norm distance metric criterion,” Mach. Learn. 108(6), 993–1018 (2019). [CrossRef]  

15. O. Lee, J. M. Kim, Y. Bresler, and J. C. Ye, “Compressive diffuse optical tomography: Noniterative exact reconstruction using joint sparsity,” IEEE Trans. Med. Imaging 30(5), 1129–1142 (2011). [CrossRef]  

16. E. J. Candes and T. Tao, “Decoding by linear programming,” IEEE Trans. Inf. Theory 51(12), 4203–4215 (2005). [CrossRef]  

17. D. L. Donoho and X. Huo, “Uncertainty principles and ideal atomic decomposition,” IEEE Trans. Inf. Theory 47(7), 2845–2862 (2001). [CrossRef]  

18. S. Jiang, J. Liu, G. Zhang, Y. An, H. Meng, Y. Gao, K. Wang, and J. Tian, “Reconstruction of fluorescence molecular tomography via a fused lasso method based on group sparsity prior,” IEEE Trans. Biomed. Eng. 66(5), 1361–1371 (2019). [CrossRef]  

19. M. Cai, Z. Zhang, X. Shi, Z. Hu, and J. Tian, “NIR-II/NIR-I fluorescence molecular tomography of heterogeneous mice based on Gaussian weighted neighborhood fused lasso method,” IEEE Trans. Med. Imaging 39(6), 2213–2222 (2020). [CrossRef]  

20. A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM J. Imaging Sci. 2(1), 183–202 (2009). [CrossRef]  

21. M. S. Asif and J. Romberg, “Sparse recovery of streaming signals using ℓ1-homotopy,” IEEE Trans. Signal Process. 62(16), 4209–4223 (2014). [CrossRef]  

22. X. He, J. Liang, J. Yu, F. Liu, and J. Tian, “Sparse reconstruction for quantitative bioluminescence tomography based on the incomplete variables truncated conjugate gradient method,” Opt. Express 18(24), 24825–24841 (2010). [CrossRef]  

23. L. Kong, Y. An, Q. Liang, L. Yin, Y. Du, and J. Tian, “Reconstruction for fluorescence molecular tomography via adaptive group orthogonal matching pursuit,” IEEE Trans. Biomed. Eng. 67(9), 2518–2529 (2020). [CrossRef]  

24. J. Dutta, S. Ahn, C. Li, S. R. Cherry, and R. M. Leahy, “Joint ℓ1 and total variation regularization for fluorescence molecular tomography,” Phys. Med. Biol. 57(6), 1459–1476 (2012). [CrossRef]  

25. D. Zhu and C. Li, “Nonconvex regularizations in fluorescence molecular tomography for sparsity enhancement,” Phys. Med. Biol. 59(12), 2901–2912 (2014). [CrossRef]  

26. J. Peng, S. Yue, and H. Li, “NP/CMP equivalence: A phenomenon hidden among sparsity models ℓ0 minimization and ℓp minimization for information processing,” IEEE Trans. Inf. Theory 61(7), 4028–4033 (2015). [CrossRef]  

27. I. Daubechies, R. DeVore, M. Fornasier, and C. S. Güntürk, “Iteratively reweighted least squares minimization for sparse recovery,” Commun. Pure Appl. Math. 63(1), 1–38 (2010). [CrossRef]  

28. W. Cao, J. Sun, and Z. Xu, “Fast image deconvolution using closed-form thresholding formulas of ℓq regularization,” J. Vis. Commun. Image Represent. 24(1), 31–41 (2013). [CrossRef]  

29. Z. Xu, X. Chang, F. Xu, and H. Zhang, “ℓ1/2 regularization: a thresholding representation theory and a fast solver,” IEEE Trans. Neural Netw. Learning Syst. 23(7), 1013–1027 (2012). [CrossRef]  

30. W. Liu, P. P. Pokharel, and J. C. Principe, “Correntropy: properties and applications in non-Gaussian signal processing,” IEEE Trans. Signal Process. 55(11), 5286–5298 (2007). [CrossRef]  

31. Y. Wang, Y. Y. Tang, and L. Li, “Correntropy matching pursuit with application to robust digit and face recognition,” IEEE Trans. Cybern. 47(6), 1354–1366 (2017). [CrossRef]  

32. Y. He, F. Wang, S. Wang, J. Cao, and B. Chen, “Maximum correntropy adaptive filtering approach for robust compressive sensing reconstruction,” Inf. Sci. 480, 381–402 (2019). [CrossRef]  

33. L. Guo, X. Zhang, Z. Liu, X. Xue, Q. Wang, and S. Zheng, “Robust subspace clustering based on automatic weighted multiple kernel learning,” Inf. Sci. 573, 453–474 (2021). [CrossRef]  

34. P. J. Huber, “Robust Statistics,” in International Encyclopedia of Statistical Science, M. Lovric, ed. (Springer Berlin Heidelberg, 2011).

35. P. D. Tao and L. T. H. An, “Convex analysis approach to DC programming: theory, algorithms and applications,” Acta Mathematica Vietnamica 22(1), 289–355 (1997).

36. H. A. L. Thi, T. P. Dinh, L. H. Minh, and X.-T. Vo, “DC approximation approaches for sparse optimization,” Eur. J. Oper. Res. 244(1), 26–46 (2015). [CrossRef]  

37. Y. Lv, J. Tian, W. Cong, G. Wang, J. Luo, W. Yang, and H. Li, “A multilevel adaptive finite element algorithm for bioluminescence tomography,” Opt. Express 14(18), 8211–8223 (2006). [CrossRef]  

38. J. Fan and R. Li, “Variable selection via nonconcave penalized likelihood and its oracle properties,” J. Am. Stat. Assoc. 96(456), 1348–1360 (2001). [CrossRef]  

39. J. Lv and Y. Fan, “A unified approach to model selection and sparse recovery using regularized least squares,” Ann. Statist. 37(6A), 3498–3528 (2009). [CrossRef]  

40. B. Dogdas, D. Stout, A. F. Chatziioannou, and R. M. Leahy, “Digimouse: a 3D whole body mouse atlas from CT and cryosection data,” Phys. Med. Biol. 52(3), 577–587 (2007). [CrossRef]  

41. G. Alexandrakis, F. R. Rannou, and A. F. Chatziioannou, “Tomographic bioluminescence imaging by use of a combined optical-PET (OPET) system: a computer simulation feasibility study,” Phys. Med. Biol. 50(17), 4225–4241 (2005). [CrossRef]  

42. F. Gao, H. Zhao, Y. Tanikawa, and Y. Yamada, “A linear, featured-data scheme for image reconstruction in time-domain fluorescence molecular tomography,” Opt. Express 14(16), 7109–7124 (2006). [CrossRef]  

43. X. Chen, X. Gao, D. Chen, X. Ma, and J. Tian, “3D reconstruction of light flux distribution on arbitrary surfaces from 2D multi-photographic images,” Opt. Express 18(19), 19876–19893 (2010). [CrossRef]  

44. J. Prakash, H. Dehghani, B. W. Pogue, and P. K. Yalavarthy, “Model-resolution-based basis pursuit deconvolution improves diffuse optical tomographic imaging,” IEEE Trans. Med. Imaging 33(4), 891–901 (2014). [CrossRef]  

45. H. Guo, J. Yu, Z. Hu, H. Yi, Y. Hou, and X. He, “A hybrid clustering algorithm for multiple-source resolving in bioluminescence tomography,” J. Biophotonics 11(4), e201700056 (2018). [CrossRef]  

46. H. Guo, X. He, M. Liu, Z. Zhang, Z. Hu, and J. Tian, “Weight multispectral reconstruction strategy for enhanced reconstruction accuracy and stability with Cerenkov luminescence tomography,” IEEE Trans. Med. Imaging 36(6), 1337–1346 (2017). [CrossRef]  

Supplementary Material (1)

Supplement 1: Properties and proofs of properties.

Figures (10)

Fig. 1. The $l_{1}$ norm and $e_{\sigma}(w)$ for different $\sigma$ values.
Fig. 2. (a) 3D view of the numerical mouse model; the diameter and height of the cylindrical fluorescent source in the liver are both set to 1.6 mm. (b) The forward mesh and the photon distribution on the surface for the single source.
Fig. 3. Reconstruction results of the numerical mouse experiments using four methods: (a) the FISTA method, (b) the Homotopy method, (c) the IVTCG method, and (d) the CIML method. The first row shows the cross-sectional views in the z = 17.5 mm plane and the second row shows the corresponding 3D views for (a)-(d). The black circle in the cross-sectional views indicates the real position of the fluorescent source.
Fig. 4. Quantitative analysis of the different methods under different Gaussian noise intensities.
Fig. 5. Quantitative analysis of the different methods under different Poisson noise intensities.
Fig. 6. Quantitative analysis of the different methods with different numbers of nodes.
Fig. 7. Quantitative analysis of the different methods with different numbers of excitation sources.
Fig. 8. Transverse slice views at z = 17 mm for the FISTA, Homotopy, IVTCG, and CIML methods (first row) and the corresponding 3D reconstruction results (second row).
Fig. 9. Mouse analysis: the micro-CT result. (a) Transverse view. (b) Sagittal view. (c) Coronal view. (d) 3D visualization.
Fig. 10. Reconstruction results of the in vivo experiment. (a)-(d) Transverse slice views at z = 8.8 mm obtained by the FISTA, Homotopy, IVTCG, and CIML algorithms, respectively (first row), and the corresponding 3D reconstruction results (second row).

Tables (5)

Table 1. Optical parameters of the numerical mouse phantom.
Table 2. Quantitative results for the four methods used in the single-source numerical simulation.
Table 3. Quantitative analysis of the different methods with a double target.
Table 4. Optical absorption and scattering coefficients of the in vivo mouse experiment.
Table 5. Quantitative results of the in vivo mouse experiments.

Equations (41)

(1) $Ax = Y$
(2) $\arg\min_{x}\ \frac{1}{2}\|Ax - Y\|_2^2 + \lambda\|x\|_0$
(3) $V_{\sigma}(U,Z) = E[\kappa_{\sigma}(U - Z)]$
(4) $\kappa_{\sigma}(u_i, z_i) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{(u_i - z_i)^2}{2\sigma^2}\right)$
(5) $V_{M,\sigma}(U,Z) = \frac{1}{m}\sum_{i=1}^{m}\kappa_{\sigma}(u_i - z_i)$
(6) $V_{\sigma}(U,Z) = f_{E;\sigma}(0)$
(7) $\alpha = \inf\{f(x) = g(x) - h(x) : x \in \mathbb{R}^n\} \qquad (P_{dc})$
(8) $h_l(x) = h(x^l) + \langle x - x^l, y^l\rangle, \quad y^l \in \partial h(x^l)$
(9) $\inf\{g(x) - h_l(x) : x \in \mathbb{R}^n\} \Leftrightarrow \inf\{g(x) - \langle x, y^l\rangle : x \in \mathbb{R}^n\}$
(10) $\min\{g(x) - (h(x^l) + \nabla h(x^l)^T(x - x^l))\}$
(11) $k_{\sigma}(w_1, w_2) = e^{-\sigma|w_1 - w_2|}$
(12) $e_{\sigma}(w) = 1 - e^{-\sigma|w|}, \quad w \in \mathbb{R}$
(13) $J_{MCC} = \frac{1}{M}\sum_{i=1}^{M}\exp(-\sigma|y_i - A_i x|)$
(14) $e_{\sigma}(w_1 + w_2) \leq e_{\sigma}(w_1) + e_{\sigma}(w_2)$
(15) $e_{\sigma}'(w) = \begin{cases}\sigma e^{-\sigma w}, & \text{if } w > 0 \\ -\sigma e^{\sigma w}, & \text{if } w < 0\end{cases}$
(16) $\lim_{w \to +\infty}(\sigma e^{-\sigma w}) = 0, \quad \lim_{w \to -\infty}(-\sigma e^{\sigma w}) = 0$
(17) $e_{\sigma}(w_i) = 1 - e^{-\sigma|w_i|} = \begin{cases}0, & w_i = 0 \\ 1, & w_i \neq 0\end{cases}, \quad \sigma \to +\infty$
(18) $e_{\sigma}(W) = \sum_{i=1}^{n}(1 - e^{-\sigma|w_i|}) = \|W\|_0$
(19) $\min_{x}\ \sum_{i=1}^{M}(1 - e^{-\sigma|y_i - A_i x|}) + \lambda\|x\|_0$
(20) $\|x\|_0 \approx \sum_{k=1}^{N}(1 - e^{-\beta|x_k|})$
(21) $\min_{x}\ \sum_{i=1}^{M}(1 - e^{-\sigma|y_i - A_i x|}) + \lambda\sum_{k=1}^{N}(1 - e^{-\beta|x_k|})$
(22) $e_{\sigma}(w) = g(w) - h(w)$
(23) $g(w) = \sigma|w|, \quad h(w) = \sigma|w| - (1 - e^{-\sigma|w|})$
(24) $\min_{x}\ [E(x) - F(x)]$
(25) $E(x) = \sum_{i=1}^{M}\sigma|y_i - A_i x| + \lambda\sum_{k=1}^{N}\beta|x_k|$
(26) $F(x) = \sum_{i=1}^{M}\left(\sigma|y_i - A_i x| - (1 - e^{-\sigma|y_i - A_i x|})\right) + \lambda\sum_{k=1}^{N}\left(\beta|x_k| - (1 - e^{-\beta|x_k|})\right)$
(27) $\min f(x) = E(x) - F(x), \quad x \in \mathbb{R}^n$
(28) $F_l(x) = F(x^l) + \langle x - x^l, v^l\rangle, \quad v^l \in \partial F(x^l)$
(29) $\min[E(x) - F_l(x)] \Leftrightarrow \min[E(x) - \langle x, v^l\rangle], \quad x \in \mathbb{R}^n$
(30) $v^l = \sum_{i=1}^{M}\left[-\sigma\operatorname{sign}(y_i - A_i x^l)A_i + \sigma e^{-\sigma|y_i - A_i x^l|}\operatorname{sign}(y_i - A_i x^l)A_i\right] + \lambda\sum_{k=1}^{N}\left[\beta\operatorname{sign}(x_k^l) - \beta e^{-\beta|x_k^l|}\operatorname{sign}(x_k^l)\right]$
(31) $x^{i+1} = \arg\min_{x}\ \sum_{i=1}^{M}\sigma|y_i - A_i x| + \lambda\sum_{k=1}^{N}\beta|x_k| - \langle x, v^i\rangle$
(32) $\min_{t_1, t_2, x}\ \sum_{i=1}^{M}\sigma (t_1)_i + \lambda\sum_{k=1}^{N}\beta (t_2)_k - \langle x, v^i\rangle \quad \text{s.t. } |y_i - A_i x| \leq (t_1)_i,\ i = 1,2,\ldots,M;\ \ |x_k| \leq (t_2)_k,\ k = 1,2,\ldots,N$
(33) $E(x^{i+1}) - \nabla F(x^i)^T x^{i+1} \leq E(x^i) - \nabla F(x^i)^T x^i$
(34) $\nabla F(x^i)^T(x^i - x^{i+1}) \leq E(x^i) - E(x^{i+1})$
(35) $F(x^{i+1}) - F(x^i) \geq \nabla F(x^i)^T(x^{i+1} - x^i)$
(36) $E(x^{i+1}) - F(x^{i+1}) \leq E(x^i) - F(x^i)$
(37) $LE = \|L_r - L_a\|_2$
(38) $CNR = \frac{|\mu_{ROI} - \mu_{ROB}|}{\left(\omega_{ROI}\sigma_{ROI}^2 + \omega_{ROB}\sigma_{ROB}^2\right)^{1/2}}$
(39) $NMSE = \frac{\|x_r - x_t\|_2^2}{\|x_t\|_2^2}$
(40) $RFY = \frac{Y}{N}$
(41) $Dice = \frac{2|X \cap Y|}{|X| + |Y|}$
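
For reference, the short Python sketch below computes the evaluation metrics defined in Eqs. (37)-(41) above, under our reading of those formulas. The function and argument names are illustrative and not taken from the authors' code; the CNR weights are assumed to be the node fractions of each region, and RFY is omitted because the quantities Y and N in Eq. (40) are defined in the article body.

import numpy as np


def location_error(center_rec, center_true):
    # LE: Euclidean distance between the reconstructed and actual source centers, Eq. (37).
    return np.linalg.norm(np.asarray(center_rec) - np.asarray(center_true))


def cnr(x, roi_mask):
    # CNR: contrast between the region of interest (ROI) and the background (ROB), Eq. (38).
    roi, rob = x[roi_mask], x[~roi_mask]
    w_roi, w_rob = roi.size / x.size, rob.size / x.size
    return abs(roi.mean() - rob.mean()) / np.sqrt(w_roi * roi.var() + w_rob * rob.var())


def nmse(x_rec, x_true):
    # NMSE: normalized mean square error of the reconstructed fluorescent yield, Eq. (39).
    return np.sum((x_rec - x_true) ** 2) / np.sum(x_true ** 2)


def dice(mask_rec, mask_true):
    # Dice: overlap between the reconstructed and true source regions, Eq. (41).
    inter = np.logical_and(mask_rec, mask_true).sum()
    return 2.0 * inter / (mask_rec.sum() + mask_true.sum())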