Through the reconstruction of the fluorescent probe distribution, fluorescence molecular tomography (FMT) can three-dimensionally resolve molecular processes in small animals in vivo. In this paper, we propose an FMT reconstruction algorithm based on the iterated shrinkage method. By incorporating a surrogate function, the original optimization problem can be decoupled, which enables us to use general sparsity regularization. Owing to the sparsity of the fluorescent sources, the computational cost can be greatly reduced, which leads to a fast reconstruction algorithm. Numerical simulations and physical experiments were conducted. Compared with the Newton method with Tikhonov regularization, the iterated shrinkage based algorithm obtains more accurate results, even with very limited measurement data.
© 2010 OSA
In vivo small animal optical molecular imaging has become an important and rapidly developing method for biomedical research, and has been widely utilized for cancer detection, drug discovery and gene expression visualization, etc. [1–4]. When small animals are labeled with optical molecular probes, the corresponding biological information can be indirectly visualized by measuring the light photons emitted by the probes over the surface, thus realizing non-invasive investigation of the small animals. Among optical molecular imaging modalities, fluorescence molecular imaging has become a popular and promising technique, partly due to the wealth of available fluorescent probes. However, because photons in the near-infrared (NIR) or visible spectrum are highly scattered by tissue, traditional planar fluorescence imaging cannot reflect the probe distribution accurately. Fluorescence molecular tomography (FMT), on the contrary, can three-dimensionally resolve the molecular processes by measuring the photons over the animal surface and reconstructing the distribution of the fluorescent probes, which has made it a research hot spot in recent years.
FMT is often an ill-posed problem, since only the photon distribution over the surface is measurable. This can be alleviated by increasing the number of measurement data sets. However, even if sufficient measurements can be obtained, the problem may still be ill-conditioned, which means that it is unstable and sensitive to noise caused by CCD measurement errors and data interpolation errors. To compute a meaningful approximate solution, more a priori information should be incorporated to regularize the FMT problem. Among different regularization methods, Tikhonov regularization is a popular choice that has been widely adopted in FMT problems [10,11]. The Tikhonov method assumes that the “size” of the solution should not be very large and adds an L2-norm constraint on the solution to the original problem, thus making the problem less sensitive to perturbations. The advantage of L2-norm regularization is that the resulting problem is simple and can be solved efficiently by standard minimization algorithms, such as the Newton method and the conjugate gradient method. However, the solution is often over-smoothed with reduced intensities, and localized features are lost during the reconstruction process. In recent years, some researchers began to use sparsity regularization for optical tomography problems [8,9,12]. For FMT this is based on the fact that the domains of the fluorescent sources are usually very small and sparse compared with the entire reconstruction domain. This can be considered valuable a priori information for FMT. A straightforward way to integrate the sparsity constraint is to replace the Tikhonov method with L0-norm regularization. However, the optimization problem becomes NP-hard in this case and cannot be solved efficiently. A tradeoff is to select a proper Lp-norm with 1 ≤ p < 2. When p is within this range, large values in the solution are penalized less severely than with Tikhonov regularization.
Therefore, Lp regularization with 1 ≤ p < 2 has a higher tendency to promote the sparsity of the reconstructed result. This effect has been demonstrated in many published articles [8,13]. Another advantage of sparsity regularization is that it can still perform well when the measurement data are very limited. This has been well studied in the area of compressed sensing [14]. Besides, the Lp-norm with p ≥ 1 is convex, so the FMT problem remains convex, which is important for successful FMT reconstructions. However, this requires the design of new algorithms that can solve optimization problems with general Lp-norm regularization. Another issue lies in the performance of the reconstruction algorithm: a long reconstruction time may become an obstacle for FMT to be transferred into practical use. Therefore, we believe that designing fast reconstruction algorithms remains an important topic for FMT.
In this paper, a fast reconstruction algorithm based on the iterated shrinkage method [15] is proposed. This algorithm decouples the original high dimensional optimization problem and converts it into a set of one-dimensional optimization problems, which enables us to handle each one separately. More importantly, general Lp-norm regularization can be incorporated straightforwardly without increasing the complexity of the problem. Next, we provide two reconstruction strategies for different situations, together with their complexity analysis. Last, we explain that due to the sparsity of the fluorescent sources, the computational burden can be greatly reduced, which leads to a fast reconstruction algorithm.
This paper is organized as follows. The iterated shrinkage based reconstruction algorithm is presented in section 2. In section 3, numerical simulations and physical experiments are conducted to evaluate the performance of the proposed method. Finally, we discuss the results and conclude this paper.
2.1 Photon propagation model
For photon propagation in biological tissue within the near-infrared spectral window, scattering is the dominant phenomenon over absorption. Therefore, the diffusion approximation to the radiative transfer equation has been extensively applied to model photon transport [16–20]. For steady-state FMT with point excitation sources, the coupled diffusion equations (1) and (2) can be utilized to depict the forward problem [21].
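For reference, the steady-state coupled diffusion model usually takes the following form. This is a sketch of the standard equations, with notation assumed here (the displayed equations did not survive in this copy): Φ_x and Φ_m are the excitation and emission photon densities, D and μ_a the diffusion and absorption coefficients at the respective wavelengths, ημ_af the fluorescent yield to be reconstructed, and Θ the amplitude of the point source at position r_s.

```latex
% Standard steady-state coupled diffusion equations (assumed notation)
-\nabla \cdot \bigl(D_x(\mathbf{r})\,\nabla \Phi_x(\mathbf{r})\bigr)
  + \mu_{ax}(\mathbf{r})\,\Phi_x(\mathbf{r}) = \Theta\,\delta(\mathbf{r}-\mathbf{r}_s)
-\nabla \cdot \bigl(D_m(\mathbf{r})\,\nabla \Phi_m(\mathbf{r})\bigr)
  + \mu_{am}(\mathbf{r})\,\Phi_m(\mathbf{r}) = \Phi_x(\mathbf{r})\,\eta\mu_{af}(\mathbf{r})
```

Both equations are typically closed with Robin-type boundary conditions on the surface.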
2.2 Linear relationship establishment
Instead of solving Eqs. (1) and (2) directly, they are posed in their weak forms. Discretizing the domain with small elements and employing the basis functions as test functions, the FMT problem can be linearized to obtain the matrix-form equations (3) and (4). To avoid the inverse crime, the excitation field is calculated on a fine mesh using 2nd-order Lagrange elements and then projected onto a coarse mesh, which is used for the reconstruction of X with linear elements. As the system matrix is symmetric positive definite, Eq. (4) can be transformed into Eq. (5). Finally, we assemble Eq. (6) for the different excitation locations and obtain the final matrix-form equation AX = Φ [7].
2.3 Reconstruction based on iterated shrinkage method
As mentioned, FMT is usually an ill-posed problem, which means that the null space of A is non-trivial; therefore, the solution is not unique. Even if the FMT problem becomes less ill-posed when more fluorescence measurement data can be captured by the CCD camera, it can still remain ill-conditioned (with a large condition number). Errors in the FMT problem can then be magnified, which will affect the reconstruction results. Errors are inevitable and can be introduced in several ways, e.g. fluorescence measurement errors caused by CCD camera noise and approximation errors caused by data interpolation. A standard way to regularize the problem is to incorporate additional constraints on the solution, which can be considered as a kind of a priori information, leading to the regularized problem of Eq. (10). When a solution component is zero, the derivation of Eq. (10) is not correct; this special case will be considered later. Equation (10) is non-linear except when p = 2. If A^T A were a diagonal matrix, then solving for X would be equivalent to solving for each component individually, which is quite simple. However, this is generally not the case for FMT, and all components are closely coupled. Therefore, solving Eq. (10) is not trivial.
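The regularized inverse problem discussed above can be sketched in the usual penalized least-squares form. The symbols below follow the rest of the paper (A the system matrix, Φ the measurements, X the fluorescent yield); λ and the non-negativity constraint are assumptions of this sketch, not necessarily the exact display of Eq. (10):

```latex
\min_{X \ge 0} \; \| A X - \Phi \|_2^2 + \lambda \, \| X \|_p^p , \qquad 1 \le p < 2
```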
Next, we introduce a surrogate function of the form given in Eq. (12). Differentiating the surrogate objective of Eq. (12) with respect to X and setting the result to zero, we obtain Eq. (13), which can be further simplified into Eq. (14). In Eq. (14), all components of X are decoupled, and solving Eq. (14) is equivalent to solving for each component individually, which makes the use of general Lp-norm regularization possible. This is a distinctive advantage of the iterated shrinkage method compared with other methods, e.g. the Lasso estimator that is based on L1 regularization [22]. Here, we take p = 1 as an example. To solve for a certain component, we first assume that it is positive. Then, if the value computed from Eq. (14) is indeed positive, we accept it. If it does not satisfy this condition, we do not calculate it using Eq. (14) but simply set it to zero. In this way, we avoid the aforementioned special case. Now, we define a function named Shrink:
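In the L1 case, the Shrink operator reduces to the familiar soft-thresholding rule. The sketch below shows the standard two-sided form (the exact threshold in the paper depends on Eq. (14), and a non-negativity constraint would drop the negative branch), so treat it as illustrative rather than the paper's exact operator:

```python
def shrink(v, t):
    """Soft-thresholding: shrink(v, t) = sign(v) * max(|v| - t, 0).

    Values with magnitude below the threshold t are set exactly to
    zero, which is what promotes sparsity in the iterate.
    """
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0
```

Applied component-wise, this leaves large entries (merely shrunk by t) and zeroes out small ones, which matches the behavior described in the text.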
Next, we analyze the influence of the surrogate function on the optimization problem. Since the surrogate term is strictly convex, it has a unique global minimum. Zeroing its derivative with respect to X leads to Eq. (17), whose unique solution shows that the global minimum of the surrogate term is zero when X coincides with the reference point X0. From the above analysis, we know that the optimal solution to this optimization problem will have a bias towards X0, which can be regarded as a priori knowledge of X. However, this a priori information may not be obtainable in most cases. To resolve this problem, an iterative reconstruction scheme is adopted instead: for a new iteration, the reference point is replaced with the previous iterate. Note that the convergence analysis in [15] cannot be directly used to analyze the proposed method. A major reason is that the basis functions in the finite element framework are not orthogonal, though they are linearly independent. However, it has been shown that even with non-orthogonal bases, the iterated shrinkage method can still be applied successfully. We have had a similar experience with FMT reconstructions, as can be seen from the reconstruction results in section 3. Here, we provide a brief explanation to show that the proposed method still makes sense in this case. In fact, the surrogate objective is an upper bound of the original objective, with equality if and only if X equals the reference point. Then, for a certain iteration k, the following inequality holds:
Now, we consider the calculation of c, which must be larger than the maximum eigenvalue of A^T A. A straightforward way is to calculate all the eigenvalues of A^T A and set c to the largest eigenvalue plus ε, where ε is a small positive value. However, this could be rather time-consuming. Therefore, we provide two alternative ways to determine c. The first is to estimate an upper bound of the maximum eigenvalue instead of actually calculating it, and set c to the upper bound plus ε; several estimation algorithms can be adopted, e.g. the popular Minc method [24]. The second is to calculate the maximum eigenvalue directly, using e.g. the power method [25], and set c similarly. Sometimes the estimated upper bound of the maximum eigenvalue may be too large compared with the true value. This will increase the influence of the surrogate function on the optimization problem and slow down the convergence rate. Therefore, of these two ways, we prefer the latter, because it is more accurate.
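As a sketch of the second option, a basic power iteration estimates the largest eigenvalue of a symmetric positive semi-definite operator from matrix-vector products alone, so A^T A never needs to be formed explicitly. The function name and interface below are illustrative:

```python
import math
import random

def power_method_max_eig(matvec, n, iters=200, seed=0):
    """Estimate the largest eigenvalue of a symmetric PSD n x n
    operator given only its matrix-vector product `matvec`.

    Repeatedly applies the operator and renormalizes; the final
    Rayleigh quotient v^T (M v) converges to the top eigenvalue.
    """
    rng = random.Random(seed)
    v = [rng.random() + 0.1 for _ in range(n)]  # positive random start
    for _ in range(iters):
        w = matvec(v)
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    w = matvec(v)
    return sum(a * b for a, b in zip(v, w))  # Rayleigh quotient (v is unit)
```

One would then set c to this estimate times a small safety factor (the ε margin from the text).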
Next, we provide two reconstruction strategies for different situations. Suppose we have obtained A and Φ. For a certain iteration k, the update can be represented in the two ways of Eqs. (20) and (21), which lead to the following two reconstruction strategies:
For reconstruction strategy 1, we do not compute A^T A, since this is time-consuming. As we know, the maximum eigenvalue of the assembled matrix A^T A is equal to the square of the maximum singular value of A, so we can calculate the maximum singular value of A instead. For each iteration, two matrix-vector multiplications with A are needed. If we ignore the computations for the vector-vector operations (including the Shrink operations) and the calculation of c, which are relatively cheap in the total reconstruction process, the computational complexity of strategy 1 is O(2mnK), where A is m × n and K is the iteration number.
For reconstruction strategy 2, we pre-calculate both A^T A and A^T Φ. The computational complexity of calculating A^T A is O(mn²). For each iteration, only one matrix-vector multiplication with the n × n matrix A^T A is needed. The computational complexity of strategy 2 is therefore O(mn² + n²K), where K is the iteration number.
From the above analysis, it is clear that when the iteration number is not very large, strategy 1 is more efficient. For a large iteration number, especially when it is much larger than the dimension of the problem, strategy 2 becomes the better choice.
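The two strategies can be sketched side by side as below. This is a toy pure-Python implementation under assumed notation (A is the m × n system matrix as a list of rows, phi the measurements, c the surrogate constant, lam the regularization parameter); the standard L1 soft-threshold is used for the Shrink step, which may differ in detail from Eq. (14):

```python
def shrink(v, t):
    """Soft-thresholding: sign(v) * max(|v| - t, 0)."""
    return v - t if v > t else (v + t if v < -t else 0.0)

def ista(A, phi, lam, c, iters, strategy=1):
    """Iterated shrinkage sketch with the two update strategies.

    Strategy 1: two matrix-vector products with A per iteration.
    Strategy 2: pre-compute A^T A and A^T phi, then one product
    with the n x n matrix per iteration.
    """
    m, n = len(A), len(A[0])
    x = [0.0] * n
    if strategy == 2:
        AtA = [[sum(A[r][i] * A[r][j] for r in range(m)) for j in range(n)]
               for i in range(n)]
        Atphi = [sum(A[r][i] * phi[r] for r in range(m)) for i in range(n)]
    for _ in range(iters):
        if strategy == 1:
            Ax = [sum(A[r][j] * x[j] for j in range(n)) for r in range(m)]
            g = [sum(A[r][i] * (Ax[r] - phi[r]) for r in range(m))
                 for i in range(n)]          # gradient A^T(Ax - phi)
        else:
            g = [sum(AtA[i][j] * x[j] for j in range(n)) - Atphi[i]
                 for i in range(n)]          # same gradient via A^T A
        x = [shrink(x[i] - g[i] / c, lam / (2.0 * c)) for i in range(n)]
    return x
```

Both branches compute the same gradient, so they converge to the same sparse solution; only the per-iteration cost differs, exactly as in the complexity analysis above.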
When the iteration number is very large, even strategy 2 will be quite time-consuming. Fortunately, if the sparsity characteristic of X is considered, the performance of both strategies can be greatly improved. Suppose the iterate X^k is sparse with s non-zero elements, where s is usually far less than the problem dimension n. When multiplying X^k by a matrix, the multiplications involving the zero elements of X^k can simply be skipped. Taking strategy 2 for example, the computational complexity of calculating (A^T A)X^k is O(n²); if the sparsity of X^k is considered, only O(sn) multiplications are needed, which is a small fraction of the original. For strategy 1, the computational complexity of calculating AX^k can be reduced to O(sm). However, AX^k − Φ is not sparse any more; therefore, computing A^T(AX^k − Φ) still requires O(mn) multiplications.
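The sparsity shortcut amounts to restricting each matrix-vector product to the support of the current iterate. A minimal sketch (with a dense matrix stored as a list of rows):

```python
def sparse_matvec(B, x):
    """Multiply matrix B (list of rows) by vector x, skipping the
    columns where x is exactly zero.

    Cost is proportional to (number of rows) * (non-zeros of x)
    instead of the full row length, which is the saving exploited
    by the iterated shrinkage algorithm once the iterate is sparse.
    """
    support = [j for j, v in enumerate(x) if v != 0.0]
    return [sum(row[j] * x[j] for j in support) for row in B]
```

Since soft-thresholding sets small components exactly to zero, the support typically shrinks quickly, so later iterations are the cheap ones.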
3.1 Simulation verifications
In this subsection, heterogeneous simulation experiments were conducted to demonstrate the performance of the proposed method. Figure 1(a) shows the heterogeneous cylindrical phantom we used, which was 20mm in diameter and 20mm in height. Figure 1(b) is a slice image of the phantom in the z = 0 plane. The phantom consisted of four kinds of materials to represent muscle (M), lung (L), heart (H) and bone (B), respectively. The optical parameters can be found in Table 1. The black dots in Fig. 1(b) represent the excitation light sources, which were modeled as isotropic point sources located one mean free path of photon transport beneath the surface in the z = 0 plane.
As mentioned, for FMT, the domains of the fluorescent sources are usually very small and sparse. Therefore, we used small spheres to represent the fluorescent sources in our experiments. Figure 2 shows the three different phantom setups for a single source (left column), double sources (middle column) and three sources (right column), respectively. The first row of Fig. 2 gives the 3D views of the phantom setups and the second row the slice images in the z = 0 plane. All the fluorescent sources were 2mm in diameter and were centered in the z = 0 plane. The fluorescent yields of these sources were set to 8. These three phantom setups were discretized with tetrahedra. For the forward problem, three fine meshes, each with about 23000 degrees of freedom (DOFs), were used for the different phantom setups. Then the forward solutions were projected onto a single coarse mesh with 2710 DOFs, which was used for the inverse problem. Fluorescence measurement was implemented in transillumination mode. For each excitation source, measurements were taken from the opposite cylindrical side with a 160° field of view (FOV), as illustrated in Fig. 1(b). This means that all the nodes on the cylindrical surface within this FOV were considered measurable. When practical fluorescence measurements are taken using a CCD camera, shot noise always exists, which obeys a Poisson distribution. If large numbers of photons are collected, the shot noise approaches a Gaussian distribution. Therefore, for FMT reconstructions with simulated data, Gaussian noise is often used to simulate the real case. In our simulation experiments, we added 5% Gaussian noise to the measurement data.
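For instance, relative Gaussian noise can be added to simulated measurements as below. The paper does not specify whether the 5% is per-sample relative noise or relative to the maximum measurement, so per-sample is assumed in this sketch:

```python
import random

def add_gaussian_noise(measurements, level=0.05, seed=0):
    """Add zero-mean Gaussian noise with standard deviation equal to
    `level` times each measurement value (5% by default)."""
    rng = random.Random(seed)
    return [m + rng.gauss(0.0, level * abs(m)) for m in measurements]
```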
In this paper, L1-norm regularization was utilized. The maximum iteration number was set to 30000, which was sufficiently large for these experiments. Since this was larger than the dimension of the inverse problem, reconstruction strategy 2 was adopted. To better illustrate the performance of the proposed method, we compared it with the bound-constrained Newton method that minimizes the original energy function with Tikhonov regularization. Since the Newton method is a second-order optimization method, it generally converges faster than first-order ones. Here, we set the maximum iteration number for the Newton method to 300. The regularization parameter generally plays an important role in FMT reconstructions. In this paper, the regularization parameters for the Newton method and the proposed method were manually optimized; finding optimal or near-optimal regularization parameters automatically will be our future work. Both reconstruction algorithms were implemented in C++, and all the reconstructions were carried out on a personal computer with a 2.33GHz Intel Core2 Duo CPU and 2GB RAM.
In the first set of experiments, fluorescence was excited by point sources from 15 different locations in sequence, as illustrated in Fig. 1(b). Measurements were taken every 24° and a total of 15 data sets were acquired for the reconstruction of the fluorescent yield. Figures 3, 4 and 5 show the reconstruction results for the three different phantom setups, which are presented in the form of slice images in the z = 0 plane and iso-surfaces at 30% of the maximum value. The small circles in the slice images denote the real positions of the fluorescent sources. From Figs. 3, 4 and 5 we can clearly see that, when only one fluorescent source exists, the results from both methods are quite satisfactory. However, when multiple sources exist, the over-smoothing effect of Tikhonov regularization begins to appear. On the contrary, the proposed method can still preserve the sparsity of the sources very well in this case, and the reconstructed intensities are greater. To analyze these results quantitatively, we define the location error as the Euclidean distance between the real location of the source center and the location of the node with the maximum reconstructed value for that source. We also define the relative intensity error as the absolute difference between the true intensity of the source and the maximum reconstructed intensity, divided by the true intensity. The quantitative comparisons between these results are presented in Table 2, from which we can see that both reconstruction methods obtain satisfactory source localizations, but the relative intensity errors for the proposed method are smaller.
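The two error metrics just defined are straightforward to compute; a minimal sketch (function names are illustrative):

```python
import math

def location_error(real_center, recon_node):
    """Euclidean distance between the true source center and the
    node carrying the maximum reconstructed value."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(real_center, recon_node)))

def relative_intensity_error(true_intensity, recon_intensity):
    """|I_recon - I_true| / I_true."""
    return abs(recon_intensity - true_intensity) / true_intensity
```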
Next, we compare the running time of the two methods. Since both methods need the pre-calculation of A^T A and A^T Φ, we set the starting point at the time when these two quantities were just obtained. The running time of the Newton method for the three different phantom setups was 166.62s, 189.32s and 180.09s, respectively, while the running time of the proposed method was 1.16s, 2.11s and 4.46s, respectively. The proposed method is much more efficient.
In the second set of experiments, we reduced the amount of measurement data to simulate a much worse case. This is relevant when long-time measurement is not appropriate or feasible. For instance, when imaging small animals like mice, the artifacts caused by movements must be taken into consideration. Besides, long-time measurement can cause bleaching of the fluorescent probe, which may influence the reconstructed results. One way to resolve these problems is to reduce the number of fluorescence measurements. This requires that we be able to reconstruct the fluorescent sources from very limited data. It has been shown for bioluminescence tomography that, by using a sparsity constraint, satisfactory results are still achievable even with very limited imaging data [8]. Here, we only retained the measurement data sets generated by excitation point sources 1, 6 and 11. Figures 6, 7 and 8 show the reconstruction results in this case. From these figures we can see that, when the measurement data is very limited and multiple fluorescent sources exist, the proposed method obtains much better results than the Newton method with Tikhonov regularization. This demonstrates the applicability of the proposed method under more ill-posed conditions. Quantitative comparisons are presented in Table 3. For the Newton method, the reconstruction errors for sources S1 and S2 in the third phantom setup are not presented, because the two sources cannot be separated in the result. The running time of the Newton method for the three different phantom setups was 183.98s, 173.91s and 168.02s, respectively, and the running time of the proposed method was 1.84s, 2.97s and 2.42s, respectively.
3.2 Physical experiments
In this subsection, physical experiments were conducted to further test the proposed algorithm. Figure 9 illustrates the experimental setup. Excitation illumination was provided by a 671nm CW laser and the power was set to 20mW. The spot diameter of the laser beam was approximately 1mm. The fluorescence measurements were implemented in transillumination mode. A 10nm band-pass filter centered at 700nm was used to allow light transmission at the emission wavelength. The optical density of the filter at the excitation wavelength was larger than 5. Fluorescence was captured by a CCD camera (Princeton Instruments VersArray 1300B, Roper Scientific, Trenton, NJ) cooled to −110°C. To reconstruct the sources, the pixels of the measured fluorescence image must first be converted into the corresponding photon flux densities. Then, the calibrated image needs to be back-projected onto the surface of the phantom. Errors can be introduced during this mapping procedure. To avoid this problem and to test only the performance of the proposed reconstruction method, a cubic phantom was utilized in this experiment, so that the back-projection reduced to a point-to-point mapping. Figure 10(a) shows the phantom we used, with a side length of 20mm. The phantom is made from polyoxymethylene, and the optical parameters for both the excitation and emission wavelengths, which are presented in Table 4, were determined by diffuse optical tomography. The sparsity-promoting characteristic of the proposed method is better demonstrated when more than one source exists, as can be seen from our simulation results. Therefore, in this experiment, two small holes of 1.25mm in radius were drilled to allow two fluorescent sources. 2000nM Cy5.5 solution was used as the fluorescent source. The height of the two cylindrical sources was 2mm, and the centers were at (−3.75mm, 3.75mm, 1mm) and (3.75mm, −3.75mm, 1mm) respectively, as illustrated in Fig. 10(b).
To simulate a badly ill-posed situation, fluorescence was excited by point sources from only four different locations, as illustrated in Fig. 10(c). The phantom was placed on a computer-controlled rotational stage to allow measurements from four sides.
In this experiment, the cubic phantom was discretized with tetrahedra. For the forward problem, a fine mesh with about 21000 DOFs was adopted, and a coarse mesh with 2705 DOFs was used for the inverse problem. Figure 11 shows the reconstruction results, which are presented in the form of slice images in the plane of the sources and iso-surfaces at 30% of the maximum value. From Fig. 11 we can clearly see that, due to the badly ill-posed situation and the over-smoothing effect of Tikhonov regularization, the fluorescent sources reconstructed by the Newton method are spread over the entire domain, which makes the result meaningless. The location errors are not presented for the Newton method, because the source locations cannot be identified from the result. On the contrary, the proposed method with L1 regularization preserves the sparsity of the fluorescent sources very well, though some artifacts exist near the center. The location errors for the fluorescent sources S1 and S2 were 1.21mm and 0.82mm, respectively. The running time of the Newton method and the proposed method was 135.91 seconds and 2.56 seconds, respectively.
In this paper, an iterated shrinkage based reconstruction algorithm is proposed. By integrating an additional surrogate function, the original high dimensional optimization problem can be decoupled into a set of one-dimensional problems that can be solved easily. This also enables us to incorporate sparsity regularization in a graceful way. Two reconstruction strategies are provided for different situations, together with their complexity analysis. In addition, we explain that due to the sparsity characteristic of the fluorescent sources, the efficiency of the algorithm can be greatly improved, which leads to a fast reconstruction method. Simulation verifications show that the proposed method outperforms the traditional bound-constrained Newton method with Tikhonov regularization in two ways. First, due to the sparsity constraint of our method, obvious improvements can be seen in the reconstruction results. Second, our method is much faster than the Newton method when the fluorescent sources are sparse, which is important if FMT is to be transferred into practical use. We also tested our method under a more ill-posed condition, and satisfactory results were still achievable. Reconstruction of experimental data further demonstrates the performance of the proposed method.
For FMT reconstruction, the choice of the regularization parameter has a significant impact on the results. A large parameter value can make the reconstructed solution deviate from the real distribution, while a small value contributes little to the regularization of the problem. Finding the optimal or near-optimal regularization parameter automatically remains a challenging task. Generally speaking, two strategies can be used: determining the parameter in advance or updating it heuristically. This will be our future research.
Another important issue lies in the accuracy of the photon propagation model itself. The diffusion equation has been extensively utilized to describe light transport in biological tissue, yet it is not applicable in certain regions, such as void or highly absorptive regions. Several improved models, e.g. higher-order approximations to the radiative transfer equation, have been proposed to resolve this problem, though more computation is typically needed and the physical meaning is less explicit. Since FMT reconstruction is a linear inverse problem in nature, the proposed reconstruction algorithm can potentially be utilized with these improved models.
In conclusion, we have developed a fast reconstruction algorithm with sparsity constraint for FMT. Numerical simulations and physical experiments show the merits of our method compared to the Newton method with Tikhonov regularization. In vivo mouse studies using the proposed method will be reported in the future.
This paper is supported by the Project for the National Basic Research Program of China (973) under Grant No.2006CB705700, Changjiang Scholars and Innovative Research Team in University (PCSIRT) under Grant No.IRT0645, CAS Hundred Talents Program, CAS scientific research equipment develop program under Grant No. YZ200766, the Knowledge Innovation Project of the Chinese Academy of Sciences under Grant No. KGCX2-YW-129, KSCX2-YW-R-262, the National Natural Science Foundation of China under Grant No. 30672690, 30600151, 60532050, 60621001, 30873462, 60910006, 30970769, 30970771, Beijing Natural Science Fund under Grant No.4071003, Science and Technology Key Project of Beijing Municipal Education Commission under Grant No.KZ200910005005.
References and links
1. V. Ntziachristos, J. Ripoll, L. V. Wang, and R. Weissleder, “Looking and listening to light: the evolution of whole-body photonic imaging,” Nat. Biotechnol. 23(3), 313–320 (2005). [CrossRef] [PubMed]
2. R. Weissleder and U. Mahmood, “Molecular imaging,” Radiology 219(2), 316–333 (2001). [PubMed]
7. X. Song, D. Wang, N. Chen, J. Bai, and H. Wang, “Reconstruction for free-space fluorescence tomography using a novel hybrid adaptive finite element algorithm,” Opt. Express 15(26), 18300–18317 (2007), http://www.opticsinfobase.org/abstract.cfm?URI=OE-15-26-18300. [CrossRef] [PubMed]
8. Y. Lu, X. Zhang, A. Douraghy, D. Stout, J. Tian, T. F. Chan, and A. F. Chatziioannou, “Source Reconstruction for Spectrally-resolved Bioluminescence Tomography with Sparse a priori Information,” Opt. Express 17(10), 8062–8080 (2009), http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-17-10-8062. [CrossRef] [PubMed]
9. P. Mohajerani, A. A. Eftekhar, J. Huang, and A. Adibi, “Optimal sparse solution for fluorescent diffuse optical tomography: theory and phantom experimental results,” Appl. Opt. 46(10), 1679–1685 (2007). [CrossRef] [PubMed]
10. D. Wang, X. Song, and J. Bai, “Adaptive-mesh-based algorithm for fluorescence molecular tomography using an analytical solution,” Opt. Express 15(15), 9722–9730 (2007), http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-15-15-9722. [CrossRef] [PubMed]
11. W. Bangerth and A. Joshi, “Adaptive finite element methods for the solution of inverse problems in optical tomography,” Inverse Probl. 24(3), 034011 (2008). [CrossRef]
12. N. Cao, A. Nehorai, and M. Jacobs, “Image reconstruction for diffuse optical tomography using sparsity regularization and expectation-maximization algorithm,” Opt. Express 15(21), 13695–13708 (2007), http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-15-21-13695. [CrossRef] [PubMed]
13. I. F. Gorodnitsky and B. D. Rao, “Sparse signal reconstruction from limited data using focuss: a re-weighted minimum norm algorithm,” IEEE Trans. Signal Process. 45(3), 600–616 (1997). [CrossRef]
14. E. J. Candès, J. K. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Commun. Pure Appl. Math. 59(8), 1207–1223 (2006). [CrossRef]
15. I. Daubechies, M. Defrise, and C. De Mol, “An iterative thresholding algorithm for linear inverse problems with a sparsity constraint,” Commun. Pure Appl. Math. 57(11), 1413–1457 (2004). [CrossRef]
16. F. Gao, H. Zhao, L. Zhang, Y. Tanikawa, A. Marjono, and Y. Yamada, “A self-normalized, full time-resolved method for fluorescence diffuse optical tomography,” Opt. Express 16(17), 13104–13121 (2008), http://www.opticsinfobase.org/oe/abstract.cfm?uri=oe-16-17-13104. [CrossRef] [PubMed]
17. Y. Tan and H. Jiang, “DOT guided fluorescence molecular tomography of arbitrarily shaped objects,” Med. Phys. 35(12), 5703–5707 (2008). [CrossRef]
18. A. Joshi, W. Bangerth, and E. M. Sevick-Muraca, “Adaptive finite element based tomography for fluorescence optical imaging in tissue,” Opt. Express 12(22), 5402–5417 (2004), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-12-22-5402. [CrossRef] [PubMed]
19. J. Feng, K. Jia, G. Yan, S. Zhu, C. Qin, Y. Lv, and J. Tian, “An optimal permissible source region strategy for multispectral bioluminescence tomography,” Opt. Express 16(20), 15640–15654 (2008), http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-16-20-15640. [CrossRef] [PubMed]
20. C. Qin, J. Tian, X. Yang, K. Liu, G. Yan, J. Feng, Y. Lv, and M. Xu, “Galerkin-based meshless methods for photon transport in the biological tissue,” Opt. Express 16(25), 20317–20333 (2008), http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-16-25-20317. [CrossRef] [PubMed]
21. M. Schweiger, S. R. Arridge, M. Hiraoka, and D. T. Delpy, “The finite element method for the propagation of light in scattering media: Boundary and source conditions,” Med. Phys. 22(11), 1779–1792 (1995). [CrossRef] [PubMed]
22. R. Tibshirani, “Regression Shrinkage and Selection via the Lasso,” J. R. Stat. Soc., B 58, 267–288 (1996).
23. M. Elad, B. Matalon, and M. Zibulevsky, “Image denoising with shrinkage and redundant representations,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (New York, 2006), Vol. 2, pp. 1924–1931.
24. H. Minc, Nonnegative Matrices (Wiley, New York, 1988).
25. M. T. Chu and J. L. Watterson, “On a multivariate eigenvalue problem, Part I: Algebraic theory and a power method,” SIAM J. Sci. Comput. 14(5), 1089–1106 (1993). [CrossRef]
26. A. X. Cong and G. Wang, “A finite-element-based reconstruction method for 3D fluorescence tomography,” Opt. Express 13(24), 9847–9857 (2005), http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-13-24-9847. [CrossRef] [PubMed]