When dealing with ill-posed problems such as fluorescence diffuse optical tomography (fDOT), the choice of the regularization parameter is extremely important for computing a reliable reconstruction. Several automatic methods for selecting the regularization parameter have been introduced over the years, and their performance depends on the particular inverse problem. Herein, a U-curve-based algorithm for the selection of the regularization parameter is applied for the first time to fDOT. To increase computational efficiency for large systems, it is desirable to restrict the search to an interval of regularization parameters, and the U-curve method provides such an interval. The U-curve provided a suitable selection of the regularization parameter in terms of Picard’s condition, image resolution and image noise. Results are shown for both phantom and mouse data.
© 2011 OSA
Over the last few years, several optical tomography applications that use diffuse light to examine tissue in vivo have been developed. These techniques are able to retrieve the 3D distribution of a wide set of intrinsic and extrinsic optical contrasts, and have many preclinical and clinical applications, such as breast imaging [1–4] and 3D or quantitative imaging of fluorophore concentration in live animal samples, as in fluorescence diffuse optical tomography (fDOT). fDOT is also known as fluorescence molecular tomography (FMT) when molecular imaging probes are involved [5–7].
The fluorescence diffuse optical tomography (fDOT) forward problem can be expressed as a linear system of equations
Solving [Eq. (1)] is equivalent to finding the predicted data that are as close as possible to the real data, in other words, solving the least-squares problem
One of the main issues of the fDOT inverse problem is that it is ill-posed in the Hadamard sense [8,9]: the solution to the reconstruction problem is non-unique and unstable. Therefore, small perturbations in the acquired data may lead to arbitrarily large changes in the reconstructed images.
In order to obtain a meaningful solution the problem must be regularized. The most common method is the Tikhonov regularization, which consists in replacing the least squares problem [Eq. (2)] by the following functional minimization problem:
In many implementations, the regularization parameter, α, is selected manually: a sequence of regularization parameters is tried, and the value that leads to the best results, as judged by the operator, is chosen. However, this procedure is subjective and slow. To avoid this, several automatic methods for selecting the regularization parameter have been suggested over the years, for instance: the unbiased predictive risk estimator (UPRE), the discrepancy principle (DP) and strategies based on the calculation of the variance of the solution, which require prior knowledge of the noise, as well as others, such as generalized cross-validation (GCV) and the L-curve (LC), that do not need a priori information.
More details can be found in a survey paper describing some of these methods and in chapter 7 of the book by Vogel. As Vogel emphasized, depending on the particular inverse problem, some methods may perform better than others, and some may even fail.
Regarding fDOT, time-resolved (fluorescence) diffuse optical tomography, diffuse optical tomography and optical fluorescence tomography, the reported strategies for selecting the regularization parameter have been manual selection, selection based on an acceptable reconstruction variability (non-biased estimators: mean and standard deviation of the reconstruction), choosing the point of the resolution-variance plot corresponding to equal noise variance, the L-curve [1,15,16] and a variant of the L-curve method.
The L-curve method (LC) has been extensively analyzed [18,19] and applied in different areas. Recently, it has been found that the L-curve returns good regularization parameter values for electrical impedance tomography of the brain and for diffuse optical topography. Both studies compared several methods and concluded that, in practice, selection methods that do not require a priori information, such as GCV and the L-curve, were more robust, and that there was no significant difference in accuracy between GCV and the L-curve. The only relevant difference is that GCV is more computationally expensive for large systems than the L-curve.
In the diffuse optical tomography case, L-curve analysis has been reported to yield overly smooth solutions in some cases.
Lately, the U-curve method has been proposed [23,24] for the selection of the regularization parameter in inverse problems. The U-curve was tested on numerical examples of the Fredholm integral equation of the first kind [23,24] and on super-resolution imaging. These works concluded that:
- The U-curve method provides an interval in which the optimal regularization parameter lies. Since any value outside this interval is not a candidate regularization parameter, the U-curve need not be computed outside it; the use of this interval can therefore greatly increase the computational efficiency of selecting the regularization parameter.
- The U-curve performed better than the L-curve, yielding better reconstruction results that retain robustness and efficacy in blurred or noisy situations.
We validated this method by confirming that Picard’s condition is fulfilled and by inspecting the noise level of the reconstructed images to ensure that the U-curve method yields a satisfactory regularized solution. Phantom and ex-vivo mouse data were acquired with a custom-made fDOT system [27,28].
This paper is organized as follows: Section 2.1 describes our in-house parallel-plate non-contact fDOT setup. The forward problem is detailed in section 2.2. Section 2.3 briefly reviews the singular value decomposition (SVD), which provides a way of analyzing the ill-posedness of a problem. Section 2.4 introduces the L-curve method. Section 2.5 presents the selection of the regularization parameter by the U-curve method. The experiments are explained in section 2.6. Section 2.7 describes how we validate the parameter selection. In section 3, we summarize the main results obtained. We conclude by discussing the issues raised and drawing conclusions in section 4.
2. Materials and methods
2.1 fDOT experimental set up
In the non-contact parallel-plate configuration, the study sample is gently compressed between two parallel anti-reflective plates.
The excitation laser beam enters the sample perpendicularly through the first plate, and the transmitted light emerging through the opposite plate is recorded with a CCD camera.
Figure 1 shows our parallel-plate fDOT experimental setup.
In our prototype, the light emerging from a laser diode is focused onto the sample at the desired points (source locations) using two mirrors moved by galvanometers, thus making it possible to choose the number and spatial distribution of sources. The laser emitter wavelength is set at 675±5 nm, and the power delivered to the sample is controlled by means of a TTL signal that modulates the laser duty cycle. Typical power values are in the 1-mW range. All the components of the set-up are placed inside a light-shielded box. The acquisition process is controlled by in-house software hosted on a PC workstation.
All fluorescence images are taken in transmission and are recorded by placing a 10-nm bandwidth filter centered at 720 nm in front of the camera lens, while for the transmitted excitation images a 10-nm bandwidth filter centered at 675 nm is used. The filters are placed in front of the camera using a motorized filter wheel (see Fig. 1). The acquired images are divided by their respective laser power.
For each source, a variable number and distribution of detectors can be defined over the CCD sensor field of view (FOV), thus making it possible to retrieve the fluorescent and excitation photon density at the desired points on the sample surface.
2.2 Weight matrix formulation. Forward problem
Calculating the weight matrix requires a theoretical model (forward problem) that predicts photon propagation through the diffusive medium. The propagation of photons through biological media can be described by the radiative transfer equation. Under highly scattering conditions such as those presented in this work, the diffusion approximation to the radiative transfer equation provides simple analytical and numerical expressions for light transport and can be applied to a great variety of scattering systems, thus enabling their optical characterization. Photon propagation was modeled as described in [32,33], taking into account the effects of the planar boundaries of the slab in the light transport model.
In order to accurately obtain the 3D distribution of fluorophore concentration, we used the normalized Born approximation previously introduced in [6,34]. This involves normalizing the measured fluorescence intensity to the corresponding measured intensity at the excitation wavelength for each source rs and detector rd, and assuming the contribution of each voxel in the mesh to be correctly described by the Born approximation. Writing Eq. (4) for every source-detector pair, we obtain the following system of equations
2.3 Tikhonov solution based on SVD
A common solution to the Tikhonov minimization problem [Eq. (3)] is given in terms of the singular value decomposition (SVD) of the weight matrix. The SVD of a matrix generates a triplet of matrices
Since this problem is ill-posed, the singular values gradually decay to zero. The left and right singular vectors associated with the smaller singular values tend to have more sign changes, thus increasing the number of unstable solutions. The purpose of the regularization is to filter out the contribution of small singular values to the solution. In particular, the Tikhonov regularization [Eq. (3)] can be written in terms of the solution [Eq. (7)] as 
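As a concrete illustration of this filtered solution, the following minimal numpy sketch computes a Tikhonov-regularized reconstruction from the SVD of the weight matrix. It assumes the convention min ‖Wx − d‖² + α‖x‖², so each SVD component is damped by the filter factor σi²/(σi² + α); the toy system and all variable names are ours, not the paper’s code.

```python
import numpy as np

def tikhonov_svd(W, d, alpha):
    """Tikhonov-regularized solution via the SVD of the weight matrix W.

    Each component of the naive solution (u_i^T d / sigma_i) v_i is damped
    by the filter factor sigma_i^2 / (sigma_i^2 + alpha), which suppresses
    the contribution of small singular values.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    f = s**2 / (s**2 + alpha)          # Tikhonov filter factors
    return Vt.T @ (f * (U.T @ d) / s)  # filtered SVD solution

# Hypothetical ill-conditioned toy system (singular values decay over 6 decades)
rng = np.random.default_rng(0)
W = rng.standard_normal((50, 30)) @ np.diag(np.logspace(0, -6, 30))
x_true = np.ones(30)
d = W @ x_true
x_reg = tikhonov_svd(W, d, alpha=1e-4)
```

With α → 0 this tends to the unstable least-squares solution; larger α trades fidelity for stability, which is exactly the trade-off the parameter-selection methods below try to balance.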
2.4 L-curve method
The L-curve is a log-log plot, for different values of α, of the image norm against the corresponding residual norm [18,19]. The resulting curve is L-shaped. Applying the L-curve routine available in the Matlab Regularization Toolbox, we take the value of the regularization parameter that corresponds to the point of maximum curvature on the graph.
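The corner search can be sketched as follows. This is a simplified illustration, not the actual Regularization Toolbox routine: the curvature of the log-log curve is estimated by finite differences over a grid of α values, and the toy problem and names are hypothetical.

```python
import numpy as np

def l_curve_alpha(W, d, alphas):
    """Pick alpha at the point of maximum curvature of the L-curve,
    i.e. the log-log plot of solution norm vs. residual norm."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    beta = U.T @ d
    rho, eta = [], []
    for a in alphas:
        f = s**2 / (s**2 + a)                  # Tikhonov filter factors
        x = Vt.T @ (f * beta / s)
        rho.append(np.log(np.linalg.norm(W @ x - d)))  # log residual norm
        eta.append(np.log(np.linalg.norm(x)))          # log solution norm
    rho, eta = np.array(rho), np.array(eta)
    # finite-difference curvature of the parametric curve (rho, eta)
    dr, de = np.gradient(rho), np.gradient(eta)
    d2r, d2e = np.gradient(dr), np.gradient(de)
    kappa = np.abs(dr * d2e - d2r * de) / (dr**2 + de**2) ** 1.5
    kappa = np.nan_to_num(kappa)               # guard flat segments
    return alphas[np.argmax(kappa)]

# Hypothetical noisy toy problem
rng = np.random.default_rng(1)
W = rng.standard_normal((40, 25)) @ np.diag(np.logspace(0, -4, 25))
d = W @ np.ones(25) + 1e-3 * rng.standard_normal(40)
alphas = np.logspace(-10, 0, 200)
alpha_lc = l_curve_alpha(W, d, alphas)
```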
2.5 Selection of the regularization parameter by the U-curve method
The U-curve is a plot, on a log-log scale, of the sum of the inverses of the regularized solution norm and the corresponding residual norm as a function of α, U(α) = 1/‖Wxα − d‖² + 1/‖xα‖². It is shown in [23] that this function is strictly decreasing on the interval (0, σn²) and strictly increasing on the interval (σ1², +∞), where σ1 ≥ … ≥ σn are the singular values; hence the function has a local minimum in the interval [σn², σ1²]. Furthermore, it is pointed out that, if the SVD contains one or more non-zero singular values, a unique α for which the U-function reaches a minimum can be calculated analytically, and this is the only minimum of the function. The sides of the curve correspond to regularization parameters for which either the solution norm or the residual norm dominates. The optimum value of α is the parameter for which the U-curve attains its minimum.
The regularized solution norm and the corresponding residual norm are calculated numerically.
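Under the same assumed conventions as the sketches above (filter factors σi²/(σi² + α); hypothetical toy problem and names), a minimal numerical implementation of the U-curve selection, restricted to the interval [σn², σ1²] where the minimum is known to lie:

```python
import numpy as np

def u_curve_alpha(W, d, n_alphas=200):
    """Select alpha at the minimum of
    U(alpha) = 1/||W x_alpha - d||^2 + 1/||x_alpha||^2,
    sampling only the interval [sigma_min^2, sigma_max^2]."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    beta = U.T @ d
    # restricting the search to this interval avoids needless evaluations
    alphas = np.logspace(np.log10(s[-1] ** 2), np.log10(s[0] ** 2), n_alphas)
    u_vals = []
    for a in alphas:
        f = s**2 / (s**2 + a)
        x = Vt.T @ (f * beta / s)
        u_vals.append(1.0 / np.linalg.norm(W @ x - d) ** 2
                      + 1.0 / np.linalg.norm(x) ** 2)
    return alphas[np.argmin(u_vals)]

# Hypothetical noisy toy problem
rng = np.random.default_rng(2)
W = rng.standard_normal((40, 25)) @ np.diag(np.logspace(0, -4, 25))
d = W @ np.ones(25) + 1e-3 * rng.standard_normal(40)
alpha_u = u_curve_alpha(W, d)
```

By construction the returned α lies inside [σn², σ1²], which is the computational advantage claimed for the method.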
As previously mentioned, the weight matrices that describe our system were decomposed by SVD following [Eq. (6)]. Tikhonov regularization was used to reconstruct the images, and the regularization parameter was selected using the U-curve method explained above.
In the experiments presented here, α took 200 equally spaced values.
2.6 Experiments
We present two fDOT reconstructions corresponding to two experiments. The first consisted of a slab phantom into whose center the tip of a capillary containing a known concentration of fluorophore was inserted. For the second experiment, we used a euthanized mouse in which the tip of a capillary containing a known concentration of fluorophore was inserted into the esophagus.
2.6.1 Phantom experiments
We made a slab-shaped agar-based phantom (8 x 6 x 1.5 cm) using intralipid and India ink to obtain an absorption coefficient of approximately μa = 0.3 cm−1 and a reduced scattering coefficient of μs′ = 10 cm−1. A capillary with its tip filled with 6 µl of 30 µM Alexa Fluor 700 (Invitrogen, Carlsbad, California, USA) was inserted into the phantom, with the tip positioned at the center of the slab.
We built a weight matrix of fDOT data collected with 20 x 20 x 10 mesh points for a 1.5 x 1.5 x 1.5-cm FOV. Equally spaced 10 x 10 sources and 12 x 12 detectors were selected. The center of the mesh FOV was aligned with the center of the slab.
2.6.2 Ex-vivo mouse data
A euthanized mouse was imaged with a capillary inserted into the esophagus. The tip of the capillary (<1.5 mm thick) was filled with 6 µl of 30 µM Alexa Fluor 700 (Invitrogen, Carlsbad, California, USA).
We constructed a weight matrix of fDOT data assembled with 20 x 20 x 10 mesh points for a 1.4 x 1.4 x 1.5-cm FOV centered on the chest of the mouse. Equally spaced 6 x 6 sources and 10 x 10 detectors were selected. The mouse was gently compressed between two anti-reflective plates (to approximately 1.5 cm) in order to conform its geometry as closely as possible to that of a slab. The x axis of the FOV runs along the width of the mouse, the y axis along its length, and the z axis along its height.
2.7 Validation of the regularization parameter obtained by the U-curve method
Once the weight matrix was decomposed by SVD, the images were reconstructed using Tikhonov regularization with α parameters in the 10−1 to 10−6 range, which included the U-curve-based regularization parameter.
The noise content and resolution of the images for each α value were then evaluated. To assess image resolution, we followed a previously described procedure, assuming that the FWHM of the point spread function (PSF) of the capillary tip (which can be considered a single isolated region) is directly related to the resolution performance of the system. We measured the noise present in the images as the standard deviation in a region of the image with no signal. The resolution versus noise graph is plotted in Figs. 6 and 11.
Generally, the use of low regularization parameters means that the resolution of the reconstructions improves while the image noise increases. It is not possible to define an optimal parameter common to all imaging applications, since each application may tolerate different levels of noise and resolution.
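These two figures of merit can be sketched in code as follows (hypothetical helper functions, not the paper’s implementation: FWHM of a 1-D profile via linear interpolation of the half-maximum crossings, and noise as the standard deviation over a signal-free region):

```python
import numpy as np

def fwhm(profile, dx=1.0):
    """Full width at half maximum of a 1-D profile, with linear
    interpolation of the two half-maximum crossings."""
    p = np.asarray(profile, dtype=float) - np.min(profile)
    half = 0.5 * np.max(p)
    above = np.flatnonzero(p >= half)
    i0, i1 = above[0], above[-1]
    left = i0 - (p[i0] - half) / (p[i0] - p[i0 - 1]) if i0 > 0 else float(i0)
    right = (i1 + (p[i1] - half) / (p[i1] - p[i1 + 1])
             if i1 < p.size - 1 else float(i1))
    return (right - left) * dx

def image_noise(image, background_mask):
    """Noise estimated as the standard deviation in a region with no signal."""
    return float(np.std(image[background_mask]))

# Sanity check on a Gaussian profile: FWHM = 2*sqrt(2*ln 2)*sigma
x = np.arange(-10, 10, 0.05)
sigma = 1.5
gauss = np.exp(-x**2 / (2 * sigma**2))
width = fwhm(gauss, dx=0.05)
```

Plotting `fwhm` against `image_noise` over the α grid reproduces the kind of resolution-versus-noise curve discussed above.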
An indication of the ill-posedness of the problem is the rate of decrease of the singular values, σi. The discrete Picard condition (DPC) provides an objective assessment of this fact.
The data vector d is said to satisfy the DPC if the data space coefficients |uiTd|, where the ui are the left singular vectors, on average decay to zero faster than the singular values σi.
To compute a satisfactory solution by means of Tikhonov regularization, Picard’s condition has to be fulfilled, since it determines how well the regularized solution approximates the unknown, exact solution. In ill-posed problems, there may be a point beyond which the data become dominated by errors and the DPC fails. In such cases, a suitable regularization parameter should filter out the components beyond the point where the data become dominated by errors.
To confirm that Picard’s condition was fulfilled, we plotted on the same graph the coefficients, their corresponding singular values, and the quotient of both, by applying the Picard routine available in the Matlab Regularization Toolbox.
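The quantities of such a Picard plot can be computed as in this minimal sketch (hypothetical names; for noise-free data d = Wx the quotients |uiTd|/σi equal |viTx| and remain bounded, so the DPC holds, whereas noisy data typically violate it beyond some index):

```python
import numpy as np

def picard_quantities(W, d):
    """Return the singular values sigma_i, the data-space coefficients
    |u_i^T d|, and the quotients |u_i^T d| / sigma_i -- the three
    quantities shown in a discrete Picard plot."""
    U, s, _ = np.linalg.svd(W, full_matrices=False)
    coeffs = np.abs(U.T @ d)
    return s, coeffs, coeffs / s

# Hypothetical toy problem with exponentially decaying singular values
rng = np.random.default_rng(3)
W = rng.standard_normal((40, 25)) @ np.diag(np.logspace(0, -6, 25))
x_true = np.ones(25)
s, coeffs, quotients = picard_quantities(W, W @ x_true)
```

Plotting `s`, `coeffs` and `quotients` against the index i on a semilog axis reproduces the layout of the Picard plots in Figs. 7(d) and 12.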
3. Results
3.1 Phantom experiments
Figure 3 shows the U-curve plot on a log-log scale for the phantom experiment.
In this case, the U-curve shows a minimum corresponding to a regularization parameter α = 4.38*10−2.
The L-curve did not exhibit a neat corner (Fig. 4); the failure to find a sharp corner was due to the severe ill-posedness of this problem.
Figure 5 depicts the fDOT reconstructions with α parameters in the 10−1 to 10−6 range and with α = 4.38*10−2.
Figure 6(a) shows profiles taken in the x direction (corresponding to z=7.5 mm and y=7.5 mm) and the FWHM (full width at half maximum) versus noise plot, outlining how the resolution and the image noise of the reconstructions depend on the regularization parameter. It has been shown that the general trend is for resolution to improve together with increasing image noise as the regularization parameter decreases, and this trend can be seen clearly in Fig. 6. For α values of 10−1, 4.38*10−2, and 10−2, the FWHM of the profiles decreases. For α = 10−3 the trend is truncated because image noise starts to prevail, while for α < 10−3 the reconstruction is noise only, and the object is no longer visible in the reconstructed images. Taking these considerations into account, we observe a heuristic range of values that produces reconstructed images with a reasonable amount of noise and resolution. This range includes the optimum value obtained by the U-curve method.
In Figs. 7(a) and 7(b), we plot the noisy SVD components of the solution and of the right-hand side for the phantom study. One interesting aspect is the severe ill-posedness of the problem, evidenced by the exponential decay of the singular values (Fig. 7(c)).
Figure 7(d) shows the Picard plot, with the maximum and minimum of the heuristically acceptable parameter range plotted as two horizontal dashed lines (10−1 and 10−3).
The Picard plot makes it possible to compare the SVD coefficients of the right-hand side with the singular values and their quotient. Below 10−3, the singular values, on average, decay to zero faster than the corresponding coefficients, so the DPC is no longer satisfied.
The blue line shows the decay of the singular values σi, the green crosses correspond to the coefficients |uiTd|, and the red circles represent the quotients |uiTd|/σi.
We can observe that the singular values above the heuristically acceptable range of parameters (10−1 to 10−3), and in particular the singular values above the U-curve cut-off (4.38*10−2), fulfill Picard’s condition.
3.2 Ex-vivo experiments
Figure 8 shows the U-curve plot on a logarithmic scale, with a minimum corresponding to the regularization value α = 5.72*10−2.
The curve is not really U-shaped, showing that fewer singular values remain than in the phantom experiment.
Figure 9 depicts the fDOT reconstructions with α parameters in the 10−1 to 10−6 range and with α = 5.72*10−2.
Figure 10 shows a coronal view of a 3D render of the Tikhonov reconstruction in terms of SVD for the regularization parameter obtained using the U-curve method. The reconstruction is merged with the white light image of the mouse.
Figure 11(a) shows profiles taken in the x direction, corresponding to z=8 mm and y=0.7 mm, and the FWHM (resolution) vs. noise plot, which outlines how the resolution and image noise of the reconstructions depend on the regularization parameter. Again, we observe a range of α values that produces reconstructed images with a reasonable amount of noise and resolution, and the U-curve-based value falls within this range.
In this case, we can say that the U-curve gives minimum noise while retaining the best resolution possible.
Similar to Fig. 7(d), Fig. 12 shows that the singular values above the heuristic acceptable range of parameters, particularly the singular values above the U-curve cut-off, fulfill Picard’s condition.
The blue line shows the decay of the singular values σi, the green crosses correspond to the coefficients |uiTd|, the red circles represent the quotients |uiTd|/σi, and the two horizontal dashed lines represent the heuristically acceptable range (10−1 and 10−2).
We can observe that, as in Fig. 7(d), the singular values above the heuristically acceptable range of parameters (10−1 to 10−2), and in particular those above the U-curve cut-off (5.72*10−2), fulfill Picard’s condition. The singular values decay faster than those of the phantom experiment (Figs. 7(c)–7(d)); therefore, the ex-vivo problem is more ill-posed than the phantom problem.
4. Discussion and conclusion
In this paper, a U-curve-based method is used for the first time to select the regularization parameter in Tikhonov-regularized fDOT reconstruction. The U-curve provided a suitable selection of the regularization parameter in terms of Picard’s condition, image resolution and image noise. Results are shown for both phantom and mouse data.
Choosing the correct regularization parameter is crucial for the reconstruction of DOT and fDOT data. Singular Value Analysis has been used to optimize experimental setups in optical tomography [12,40–43], and the Tikhonov regularization has recently been used to balance the effect of anatomical and optical information in fDOT problems based on a priori information. Therefore, an automatic method that enables us to choose the regularization parameter is paramount.
We wish to recall that the aim of this paper was not to perform a comparison of these methods. Nevertheless, we would like to point out that the L-curve method has several theoretical limitations and may fail to find a good regularization parameter when the solutions are very smooth, and examples of inverse problems where the L-curve does not converge have been found.
As noted above, in the diffuse optical tomography case, L-curve analysis has been reported to yield overly smooth solutions in some cases.
The GCV method, on the other hand, is more computationally expensive for large systems than the L-curve .
Only two experiments are presented in this study. However, when we used the U-curve method in other experiments with different aims [40,41], we always obtained satisfactory reconstructions, both for mice and for phantoms.
The L-curve calculated for the phantom experiment did not exhibit a neat corner (Fig. 4), whereas the U-curve for the same experiment had a pronounced minimum (Fig. 3). Furthermore, this minimum was found within the interval given in section 2.5, making it unnecessary to calculate the U-curve for α parameters outside this interval. This case therefore illustrates both the robustness and the computational efficiency of the U-curve method.
Moreover, Figs. 7(d) and 12 (Picard plots) show that the U-curve regularization parameter satisfies Picard’s condition, thus assuring a satisfactory regularized solution. From Figs. 5 and 9 (reconstructions obtained for different α parameters) and Figs. 6 and 11 (profiles and FWHM versus noise plots), we can choose regularization parameter values down to the value at which the reconstructed image starts to become noisy. Figures 7(d) and 12 demonstrate that, for these values, Picard’s condition is satisfied, and the U-curve parameter lies in this range.
Simple observation of the Picard plot can reveal a valid regularization parameter; however, this choice can prove more subjective, as it depends on visual inspection by the user. An automatic selection of the parameter is simpler and more objective in most cases.
Furthermore, in agreement with previous reports, we can see clearly in Fig. 6(b) (FWHM versus noise plot) that the general trend is for resolution to improve together with increasing image noise as the regularization parameter decreases.
We want to point out that the resolution is better for the ex-vivo experiment (Fig. 11(b) versus Fig. 6(b)) because the tip of the capillary is closer to the surface than in the phantom experiment. As the resolution of DOT systems is depth-dependent, resolution improves the closer the object is to either side of the slab.
Some advantages of our work are the following. First, we believe that the U-curve method may constitute a good alternative in the cases mentioned above, where the L-curve yields unsatisfactory results. Second, in agreement with [23–25], we have found that an interval exists in which the U-curve has its minimum (where the optimal regularization parameter exists), a fact that can greatly increase the computational efficiency of selecting the regularization parameter.
We expect the automatic U-curve method for selecting the regularization parameter to yield robust and useful results applicable to the reconstruction of fDOT images and to studies of image performance by singular value analysis.
This work was supported in part by Fundación Caja Navarra (#12180), Ministerio de Ciencia e Innovación (FPI program, TEC2008-06715 and TEC2007-64731), and EU-FP7 project FMTXCT-201792.
References and links
1. A. Corlu, R. Choe, T. Durduran, M. A. Rosen, M. Schweiger, S. R. Arridge, M. D. Schnall, and A. G. Yodh, “Three-dimensional in vivo fluorescence diffuse optical tomography of breast cancer in humans,” Opt. Express 15(11), 6696–6716 (2007). [CrossRef] [PubMed]
2. X. Intes, J. Ripoll, Y. Chen, S. Nioka, A. G. Yodh, and B. Chance, “In vivo continuous-wave optical breast imaging enhanced with Indocyanine Green,” Med. Phys. 30(6), 1039–1047 (2003). [CrossRef] [PubMed]
3. A. Li, E. L. Miller, M. E. Kilmer, T. J. Brukilacchio, T. Chaves, J. Stott, Q. Zhang, T. Wu, M. Chorlton, R. H. Moore, D. B. Kopans, and D. A. Boas, “Tomographic optical breast imaging guided by three-dimensional mammography,” Appl. Opt. 42(25), 5181–5190 (2003). [CrossRef] [PubMed]
4. Y. Xu, X. J. Gu, L. L. Fajardo, and H. B. Jiang, “In vivo breast imaging with diffuse optical tomography based on higher-order diffusion equations,” Appl. Opt. 42(16), 3163–3169 (2003). [CrossRef] [PubMed]
5. A. Martin, J. Aguirre, A. Sarasa-Renedo, D. Tsoukatou, A. Garofalakis, H. Meyer, C. Mamalaki, J. Ripoll, and A. M. Planas, “Imaging changes in lymphoid organs in vivo after brain ischemia with three-dimensional fluorescence molecular tomography in transgenic mice expressing green fluorescent protein in T lymphocytes,” Mol. Imaging 7(4), 157–167 (2008).
6. V. Ntziachristos, C. Bremer, E. E. Graves, J. Ripoll, and R. Weissleder, “In vivo tomographic imaging of near-infrared fluorescent probes,” Mol. Imaging 1(2), 82–88 (2002). [CrossRef]
7. V. Ntziachristos, J. Ripoll, L. V. Wang, and R. Weissleder, “Looking and listening to light: the evolution of whole-body photonic imaging,” Nat. Biotechnol. 23(3), 313–320 (2005). [CrossRef] [PubMed]
8. S. R. Arridge, “Optical tomography in medical imaging,” Inverse Probl. 15(2), R41–R93 (1999). [CrossRef]
9. J. Hadamard, “Sur les problèmes aux dérivées partielles et leur signification physique,” Princeton University Bulletin, 49–52 (1902).
10. M. Hanke and P. C. Hansen, “Regularization methods for large scale problems,” Surv. Math. Ind. 3, 253–315 (1993).
11. C. R. Vogel, ed., Computational Methods for Inverse Problems (SIAM, 2002).
12. E. E. Graves, J. P. Culver, J. Ripoll, R. Weissleder, and V. Ntziachristos, “Singular-value analysis and optimization of experimental parameters in fluorescence molecular tomography,” J. Opt. Soc. Am. A 21(2), 231–241 (2004). [CrossRef]
13. N. Ducros, A. Da Silva, L. Hervé, J.-M. Dinten, and F. Peyrin, “A comprehensive study of the use of temporal moments in time-resolved diffuse optical tomography: part II. three-dimensional reconstructions,” Phys. Med. Biol. 54(23), 7107–7119 (2009). [CrossRef] [PubMed]
14. A. J. Chaudhari, S. Ahn, R. Levenson, R. D. Badawi, S. R. Cherry, and R. M. Leahy, “Excitation spectroscopy in multispectral optical fluorescence tomography: methodology, feasibility and computer simulation studies,” Phys. Med. Biol. 54(15), 4687–4704 (2009). [CrossRef] [PubMed]
15. J. P. Culver, R. Choe, M. J. Holboke, L. Zubkov, T. Durduran, A. Slemp, V. Ntziachristos, B. Chance, and A. G. Yodh, “Three-dimensional diffuse optical tomography in the parallel plane transmission geometry: evaluation of a hybrid frequency domain/continuous wave clinical system for breast imaging,” Med. Phys. 30(2), 235–247 (2003). [CrossRef] [PubMed]
16. A. Serdaroglu, B. Yazici, and V. Ntziachristos, "Fluorescence molecular tomography based on a priori information," in Biomedical Optics, Technical Digest (CD) (Optical Society of America, 2006), paper SH46, http://www.opticsinfobase.org/abstract.cfm?URI=BIO-2006-SH46.
17. Z. Xu, J. Yan, and J. Bai, “Determining the regularization parameter: a hybrid reconstruction technique in fluorescence molecular tomography,” in Asia Communications and Photonics Conference and Exhibition (ACP) (2009), pp. 1–2.
18. P. C. Hansen, “Analysis of discrete ill-posed problems by means of the L-curve,” SIAM Rev. 34(4), 561–580 (1992). [CrossRef]
19. P. C. Hansen and D. P. O’Leary, “The use of L-curve in the regularization of discrete ill-posed problems,” SIAM J. Sci. Comput. 14(6), 1487–1503 (1993). [CrossRef]
20. J. F. Abascal, S. R. Arridge, R. H. Bayford, and D. S. Holder, “Comparison of methods for optimal choice of the regularization parameter for linear electrical impedance tomography of brain function,” Physiol. Meas. 29(11), 1319–1334 (2008). [CrossRef] [PubMed]
22. H. R. Busby and D. M. Trujillo, “Optimal regularization of an inverse dynamics problem,” Comput. Struct. 63(2), 243–248 (1997). [CrossRef]
23. D. Krawczyk-Stańdo and M. Rudnicki, “Regularization parameter selection in discrete ill-posed problems-the use of the U-curve,” Int. J. Appl. Math. Comput. Sci. 17(2), 157–164 (2007). [CrossRef]
24. D. Krawczyk-Stańdo and M. Rudnicki, “The use of L-curve and U-curve in inverse electromagnetic modelling,” Intell. Comput. Tech. Appl. Electromagn. 119, 73–82 (2008). [CrossRef]
25. Q. Yuan, L. Zhang, H. Shen, and P. Li, “Adaptive multi-frame image super-resolution based on U-curve,” IEEE Trans. Image Process., e-pub ahead of print (2010). [CrossRef]
26. P. C. Hansen, “The discrete Picard condition for discrete ill-posed problems,” BIT 30(4), 658–672 (1990). [CrossRef]
27. J. Aguirre, A. Sisniega, J. Ripoll, M. Desco, and J. J. Vaquero, “Co-planar FMT-CT,” in 2008 World Molecular Imaging Congress (WMIC), (2008).
28. J. Aguirre, A. Sisniega, J. Ripoll, M. Desco, and J. J. Vaquero, “Design and development of a co-planar fluorescence and X-ray tomograph,” 2008 IEEE Nuclear Science Symposium Conference Record, 5412–5413 (2008).
30. A. Ishimaru, Wave Propagation and Scattering in Random Media (Academic, 1978) vol. 1.
31. I. Freund, M. Kaveh, and M. Rosenbluh, “Dynamic multiple scattering: Ballistic photons and the breakdown of the photon-diffusion approximation,” Phys. Rev. Lett. 60(12), 1130–1133 (1988). [CrossRef] [PubMed]
33. V. A. Markel and J. C. Schotland, “Inverse problem in optical diffusion tomography. II. role of boundary conditions,” J. Opt. Soc. Am. A 19(3), 558–566 (2002). [CrossRef]
34. A. Soubret, J. Ripoll, and V. Ntziachristos, “Accuracy of fluorescent tomography in the presence of heterogeneities: study of the normalized Born ratio,” IEEE Trans. Med. Imaging 24(10), 1377–1386 (2005). [CrossRef] [PubMed]
35. M. Born and E. Wolf, Principles of Optics (University Press, 1999).
36. G. H. Golub and C. F. Van Loan, Matrix computations (Johns Hopkins University Press, 1996), p. 694.
37. P. C. Hansen, “The truncated SVD as a method for regularization,” BIT Num. Math. 27(4), 534–553 (1987). [CrossRef]
38. P. C. Hansen, Regularization Tools Version 4.0 for MATLAB 7.3 (Springer, 2007), pp. 189–194.
40. J. Chamorro, J. Aguirre, J. Ripoll, J. J. Vaquero, and M. Desco, “FDOT setting optimization and reconstruction using singular value analysis with automatic thresholding,” in Nuclear Science Symposium and Medical Imaging Conference (IEEE), (2009).
41. J. Chamorro-Servent, J. Aguirre, J. Ripoll, J. J. Vaquero, and M. Desco, “Maximizing the information content in acquired measurements of a parallel plate non-contact FDOT while minimizing the computational cost: singular value analysis,” in 4th European Molecular Imaging Meeting (ESMI), (2009).
42. J. P. Culver, V. Ntziachristos, M. J. Holboke, and A. G. Yodh, “Optimization of optode arrangements for diffuse optical tomography: A singular-value analysis,” Opt. Lett. 26(10), 701–703 (2001). [CrossRef]
44. M. Hanke, “Limitations of the L-curve method in ill-posed problems,” BIT 36(2), 287–301 (1996). [CrossRef]
45. C. R. Vogel, “Non-convergence of the L-curve regularization parameter selection method,” Inverse Probl. 12(4), 535–548 (1996). [CrossRef]
46. B. W. Pogue, T. O. McBride, J. Prewitt, U. L. Osterberg, and K. D. Paulsen, “Spatially variant regularization improves diffuse optical tomography,” Appl. Opt. 38(13), 2950–2961 (1999). [CrossRef]