Optica Publishing Group

Target depth-regularized reconstruction in diffuse optical tomography using ultrasound segmentation as prior information

Open Access

Abstract

Ultrasound (US)-guided diffuse optical tomography (DOT) is a promising non-invasive functional imaging technique for diagnosing breast cancer and monitoring breast cancer treatment response. However, because larger lesions are highly absorbing, reconstructions of these lesions in reflection geometry may exhibit light shadowing, which leads to inaccurate quantification of their deeper portions. Here we propose a depth-regularized reconstruction algorithm combined with a semi-automated interactive convolutional neural network (CNN) for depth-dependent reconstruction of the absorption distribution. The CNN segments co-registered US images to extract both spatial and depth priors, and the depth-regularized algorithm incorporates these parameters into the reconstruction. Simulation and phantom data show that the proposed algorithm significantly improves the depth distribution of reconstructed absorption maps of large targets. Evaluated on 26 patients with larger breast lesions, the algorithm shows a 2.4 to 3 times improvement in the top-to-bottom homogeneity of the reconstructed absorption maps for these lesions.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Diffuse optical tomography (DOT) is a non-invasive functional imaging technique that reconstructs the optical properties of tissue [1]. Utilizing multiple wavelengths in the near infrared (NIR) spectrum enables quantitative estimations of oxygenated, deoxygenated, and total hemoglobin concentrations, which are directly related to tumor angiogenesis [2,3]. However, due to intense light scattering inside biological tissue, localizing a lesion area becomes challenging, and the DOT reconstruction problem becomes ill-posed and ill-conditioned. Other imaging modalities, such as ultrasound (US) [4–6], x-ray CT [7,8], and magnetic resonance imaging (MRI) [9,10], have been introduced to provide prior information to guide DOT reconstruction. As a result, the reconstruction is less ill-posed, due to the significantly reduced number of imaging voxels. Several clinical studies have been reported for breast cancer diagnosis [4–7,9,10].

However, to reliably reconstruct the optical properties of MRI-, CT-, or US-identified breast lesions, a regularization scheme should be incorporated to constrain the solution space and obtain a quality image. The general approach is based on prior anatomical information from high resolution imaging modalities. Implementation of spatially encoded regularization using images obtained from MRI has been investigated, where a two-step image reconstruction procedure was required [11,12]. In the first step, a suspicious lesion was segmented from the rest of the tissue. Then this prior information was imposed to regularize the matrix inversion. Newer approaches include direct regularization imaging (DRI) in DOT [13] and depth compensation in fluorescence molecular tomography [14]. In DRI, the gray-scale values of MRI images are directly used to regularize the reconstruction, without the need for image segmentation [13]. For transmission-mode fluorescence molecular tomography, a depth compensation algorithm is introduced to reconstruct inclusions in deep tissues [14].

Our group has developed a US-guided DOT system and imaging algorithms that use co-registered US images to locate a breast lesion in reflection geometry and to segment the lesion into a fine-mesh lesion region of interest (ROI) and a coarse-mesh background tissue region [15]. This dual-mesh scheme improves the ill-posed reconstruction by significantly reducing the total number of imaging voxels. Regularization is empirically implemented as $\lambda = p\sqrt {{\sigma _1}} $ to stabilize the solution, where p is proportional to the US measured tumor size and ${\sigma _1}$ is the largest eigenvalue of ${W^\dagger }W,\; \; \textrm{where}\; W\; $is the weight matrix [16]. However, images reconstructed using these algorithms without regularization in depth may show light shadowing due to the high absorption of the upper portion of the lesion, which may lead to inaccurate quantification of the lower portion [17]. Althobaiti et al. investigated DRI using US gray-scale images, and their method improved the malignant-to-benign lesion contrast as well as the lesion shape and distribution [18]. However, DRI using US gray-scale images is limited because the contrast of US images is poor due to speckle noise and posterior acoustic shadowing of larger tumors.

In recent years, deep learning-based image segmentation has been widely used in many areas, and several segmentation neural networks have been introduced in medical imaging [19–21]. In this study, we have implemented a convolutional neural network (CNN) to automatically segment each US image into lesion and background regions [21]. The 2D target shape in one spatial dimension and in depth is automatically extracted, and the target symmetry in another spatial dimension is used to obtain an approximate 3D target shape. From the 3D target shape, coarse and fine meshes are defined and the regularization matrix in depth for image reconstruction is constructed. This approach allows near real-time image segmentation and speeds up image reconstruction, bringing US-guided DOT one step closer to clinical use.

Recently, sparse regularization has been shown to improve DOT image reconstruction, because sparsity is robust against noise and preserves edges in images. In this study, the fast iterative shrinkage-thresholding algorithm (FISTA) [22] is used with the L1 regularization matrix to constrain the reconstruction [23,24,26]. We call our new approach "depth-regularized reconstruction". The resulting absorption maps show a more homogeneous absorption distribution in depth. The performance of our depth-regularized reconstruction has been evaluated using simulated targets, phantom targets, and clinical data, all compared with the results of a regularized reconstruction without depth-dependent regularization [17].

2. Methods

2.1 Diffuse optical tomography

Photon migration inside a highly scattering medium, such as breast tissue, can be modeled by a diffusion equation of the photon-density wave [25]. Assuming the optical properties inside a small voxel are constant and the optical properties change smoothly across all the voxels, the diffusion equation can be reformulated as a Helmholtz wave equation. Then, by linearizing the problem based on the Born approximation and discretizing the imaging space into N disjoint voxels, the equation becomes

$${[{{U_{SC}}} ]_{M \times 1}} = {[W ]_{M \times N}}{[{\delta {\mu_a}} ]_{N \times 1}}, $$
where ${U_{SC}}$ represents the perturbation measurements and M is the number of measurements, which is the number of sources multiplied by the number of detectors. W is the weight matrix computed from the analytic solution of the diffusion equation for a semi-infinite medium, using the optical properties of the background breast tissue, and $\delta {\mu _a}$ is the absorption distribution to be reconstructed. We solve the inverse problem of Eq. (1) with a regularization term $R({\delta {\mu_a}} )= \vert\vert diag({{\lambda _i}} )\delta {\mu _a}\vert\vert_1$ :
$$\delta {\mu _a} = \arg \mathop {\min }\limits_{\delta {\mu _a}} \vert\vert{U_{SC}} - W\delta {\mu _a}\vert\vert^2 + R({\delta {\mu_a}} )\; .$$
Ultrasound has been introduced to provide prior information on the lesion's location and size. With co-registered US images, a dual-mesh method segments the medium into a fine-mesh ellipsoid lesion region and a coarse-mesh background tissue region [15]. Mathematically, we separate W and $\delta {\mu _a}$ into two categories, $W = [{{W_f},\; {W_C}} ]$ and $\delta {\mu _a} = [{\delta {\mu_a}_f,\; \delta {\mu_a}_C} ]$, where ${W_f}$ and $\delta {\mu _a}_f$ are the weight matrix and the changes of absorption distribution inside the lesion region, and ${W_C}$ and $\delta {\mu _a}_C$ are the weight matrix and changes of absorption distribution in the background region. This simple ellipsoid dual-mesh scheme is used for both ${\sigma _1}$-regularized reconstruction (defined below) and depth-regularized reconstruction.
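As a minimal illustration of the dual-mesh split $W = [{W_f},\;{W_C}]$, the columns of the weight matrix can be grouped by whether their voxel lies inside the US-defined lesion ROI. The sketch below uses pure Python, toy dimensions, and hypothetical names (`split_weight_matrix`, `roi_mask`); it is not the system's implementation.

```python
# Sketch of the dual-mesh column split W = [W_f, W_c]: fine-mesh (lesion)
# columns vs. coarse-mesh (background) columns. Toy data, illustrative only.

def split_weight_matrix(W, roi_mask):
    """W: M x N matrix as a list of rows; roi_mask: length-N booleans,
    True for voxels inside the lesion ROI. Returns (W_f, W_c)."""
    W_f = [[row[j] for j in range(len(roi_mask)) if roi_mask[j]] for row in W]
    W_c = [[row[j] for j in range(len(roi_mask)) if not roi_mask[j]] for row in W]
    return W_f, W_c

# Toy example: 2 measurements, 4 voxels, voxels 1 and 2 inside the ROI
W = [[1.0, 2.0, 3.0, 4.0],
     [5.0, 6.0, 7.0, 8.0]]
roi = [False, True, True, False]
W_f, W_c = split_weight_matrix(W, roi)
```

The fine-mesh block keeps many small voxels only where the lesion is, which is what reduces the unknown count in the ill-posed inversion.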

2.2 ${\sigma _1}$-regularized reconstruction using FISTA

The inverse problem of solving for the absorption distribution is described in Eq. (2), where $\vert\vert{U_{SC}} - W\delta {\mu _a}\vert\vert^2$ measures how well the data fit the model, and $R({\delta {\mu_a}} )= \vert\vert diag(\lambda )\delta {\mu _a}\vert\vert_1$ regularizes the reconstruction. The ${l_1}$ norm is used in $R({\delta {\mu_a}} )$ to obtain a sparse solution and improve the accuracy of the reconstruction. Regularization is implemented as $\lambda = p\sqrt {{\sigma _1}} $, where p is proportional to the US-measured tumor size and ${\sigma _1}$ is the largest eigenvalue of ${W^H}W$ [16]. We refer to this algorithm as ${\sigma _1}$-regularized reconstruction in the rest of the manuscript. FISTA with a constant step size was used to solve the inverse problem [22,26]. The proximal gradient method solves the optimization problem with a gradient step followed by a proximal step.

The reconstruction method for the inverse problem is summarized in Algorithm 1 below. We use a zero initialization for the absorption coefficient distribution $\delta {\mu _a} = {[0 ]_{N \times 1}}$. The intermediate variables s and q are also initialized accordingly, as described in step 2 [22]. We then iterate. The gradient of $\vert\vert{U_{SC}} - W\delta {\mu _a}\vert\vert^{2}$ used in step 4 can be computed as

$$\frac{{\partial \vert\vert{U_{SC}} - W\delta {\mu _a}\vert\vert^2}}{{\partial \delta {\mu _a}}} = {W^H}({W\delta\mu_a - {U_{SC}}} ).$$
where ${W^H}$ is the Hermitian adjoint of the weight matrix W. A step size of $\tau $ is chosen as $2/norm({{W^H}W} )$ in this paper. In step 5 of the algorithm, we compute $\delta {\mu _a}^t$ using the proximal operator associated with the regularization term$\; R({\delta {\mu_a}} )$, where the proximal operator is defined as
$$pro{x_R}({z,\; R({\delta {\mu_a}} )} )= \mathop {\textrm{argmin}}\limits_{\delta {\mu _a}} \{ R({\delta {\mu_a}} )+ \; \frac{1}{2}\vert\vert z - \delta {\mu _a}\vert\vert_2^2\} .$$
This proximal operator can be efficiently calculated with the soft threshold function$\; {S_\gamma }(z )$, defined as [22,26]
$${S_\gamma }(z )= sign(z )\odot \max ({0,\; |z |- \gamma } ),$$
where ${\odot} $ is the element-wise multiplication operator and $\gamma = \tau \lambda $. Here, the $sign(z )$ is a sign function that extracts the sign of each element in z, and $\; |z |$ computes the absolute value of each element in z. We then update the intermediate variables ${s_t}$ and ${q_t}$, following the procedures listed as step 6 and step 7, which help accelerate the convergence rate [22].
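The steps above can be sketched in pure Python on a tiny real-valued toy problem (the clinical reconstruction operates on the complex weight matrix and far larger dimensions). The toy matrix, data, regularization weight, and the conservative step size $\tau = 1/{\sigma _1}$ are illustrative assumptions, not the paper's values.

```python
import math

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def soft_threshold(z, gamma):
    # S_gamma(z) = sign(z) * max(0, |z| - gamma), element-wise (Eq. (5))
    return [math.copysign(max(0.0, abs(v) - gamma), v) for v in z]

def fista(W, u, lam, n_iter=200):
    Wt = transpose(W)
    n = len(W[0])
    apply_WtW = lambda x: matvec(Wt, matvec(W, x))
    # Power iteration for sigma_1, the largest eigenvalue of W^T W
    v = [1.0] * n
    for _ in range(50):
        w = apply_WtW(v)
        nrm = math.sqrt(sum(c * c for c in w))
        v = [c / nrm for c in w]
    sigma1 = sum(a * b for a, b in zip(v, apply_WtW(v)))
    tau = 1.0 / sigma1            # conservative step size (illustrative choice)
    x = [0.0] * n                 # step 1: zero initialization
    s, q = x[:], 1.0              # step 2: intermediate variables s, q
    for _ in range(n_iter):
        # step 4: gradient step, grad = W^T (W s - u)  (cf. Eq. (3))
        g = matvec(Wt, [a - b for a, b in zip(matvec(W, s), u)])
        z = [si - tau * gi for si, gi in zip(s, g)]
        # step 5: proximal step via soft thresholding, gamma = tau * lambda
        x_new = soft_threshold(z, tau * lam)
        # steps 6-7: momentum updates that accelerate convergence
        q_new = 0.5 * (1.0 + math.sqrt(1.0 + 4.0 * q * q))
        s = [xn + ((q - 1.0) / q_new) * (xn - xo) for xn, xo in zip(x_new, x)]
        x, q = x_new, q_new
    return x

# Toy 2 x 2 problem with a small l1 weight
W = [[1.0, 0.2], [0.1, 0.9]]
u = [0.2, 0.05]
x_hat = fista(W, u, lam=0.001)
```

With a small `lam`, the iterate converges close to the least-squares solution; larger `lam` shrinks small components toward zero, which is the sparsity-promoting behavior discussed above.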

2.3 Ultrasound segmentation using CNN and depth-regularized reconstruction

Co-registered US images can help to localize the lesion region and reduce the number of voxels with unknown optical properties, easing the ill-posed problem. However, due to the low contrast, speckle noise, and posterior shadowing in most breast US images, accurate automated lesion segmentation remains challenging [20,21]. Typically, an ellipsoid is used to approximate the lesion, and more than twice the lesion size in the spatial dimension is used to define the dual-mesh region, which accounts for the larger target region in low-resolution DOT imaging. The depth is manually measured from co-registered US images. Here we take a further step and extract lesion shape information from US images as a prior for depth-regularized DOT reconstruction.

[Algorithm 1: FISTA-based reconstruction procedure (inline image not reproduced)]

The automated convolutional neural network (CNN) for breast cancer detection was developed by Yap et al. [20]. Their CNN has 33 convolution layers. The network architecture is one convolution layer with a 7×7 kernel, one pooling layer, 6 convolution layers with 3×3 kernels, one pooling layer, 8 convolution layers with 3×3 kernels, one pooling layer, 12 convolution layers with 3×3 kernels, one pooling layer, and 6 convolution layers with 3×3 kernels. Instead of regular convolutions, the last two stages use dilated convolutions (also known as atrous convolutions) with a dilation rate of 2. A pyramid scene parsing network [21] extracts the feature map from the output of the final convolution layer. The CNN model is pre-trained on ImageNet, an image database with 1.2 million images. The input image has three channels, for red, green, and blue. We fine-tuned the adapted CNN by freezing the weights of the first two convolutional layers, which capture universal features such as boundaries and edges, and then trained the model, using PASCAL, a well-known standard dataset, with US images [21], and a small learning rate, to adapt the parameters to our task.

The segmentation of US images using the pre-trained CNN was performed semi-automatically: the user selects four markers, two each in the lateral and depth dimensions, which serve as boundary points. We introduced an extra channel in the image, with four 2D Gaussian functions centered at the markers. After the four markers were selected, the CNN automatically generated the 2D shape of the lesion, and the target symmetry in the other spatial dimension was used to obtain an approximate 3D target shape. With this semi-automatic interactive segmentation, we could efficiently localize the lesion and extract its approximate shape, including its width at each depth layer and its depth, as priors, and could incorporate these parameters into the depth-regularized DOT reconstruction with $R({\delta {\mu_a}} )= \vert\vert diag({{\lambda _i}} )\delta {\mu _a}\vert\vert_1$, as given below.
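To make the marker encoding concrete, the extra input channel can be built as the sum of four 2D Gaussians centered at the selected boundary markers. The grid size, marker positions, and Gaussian width below are illustrative assumptions; the paper does not specify these values.

```python
import math

# Sketch of the extra marker channel: four 2D Gaussians centered at the
# user-selected boundary markers (two lateral, two in depth). All numbers
# here are illustrative, not the values used in the paper.

def marker_channel(shape, markers, sigma=2.0):
    rows, cols = shape
    chan = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            for (mr, mc) in markers:
                d2 = (r - mr) ** 2 + (c - mc) ** 2
                chan[r][c] += math.exp(-d2 / (2.0 * sigma ** 2))
    return chan

# Two lateral and two depth boundary markers on a 16 x 16 toy grid
chan = marker_channel((16, 16), [(8, 2), (8, 13), (2, 8), (13, 8)])
```

Stacking this channel onto the gray-scale US image gives the network an explicit, smooth hint of where the lesion boundary lies.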

The depth-dependent regularization matrix $diag({{\lambda _i}} )$ is computed as follows. At target layer i within the dual-mesh, which generally spans 2 to 3 times the spatial dimension measured from the co-registered US by the CNN, the regularization parameter is computed as ${\lambda _i} = \frac{C}{{width(i )\; \times \; depth{{(i )}^i}}}$, where $depth(i )$ is the absolute depth of the i-th target layer, raised to the i-th power. For example, the exponent is 1 for the top target layer in the dual-mesh, 2 for the second layer, and i for the i-th target layer. Here, $width(i )$ is the width of the i-th target layer, automatically measured by the CNN. C is a constant, empirically chosen as 4 based on simulations. The units of width and depth are cm. ${\lambda _i}$ is set to zero outside the dual-mesh region. Thus, the regularization is tighter for shallow target layers and looser for deeper layers.
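The layer weights described above can be sketched directly from the formula; the widths and depths below are illustrative values, not measurements from the paper.

```python
# Sketch of the depth-dependent regularization weights
# lambda_i = C / (width(i) * depth(i)**i), with C = 4 and units of cm.
# The widths/depths below are illustrative, not values from the paper.

C = 4.0

def depth_regularization(widths_cm, depths_cm):
    """widths_cm[i], depths_cm[i]: CNN-measured width and absolute depth of
    target layer i (0-based here, so the exponent is i + 1 as in the text)."""
    return [C / (w * d ** (i + 1))
            for i, (w, d) in enumerate(zip(widths_cm, depths_cm))]

# Three target layers: wider and deeper layers get smaller lambda
lams = depth_regularization([1.5, 2.5, 3.0], [1.5, 2.0, 2.5])
```

Because the depth enters with a growing exponent, the penalty drops quickly with depth, which is exactly the "tighter on top, looser below" behavior that counteracts light shadowing.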

To quantify the light shadowing effect, the ratio of the summation of the reconstructed $\delta {\mu _a}$ in the top depth layer to the summation in the bottom layers is calculated as $R = \frac{{\sum \delta {\mu _{{a_{top\; layer}}}}}}{{\sum \delta {\mu _{{a_{bottom\; layers}}}}}}$.
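The ratio R can be computed as below; the voxel values are illustrative, and the convention of reporting an infinite R when the bottom layers reconstruct to zero follows the clinical analysis later in the paper.

```python
# Sketch of the top-to-bottom homogeneity ratio R used to quantify light
# shadowing. An empty (all-zero) bottom reconstruction yields an infinite
# ratio, matching how unresolved bottom layers are reported in the text.

def shadowing_ratio(top_layer, bottom_layers):
    top = sum(top_layer)
    bottom = sum(v for layer in bottom_layers for v in layer)
    return float('inf') if bottom == 0 else top / bottom

# Illustrative reconstructed delta-mu_a values (cm^-1) per layer
R = shadowing_ratio([0.20, 0.18, 0.15], [[0.10, 0.08], [0.06, 0.05]])
```

An R near 1 means the top and bottom of the target reconstruct with similar strength; large R indicates shadowing.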

2.4 Phantom and patient experiments

To evaluate the performance of the proposed algorithm, phantom and patient data were acquired with the US-guided DOT system. The DOT system uses four laser diodes, with wavelengths of 740, 780, 808, and 830 nm, and incorporates 10 or 14 parallel photomultiplier tube (PMT) detectors and a commercial US system [6,27]. Each laser diode is modulated at 140 MHz and delivers light sequentially to nine source locations on a hand-held probe. The sources are placed on one side of the ultrasound transducer and the detectors on the other side, with source-to-detector separations varying from 3 cm to 7.6 cm. The probe diameter is 9 cm, and the volume underneath the probe is discretized into voxels. The patient study protocol was approved by the local Institutional Review Boards and was compliant with the Health Insurance Portability and Accountability Act. All patients signed an informed consent form, and all patient data used in this study were de-identified.

A total of 26 patients with large lesions were studied to evaluate the algorithms. Based on biopsy results, 12 patients had malignant lesions (mean age 61 years; range 47-73 years) and 14 had benign lesions (mean age, 44 years; range 22-71 years). For malignant lesions, the average size was 1.8 ± 0.63 cm and average depth was 2.2 ± 0.39 cm; while for benign lesions, the average size was 2.2 ± 0.92 cm and average depth was 1.6 ± 0.43 cm.

3. Results

3.1 Ultrasound segmentation

Ultrasound segmentation, the first step in the proposed algorithm, provides the lesion location and shape information. With accurate segmentation, we can then achieve more accurate reconstruction results. The pre-trained CNN model is applied to our co-registered US images to perform US segmentation.

To evaluate the performance of the US segmentation, an experienced US imager delineated the boundaries of all the breast lesions studied. One example is presented in Fig. 1. Based on the evaluations by US experts, the semi-automated interactive CNN gives results very similar to the manual measurements.


Fig. 1. (a) US image of a breast tumor with manually marked lesion boundaries. (b) US image segmentation using the semi-automated interactive CNN model.


3.2 Reconstruction results

3.2.1 Monte Carlo simulation

To evaluate the performance of the proposed algorithm, MC simulations were performed using known optical properties of the target and background tissue. In the MC simulations, photons propagated in a 10 cm ${\times} $ 10 cm ${\times} $ 6 cm volume whose background optical properties were $\; {\mu _{{a_0}}} = 0.02\; c{m^{ - 1}}$ and $\; {\mu ^{\prime}_{s{p_0}}} = 7\; c{m^{ - 1}}$. Three target shapes were simulated, all with the same target optical properties of $\; {\mu _a} = 0.20\; c{m^{ - 1}}$ and $\; \mu {^{\prime}_{sp}} = 7\; c{m^{ - 1}}$. For the first shape, the target was set as two cylinders of different diameters, stacked concentrically, to mimic an irregularly shaped lesion. The top cylinder was 1.5 cm in diameter and 0.75 cm high, and the bottom one was 3 cm in diameter and 0.75 cm high. The center of this stacked target was located at (0, 0, 2 cm) inside the background volume. Together, the stacked cylinders measured 1.5 cm in height, and the interfacial plane between the cylinders lay at a depth of 2 cm. Imaging two planes at 2.5 mm above and below that depth, as is typical practice in our DOT, yields reconstructions at depths of 1.75 and 2.25 cm. Two sets of reconstructed images were generated, using the ${\sigma _1}$-regularized reconstruction and the proposed depth-regularized reconstruction. For the second target shape, both the top and bottom cylinders were 1.5 cm in diameter and 0.75 cm high, while for the third shape, the larger cylinder was placed on top and the smaller one below.
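The first stacked-cylinder geometry can be checked with a simple voxel-level sketch; the 0.25 cm lateral sampling pitch below is an illustrative choice, not the MC simulation's actual discretization.

```python
# Voxel-level sketch of the first simulated target: a 1.5 cm diameter x
# 0.75 cm high cylinder stacked on a 3 cm diameter x 0.75 cm high cylinder,
# centered at (0, 0, 2 cm) so the target spans z = 1.25 to 2.75 cm.
# The 0.25 cm sampling pitch is an illustrative assumption.

VOX = 0.25  # cm

def in_target(x, y, z):
    """(x, y) in cm from the target axis; z is depth in cm."""
    r2 = x * x + y * y
    if 1.25 <= z < 2.0:       # top cylinder, 1.5 cm diameter
        return r2 <= 0.75 ** 2
    if 2.0 <= z <= 2.75:      # bottom cylinder, 3 cm diameter
        return r2 <= 1.5 ** 2
    return False

grid = [i * VOX - 5.0 for i in range(41)]  # x and y from -5 to 5 cm

def layer_voxels(z):
    """Number of in-target lateral samples at depth z."""
    return sum(in_target(x, y, z) for x in grid for y in grid)
```

Counting in-target samples at the two reconstruction depths (1.75 and 2.25 cm) confirms that the bottom layer is much wider than the top one, which is what makes this shape a hard case for reflection-geometry DOT.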

Longitudinal sections of the simulated targets and the reconstructed absorption maps from both algorithms are presented in Fig. 2. For the first shape, the ${\sigma _1}$-regularized reconstruction has a maximum reconstructed value of $\; {\mu _a} = 0.199\; c{m^{ - 1}}$ in (d), but only the top portion of the lesion is resolved. The reconstructed FWHM width is 3 cm, twice the true target dimension of 1.5 cm. The depth-regularized reconstruction provides a similar maximum reconstructed value of $\; {\mu _a} = 0.197\; c{m^{ - 1}}$, and also resolves the second depth layer, with a 2.25 cm FWHM, which is closer to the real target shape, as shown in (e). Similar results, (f) and (g), are obtained for the second shape, where the ${\sigma _1}$-regularized reconstruction has a maximum reconstructed value of $\; {\mu _a} = 0.170\; c{m^{ - 1}}$ but fails to resolve the target in the second depth layer. The depth-regularized reconstruction provides a maximum reconstructed value of $\; {\mu _a} = 0.171\; c{m^{ - 1}}$, along with a second reconstructed target layer and a top-to-bottom ratio R = 1.65. The second reconstructed target layer has a FWHM width similar to the first layer's target width, which is closer to the real target size. However, for the third target shape, shown in (h) and (i), due to the highly absorbing top cylinder, both algorithms failed to resolve the target in the second depth layer.


Fig. 2. (a), (b), and (c) are cross section illustrations of three differently shaped simulated targets. In each target, two cylinders with the same optical properties are concentrically stacked. (d), (f), and (h) are reconstructed images from the ${\sigma _1}$-regularized reconstruction. Only the top layer of the target is resolved for all three shapes. (e), (g), and (i) are reconstructed images from the proposed depth-regularized algorithm. Under conditions (a) and (b), both targets are resolved at the deeper layer. The ratio R = 1.32 in (e) and 1.65 in (g). For target shape (c), the top portion absorbed most of the light and the deeper target portion could not be resolved. Each image slice is 9 cm by 9 cm in the spatial dimensions, and the depth between slices is 0.5 cm. The absorption coefficient is in cm−1, with a range of 0 to 0.2 cm−1.


The importance of accurate US measurement in resolving the target depth distribution was also evaluated by simulations. A simple ellipsoid fitting was applied to all three simulated target shapes to acquire the approximate width of each target at different depths. Two sets of images were reconstructed for each shape, based on the depth-regularized algorithm with ellipsoidal fitting and the same algorithm with the actual target width. (f) and (g) in Fig. 3 show that the simple ellipsoid fitting performs well when the target width is close to the ellipsoid estimate. However, (d) and (e) show that the ellipsoid fitting leads to a larger ratio, R = 2.45, compared with R = 1.32 when a more accurate measurement of the width is used. For target shape (c), due to the highly absorbing target on top, both methods failed to resolve the bottom portion.


Fig. 3. (a), (b), and (c) are cross sections of three differently shaped simulated targets. For all three shapes, the ellipsoid fitting of the target is marked in red. (d), (f), and (h) are reconstructed images from the depth-regularized algorithm with ellipsoidal fitting, with the ratio R = 2.45 in (d) and R = 1.59 in (f). (e), (g), and (i) are reconstructed images from the proposed algorithm with the actual target size, with the ratio R = 1.32 in (e) and R = 1.65 in (g). For shape (c), both methods failed to resolve the bottom portion.


3.2.2 Phantom experiments

Phantom experiments were conducted at a wavelength of 780 nm to evaluate the performance of the depth-regularized algorithm. Solid ball phantoms made of polyester resin were used to mimic high contrast malignant lesions, with calibrated optical properties of $\; {\mu _a} = 0.23\; c{m^{ - 1}}$ and $\; \mu {^{\prime}_{sp}} = 7\; c{m^{ - 1}}$. These target phantoms were submerged 2 cm deep in a background intralipid solution with $\; {\mu _{{a_0}}}$ in the range of 0.02 to 0.03 $c{m^{ - 1}}$ and $\; \mu {^{\prime}_{s{p_0}}}$ in the range of 7 to 8 $c{m^{ - 1}}$, typical bulk optical properties for normal breast tissue. An example of a 2 cm diameter target is shown in Fig. 4. Figure 4(a) is a co-registered US image showing the spherical phantom in the center of the image. The absorption distribution maps in (b), reconstructed with the ${\sigma _1}$-regularized algorithm, have a maximum $\; {\mu _a} = 0.214\; c{m^{ - 1}}$ but resolve the target only at the 1.5 cm depth layer. The depth-regularized algorithm (c) has a similar maximum $\; {\mu _a} = 0.204\; c{m^{ - 1}}$ and resolves the target in both depth layers. However, it cannot recover the third target layer at 2.5 cm.


Fig. 4. (a) Co-registered US image of a solid ball phantom 2 cm below the surface. (b) Reconstructed images from the ${\sigma _1}$-regularized algorithm, with a maximum reconstructed $\; {\mu _a} = 0.214\; c{m^{ - 1}}$. Only the target's top layer was resolved in depth, which leads to an infinite top-to-bottom ratio. (c) Reconstructed images from the proposed depth-regularized algorithm, with a maximum reconstructed $\; {\mu _a} = 0.204\; c{m^{ - 1}}$. The proposed algorithm resolves both target layers, which is closer to the phantom shape, with a top-to-bottom ratio R = 1.071. Note that each image slice is 9 cm by 9 cm, the depth spacing between layers is 0.5 cm, and the absorption coefficient is in cm−1. The target is marked by a circle, and some sound reverberations from the hard resin ball appear outside the circle.


Three high contrast phantoms, with $\; {\mu _a} = 0.23\; c{m^{ - 1}}$, and three low contrast phantoms, with $\; {\mu _a} = 0.11\; c{m^{ - 1}}$, were reconstructed using both algorithms. The ${\sigma _1}$-regularized algorithm failed to resolve the second target layer in all the high contrast cases and resolved it in two of the three low contrast cases, with R = 1.948 ± 0.276. The low contrast case in which the ${\sigma _1}$-regularized algorithm could not resolve the second target layer was located at 3 cm depth. The proposed depth-regularized algorithm provides an average ratio R = 1.022 ± 0.155 for the low contrast phantoms and an average ratio R = 1.269 ± 0.280 for the high contrast phantoms.

3.2.3 Clinical study

To evaluate the performance of the proposed algorithm, clinical data were acquired from both the lesion side of the breast, as target data, and the contralateral position on the normal breast, as reference data. Background breast tissue optical properties were computed by fitting the reference data. Absorption distribution maps at all four wavelengths were reconstructed for each case and then used to compute hemoglobin concentrations.

Figure 5 shows example reconstructed 780 nm absorption distribution maps of a stage 2 invasive cancer with mixed ductal and lobular features. A co-registered US image is shown in Fig. 5(a), and the resulting image after applying the CNN, with the lesion shape marked, is shown in Fig. 5(b). Images (c) reconstructed with the ${\sigma _1}$-regularized algorithm show that the top portion of the lesion in slice 4 (2 cm depth) has a much higher $\; {\mu _a}\; $of $0.307\; c{m^{ - 1}}$ and is much larger than the bottom portion in slice 5 (2.5 cm depth). The depth-regularized algorithm (d) improves the reconstruction for the second layer, with R = 1.577. One example of a benign fibroadenoma, reconstructed from imaging at 780 nm, is presented in Fig. 6 to demonstrate the performance of the proposed algorithm. Image (c), reconstructed with the ${\sigma _1}$-regularized algorithm, has an R = 2.556. The depth-regularized algorithm (d) improves reconstruction for the second layer (slice 3), with R = 1.085.


Fig. 5. Segmented US image and reconstructed absorption maps of a stage 2 cancer at 780 nm. (a) US image from a co-registered commercial US system. (b) Segmented US image from the CNN. (c) Reconstructed absorption map from the ${\sigma _1}$-regularized algorithm, with a maximum $\; {\mu _a} = 0.307\; c{m^{ - 1}}$ and an infinite R. (d) Reconstructed absorption map from the depth-regularized algorithm, with a maximum $\; {\mu _a} = 0.299\; c{m^{ - 1}}$ and R = 1.57. (c) and (d) each consist of 7 sub-images, each showing one 9 cm by 9 cm cross section of the reconstructed absorption map, from 0.5 cm to 3.5 cm below the tissue surface in 0.5 cm spacing. The absorption coefficient is in cm−1, with a range of 0 to 0.2 cm−1.



Fig. 6. Segmented US images and reconstructed absorption maps of a benign fibroadenoma imaged at 780 nm. (a) US image from a co-registered commercial US system. (b) Segmented US image from the segmentation CNN. (c) Reconstructed absorption map from the ${\sigma _1}$ -regularized algorithm, with a maximum ${\mu _a} = 0.084\; c{m^{ - 1}}$. (d) Reconstructed absorption map from the depth-regularized algorithm, with a maximum ${\mu _a} = 0.079\; c{m^{ - 1}}$. The units of absorption coefficients are cm−1, and the range is 0 to 0.1 cm−1.


Finally, after applying both the ${\sigma _1}$-regularized algorithm and the depth-regularized algorithm to all the patient data, the total hemoglobin concentrations were computed from the reconstructed absorption distributions of all four wavelengths. The boxplot in Fig. 7 shows the maximum total hemoglobin concentrations (HbT) for all 26 cases for both the ${\sigma _1}$-regularized algorithm and depth-regularized algorithm. For the benign cases, the total hemoglobin from the ${\sigma _1}$-regularized algorithm is 63.44 ± 26.16$\mu M$, and for the depth-regularized algorithm, it is 50.32 ± 24.19$\; \mu M$. For the malignant cases, the value is 122.76 ± 26.29 $\mu M$ for the ${\sigma _1}$-regularized algorithm, and 117.18 ± 28.79$\; \mu M\; $for the depth-regularized algorithm. From the reconstructed values, we can conclude that the depth-regularized reconstruction algorithm has similar performance for both absorption coefficients and total hemoglobin, as compared to the ${\sigma _1}$-regularized algorithm. However, the depth-regularized algorithm provides improved depth distribution.


Fig. 7. Boxplots of reconstructed maximum total hemoglobin values among 14 benign and 12 malignant cases, using either the ${\sigma _1}$-regularized algorithm (${\sigma _1}$-regularized) or the proposed depth-regularized algorithm (depth). HbT is reported in $\mu M.$


3.2.4 Quantitative analysis of the light shadowing effect

The top-to-bottom ratio quantifying the light shadowing effect was analyzed for the 26 patient cases. Due to light shadowing, some clinical cases have no reconstructed values in their bottom portions when the ${\sigma _1}$-regularized algorithm is used, and so have an infinite ratio. After excluding those cases, we reconstructed the absorption distribution maps of 12 benign and 11 malignant cases, and then computed the light shadowing ratios, R. The boxplots in Fig. 8 show clear differences between the ${\sigma _1}$-regularized and the depth-regularized algorithms. The proposed depth-regularized algorithm yields smaller ratios, meaning more homogeneous reconstructed absorption distributions in depth. Statistically, the ${\sigma _1}$-regularized algorithm has mean ratios of 3.53 in benign cases and 3.82 in malignant cases. These values are expected, because benign lesions are less absorbing than malignant ones, which leads to reduced light shadowing. The depth-regularized algorithm reconstructs absorption distributions with a reduced light shadowing effect, with mean ratios of 1.19 for benign cases and 1.59 for malignant cases, corresponding to 3.0 and 2.4 times improvements in top-to-bottom reconstructed homogeneity for benign and malignant large lesions, respectively.


Fig. 8. Boxplot of R for 12 benign and 11 malignant cases.


3.2.5 Evaluation on small targets

Although the proposed algorithm focuses on large lesions, we want to show that the depth-regularized algorithm does not change reconstruction results for small targets. We evaluated the proposed algorithm's performance on 5 benign and 5 malignant cases with small targets, where the lesion diameters were smaller than 1 cm. One example, a small 6 mm ductal carcinoma in situ, is presented in Fig. 9, with the lesion located 1.3 cm beneath the skin. The ${\sigma _1}$-regularized algorithm and the depth-regularized algorithm provide similar results for both the reconstructed values and the reconstructed image shape.


Fig. 9. Images of a small malignant lesion imaged at 780 nm, reconstructed with both the ${\sigma _1}$ -regularized algorithm and depth-regularized algorithm. (a) Co-registered US image (b) Segmented US image (c) Reconstructed absorption distribution from the ${\sigma _1}$-regularized algorithm, with a maximum ${\mu _a} = 0.227\; c{m^{ - 1}}$. (d) Reconstructed absorption distribution from the depth-regularized algorithm, with a maximum ${\mu _a} = 0.226\; c{m^{ - 1}}$. The absorption coefficient is in cm−1, and the range is 0 to 0.2 cm−1.


For the five benign lesions, the average size was 1.0 ± 0.33 cm and the average depth was 1.33 ± 0.38 cm, while for the five malignant lesions, the average size was 1.0 ± 0.44 cm and the average depth was 1.6 ± 0.44 cm. For the benign cases, the total hemoglobin from the ${\sigma _1}$-regularized algorithm is 47.16 ± 12.27 $\mu M$, and from the depth-regularized algorithm it is 46.27 ± 14.97 $\mu M$. For the malignant cases, the total hemoglobin from the ${\sigma _1}$-regularized algorithm is 137.73 ± 21.93 $\mu M$, and from the depth-regularized algorithm it is 138.53 ± 21.37 $\mu M$.

3.2.6 Robustness test

The depth-regularized algorithm has shown promising results in reducing the light shadowing effect and improving the target depth distribution. The key is the construction of the depth-dependent regularization matrix, which relies on the US segmentation. Due to the low contrast and speckle noise of US images and the user dependency of our semi-automated segmentation algorithm, different users may produce different segmentation results. Here we evaluated the influence of segmentation errors on the reconstruction of a simulated target of known shape. The simulated target was a 2 cm diameter sphere with an absorption coefficient of ${\mu _a} = 0.2\;c{m^{ - 1}}$, located 2 cm below the tissue surface. Three target layers, at 1.5 cm, 2 cm, and 2.5 cm, were reconstructed. One set of images, reconstructed using the ground-truth target shape, has a maximum reconstructed absorption coefficient of 0.184 cm−1. The segmentation error of each layer was defined as the percentage error of the width of the segmented region in the corresponding reconstructed layer: positive when the segmented region is larger than the true size of the lesion, and negative otherwise. We manually added 5%, 10%, 15%, and 20% positive and negative errors to each individual reconstructed layer and to all reconstructed layers simultaneously, and used the proposed depth-regularized algorithm to reconstruct 24 sets of images. The reconstructed maximum absorption coefficients versus segmentation errors are presented in Fig. 10. The largest deviation is only 11% from the original reconstructed absorption coefficient, which means the proposed algorithm is fairly robust against small segmentation errors. It is interesting to note that when the error in the top target portion is positive, the width is larger, ${\lambda _1}$ is smaller, and the regularization is looser. Therefore, the reconstructed ${\mu _a}$ is larger than when the true width is used at the same depth layer. When the error is negative, the trend is reversed. For the second target layer, however, a positive error causes looser regularization and therefore a higher reconstructed ${\mu _a}$ value in that layer; since the maximum is always located in the top target layer, this reduces the maximum ${\mu _a}$. Conversely, a negative error in the second target layer increases the reconstructed maximum target ${\mu _a}$. The overall error follows the trend of the first-target-layer errors. The highest error is 11%, when the top layer is measured as 20% smaller than the true width, while the error is about 5% when the top layer is measured as 20% larger. The second-layer error is about 5% whether the second layer is measured as smaller or larger than the true width. These findings suggest that when segmentation uncertainty is higher, it is best to use a larger width for the top target layer, which could be done by providing guidelines to operators.
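The trend described above (wider segmentation, smaller ${\lambda _1}$, looser regularization) can be illustrated with a small sketch. This is an assumption-laden toy model, not the paper's actual regularization construction: it simply takes the per-layer weight as inversely proportional to the segmented lesion width, and shows how a signed width error shifts the weight.

```python
import numpy as np

def depth_regularization_weights(widths_cm, lambda0=1.0):
    """Hypothetical per-layer regularization weights: lambda is taken
    inversely proportional to the US-segmented lesion width in each
    reconstruction layer, so a wider segmentation gives a smaller
    lambda (looser regularization), matching the trend in the text."""
    return lambda0 / np.asarray(widths_cm, dtype=float)

def add_width_error(widths_cm, layer, rel_error):
    """Apply a signed relative segmentation error to one layer's width
    (rel_error = +0.20 means the layer is segmented 20% too wide)."""
    w = np.array(widths_cm, dtype=float)
    w[layer] *= 1.0 + rel_error
    return w

true_widths = [2.0, 2.0, 1.5]  # cm: illustrative widths of three target layers
lam_true = depth_regularization_weights(true_widths)
lam_over = depth_regularization_weights(add_width_error(true_widths, 0, +0.20))
lam_under = depth_regularization_weights(add_width_error(true_widths, 0, -0.20))
# Over-segmenting the top layer loosens its regularization; under-
# segmenting tightens it, mirroring the behavior discussed above.
assert lam_over[0] < lam_true[0] < lam_under[0]
```

Sweeping `rel_error` over ±5% to ±20% per layer reproduces the shape of the robustness experiment, although the actual reconstructed ${\mu _a}$ values of course require the full forward model.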


Fig. 10. Reconstructed absorption coefficients versus segmentation errors. Positive errors mean segmented regions are larger than the true size of the lesion, and negative errors mean the opposite. First and second target layer widths were changed to evaluate the errors. Segmentation errors have minimal influence on reconstructed absorption coefficients.


4. Discussion and summary

As presented here, deep learning-based US segmentation extracts the lesion location and the lesion shape from co-registered US images as a prior, which is incorporated into a regularization matrix in the depth-regularized reconstruction algorithm. The choice of regularization matrix is both important and difficult. With the help of an accurate and efficient US segmentation neural network, a lesion’s width and its reconstruction depth can both be used for constructing a depth-dependent regularization matrix. This adaptive regularization matrix gives us a better estimate of the true target absorption distribution in depth.

The depth-regularized reconstruction does not change the maximum reconstructed target absorption coefficient, and therefore does not change the maximum total hemoglobin concentration compared with the ${\sigma _1}$-regularized algorithm. Thus, it does not change the quantification of the malignant-to-benign lesion contrast ratio. However, in this work it improved the lesion depth distribution more in the benign group, by a factor of 3.0, than in the malignant group, by a factor of 2.4. The improved depth distribution could therefore serve as another parameter to distinguish benign from malignant lesions, i.e., benign lesions have a more homogeneous depth distribution than malignant lesions. This initial result will need to be further tested on a larger patient population.

Depth-regularized reconstruction is limited in recovering the depth distribution if the top of the target mass is much larger than the bottom of the mass, as seen from target shape 3 in the simulations (Figs. 2 and 3). Additionally, the algorithm cannot resolve the deep portion of the target beyond the second layer, as seen in the Fig. 4 phantom data, which is more than one centimeter below the top target layer in our reconstruction mesh set-up. In general, the target depth sensitivity is about 3 cm when the center of the target is located at this depth. This depth is adequate to image almost all breast lesions. Note that we can always position a patient differently by using a wedge underneath the patient in a supine position to make sure the lesion is within the depth range.

The automated CNN segmentation algorithm provides the lesion shape to extract the spatial and depth priors and reduce the need for manually measuring these parameters. It will also minimize user dependence in the reconstruction of lesion absorption maps, which is an important step toward the clinical translation of US-guided DOT technology. However, the CNN algorithm still requires users to input the boundary points, and the user must have basic knowledge of US images. This task can be readily accomplished by ultrasound sonographers who are trained to assist radiologists in breast exams. Ideally, the 3D shape of the target gained from several co-registered 2D US images can refine the regularization matrix in depth and further improve the accuracy of the reconstructed absorption distribution. This approach will require scanning the lesion in another dimension using a linear array. In future work, we will evaluate the additional improvement and its clinical utility.

In conclusion, we demonstrate a depth-regularized reconstruction algorithm combined with a semi-automated interactive neural network to reconstruct the absorption distribution and total hemoglobin concentration. Co-registered US images are segmented to provide both spatial and depth priors, which are incorporated into the reconstruction. The depth-regularized algorithm is validated using simulations, phantom data, and clinical data. The light shadowing effect is reduced by a factor of 2 to 3 in large lesions. The algorithm improved the depth distribution more in the benign group than in the malignant group, a difference that could provide another parameter for breast cancer classification. We believe our work can be generalized to US-guided diffuse optical tomography and fluorescence tomography in reflection geometry.

Funding

National Institutes of Health (NIBIB R01EB002136, R01CA228047).

Acknowledgements

We acknowledge the funding support for this work from NCI (R01CA228047) and NIBIB (R01EB002136). The authors thank Professor Jim Ballard for reviewing and editing the manuscript.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. D. A. Boas, D. H. Brooks, E. L. Miller, C. A. DiMarzio, M. Kilmer, R. J. Gaudette, and Q. Zhang, “Imaging the body with diffuse optical tomography,” IEEE Signal Process. Mag. 18(6), 57–75 (2001).

2. B. J. Tromberg, B. W. Pogue, K. D. Paulsen, A. G. Yodh, D. A. Boas, and A. E. Cerussi, “Assessing the future of diffuse optical imaging technologies for breast cancer management,” Med. Phys. 35(6), 2443–2451 (2008).

3. D. Grosenick, H. Rinneberg, R. Cubeddu, and P. Taroni, “Review of optical breast imaging and spectroscopy,” J. Biomed. Opt. 21(9), 091311 (2016).

4. Q. Zhu, E. B. Cronin, A. A. Currier, H. S. Vine, M. Huang, N. Chen, and C. Xu, “Benign versus malignant breast masses: optical differentiation with US-guided optical imaging reconstruction,” Radiology 237(1), 57–66 (2005).

5. Q. Zhu, P. U. Hegde, A. Ricci Jr, M. Kane, E. B. Cronin, Y. Ardeshirpour, C. Xu, A. Aguirre, S. H. Kurtzman, and P. J. Deckers, “Early-stage invasive breast cancers: potential role of optical tomography with US localization in assisting diagnosis,” Radiology 256(2), 367–378 (2010).

6. Q. Zhu, A. Ricci Jr, P. Hegde, M. Kane, E. Cronin, A. Merkulov, Y. Xu, B. Tavakoli, and S. Tannenbaum, “Assessment of functional differences in malignant and benign breast lesions and improvement of diagnostic accuracy by using US-guided diffuse optical tomography in conjunction with conventional US,” Radiology 280(2), 387–397 (2016).

7. Q. Fang, J. Selb, S. A. Carp, G. Boverman, E. L. Miller, D. H. Brooks, R. H. Moore, D. B. Kopans, and D. A. Boas, “Combined optical and X-ray tomosynthesis breast imaging,” Radiology 258(1), 89–97 (2011).

8. B. B. Zimmermann, B. Deng, B. Singh, M. Martino, J. Selb, Q. Fang, A. Y. Sajjadi, J. Cormier, R. H. Moore, D. B. Kopans, D. A. Boas, M. A. Saksena, and S. A. Carp, “Multimodal breast cancer imaging using coregistered dynamic diffuse optical tomography and digital breast tomosynthesis,” J. Biomed. Opt. 22(4), 046008 (2017).

9. B. Brooksby, B. W. Pogue, S. Jiang, H. Dehghani, S. Srinivasan, C. Kogel, T. D. Tosteson, J. Weaver, S. P. Poplack, and K. D. Paulsen, “Imaging breast adipose and fibroglandular tissue molecular signatures by using hybrid MRI-guided near-infrared spectral tomography,” Proc. Natl. Acad. Sci. U. S. A. 103(23), 8828–8833 (2006).

10. J. Feng, J. Xu, S. Jiang, H. Yin, Y. Zhao, J. Gui, K. Wang, X. Lv, F. Ren, B. W. Pogue, and K. D. Paulsen, “Addition of T2-guided optical tomography improves noncontrast breast magnetic resonance imaging diagnosis,” Breast Cancer Res. 19(1), 117 (2017).

11. B. Brooksby, S. Jiang, H. Dehghani, B. W. Pogue, K. D. Paulsen, J. B. Weaver, C. Kogel, and S. P. Poplack, “Combining near-infrared tomography and magnetic resonance imaging to study in vivo breast tissue: implementation of a Laplacian-type regularization to incorporate magnetic resonance structure,” J. Biomed. Opt. 10(5), 051504 (2005).

12. P. K. Yalavarthy, B. W. Pogue, H. Dehghani, C. M. Carpenter, S. Jiang, and K. D. Paulsen, “Structural information within regularization matrices improves near infrared diffuse optical tomography,” Opt. Express 15(13), 8043–8058 (2007).

13. L. Zhang, S. Jiang, Y. Zhao, J. Feng, B. W. Pogue, and K. D. Paulsen, “Direct regularization from co-registered contrast MRI improves image quality of MRI-guided near-infrared spectral tomography of breast lesions,” IEEE Trans. Med. Imaging 37(5), 1247–1252 (2018).

14. F. Liu, M. Li, B. Zhang, J. Luo, and J. Bai, “Weighted depth compensation algorithm for fluorescence molecular tomography reconstruction,” Appl. Opt. 51(36), 8883–8892 (2012).

15. Q. Zhu, N. Chen, and S. H. Kurtzman, “Imaging tumor angiogenesis by use of combined near-infrared diffusive light and ultrasound,” Opt. Lett. 28(5), 337–339 (2003).

16. K. S. Uddin, A. Mostafa, M. Anastasio, and Q. Zhu, “Two-step imaging reconstruction using truncated pseudoinverse as a preliminary estimate in ultrasound guided diffuse optical tomography,” Biomed. Opt. Express 8(12), 5437–5449 (2017).

17. C. Xu and Q. Zhu, “Light shadowing effect of large breast lesions imaged by optical tomography in reflection geometry,” J. Biomed. Opt. 15(3), 036003 (2010).

18. M. Althobaiti, H. Vavadi, and Q. Zhu, “Diffuse optical tomography reconstruction method using ultrasound images as prior for regularization matrix,” J. Biomed. Opt. 22(2), 026002 (2017).

19. Q. Huang, Y. Luo, and Q. Zhang, “Breast ultrasound image segmentation: a survey,” Int. J. CARS 12(3), 493–507 (2017).

20. M. H. Yap, G. Pons, J. Martí, S. Ganau, M. Sentís, R. Zwiggelaar, A. K. Davison, and R. Martí, “Automated breast ultrasound lesions detection using convolutional neural networks,” IEEE J. Biomed. Health Inform. 22(4), 1218–1226 (2018).

21. K.-K. Maninis, S. Caelles, J. Pont-Tuset, and L. Van Gool, “Deep extreme cut: From extreme points to object segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 616–625.

22. A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM J. Imaging Sci. 2(1), 183–202 (2009).

23. W. Lu, D. Lighter, and I. B. Styles, “L1-norm based nonlinear reconstruction improves quantitative accuracy of spectral diffuse optical tomography,” Biomed. Opt. Express 9(4), 1423–1444 (2018).

24. P. Mohajerani, A. A. Eftekhar, J. Huang, and A. Adibi, “Optimal sparse solution for fluorescent diffuse optical tomography: theory and phantom experimental results,” Appl. Opt. 46(10), 1679–1685 (2007).

25. A. Zacharopoulos, M. Schweiger, V. Kolehmainen, and S. Arridge, “3D shape based reconstruction of experimental data in diffuse optical tomography,” Opt. Express 17(21), 18940–18956 (2009).

26. S. Xu, K. S. Uddin, and Q. Zhu, “Improving DOT reconstruction with a Born iterative method and US-guided sparse regularization,” Biomed. Opt. Express 10(5), 2528–2541 (2019).

27. H. Vavadi, A. Mostafa, F. Zhou, K. M. S. Uddin, M. Althobaiti, C. Xu, R. Bansal, F. Ademuyiwa, S. Poplack, and Q. Zhu, “Compact ultrasound-guided diffuse optical tomography system for breast cancer imaging,” J. Biomed. Opt. 24(2), 1–9 (2018).



Figures (10)

Fig. 1.
Fig. 1. (a) US image of a breast tumor with manually marked lesion boundaries. (b) US image segmentation using the semi-automated interactive CNN model.
Fig. 2.
Fig. 2. (a), (b), and (c) are cross section illustrations of three differently shaped simulated targets. In each target, two cylinders with the same optical properties are concentrically stacked. (d), (f), and (h) are reconstructed images from the ${\sigma _1}$-regularized reconstruction. Only the top layer of the target is resolved for all three shapes. (e), (g), and (i) are reconstructed images from the proposed depth-regularized algorithm. Under conditions (a) and (b), both targets are resolved at the deeper layer. The ratio R = 1.32 in (e) and 1.65 in (g). For target shape (c), the top portion absorbed most of the light and the deeper target portion could not be resolved. Each image slice is 9 cm by 9 cm in spatial dimensions and the depth between slices is 0.5 cm. The absorption coefficient is in cm−1, with a range of 0 to 0.2 cm−1.
Fig. 3.
Fig. 3. (a), (b), and (c) are cross sections of three differently shaped simulated targets. For all three shapes, the ellipsoid fitting of the target is marked in the red. (d), (f), and (h) are reconstructed images from the depth-regularized algorithm with ellipsoidal fitting, with the ratio R = 2.45 in (d) and R = 1.59 in (f). (e), (g), and (i) are reconstructed images from the proposed algorithm with the actual target size, with the ratio R = 1.32 in (e) and R = 1.65 in (g). For shape (c), both methods failed to resolve the bottom portion.
Fig. 4.
Fig. 4. (a) Co-registered US image, showing a solid ball phantom 2 cm below the surface. (b) Reconstructed images from the ${\sigma _1}$-regularized algorithm, with a maximum reconstructed ${\mu _a} = 0.214\;c{m^{ - 1}}$. Only the target top layer was resolved in depth, which leads to an infinite top-to-bottom ratio. (c) Reconstructed images from the proposed depth-regularized algorithm, with a maximum reconstructed ${\mu _a} = 0.204\;c{m^{ - 1}}$. The proposed algorithm resolves both target layers, which is closer to the phantom shape, and has a top-to-bottom ratio of R = 1.071. Note that each image slice is 9 cm by 9 cm. The depth spacing between layers is 0.5 cm, and the absorption coefficient is in cm−1. The target is marked by a circle, and some sound reverberations from the hard resin ball appear outside the circle.
Fig. 5.
Fig. 5. Segmented US image and reconstructed absorption maps of a stage 2 cancer at 780 nm. (a) US image from a co-registered commercial US system. (b) Segmented US image from the CNN. (c) Reconstructed absorption map from the ${\sigma _1}$-regularized algorithm, with a maximum ${\mu _a} = 0.307\;c{m^{ - 1}}$ and infinite R. (d) Reconstructed absorption map from the depth-regularized algorithm, with a maximum ${\mu _a} = 0.299\;c{m^{ - 1}}$ and R = 1.57. (c) and (d) consist of 7 sub-images, each showing one 9 cm by 9 cm cross section of the reconstructed absorption map, from 0.5 cm to 3.5 cm below the tissue surface in 0.5 cm spacing. The absorption coefficient is in cm−1, with a range of 0 to 0.2 cm−1.
Fig. 6.
Fig. 6. Segmented US images and reconstructed absorption maps of a benign fibroadenoma imaged at 780 nm. (a) US image from a co-registered commercial US system. (b) Segmented US image from the segmentation CNN. (c) Reconstructed absorption map from the ${\sigma _1}$ -regularized algorithm, with a maximum ${\mu _a} = 0.084\; c{m^{ - 1}}$. (d) Reconstructed absorption map from the depth-regularized algorithm, with a maximum ${\mu _a} = 0.079\; c{m^{ - 1}}$. The units of absorption coefficients are cm−1, and the range is 0 to 0.1 cm−1.
Fig. 7.
Fig. 7. Boxplots of reconstructed maximum total hemoglobin values among 14 benign and 12 malignant cases, using either the ${\sigma _1}$-regularized algorithm (${\sigma _1}$-regularized) or the proposed depth-regularized algorithm (depth). HbT is reported in $\mu M.$
Fig. 8.
Fig. 8. Boxplot of R for 12 benign and 11 malignant cases.
Fig. 9.
Fig. 9. Images of a small malignant lesion imaged at 780 nm, reconstructed with both the ${\sigma _1}$ -regularized algorithm and depth-regularized algorithm. (a) Co-registered US image (b) Segmented US image (c) Reconstructed absorption distribution from the ${\sigma _1}$-regularized algorithm, with a maximum ${\mu _a} = 0.227\; c{m^{ - 1}}$. (d) Reconstructed absorption distribution from the depth-regularized algorithm, with a maximum ${\mu _a} = 0.226\; c{m^{ - 1}}$. The absorption coefficient is in cm−1, and the range is 0 to 0.2 cm−1.
Fig. 10.
Fig. 10. Reconstructed absorption coefficients versus segmentation errors. Positive errors mean segmented regions are larger than the true size of the lesion, and negative errors mean the opposite. First and second target layer widths were changed to evaluate the errors. Segmentation errors have minimal influence on reconstructed absorption coefficients.

Equations (5)


$$[U_{SC}]_{M \times 1} = [W]_{M \times N}\,[\delta \mu_a]_{N \times 1},$$
$$\delta \mu_a = \mathop{\arg\min}_{\delta \mu_a} \| U_{SC} - W\,\delta \mu_a \|^2 + R(\delta \mu_a).$$
$$\frac{\partial \| U_{SC} - W\,\delta \mu_a \|^2}{\partial\, \delta \mu_a} = W^H \left( W\,\delta \mu_a - U_{SC} \right).$$
$$\mathrm{prox}_R\!\left( z, R(\delta \mu_a) \right) = \mathop{\mathrm{argmin}}_z \left\{ R(\delta \mu_a) + \frac{1}{2} \| z - \delta \mu_a \|_2^2 \right\}.$$
$$S_\gamma(z) = \mathrm{sign}(z)\,\max(0, |z| - \gamma).$$
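The last equation is the standard soft-thresholding (shrinkage) operator used in ISTA/FISTA-style solvers [22]. A minimal NumPy sketch (the variable names and test values are illustrative):

```python
import numpy as np

def soft_threshold(z, gamma):
    """Shrinkage operator S_gamma(z) = sign(z) * max(0, |z| - gamma),
    the proximal operator of gamma * ||.||_1 applied elementwise in
    each iteration of an L1-regularized reconstruction."""
    z = np.asarray(z, dtype=float)
    return np.sign(z) * np.maximum(0.0, np.abs(z) - gamma)

delta_mu = np.array([-0.30, -0.05, 0.00, 0.05, 0.30])
shrunk = soft_threshold(delta_mu, gamma=0.10)
# Entries with |z| <= gamma are zeroed; larger entries shrink toward zero.
```

Elementwise shrinkage of the absorption update is what promotes sparse, target-localized solutions in the regularized inverse problem.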