
Digital labeling for 3D histology: segmenting blood vessels without a vascular contrast agent using deep learning

Open Access

Abstract

Recent advances in optical tissue clearing and three-dimensional (3D) fluorescence microscopy have enabled high resolution in situ imaging of intact tissues. Using simply prepared samples, we demonstrate here “digital labeling,” a method to segment blood vessels in 3D volumes based solely on the autofluorescence signal and a nuclei stain (DAPI). We trained a deep-learning neural network based on the U-net architecture using a regression loss instead of a commonly used segmentation loss to achieve better detection of small vessels. We achieved high vessel detection accuracy and obtained accurate vascular morphometrics such as vessel length density and orientation. In the future, such a digital labeling approach could easily be transferred to other biological structures.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Over the last decade, major improvements in three-dimensional (3D) microscopy and optical tissue clearing have enabled whole organ imaging at cellular resolution [1–3]. Many questions about the relationship between tissue morphology, organ function and molecular expression are now within reach of being answered. However, to extract the information contained in the large volumetric datasets produced by 3D microscopy, automated image processing such as tissue segmentation is necessary to quantify observations. Additionally, a simplification of the sample preparation and imaging process would make these experiments more accessible to a wider array of biomedical researchers.

3D histology is a family of microscopy techniques that provide images of the tissue in its full three-dimensional context [4–7]. We propose to pair 1) our one-step optical clearing protocol LIMPID (Lipid-preserving Index Matching for Prolonged Imaging Depth [8]) for fast, non-toxic sample clearing with 2) 3D microscopy to collect nuclei fluorescence and tissue autofluorescence, as a user-friendly way of imaging intact samples. This 3D histology approach (see Fig. 1(a)) does not require tissue embedding or sectioning, and the only stain required is DAPI, a commonly used small-molecule nuclear dye that easily diffuses inside thick samples. Overall, our method is an opportunity for researchers used to standard H&E (hematoxylin and eosin) histology to easily transition to 3D imaging.


Fig. 1. 3D Histology of the mouse heart. (A) 3D histology volume rendered from the tissue autofluorescence (pink) and the nuclei stain DAPI (purple) in an optically cleared mouse heart. (B) Large and small vessel cross-sections as they appear in the DAPI and autofluorescence channels in 3D histology. Isolectin was perfused through the vasculature to confirm the location of the vessels.


A common feature between traditional histology and 3D “virtual” histology is the ability to see anatomical structures even if they are not specifically labeled. In mouse heart samples for example, large blood vessel lumens are clearly seen in the autofluorescence channel (see Fig. 1(b)). Even in smaller vessels where lumen walls are not visible, a reduction in autofluorescence signal indicates the vessels’ position. This observation leads to the following question: can we visualize, segment, or measure anatomical structures in fluorescence microscopy images without structure-specific labeling? “Digital labeling” would simplify sample preparation by avoiding the complexities associated with transgenic animals, fluorophore targeting, or dye diffusion, and by enabling the imaging of samples that cannot be perfused. During imaging, more fluorescent channels would also be left open for molecular markers or emerging techniques such as fluorescence in situ hybridization (FISH) [9]. Additionally, the possibility of obtaining “digital labels” for multiple structures (such as blood vessels, nerves, glands, or ducts) without any additional labor would offer morphological landmarks for researchers to contextualize the distribution of other cell populations or molecular targets in fluorescence images, and possibly to quantify such distribution [10].

Digital or “in silico” labeling, as the practice of labeling tissue structures, cells, or cellular components using software instead of chemical markers, has been demonstrated before in different contexts [4,5,11–15]. Mehrvar et al. segmented some larger blood vessels label-free by thresholding regions with low autofluorescence signal in organs such as the kidney [12]. However, in the heart, low-autofluorescence regions include substantial extracellular space between cardiomyocyte fiber bundles, in addition to all vessels (Fig. 1(b)). A different method is thus needed to segment the coronary blood vessels visible in our samples. Deep learning has shown great promise for medical image segmentation and could be used to accomplish this task. In another demonstration of digital labeling by Xie et al., deep learning was used to automatically segment prostate glands from 3D histology [5].

Separately from the field of digital labeling, deep learning has been used to segment blood vessels in a wide range of medical images, such as fundoscopy [16], MRI, and CT [17–20]. In the field of 3D microscopy, deep learning has been used to segment the vasculature after fluorescent labeling. Haft-Javaherian et al. segmented capillary blood vessels perfused with Texas Red-labeled dextran in a mouse model of Alzheimer’s disease [21]. Todorov et al. segmented the whole mouse brain vasculature using multiple fluorescent labels [22]. Tahir et al. used their own neural network to segment brain vasculature from in vivo images after labeling the blood plasma with dextran-conjugated fluorescein [23].

In this study, we segment blood vessels using deep learning in 3D histology images of the heart without any vascular labels. This approach removes the need for transgenic animals, antibody labeling or dye perfusion to study blood vessels. For many researchers studying heart disease, heart development or drug distribution, this technique could provide images of the vasculature “for free” along with the expression of other fluorescent markers. In principle, a similar digital labeling strategy could be applied to various organs and microstructures of interest.

In this manuscript, we also propose the use of a regression-based deep learning network with “soft labels” for vessel segmentation. Most deep learning algorithms for medical segmentation use binary “hard labels” where “0” represents background and “1” represents vessels. In comparison, soft labels [24,25] exist on a continuum with values ∈ (0,1). In one deep learning model for medical images, soft labels produced consistent smooth predictions at tissue interfaces and had an increased sensitivity for segmenting small objects such as small brain lesions [25]. In addition, the use of a regression loss (instead of a segmentation loss) improves the prediction of soft labels, preserves the uncertainties of the model in the final output, and can lead to more meaningful post-processing steps [25]. Regression loss functions have been used previously to improve blood vessel segmentation by estimating vessel centerline [26,27] or vessel radius [27,28] but not commonly as a replacement for the segmentation loss.

Here we present a regression-based deep learning network for “digital labeling” of blood vessels in 3D microscopy. First in this manuscript, we describe how the segmentation ground truth was established using isolectin perfusion. Then, the network architecture, post-processing steps, and success metrics are reported. We achieved a clDice score of 0.80 in our validation data, and we demonstrated our network’s performance on a heart imaged independently from our training set. We then describe how a well-trained network might in some ways outperform fluorescent dye perfusion for vessel labeling.

2. Methods

2.1 Sample preparation and image acquisition

All procedures were performed in accordance with relevant guidelines and regulations under the approval of the Case Western Reserve University Institutional Animal Care and Use Committee (IACUC). Five adult wild-type mice (C57BL/6) were anesthetized and dissected to expose the heart. Using a needle inserted into the left ventricle, the vasculature was perfused with 20 mL of heparinized PBS (20 units/mL), followed by 25 mL of Isolectin GS-IB4 conjugated to Alexa Fluor 594 (1:500 in PBS), followed by 20 mL of 4% paraformaldehyde (PFA). The hearts were then removed from the chest cavity and placed in 4% PFA for two hours. The hearts were sliced into thick sections with a scalpel (3-4 slices per heart), stained with DAPI (1 mg/mL diluted 1:1000 in PBS) for 48 h, then optically cleared using LIMPID for 48 h.

Three-dimensional image stacks of the hearts were acquired on a confocal microscope (SP8 with HyVolution 2, Leica Microsystems Inc., Buffalo Grove, IL, USA) using a 40x/1.30 NA oil-immersion objective. Voxel size was 141.3 × 141.3 × 350 nm (x, y, z) and optical sectioning (i.e., z-axial resolution based on objective NA) was 1.04 µm. DAPI images were acquired using 405 nm excitation and 415-490 nm emission. Tissue autofluorescence was acquired using 458 nm excitation and 490-588 nm emission. Isolectin was detected using 594 nm excitation and 604-671 nm emission. The three different channels were acquired sequentially (not simultaneously). The excitation light intensity was progressively increased with depth to compensate for tissue absorption. Each image stack was 2048 × 2048 voxels in size (x, y) and between 245-394 voxels in the z-dimension.

2.2 Ground truth creation

The isolectin signal was used to determine the true location of blood vessels. To create the vessel mask, the autofluorescence signal was subtracted from the isolectin signal to reduce the background intensity. The image volume was resampled to isotropic voxels (0.35 × 0.35 × 0.35 µm) and an edge-preserving anisotropic diffusion filter [29] was applied, followed by a combination of edge detection filters and morphological closing operations. At this stage, manual corrections to the vessel mask were made in ITK-SNAP [30] as needed. Finally, a 3-D Gaussian smoothing kernel with standard deviation of 1 voxel was applied to the mask, and the result was rescaled by a large integer n = 15,000 so that the signal gradually varied between 0 (background) and 15,000 (vessel lumen) (see Ground Truth in Fig. 2(a)). Rescaling the ground truth by a large number improves convergence during training [31,32].
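
A minimal sketch of this final soft-labeling step in MATLAB (the language used for all experiments in this study); the file and variable names are illustrative, and the mask is assumed to be already resampled, filtered, and manually corrected:

```matlab
% Soft-label ground truth: smooth the binary vessel mask with a 3D
% Gaussian kernel (standard deviation = 1 voxel), then rescale to the
% 0-15,000 range described above.
vesselMask = niftiread('vessel_mask.nii') > 0;     % hypothetical mask file
softLabel  = imgaussfilt3(single(vesselMask), 1);  % values now fall in [0, 1]
softLabel  = 15000 * softLabel;                    % 0 = background, 15,000 = lumen
```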


Fig. 2. Architecture of the 3D HEUnet network. (A) Autofluorescence and DAPI sub-volumes serve as input to the network, which predicts the 3D location of blood vessels. The network prediction is compared to the ground truth and the loss function is calculated as the mean squared error (MSE). Both the ground truth and network predictions use “soft labels,” where continuous values between 0 and 15,000 reflect the degree of certainty that a voxel belongs to a vessel. (B) Network architecture of 3D HEUnet for small vessels, based on the U-net encoder-decoder model. The input has two channels: DAPI (blue) and autofluorescence (green). The number above each cube indicates the number of channels at each step. Two patches of different sizes (the context patch: top, the detail patch: bottom) are accepted in two input branches. BN: Batch normalization. ReLU: Rectified linear unit.


Large vessels (diameter > 14 µm, vessel walls visible) and small vessels (< 14 µm, no distinguishable vessel walls) were manually separated into two separate training sets in order to train two different segmentation networks. A similar idea has previously been presented by Li et al. [33]. To train the small vessel network, two hundred volume patches (size 256 × 256 × 32 voxels) were randomly selected from each 3D image stack, for a total of 1000 patches. For the large vessel network, between 300-500 patches were extracted (size 256 × 256 × 32 voxels, then rescaled to 128 × 128 × 32 voxels) from four of the five hearts, with special emphasis on sampling the few large vessels visible in each of the image volumes (in one heart, the imaged volume did not contain any large vessels).
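
One random patch draw could look like the following sketch (uniform corner sampling is our assumption; `vol` is a 3D image stack loaded as a numeric array):

```matlab
% Extract one random 256 x 256 x 32 voxel training patch from a 3D stack.
psz    = [256 256 32];
corner = arrayfun(@(s, p) randi(s - p + 1), size(vol), psz); % random corner voxel
patch  = vol(corner(1):corner(1)+psz(1)-1, ...
             corner(2):corner(2)+psz(2)-1, ...
             corner(3):corner(3)+psz(3)-1);
```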

2.3 Data augmentation

To increase the size of the training dataset and prevent network overfitting, DAPI and autofluorescence image patches were augmented on-the-fly during network training. Multiple stacked image transformations [34] were applied to each patch as follows: 1) Random image rotation between 0-270° (applied to 50% of patches), 2) Vertical (x-axis), horizontal (y-axis), or in-plane (z-axis) flip (75% of patches), 3) Image size scaling between 0.9-1.2x the original size (66% of patches), 4) Random Gaussian noise added with mean = 0 and variance between 0.01-0.03 (autofluorescence) or 0.001-0.005 (DAPI), 5) Random intensity variations added in the shape of an arbitrary function f(x,y,z) = A*sin(ax) + B*sin(by) + C*sin(cz) + 1, where A, B, C, a, b, and c are random numbers in the range 0.001-0.1. All patches were also intensity normalized and smoothed with a Gaussian filter (standard deviation = 1 voxel).
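
For illustration, the sinusoidal intensity variation (step 5) might be implemented as below; uniform sampling of the six coefficients and a multiplicative application of the field are our assumptions, since the text above specifies only the functional form and coefficient range:

```matlab
function patchOut = addIntensityVariation(patchIn)
% Modulate a patch by the low-frequency field
% f(x,y,z) = A*sin(a*x) + B*sin(b*y) + C*sin(c*z) + 1,
% with A, B, C, a, b, c drawn uniformly from [0.001, 0.1].
    [nx, ny, nz] = size(patchIn);
    r = 0.001 + (0.1 - 0.001) * rand(1, 6);         % [A B C a b c]
    [x, y, z] = ndgrid(1:nx, 1:ny, 1:nz);
    f = r(1)*sin(r(4)*x) + r(2)*sin(r(5)*y) + r(3)*sin(r(6)*z) + 1;
    patchOut = patchIn .* cast(f, 'like', patchIn); % multiplicative (assumed)
end
```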

2.4 Network architecture

Our proposed deep learning network HEUnet (named after “H&E” and “U-net”) is based on the widely used convolutional neural network architecture U-net, which has proven successful for many segmentation tasks in medical imaging [35–37]. The input to HEUnet contains two channels (DAPI and autofluorescence, Fig. 2(a)) and, in the case of the small vessel network, two input patches of different sizes (Fig. 2(b)). The “context” patch is the full 256 × 256 × 32 voxels, while the “detail” patch is the center 128 × 128 × 32 voxels. For the large vessel network, a single 128 × 128 × 32 voxel input patch is used, similarly to a traditional U-net. The output of all networks is 128 × 128 × 32 voxels. Contrary to common U-net implementations, HEUnet uses a regression output layer and a mean squared error loss function, which increases the smoothness and connectivity of the predicted vessel segmentation (see Supplemental Fig. 1).
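
For orientation, a minimal single-branch 3D layer stack ending in a regression (MSE) output is sketched below using MATLAB's Deep Learning Toolbox. It is not the full two-branch HEUnet of Fig. 2(b), which adds a context branch and skip connections; it only illustrates the regression output choice:

```matlab
% Minimal 3D encoder-decoder ending in a regression (MSE) layer, standing
% in for the full HEUnet architecture of Fig. 2(b).
layers = [
    image3dInputLayer([128 128 32 2], 'Normalization', 'none') % DAPI + autofluorescence
    convolution3dLayer(3, 16, 'Padding', 'same')
    batchNormalizationLayer
    reluLayer
    maxPooling3dLayer(2, 'Stride', 2)
    convolution3dLayer(3, 32, 'Padding', 'same')
    batchNormalizationLayer
    reluLayer
    transposedConv3dLayer(2, 16, 'Stride', 2)
    reluLayer
    convolution3dLayer(1, 1)   % one-channel soft-label prediction
    regressionLayer];          % mean squared error loss
```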

2.5 Deep learning experiments

This work was executed on the High Performance Computing cluster at Case Western Reserve University. All experiments were implemented in MATLAB 2021a (MathWorks, Natick, MA, USA). Training required approximately 20 GB of VRAM and was performed on a 48 GB A40 GPU (NVIDIA Corporation, Santa Clara, CA, USA).

Networks for the small and large vessels were trained with a leave-one-out cross-validation approach, with one heart left out at a time for testing, and four hearts (three hearts in the case of the “large vessel” network) kept as part of the training set. Therefore, five “small vessels” networks and four “large vessel” networks were trained. The “small vessel” networks were trained for 35 epochs, with an initial learning rate of 0.05, and the learning rate was halved every 4 epochs. The “large vessel” networks were trained for 40 epochs, with an initial learning rate of 0.01, and the learning rate multiplied by 0.75 every 3 epochs. For both network types, the mini-batch size was 10, and the Adam optimization algorithm was used [38]. See Supplemental Figure 2 for an example training and validation curve.
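
In MATLAB, the reported “small vessel” schedule corresponds to training options along these lines (a sketch; `dsTrain` is an assumed datastore yielding augmented input/ground-truth patch pairs, and `layers` is the network layer array):

```matlab
% Adam optimizer, initial learning rate 0.05 halved every 4 epochs,
% mini-batch size 10, 35 epochs, as reported above.
opts = trainingOptions('adam', ...
    'InitialLearnRate',    0.05, ...
    'LearnRateSchedule',   'piecewise', ...
    'LearnRateDropPeriod', 4, ...
    'LearnRateDropFactor', 0.5, ...
    'MiniBatchSize',       10, ...
    'MaxEpochs',           35, ...
    'Shuffle',             'every-epoch');
net = trainNetwork(dsTrain, layers, opts);
```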

2.6 Data post-processing

Vessel segmentation was predicted by HEUnet one patch at a time in the test volumes, with an overlap of 30% between patches. The network prediction was averaged in areas of overlap between patches. Network prediction took on average 0.1 s per patch for large vessels and 0.4 s per patch for small vessels. The predicted segmentation was then smoothed using a Gaussian filter, rescaled so that voxel intensity was between 0-1, and passed through an edge-preserving anisotropic diffusion filter [29]. Vessel voxels were defined as voxels above a threshold of 0.2 in intensity, while other voxels were labeled as background. The threshold value was chosen to optimize segmentation accuracy across all samples (see Supplemental Figure 3). Connected components of less than 5000 voxels (214 µm³) were removed for the small vessel networks; connected components of less than 50,000 voxels (2,140 µm³) were removed when predicting the large vessels. Three-dimensional image rendering was performed in MATLAB and Amira (Thermo Fisher Scientific, Waltham, MA, USA). Vessel centerlines were obtained by skeletonizing the vessels in MATLAB using the Skeletonize3D package [39,40].
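
The thresholding, component-removal, and skeletonization steps might be sketched as follows (`pred` is the stitched, filtered, 0-1 rescaled network output; `bwskel` is shown as a stand-in for the Skeletonize3D package [39,40] used in this study):

```matlab
% Binarize at t = 0.2, remove small connected components (small-vessel
% network: < 5000 voxels), then extract vessel centerlines.
vessels = pred > 0.2;
vessels = bwareaopen(vessels, 5000);   % drop components below 5000 voxels
skel    = bwskel(vessels);             % 3D skeletonization (centerlines)
```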

2.7 Success metrics for vessel detection

Predictions of the large vessel network were evaluated against the ground truth using intersection-over-union (IoU), a commonly used metric for segmentation tasks:

$$\mathrm{IoU} = \frac{|\textrm{Predicted volume} \cap \textrm{True volume}|}{|\textrm{Predicted volume} \cup \textrm{True volume}|}$$
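
On 3D logical masks this reduces to a one-line voxel count (a sketch; `pred` and `truth` are assumed binary volumes):

```matlab
% Intersection-over-union of two 3D logical vessel masks, Eq. (1).
iou = nnz(pred & truth) / nnz(pred | truth);
```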

However, to evaluate the prediction of the small vessel network, we found that commonly used metrics such as voxel accuracy, the Dice coefficient, or IoU were inappropriate for our current task. Voxel accuracy can be dominated by the large background region, while important goals of the small vessel network such as vessel detection, connectivity, and continuity are not reflected in the Dice coefficient and other similar metrics [41]. To quantify small vessel detection, true and predicted vessels were skeletonized and the clDice score was calculated as follows [42]:

$$R = \frac{\textrm{Predicted centerline} \cap \textrm{True volume}}{\textrm{Predicted centerline}}$$
$$P = \frac{\textrm{True centerline} \cap \textrm{Predicted volume}}{\textrm{True centerline}}$$
$$\textrm{clDice} = \frac{2RP}{R+P}$$
where the recall R reflects the presence of false positives, the precision P reflects the occurrence of false negatives, and the clDice score is a combination of both. Other metrics were chosen because of their relevance when studying vessel organization in biological systems [43–47]. The vessel length density was defined as the total vessel length per unit volume. The local vessel length density was calculated with a cube of 70 µm per side centered around each vessel segment. The distance between each non-vessel point in the sample and its nearest vessel was also calculated. These metrics can be used to study vessel development, tissue oxygenation and hypoxia. Vessel orientation was also calculated for each vessel segment. A segment is defined as the vessel between two junction points (i.e., nodes). Vessel orientation was defined as the angle between a reference vector and the vector that passes through the start and end nodes of a vessel segment. We defined three reference vectors to study our vessel distribution: 1) v1 = (1,0,0), which defines the helical angle, 2) v2 = (0,0,1), which defines the intrusion angle, and 3) v3 = (αx, αy, αz), where α is the mean vessel orientation, to quantify the deviation of all vessels from the mean. The helical and intrusion angles have been used to characterize the helix-like orientation of coronary vessels and muscle fibers in the ventricle walls [46,48].
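
The clDice computation of Eqs. (2)-(4) can be sketched as follows (3D logical masks assumed; `bwskel` again stands in for the Skeletonize3D package [39,40]):

```matlab
function [cl, R, P] = clDiceScore(predVol, trueVol)
% clDice per Eqs. (2)-(4): R is the fraction of the predicted centerline
% falling inside the true volume, P the fraction of the true centerline
% falling inside the predicted volume.
    predSkel = bwskel(predVol);
    trueSkel = bwskel(trueVol);
    R  = nnz(predSkel & trueVol) / nnz(predSkel);
    P  = nnz(trueSkel & predVol) / nnz(trueSkel);
    cl = 2 * R * P / (R + P);
end
```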

3. Results

3.1 Vessel segmentation without fluorescent vessel labeling

Vessel location was predicted by our network HEUnet based on the DAPI (nuclei) and tissue autofluorescence signals. This prediction was compared to the true location of vessels as established by isolectin staining and manual segmentation (see Fig. 3 and Visualization 1). The DAPI and autofluorescence inputs were used by both the “small vessel” network and the “large vessel” network for segmentation, and the networks’ outputs were merged back into one result volume containing all vessel sizes. As seen in Fig. 3, the network can locate individual vessels, capture their general shape and size, and replicate the appearance of the original isolectin signal. In Visualization 1, the vessels appear smooth and continuous, and they are highly interconnected throughout the entire thickness of the sample. There is minimal fluorescent signal degradation with depth, and vessel prediction accuracy does not decrease deeper into the sample, owing to our use of optical clearing and excitation compensation during imaging. See Supplemental Fig. 4 for further analysis of the features detected by HEUnet and the role of each channel and each input patch. Inaccuracies of HEUnet are most noticeable in the large vessel predictions (Fig. 3, lower right), which we attribute to the few large vessels present in the training dataset.


Fig. 3. Vessels predicted by convolutional neural network. Left: Network inputs (green: autofluorescence, blue: DAPI) with ground truth vessel location (red: isolectin fluorescence with manual correction). Right: Autofluorescence and DAPI signals (green, blue) with network prediction for vessel location (red). Vessels depicted in the right-hand image were predicted without any input from the isolectin fluorescence channel. White boxes indicate a few examples of accurately predicted vessels. A single 2D image is presented here, see Visualization 1 for the entire 3D stack.


Fig. 4. Predicted vessel centerlines fall within the true vessel volumes. Centerlines extracted from the vessels predicted by the network (red) overlaid on a 3D rendering of the vessels segmented from the isolectin fluorescence signal (gray). On the left, four sub-volumes are magnified and rotated for best visualization. Locations of the sub-volumes are indicated by black boxes on the right. The clDice score calculated for the volume rendered was 0.85.


To quantify the accuracy of the vessel detection, the clDice score was calculated by comparing the predicted and true vessel centerlines to the predicted and true vessel volumes. An overlay of the predicted vessel centerlines onto the true vessel volume over a 38.5 µm thick tissue section can be seen in Fig. 4. Four sub-volumes are also displayed at different angles for better visualization. As seen in those sub-volumes, the predicted centerlines generally fall within the true vessel lumen. The predicted centerlines are also generally continuous and show clear junction points where vessels connect. However, a minority of vessels in the ground truth do not have a corresponding centerline (false negatives), and some predicted centerlines fall outside the true vessels (false positives). Overall, the clDice score of this sub-volume was calculated to be 0.85.

3.2 Vascular quantitative metrics are accurately predicted by the segmentation network

The local vessel length density was calculated (small vessels only) for a cube of 70 µm per side centered around each vessel segment and the results were compared between the ground truth and network prediction (see Fig. 5(a)). Similar regions of high and low vessel density can be seen between the two vessel distributions. The total vessel length density for the entire sample was 2.54 × 10−3 µm/µm³ for the ground truth, and 2.55 × 10−3 µm/µm³ for the predicted data, for a percent difference of 0.21%. The distance from all points to their nearest vessel was also calculated for both the ground truth and predicted vessel distribution (see Fig. 5(b)). The two distributions generally overlap, with the median distance equal to 7.61 µm for the ground truth and 7.43 µm for the prediction, for a percent difference of 2.46%.
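
The distance-to-nearest-vessel distribution can be obtained with a Euclidean distance transform, as in this sketch (isotropic 0.35 µm voxels assumed, matching the resampled ground truth; `vessels` is a binary vessel mask):

```matlab
% Distance from every non-vessel voxel to its nearest vessel voxel.
dVox       = bwdist(vessels);                 % distances in voxels
dUm        = double(dVox(~vessels)) * 0.35;   % convert to micrometers
medianDist = median(dUm);                     % compare truth vs. prediction
```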


Fig. 5. Preserving quantitative metrics of vascular organization in the network prediction. (A) Local vessel length density calculated using a cubic moving window of size = 70 µm centered around each vessel point. Whole volume presented is 266 × 266 × 117 µm³. Large vessels (gray) were extracted from the ground truth (right) or predicted by the network (left) but not included in the density calculation. (B) Distribution of distances between all points in tissue and their nearest vessel. Vertical lines indicate median distance. (C) Vessel orientation calculated from each vessel segment and colored based on their difference to the overall true mean vessel orientation. (D) Distribution of vessel orientation, centered around the mean vessel orientation.


The orientation of vessels in space was also calculated to quantify their organization with respect to the muscle fibers. In healthy cardiac tissue, microvessels are highly aligned with each other and with the nearby cardiomyocytes [46]. The orientation for each vessel segment was calculated and the mean orientation α was found to be v3 = (αx, αy, αz) = (0.64, 0.73, 0.23). The distribution of vessel angles around the mean can be seen for the ground truth and the network prediction in Fig. 5(c)-(d). The orientation of most vessels is tightly distributed around the mean (red, Fig. 5(c)), with few vessels lying perpendicular to the mean (green, Fig. 5(c)). The histogram of predicted vessel angles for all vessel segments closely overlaps the histogram of the true vessel angles (Fig. 5(d)).
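
A single segment's orientation follows from the angle between its end-to-end vector and a reference vector, as in this sketch (`p1`, `p2` are the 1 × 3 coordinates of the segment's start and end nodes; folding the angle into 0-90° with abs() is our assumption, since segment direction is arbitrary):

```matlab
% Angle between one vessel segment and a reference vector,
% e.g. v1 = [1 0 0] for the helical angle.
v     = (p2 - p1) / norm(p2 - p1);
vref  = [1 0 0];
theta = acosd(abs(dot(v, vref)));   % degrees, in [0, 90]
```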

To best estimate the performance of our network on data that was not part of the training dataset, we used a leave-one-out cross-validation approach, with each of the five hearts left out of the training set one at a time. We thus report the performance metrics for all five small vessel networks applied to the hearts kept out for validation in Fig. 6. As seen in Fig. 6(a), the clDice score was on average 0.80 across all five hearts. The helical and intrusion angles were predicted correctly, with a 1.9% error on average (Fig. 6(b)). The median distance to the nearest vessel (Fig. 6(c)) and the vessel length density (Fig. 6(d)) were, in all but one case, predicted accurately. In one outlier (heart #4), the vessel length density was underestimated by 27.71%. In this heart, the combination of almost no false positive vessels (recall = 0.93) and more false negative vessels than usual (precision = 0.73) resulted in a severe underestimate of the vessel density. As seen in Supplemental Fig. 3, heart #4 would have benefited from a different threshold value t = 0.1, for which it would have had both precision and recall >0.8. All other hearts had similar values for recall and precision at the chosen threshold t = 0.2, thus the estimates for the vessel length density and median distance to the nearest vessel averaged to a more accurate value.


Fig. 6. Success metrics for all five mouse hearts for leave-one-out cross-validation. (A) clDice scores obtained in each heart, mean clDice = 0.80 indicated by yellow line. (B) Comparison between true (blue) and predicted (red) helical and intrusion mean vessel angles for each heart. (C) Comparison between true (blue) and predicted (red) median distance to the nearest vessel, with percent difference indicated for each heart. (D) Comparison between true (blue) and predicted (red) vessel length density, with percent difference indicated for each heart. Mean errors are always calculated over all five hearts. Some blue data points are hidden under red data points.


Few large vessels were present in our datasets, which made vascular metrics such as the clDice score, vessel orientation, or vessel length density less meaningful. It is also likely that the small amount of training data impacted the networks’ accuracy. On visual inspection (Fig. 5(a),(c)), the shapes of the true and predicted large vessels appear accurate. We report an average IoU = 0.75 over all four hearts for which large vessels were imaged, and an average clDice = 0.87. More data acquired over larger fields-of-view and containing more examples of large vessels would increase the size of the training dataset, improve network training, and lead to a better quantitative analysis of the results.

3.3 Testing network accuracy in independent heart sample with no isolectin

To test how HEUnet would perform on new heart samples, we used a mouse heart dataset acquired by a different researcher prior to the start of the current study. The heart was not perfused during resection (no heparinized PBS, no isolectin, no 4% PFA), but simply stained with DAPI and cleared with LIMPID post-resection. The precise DAPI concentration, staining time and optical clearing time were not recorded at the time of the experiment. The heart sample was then imaged on a confocal microscope using settings such as excitation laser intensity, excitation and emission wavelengths, and pixel sizes that the user felt appropriate at the time, independently of the parameters later used for this study. For this sample, excitation wavelengths were 405 nm (DAPI) and 488 nm (autofluorescence), with emission wavelengths of 415-480 nm (DAPI) and 500-650 nm (autofluorescence). Voxel size was 0.271 × 0.271 × 0.350 µm (x, y, z) and the total volume acquired was 1024 × 1024 × 795 voxels.

Vessels were segmented in this new heart using the network previously trained on hearts #2-5 (with heart #1 held out as validation). No transfer learning or other adaptations to the network were necessary. The same pre- and post-processing steps (intensity normalization, Gaussian filtering, edge-preserving anisotropic diffusion filter, removal of small connected components) were performed with the same parameters as previously established. The predicted vessels can be seen in Fig. 7(a). A small region of the heart was manually segmented to verify the network prediction. Because of the difficulty in manually segmenting small capillary vessels without any contrast agent, this region (Fig. 7(b)-(c)) took 3 hours to segment in ITK-SNAP by an experienced user. The same sub-region of the vessels predicted by the network can be seen in Fig. 7(d), and the overlay of the manual vessel centerlines onto the predicted vessel volume is displayed in Fig. 7(e). A clDice score of 0.84 was calculated for this region, demonstrating the accuracy of our network when segmenting vessels in an independently acquired test volume with no dye perfusion or network re-training.


Fig. 7. Vessel prediction in new heart sample with no isolectin. (A) Volume rendering of independently acquired mouse heart with no perfusion (no isolectin/heparin/fixative agent). Predicted blood vessels (red) with autofluorescence (green) and nuclei (DAPI, blue). In the visualization, vessel cross sections are seen at the volume edges, whole vessels are seen in the sub-volume where autofluorescence and nuclei channels are made transparent. (B) Manually segmented blood vessels (yellow) over the region of interest (white box in A and B). (C) Manually segmented blood vessels. (D) Matching blood vessels predicted by the network in the same region of interest. (E) Centerlines of the manually segmented vessels (red) overlaid on the predicted vessel volume (gray). Calculated clDice = 0.84 for the region.


3.4 False positives? Or is the network outperforming fluorescent staining?

In thick tissue samples, fluorescent labeling of blood vessels is best achieved by dye perfusion through the vasculature. It produces a bright and specific fluorescent signal with low background fluorescence. However, it is still possible that some vessels will not be fluorescently labeled, either because some vessels were not adequately perfused during sample preparation (including if a physiological block is preventing blood flow), or because vessels with a weak fluorescent signal and poor signal-to-noise ratio were accidentally removed during data pre-processing.

In this study, a combination of isolectin perfusion and manual correction was used to create the ground truth for our vessel segmentation task. However, it is likely that some vessels were unlabeled, and our network was thus trained and evaluated against an “imperfect ground truth.” In some cases, our network HEUnet appears to segment blood vessels that were left unlabeled by the isolectin (see Fig. 8). When inspecting the autofluorescence and DAPI signals (Fig. 8(a)), a continuous lumen characterized by low autofluorescence and DAPI signals can be seen in the ground truth, but no isolectin signal was detected. In comparison, this same vessel is clearly segmented by HEUnet. A volumetric rendering of the area (Fig. 8(b)) clearly shows a gap in the ground truth, while the network prediction shows a series of smooth, continuous, and interconnected vessels.


Fig. 8. Example of vessels predicted by the network but missing in the ground truth. (A) 2D x-y slice and accompanying x-z slices (four side-panels corresponding to white dotted lines) showing the absence of isolectin signal in a vessel lumen (white arrow) which was probably not sufficiently perfused with the fluorescent dye (left). The presence of this vessel was correctly predicted by the network (right). (B) Volumetric rendering of the region-of-interest presented in (A), with multiple connected vessel sections (black arrows) missing in the ground truth but correctly predicted by the network.


In all quantitative metrics used for this study, missing vessels such as the one presented in Fig. 8 were counted as false positives. However, the appearance of the autofluorescence and DAPI signals and the realistic shape of the predicted vessels make it highly probable that such predictions are an improvement on an “imperfect ground truth,” not false positives. HEUnet thus likely has a better recall and clDice score than reported in this study, although this remains a qualitative observation at this stage of our experiments.

4. Discussion

In this study, we demonstrated “digital labeling” of the vasculature in 3D histology. We trained a neural network HEUnet with soft labels and a regression loss to determine the location of blood vessels using only the autofluorescence of the tissue and a common nuclear stain (DAPI). The results are smooth, highly connected, vessel-like structures that correspond to the true location of the vessels (clDice = 0.80). Vessel metrics that were calculated from the vessels stained with isolectin (such as the vessel length density, the median distance from the nearest vessel, and the vessel orientation) were recovered from the predicted vessels with minimal error. Our network easily predicted vessel location in an independent sample without any isolectin staining, demonstrating that our technique does not rely on the presence of any fluorescent dye in the vessels and could be performed on new, completely unperfused samples in the future. We also found that in some cases, our network surpassed the ground truth, and identified vessels that were insufficiently stained with isolectin. This led us to speculate that some vessels identified as “false positives” were the result of an imperfect ground truth.

Our study clearly demonstrates the potential of 3D histology as a research tool. 3D histology simplifies tissue preparation and preserves tissue integrity, two important features when studying complex anatomical structures in pre-clinical or clinical settings. While traditional 2D histology requires multiple time-intensive and labor-intensive steps, 3D histology consists of three “hands off” steps: fixation, whole mount staining, and tissue clearing. While 2D histology can offer an incomplete view of the sample, for example sectioning through vessels and misrepresenting their spatial distribution, 3D histology preserves the full complexity of tissue organization. And while 2D histology is fundamentally destructive to the sample, 3D histology preserves the sample for multiple types of fluorescent imaging or for further analysis using genomic and proteomic assays. These advantages indicate that, while 2D histology will remain an essential tool in pathology and biomedical research, 3D histology can accelerate research on the vasculature and other similarly complex structures.

Digital labeling is a powerful technique to pair with 3D histology. In this study we demonstrated this concept by detecting blood vessels, but more generally this technique can be thought of as labeling any anatomical structure without the need for a fluorescent marker [11,12,14]. Segmentation has always been essential to extract the wealth of information contained in 3D medical images. Segmenting without fluorescent labels simplifies the sample preparation process, frees detector channels during imaging, and possibly reduces the cost of experiments, as it does not require antibody staining. Segmenting structures that are well-known to scientists and clinicians (such as cells, nuclei, nerves, glands or blood vessels) can provide intuitive and quantitative metrics (such as density, tortuosity or orientation) to help with sample characterization or patient diagnosis [6]. Furthermore, segmenting such structures can serve as morphological landmarks to understand the distribution of other biological processes, such as observing the distribution of a certain cell type around the vasculature.

We used soft labels and a regression loss in our network, in an approach similar to the work of Gros et al. [25], who presented a segmentation network for brain MRI images. We attempted to use “hard labels” (such as “0” for background voxels, “1” for vessel voxels), but we found those to overemphasize any mistakes or irregularities in the ground truth, such as vessel walls with partial isolectin labeling (due to poor signal-to-noise ratio or poor fluorescent staining), or label “leakage” due to oversaturation of the detector or imperfect optical sectioning. Soft labels that gradually increase between the low “background” values and the high “vessel” values without sharp edges represented more accurately the uncertainty present in the data. The use of a regression loss (mean squared error) instead of a more commonly used segmentation loss (e.g., Dice loss or cross-entropy loss) also led to better results in practice (see Supplemental Fig. 1). When we attempted to train our convolutional neural network with a cross-entropy loss, a weight correction had to be added to the loss function to compensate for the class imbalance between the large number of “background” voxels and the small number of “vessel” voxels. We found that the choice of the weight function greatly impacted the resulting voxel accuracy in our validation set, with either high false positive or high false negative rates, which made the network unpredictable to train and difficult to generalize to new datasets. We also attempted to use a Dice loss, which usually addresses the issue of class imbalance, and found that the predicted vessels were often disconnected from each other, with the network favoring vessel segments for which it had high certainty and ignoring the smaller connecting vessel segments for which it was less confident. As a result, we have found that, similarly to Gros et al. [25] and their demonstration in MRI brain images, soft labels and a regression loss are favorable for segmentation tasks with small objects with imprecise borders, such as blood vessels in our case, or small brain lesions in their work.

Another advantage of our approach was our fast collection of ground truth labels by using isolectin as a vessel marker. This technique drastically decreased the time spent on ground truth creation compared to manual labeling and could be applied to other tissues or morphological structures simply by changing the fluorescent marker (a similar approach was performed in 2D in the prostate by Bulten et al. [49]). The current study was limited to five image volumes acquired from five hearts, but we expect that the size of the training dataset could be increased with minimal additional labor, which should improve the accuracy of our network prediction. Such improvement would be expected especially for the large vessels (> 14 µm in diameter with visible vessel walls) since very few of those vessels were present in our current training dataset, and network performance should improve with a larger training dataset. Additionally, to make HEUnet more generalizable to new samples, images acquired in different conditions (e.g., different microscopes or mouse breeds) should be added to the training database. Finally, network retraining would likely be necessary when working with additional fluorescent markers (e.g., inflammation markers, hypoxia markers, FISH [9]) which would change the excitation and emission wavelengths at which autofluorescence can be collected. Alternatively, autofluorescence present in the background of different acquisition channels could be used.

In the future, our technique could be used to study the microvasculature in cases of congenital heart diseases, as per our previous work [46,50]. More broadly, it could be adapted to segment any structure visible from 3D histology in any organ of interest. Once appropriate neural networks have been trained, this would be a “labor-free” and “label-free” way to visualize the vasculature and other structures for researchers across the biomedical sciences.

Funding

American Heart Association (916963); National Institutes of Health (R01EB028635, R01HL126747, S10-OD024996); U.S. Department of Defense (W81XWH2110659).

Acknowledgments

We thank Dr. Richard M. Levenson and Tanishq Abraham for their help with virtual H&E rendering. We thank Junwoo Suh for help with animal procedures. We thank the High Performance Computing Resource in the Core Facility for Advanced Research Computing at Case Western Reserve University. We thank the Case Western Reserve University School of Medicine Microscopy Core Facility and National Institutes of Health funding S10-OD024996 for access to the confocal microscope. Maryse Lapierre-Landry is supported by the American Heart Association Postdoctoral Fellowship Grant #916963, 2022-2023.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. M. V. Gómez-Gaviro, D. Sanderson, J. Ripoll, and M. Desco, “Biomedical applications of tissue clearing and three-dimensional imaging in health and disease,” iScience 23(8), 101432 (2020). [CrossRef]  

2. H. R. Ueda, A. Ertürk, K. Chung, V. Gradinaru, A. Chédotal, P. Tomancak, and P. J. Keller, “Tissue clearing and its applications in neuroscience,” Nat. Rev. Neurosci. 21(2), 61–79 (2020). [CrossRef]  

3. E. A. Susaki and H. R. Ueda, “Whole-body and whole-organ clearing and imaging techniques with single-cell resolution: toward organism-level systems biology in mammals,” Cell Chemical Biology 23(1), 137–157 (2016). [CrossRef]  

4. Y. Rivenson, H. Wang, Z. Wei, K. de Haan, Y. Zhang, Y. Wu, H. Günaydın, J. E. Zuckerman, T. Chong, A. E. Sisk, L. M. Westbrook, W. D. Wallace, and A. Ozcan, “Virtual histological staining of unlabelled tissue-autofluorescence images via deep learning,” Nat. Biomed. Eng. 3(6), 466–477 (2019). [CrossRef]  

5. W. Xie, N. P. Reder, C. Koyuncu, et al., “Prostate cancer risk stratification via non-destructive 3D pathology with deep learning-assisted gland analysis,” Cancer Res. 82(2), 334–345 (2022). [CrossRef]  

6. J. T. C. Liu, A. K. Glaser, K. Bera, L. D. True, N. P. Reder, K. W. Eliceiri, and A. Madabhushi, “Harnessing non-destructive 3D pathology,” Nat. Biomed. Eng. 5(3), 203–218 (2021). [CrossRef]  

7. Y. Liu, R. M. Levenson, and M. W. Jenkins, “Slide Over: advances in slide-free optical microscopy as drivers of diagnostic pathology,” Am. J. Pathol. 192(2), 180–194 (2022). [CrossRef]  

8. Y. Liu, M. W. Jenkins, M. Watanabe, and A. M. Rollins, “A simple optical clearing method for investigating molecular distribution in intact embryonic tissues (Conference Presentation),” Proc. SPIE 10472, 104720P (2018). [CrossRef]  

9. A. Moter and U. B. Göbel, “Fluorescence in situ hybridization (FISH) for direct visualization of microorganisms,” J. Microbiol. Methods 41(2), 85–112 (2000). [CrossRef]  

10. B. A. Corliss, H. C. Ray, J. T. Patrie, J. Mansour, S. Kesting, J. H. Park, G. Rohde, P. A. Yates, K. A. Janes, and S. M. Peirce, “CIRCOAST: a statistical hypothesis test for cellular colocalization with network structures,” Bioinformatics 35(3), 506–514 (2019). [CrossRef]  

11. E. M. Christiansen, S. J. Yang, D. M. Ando, A. Javaherian, G. Skibinski, S. Lipnick, E. Mount, A. O’Neil, K. Shah, and A. K. Lee, “In silico labeling: predicting fluorescent labels in unlabeled images,” Cell 173(3), 792–803.e19 (2018). [CrossRef]  

12. S. Mehrvar, S. Mostaghimi, A. K. S. Camara, F. H. Foomani, J. Narayanan, B. Fish, M. Medhora, and M. Ranji, “Three-dimensional vascular and metabolic imaging using inverted autofluorescence,” J. Biomed. Opt. 26(7), 076002 (2021). [CrossRef]  

13. C. Qian, K. Miao, L.-E. Lin, X. Chen, J. Du, and L. Wei, “Super-resolution label-free volumetric vibrational imaging,” Nat. Commun. 12(1), 3648 (2021). [CrossRef]  

14. C. Ounkomol, S. Seshamani, M. M. Maleckar, F. Collman, and G. R. Johnson, “Label-free prediction of three-dimensional fluorescence images from transmitted-light microscopy,” Nat. Methods 15(11), 917–920 (2018). [CrossRef]  

15. A. L. Kiemen, A. M. Braxton, M. P. Grahn, K. S. Han, J. M. Babu, R. Reichel, A. C. Jiang, B. Kim, J. Hsu, F. Amoa, S. Reddy, S.-M. Hong, T. C. Cornish, E. D. Thompson, P. Huang, L. D. Wood, R. H. Hruban, D. Wirtz, and P.-H. Wu, “CODA: quantitative 3D reconstruction of large tissues at cellular resolution,” Nat. Methods 19, 1490–1499 (2022). [CrossRef]  

16. C. Chen, J. H. Chuah, R. Ali, and Y. Wang, “Retinal vessel segmentation using deep learning: a review,” IEEE Access 9, 111985–112004 (2021). [CrossRef]  

17. C. Chen, C. Qin, H. Qiu, G. Tarroni, J. Duan, W. Bai, and D. Rueckert, “Deep learning for cardiac image segmentation: a review,” Front. Cardiovasc. Med. 7, 25 (2020). [CrossRef]  

18. S. Moccia, E. De Momi, S. El Hadji, and L. S. Mattos, “Blood vessel segmentation algorithms–review of methods, datasets and evaluation metrics,” Comput. Methods Programs Biomed. 158, 71–91 (2018). [CrossRef]  

19. W. Tan, L. Zhou, X. Li, X. Yang, Y. Chen, and J. Yang, “Automated vessel segmentation in lung CT and CTA images via deep neural networks,” J. X-Ray Sci. Technol. 29(6), 1123–1137 (2021). [CrossRef]  

20. M. Livne, J. Rieger, O. U. Aydin, A. A. Taha, E. M. Akay, T. Kossen, J. Sobesky, J. D. Kelleher, K. Hildebrand, D. Frey, and V. I. Madai, “A U-net deep learning framework for high performance vessel segmentation in patients with cerebrovascular disease,” Front. Neurosci. 13, 97 (2019). [CrossRef]  

21. M. Haft-Javaherian, L. Fang, V. Muse, C. B. Schaffer, N. Nishimura, and M. R. Sabuncu, “Deep convolutional neural networks for segmenting 3D in vivo multiphoton images of vasculature in Alzheimer disease mouse models,” PLoS One 14(3), e0213539 (2019). [CrossRef]  

22. M. I. Todorov, J. C. Paetzold, O. Schoppe, G. Tetteh, S. Shit, V. Efremov, K. Todorov-Völgyi, M. Düring, M. Dichgans, M. Piraud, B. Menze, and A. Ertürk, “Machine learning analysis of whole mouse brain vasculature,” Nat. Methods 17(4), 442–449 (2020). [CrossRef]  

23. W. Tahir, S. Kura, J. Zhu, X. Cheng, R. Damseh, F. Tadesse, A. Seibel, B. S. Lee, F. Lesage, S. Sakadžic, D. A. Boas, and L. Tian, “Anatomical modeling of brain vasculature in two-photon microscopy by generalizable deep learning,” BME Front. 2021, 1–12 (2021). [CrossRef]  

24. E. Kats, J. Goldberger, and H. Greenspan, “Soft labeling by distilling anatomical knowledge for improved MS lesion segmentation,” (2019).

25. C. Gros, A. Lemay, and J. Cohen-Adad, “SoftSeg: Advantages of soft versus binary training for image segmentation,” Med. Image Anal. 71, 102038 (2021). [CrossRef]  

26. D. Keshwani, Y. Kitamura, S. Ihara, S. Iizuka, and E. Simo-Serra, “TopNet: Topology preserving metric learning for vessel tree reconstruction and labelling,” in Medical Image Computing and Computer Assisted Intervention – MICCAI 2020, A. L. Martel, P. Abolmaesumi, D. Stoyanov, D. Mateus, M. A. Zuluaga, S. K. Zhou, D. Racoceanu, and L. Joskowicz, eds., Lecture Notes in Computer Science (Springer International Publishing, 2020), 12266, pp. 14–23.

27. A. Sironi, E. Türetken, V. Lepetit, and P. Fua, “Multiscale centerline detection,” IEEE Trans. Pattern Anal. Mach. Intell. 38(7), 1327–1341 (2016). [CrossRef]  

28. J. M. Wolterink, R. W. van Hamersvelt, M. A. Viergever, T. Leiner, and I. Išgum, “Coronary artery centerline extraction in cardiac CT angiography using a CNN-based orientation classifier,” Med. Image Anal. 51, 46–60 (2019). [CrossRef]  

29. D.-J. Kroon, “Image edge enhancing coherence filter toolbox,” https://www.mathworks.com/matlabcentral/fileexchange/25449-image-edge-enhancing-coherence-filter-toolbox.

30. P. A. Yushkevich, J. Piven, H. C. Hazlett, R. G. Smith, S. Ho, J. C. Gee, and G. Gerig, “User-guided 3D active contour segmentation of anatomical structures: Significantly improved efficiency and reliability,” NeuroImage 31(3), 1116–1128 (2006). [CrossRef]  

31. W. Xie, J. A. Noble, and A. Zisserman, “Microscopy cell counting and detection with fully convolutional regression networks,” Comput. Methods Biomech. Biomed. Eng.: Imaging & Visualization 6(3), 283–292 (2018). [CrossRef]  

32. M. Lapierre-Landry, Z. Liu, S. Ling, M. Bayat, D. L. Wilson, and M. W. Jenkins, “Nuclei detection for 3d microscopy with a fully convolutional regression network,” IEEE Access 9, 60396–60408 (2021). [CrossRef]  

33. Y. Li, T. Ren, J. Li, X. Li, X. Li, A. Li, and A. Li, “Multi-perspective label based deep learning framework for cerebral vasculature segmentation in whole-brain fluorescence images,” Biomed. Opt. Express 13(6), 3657–3671 (2022). [CrossRef]  

34. L. Zhang, X. Wang, D. Yang, T. Sanford, S. Harmon, B. Turkbey, B. J. Wood, H. Roth, A. Myronenko, D. Xu, and Z. Xu, “Generalizing deep learning for medical image segmentation to unseen domains via deep stacked transformation,” IEEE Trans. Med. Imaging 39(7), 2531–2540 (2020). [CrossRef]  

35. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, N. Navab, J. Hornegger, W. M. Wells, and A. F. Frangi, eds., Lecture Notes in Computer Science (Springer International Publishing, 2015), pp. 234–241.

36. Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, “3D U-Net: learning dense volumetric segmentation from sparse annotation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016, S. Ourselin, L. Joskowicz, M. R. Sabuncu, G. Unal, and W. Wells, eds., Lecture Notes in Computer Science (Springer International Publishing, 2016), pp. 424–432.

37. F. Isensee, P. F. Jaeger, S. A. A. Kohl, J. Petersen, and K. H. Maier-Hein, “nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation,” Nat. Methods 18(2), 203–211 (2021). [CrossRef]  

38. D. P. Kingma and J. Ba, “Adam: a method for stochastic optimization,” arXiv, arXiv:1412.6980 (2017). [CrossRef]  

39. P. Kollmannsberger, “Skeleton3D,” https://www.mathworks.com/matlabcentral/fileexchange/43400-skeleton3d.

40. P. Kollmannsberger, M. Kerschnitzki, F. Repp, W. Wagermaier, R. Weinkamer, and P. Fratzl, “The small world of osteocytes: connectomics of the lacuno-canalicular network in bone,” New J. Phys. 19(7), 073019 (2017). [CrossRef]  

41. A. Reinke, M. D. Tizabi, C. H. Sudre, et al., “Common limitations of image processing metrics: a picture story,” (2022).

42. J. C. Paetzold, S. Shit, I. Ezhov, G. Tetteh, A. Ertürk, and B. Menze, “clDice - a novel connectivity-preserving loss function for vessel segmentation,” (n.d.).

43. B. A. Corliss, C. Mathews, R. Doty, G. Rohde, and S. M. Peirce, “Methods to label, image, and analyze the complex structural architectures of microvascular networks,” Microcirculation 26(5), e12520 (2019). [CrossRef]  

44. F. Bochner, V. Mohan, A. Zinger, O. Golani, A. Schroeder, I. Sagi, and M. Neeman, “Intravital imaging of vascular anomalies and extracellular matrix remodeling in orthotopic pancreatic tumors,” Int. J. Cancer 146(8), 2209–2217 (2020). [CrossRef]  

45. H. A. Strobel, A. Schultz, S. M. Moss, R. Eli, and J. B. Hoying, “Quantifying vascular density in tissue engineered constructs using machine learning,” Front. Physiol. 12, 650714 (2021). [CrossRef]  

46. M. Lapierre-Landry, H. Kolesová, Y. Liu, M. Watanabe, and M. W. Jenkins, “Three-dimensional alignment of microvasculature and cardiomyocytes in the developing ventricle,” Sci. Rep. 10(1), 14955 (2020). [CrossRef]  

47. B. A. Corliss, R. W. Doty, C. Mathews, P. A. Yates, T. Zhang, and S. M. Peirce, “REAVER: A program for improved analysis of high-resolution vascular network images,” Microcirculation 27(5), e12618 (2020). [CrossRef]  

48. P. Garcia-Canadilla, A. C. Cook, T. J. Mohun, O. Oji, S. Schlossarek, L. Carrier, W. J. McKenna, J. C. Moon, and G. Captur, “Myoarchitectural disarray of hypertrophic cardiomyopathy begins pre-birth,” J. Anat. 235(5), 962–976 (2019). [CrossRef]  

49. W. Bulten, P. Bándi, J. Hoven, R. van de Loo, J. Lotz, N. Weiss, J. van der Laak, B. van Ginneken, C. Hulsbergen-van de Kaa, and G. Litjens, “Epithelium segmentation using deep learning in H&E-stained prostate specimens with immunohistochemistry as reference standard,” Sci. Rep. 9(1), 864 (2019). [CrossRef]  

50. Y. Liu, M. C. Broberg, M. Watanabe, A. M. Rollins, and M. W. Jenkins, “SLIME: robust, high-speed 3D microvascular mapping,” Sci. Rep. 9(1), 893 (2019). [CrossRef]  

Supplementary Material (2)

Supplement 1: Supplemental Figures 1-4
Visualization 1: Vessels predicted by convolutional neural network in a 3D volume.

