
Optimized U-Net model for 3D light-sheet image segmentation of zebrafish trunk vessels


Abstract

The growth of zebrafish vessels can serve as an indicator of the vascular development process and as a window into the underlying biological mechanisms. The three-dimensional (3D) structures of zebrafish trunk vessels can be imaged efficiently by state-of-the-art light-sheet fluorescence microscopy, which produces large amounts of data. Accurate segmentation of these 3D images has therefore become a new bottleneck for automatic and quantitative analysis. Here, we propose a multi-scale 3D U-Net model to segment the trunk vessels. Segmentation accuracies of 82.3% and 83.0%, as evaluated by the Intersection over Union (IoU) metric, were achieved for intersegmental vessels and dorsal longitudinal anastomotic vessels, respectively. The growth of the zebrafish vasculature from 42 to 62 hours post-fertilization was then analyzed quantitatively.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Due to the high homology between the Zebrafish and human genomes, Zebrafish are extensively used as miniature model animals to study the developmental mechanisms of vertebrate organs [1,2]. In particular, imaging the formation of the Zebrafish vasculature has contributed substantially to understanding the processes of vasculogenesis, differentiation, angiogenesis, and patterning [3,4]. During Zebrafish development, dorsal longitudinal anastomotic vessels (DLAVs) and intersegmental vessels (ISVs) are among the first vessels to form and have well-defined structures [4]. Characterization of the DLAVs and ISVs is therefore essential for quantifying the development process, as well as for evaluating associated vascular diseases and drug responses.

Light-sheet fluorescence microscopy (LSFM) is a widely used 3D imaging technique that employs orthogonally positioned objectives for fluorescence excitation and collection [5,6]. Owing to its fast 3D imaging speed, large field of view, and low photobleaching, LSFM has become the most popular technique for long-term live imaging of large specimens, such as developing Zebrafish [7-10]. Conventional LSFM with single-sided illumination often produces stripes along the illumination direction that may hide relevant structural information [11]. This drawback can be overcome by illuminating from both sides, as in multi-view LSFM [12]. The development of entire embryos has been imaged with LSFM and quantified at the single-cell level, providing great insight into the development process [12,13]. In 2015, Gualda et al. introduced a high-throughput light-sheet microscope, named SPIM-Fluid, that can obtain 3D structures of whole Zebrafish across large populations [14]. Such long-term or high-throughput LSFM experiments generate massive amounts of 3D imaging data, so automated analysis becomes a new bottleneck for extracting image information and quantifying the Zebrafish development process.

Automated analysis of 3D light-sheet imaging data includes image registration, segmentation, recognition, and statistics. Among these, segmentation of the structures of interest is the central task. Classical image segmentation algorithms, such as the active contour model [15], cannot meet this demand since they require prior knowledge for proper segmentation, especially for LSFM images with uneven fluorescence distribution or densely packed vessels. U-Net is a convolutional neural network (CNN) architecture developed specifically for biomedical image segmentation [16]. The U-shaped network consists of repeated down-sampling convolution modules as the encoder and sequential up-convolutions as the decoder, which together extract both low-level and high-level semantic features. In 2019, Kun Zhang et al. developed a Dual ResUNet model for segmenting Zebrafish ISVs from 2D maximum intensity projection (MIP) images [17]. This model successfully segments overlapping vascular regions from the projection images.

3D images obtained by LSFM provide more spatial information than 2D images or 2D MIP images. A 3D dataset can be sliced into 2D images and analyzed with 2D neural networks, but this risks losing vital 3D spatial context [18,19]. Kugler et al. reported segmentation of the Zebrafish brain vasculature from sliced light-sheet datasets using an improved 2D CNN [20]; however, the results contain ruptures, adhesions, and missing portions of the brain vasculature. Segmentation directly from the 3D volume rather than from 2D slices is therefore essential. In 2020, Todorov et al. developed a deep-learning framework named Vessel Segmentation & Analysis Pipeline (VesSAP) to analyze the vascular features of mouse brains [21]. VesSAP successfully reconstructed the complete mouse brain vasculature using transfer learning and a 3D CNN, demonstrating the power of deep learning for 3D image segmentation. Daetwyler et al. also reported a machine-learning method that segments all vessels using images of red blood cells as training data and quantified the vasculature over development [22].

In this paper, we introduce a modified 3D U-Net (MS-3D U-Net) to segment Zebrafish ISVs and DLAVs from 3D light-sheet images. 3D U-Net is a neural network model widely used for semantic segmentation of 3D biomedical images [23]. We added multi-scale feature ensembling to the network in order to precisely extract vessels of different diameters [24-26]. Furthermore, a hard attention mechanism [27], in which certain areas are selected for attention through a weighted Dice coefficient, was introduced to focus training on the vessels. The ISVs and DLAVs could be segmented with IoU metrics of 82.3% and 83.0%, respectively. To our knowledge, this is the first 3D segmentation method reported for Zebrafish vessels, and it should greatly facilitate developmental studies using Zebrafish as a model.

2. Experiment method

2.1 Zebrafish

Transgenic fluorescent Zebrafish Tg(fli1:EGFP) were bred and raised according to standard procedures on a 14-h light/10-h dark cycle at 28°C. All procedures involving Zebrafish were carried out following the Guide for the Care and Use of Laboratory Animals (NIH Publication No. 8023, Eighth Edition) and were approved by the animal ethics committee of the Suzhou Institute of Biomedical Engineering and Technology, CAS.

2.2 3D light-sheet imaging and data preprocessing

The trunk vasculatures of Zebrafish were imaged with a custom-built bi-directional light-sheet illuminated microscope, as previously reported [10]. Briefly, a 488 nm laser beam (OBIS, 488 nm, Coherent Inc., USA) was collimated and divided into two arms by a beam splitter. In each illumination arm, one Galvo mirror (CT6215H, Cambridge Technology, UK) was used to scan the beam into a light sheet, and another (GVS002, Thorlabs Inc., USA) was used to shift the Z-axis position of the light sheet for 3D imaging. A water immersion objective (16×, NA 0.8, Nikon, Japan) mounted on a piezoelectric positioner (SLC-2445-L, SmarAct, Germany) collected the fluorescence, and a camera (Prime BSI, Teledyne Photometrics, USA) placed behind this objective and the tube lens captured the 3D fluorescence images. Integrating the two images from the two illumination arms produces final 3D images without shadowing stripes [10]. Anesthetized Zebrafish embryos were embedded in agarose (SeaPlaque GTG Agarose, Lonza) and placed in a custom-designed chamber for light-sheet 3D imaging. Figure 1(b) shows a photograph of the LSFM setup.


Fig. 1. (a) Schematic diagram of the bi-directional light-sheet illuminated fluorescence microscope. L1-L4: lenses; M1-M4: mirrors; CO: collimation objective; BS: beam splitter; SL1-SL2: scanning lenses; Galvo1-Galvo2: Galvo mirror pairs; IO1-IO2: illumination objectives; DO: detection objective; TL: tube lens. (b) The bi-directional light-sheet illuminated fluorescence microscope for 3D imaging of Zebrafish vessels. (c) A typical 3D Zebrafish vasculature.


Figure 1(c) shows a typical 3D vascular image of a Zebrafish at 52 hours post-fertilization (hpf). The original volume has a size of 2048×2048×800 voxels with a voxel size of 0.4×0.4×0.5 µm. It can be clearly observed that the ISVs are arranged in pairs on both sides of the trunk and connect with the DLAVs. For segmentation of ISVs and DLAVs, we first cropped the trunk regions from the entire volume. Ground-truth models of the trunk vessels were annotated manually using the open-source medical image processing package 3D Slicer [28]. The “Level tracing” tool was used to annotate the ISVs and DLAVs in each x-y plane, and “Fill between slices” was then used to connect the 2D sections into a 3D structure. The manual segmentations were double-checked by overlaying them on the original 3D images to avoid mistakes. Fifteen volumes were manually labeled and expanded to 75 volumes by data augmentation. An additional 30 datasets were used for testing and subsequent analysis.
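The specific augmentation operations are not stated in the paper, which reports only that the 15 labeled volumes were expanded to 75. A minimal NumPy sketch of one possible five-fold expansion, assuming mirror flips and in-plane rotations applied identically to each image volume and its label:

    import numpy as np

    def augment_5x(volume, label):
        """One possible 5x expansion: the original plus four transformed
        copies. The choice of transforms is an assumption; the paper only
        states that 15 volumes became 75. Arrays are assumed (z, y, x)."""
        pairs = [(volume, label)]                                        # original
        pairs.append((np.flip(volume, axis=2), np.flip(label, axis=2)))  # left-right flip
        pairs.append((np.flip(volume, axis=1), np.flip(label, axis=1)))  # dorsal-ventral flip
        for k in (1, 3):                                                 # 90 and 270 degree rotations in x-y
            pairs.append((np.rot90(volume, k, axes=(1, 2)),
                          np.rot90(label, k, axes=(1, 2))))
        return pairs

Applying the same transform to the image and its label keeps the ground truth aligned with the augmented volume.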

3. Multi-scale 3D U-Net

3.1 Network structure

A multi-scale 3D U-Net model was developed for automatic segmentation of the 3D trunk vessels. As shown in Fig. 2(a), the network contains an encoder and a decoder. The encoder extracts features and multi-scale information, while the decoder performs feature fusion. In the decoder, the input of each convolution module is the concatenation of the output of the previous convolution module and the corresponding layer of the encoder. In this way, features can pass through a certain depth without losing too much spatial information. After up-sampling in the decoder, the network outputs a segmentation result with the same size as the original volume.


Fig. 2. (a) Architecture of the MS-3D U-Net model. The gray boxes with solid borders represent the convolution modules of the encoder, while the gray boxes with dotted borders represent the same modules of the decoder. (b) Structure of the Convolution module. (c) Structure of the MS-Conv module.


Figure 2(b) shows the structure of the convolution modules. Each module includes two convolution layers. Every 3D convolutional layer is followed by a batch normalization (BN) layer and a rectified linear unit (ReLU) layer to prevent vanishing gradients and speed up convergence.
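A minimal Keras sketch of this module follows; the 3×3×3 kernel size is an assumption, since the text fixes the Conv3D → BN → ReLU ordering but not the kernel size:

    import tensorflow as tf
    from tensorflow.keras import layers

    def conv_module(x, filters):
        """Convolution module of Fig. 2(b): two stacked Conv3D -> BN -> ReLU
        blocks. The 3x3x3 kernel is an assumed value."""
        for _ in range(2):
            x = layers.Conv3D(filters, kernel_size=3, padding="same")(x)
            x = layers.BatchNormalization()(x)
            x = layers.ReLU()(x)
        return x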

The MS-Conv module realizes the down-sampling of the encoder. Inspired by the Inception module [29,30], the MS-Conv module in our model contains three parallel branches, as shown in Fig. 2(c): the first branch consists of one 3×3×3 3D convolutional block; the second consists of two stacked 3×3×3 3D convolutional blocks as a substitute for a 5×5×5 convolutional block but with less computation; and the third consists of a max-pooling block for optimization of the whole network. 1×1×1 convolutional blocks in the three branches are used to compute dimension reductions. The MS-Conv module uses the three parallel branches to extract features at different scales, concatenates them into one feature map, and passes it to the next layer. In this way, our network gains the capability to segment vessels of different sizes from 3D Zebrafish trunk images.
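A minimal Keras sketch of the MS-Conv module under two assumptions not stated in the text: the 1×1×1 reductions precede the larger convolutions (as in the Inception module), and the stride-2 down-sampling sits in the last operation of each branch:

    import tensorflow as tf
    from tensorflow.keras import layers

    def ms_conv(x, filters):
        """MS-Conv down-sampling module of Fig. 2(c): three parallel
        branches whose outputs are concatenated (assumed layout)."""
        # Branch 1: 1x1x1 dimension reduction, then one 3x3x3 convolution.
        b1 = layers.Conv3D(filters, 1, padding="same", activation="relu")(x)
        b1 = layers.Conv3D(filters, 3, strides=2, padding="same", activation="relu")(b1)

        # Branch 2: 1x1x1 reduction, then two stacked 3x3x3 convolutions
        # (a cheaper substitute for a single 5x5x5 convolution).
        b2 = layers.Conv3D(filters, 1, padding="same", activation="relu")(x)
        b2 = layers.Conv3D(filters, 3, padding="same", activation="relu")(b2)
        b2 = layers.Conv3D(filters, 3, strides=2, padding="same", activation="relu")(b2)

        # Branch 3: max pooling, then a 1x1x1 dimension reduction.
        b3 = layers.MaxPooling3D(pool_size=2, padding="same")(x)
        b3 = layers.Conv3D(filters, 1, padding="same", activation="relu")(b3)

        # Concatenate the three scales into one feature map for the next layer.
        return layers.Concatenate()([b1, b2, b3])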

To evaluate the performance of our MS-3D U-Net, we conducted a comprehensive comparison with two classical 3D semantic segmentation models, 3D U-Net and V-Net, as summarized in Table 1. The structure of 3D U-Net is similar to that shown in Fig. 2(a), but all MS-Conv modules are replaced by max-pooling operations and all up-sampling layers are replaced by convolutional layers. The structure of V-Net [31] is similar to 3D U-Net, but it adds residual modules and uses convolution layers instead of pooling layers for down-sampling and up-sampling. The input channel dimension of all three models is 16.


Table 1. Differences of neural networks

3.2 Loss function

For image segmentation, the Dice coefficient D [25] is widely used as the loss function, defined as:

$$D = \frac{2\sum_{i = 1}^{m} y_i \hat{y}_i}{\sum_{i = 1}^{m} y_i^2 + \sum_{i = 1}^{m} \hat{y}_i^2}$$
where $y_i$ and $\hat{y}_i$ are the voxel values of the ground-truth and predicted segmentation volumes, respectively, and m is the number of voxels.

We proposed a Dice loss function based on the hard attention mechanism so that training focuses on the vasculature, as shown in Fig. 3. We denote the entire input volume as $C_{all}$, indicated by the red dotted box in Fig. 3(a). The solid white box represents $C_0$, the area containing the ISVs and DLAVs. In Fig. 3(b)-(c), $C_1$ consists of two boxes representing the left and right trunk parts containing blood vessels. Weights of 1:2:3 were assigned to these three regions to calculate the weighted Dice coefficient $D_{Att}$:

$$D_{Att} = \sum_{i \in C_{all}} \frac{y_i \hat{y}_i}{y_i^2 + \hat{y}_i^2} + \sum_{i \in C_0} \frac{2 y_i \hat{y}_i}{y_i^2 + \hat{y}_i^2} + \sum_{i \in C_1} \frac{3 y_i \hat{y}_i}{y_i^2 + \hat{y}_i^2}$$

The Dice loss function can then be expressed as:

$$DiceLoss = 6 - D_{Att}$$

where the constant 6 is the sum of the region weights (1 + 2 + 3); minimizing the loss therefore maximizes the weighted Dice coefficient $D_{Att}$.


Fig. 3. Graphical representation of Call, C0, and C1 in the loss function. (a) Definitions of Call and C0: Call represents the entire input volume, and C0 is the area containing the ISVs and DLAVs. (b), (c) Definition of C1: C1 consists of the blue box and the yellow box, which represent the left and right trunk parts containing blood vessels.

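A sketch of the loss follows, reading the weighted Dice coefficient as a sum of region-restricted Dice coefficients with weights 1:2:3. The binary masks mask_c0 and mask_c1 marking the boxes of Fig. 3 are assumed inputs; how they are generated is not described in the paper:

    import tensorflow as tf

    def region_dice(y_true, y_pred, mask, eps=1e-7):
        """Soft Dice coefficient restricted to the voxels where mask == 1."""
        y_t, y_p = y_true * mask, y_pred * mask
        num = 2.0 * tf.reduce_sum(y_t * y_p)
        den = tf.reduce_sum(tf.square(y_t)) + tf.reduce_sum(tf.square(y_p)) + eps
        return num / den

    def attention_dice_loss(y_true, y_pred, mask_c0, mask_c1):
        """DiceLoss = 6 - D_Att with region weights 1:2:3.
        mask_c0 and mask_c1 are assumed binary volumes for C0 and C1."""
        c_all = tf.ones_like(y_true)          # C_all covers the whole volume
        d_att = (1.0 * region_dice(y_true, y_pred, c_all)
                 + 2.0 * region_dice(y_true, y_pred, mask_c0)
                 + 3.0 * region_dice(y_true, y_pred, mask_c1))
        return 6.0 - d_att                    # zero when all three Dice terms reach 1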

3.3 Training of the networks

During network training, we performed three-fold cross-validation: two-thirds of the datasets were used for training and the remaining third for validation. The adaptive learning-rate optimizer Adam was used to minimize the loss function during training [32]. To accelerate the training process, we used an exponentially decaying learning rate with an initial value of 0.00001, updated every 800 steps.
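In TensorFlow this schedule can be written as below; the decay_rate value is an assumption, since the paper specifies only the initial rate and the 800-step update interval:

    import tensorflow as tf

    # Exponential decay: initial rate 1e-5, updated every 800 steps.
    # decay_rate=0.9 is an assumed value not given in the paper.
    schedule = tf.keras.optimizers.schedules.ExponentialDecay(
        initial_learning_rate=1e-5,
        decay_steps=800,
        decay_rate=0.9,
        staircase=True)  # step-wise updates every 800 steps

    optimizer = tf.keras.optimizers.Adam(learning_rate=schedule)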

Computations were carried out on a computer with an Intel Xeon Silver 4215R CPU @ 3.20 GHz running 64-bit Windows 10. Two NVIDIA GeForce RTX 3090 GPUs with CUDA 11.0 were used to accelerate the calculations. The software was implemented with TensorFlow in the PyCharm Python development environment. Training took about 22 hours (batch size of 4, iterated for 2000 epochs), and validation took about 1 second per volume.

3.4 Evaluation of the network

Several evaluation metrics were used to quantify the segmentation performance of our MS-3D U-Net: Pixel Accuracy (PA), Mean Pixel Accuracy (MPA), Intersection over Union (IoU) [33], and area under the curve (AUC) [34]:

PA is the ratio of the number of correctly classified pixels to the total number of pixels:

$$PA = \frac{{\sum\nolimits_{i = 0}^k {{p_{ii}}} }}{{\sum\nolimits_{i = 0}^k {\sum\nolimits_{j = 0}^k {{p_i}_j} } }}$$
where $p_{ii}$ is the number of pixels correctly classified as class i, and $p_{ij}$ is the number of pixels belonging to class i but predicted as class j.

MPA is the per-class pixel accuracy averaged over the ISVs, the DLAVs, and the background:

$$MPA = \frac{1}{{k + 1}}\sum\nolimits_{i = 0}^k {\frac{{{p_{ii}}}}{{\sum\nolimits_{j = 0}^k {{p_{ij}}} }}} $$

IoU is the ratio of the intersection to the union of the ground truth and the predicted segmentation (ISVs or DLAVs), reflecting the agreement between the predicted and actual values:

$$IoU = \sum_{i = 0}^{k} \frac{p_{ii}}{\sum_{j = 0}^{k} p_{ij} + \sum_{j = 0}^{k} p_{ji} - p_{ii}}$$

AUC is an essential criterion for evaluating segmentation methods. It is the area under the receiver operating characteristic (ROC) curve [35], which plots the true positive rate (TPR) on the ordinate against the false positive rate (FPR) on the abscissa. TPR and FPR are defined as:

$$TPR = \frac{{TP}}{{TP + FN}}$$
$$FPR = \frac{{FP}}{{FP + TN}}$$
where true positives (TP) are the pixels correctly classified as ISVs (or DLAVs), true negatives (TN) are the pixels correctly classified as background, false negatives (FN) are the pixels belonging to ISVs (or DLAVs) but wrongly classified into other classes, and false positives (FP) are the background pixels mistakenly classified as ISVs (or DLAVs).
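The voxel-wise metrics above can be computed from a single confusion matrix; a minimal NumPy sketch, assuming integer label volumes with 0 = background, 1 = ISV, and 2 = DLAV:

    import numpy as np

    def segmentation_metrics(y_true, y_pred, num_classes=3):
        """PA, MPA, and per-class IoU from integer label volumes."""
        # cm[i, j] counts voxels of true class i predicted as class j.
        idx = num_classes * y_true.ravel() + y_pred.ravel()
        cm = np.bincount(idx, minlength=num_classes ** 2)
        cm = cm.reshape(num_classes, num_classes)

        diag = np.diag(cm).astype(np.float64)
        pa = diag.sum() / cm.sum()                              # pixel accuracy
        mpa = np.mean(diag / cm.sum(axis=1))                    # mean pixel accuracy
        iou = diag / (cm.sum(axis=1) + cm.sum(axis=0) - diag)   # per-class IoU
        return pa, mpa, iou

Here iou[1] and iou[2] would correspond to the per-class ISV and DLAV values of the kind reported in Table 2.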

3.5 Vessel dimension measurement

The segmentation results were imported into Fiji for vessel dimension measurement [36]. The lengths of the ISVs, and of the DLAV segments between neighboring ISVs, were measured using the “Measure” function in Fiji after adjusting the view angle for easier observation. The surface area and volume of the ISVs were measured by first cropping out the region of each ISV and then applying the “3D Objects Counter” function.
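As a scripted stand-in for this Fiji workflow (an assumption, not the authors' pipeline), the volume and surface area of one cropped binary ISV mask can be estimated with NumPy and scikit-image, using the voxel size from Section 2.2:

    import numpy as np
    from skimage import measure

    def vessel_volume_and_area(mask, spacing=(0.5, 0.4, 0.4)):
        """Volume and surface area of a binary vessel mask; spacing is the
        (z, y, x) voxel size in micrometers."""
        volume = mask.sum() * np.prod(spacing)        # voxel count x voxel volume
        # Triangulate the mask surface and sum the triangle areas.
        verts, faces, _, _ = measure.marching_cubes(mask.astype(np.uint8),
                                                    level=0.5, spacing=spacing)
        area = measure.mesh_surface_area(verts, faces)
        return volume, area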

4. Results

4.1 3D vessel segmentation

A typical DLAV has a radius of about 25 µm to 32 µm, while typical ISV radii are about 11 µm to 18 µm. These differences in size and fluorescence intensity pose significant challenges to segmentation accuracy. We trained the three neural network models and compared their performance on the same datasets.

Figure 4 illustrates the segmentation results of V-Net, 3D U-Net, and MS-3D U-Net. Figure 4(a) shows the original 3D volume of a Zebrafish trunk. All three models segment the majority of the DLAV and ISV structures, as shown in Fig. 4(b)-(e), but on finer details MS-3D U-Net performs better than V-Net and 3D U-Net. Figure 4(f) displays the results of the three models on a thick vessel: the ISV segmented by MS-3D U-Net is more consistent with the hand-annotated ground truth, whereas the ISV segmented by V-Net and 3D U-Net appears much thinner, with defects at its end especially in the V-Net result. For the thin ISV shown in Fig. 4(g), the ISV segmented by V-Net also appears thinner than the ground truth, and the dark part of the light-colored ISV shows that 3D U-Net mistakenly segmented part of the ISV as DLAV. In Fig. 4(h), part of the ISVs connected to the DLAVs was mistakenly segmented as DLAV by V-Net; thanks to the hard attention mechanism in the loss function, MS-3D U-Net segmented these junction regions correctly. Moreover, the segmentation results of V-Net and 3D U-Net contain more noise points than those of MS-3D U-Net. All of this indicates that MS-3D U-Net learns the multi-scale characteristics of the vessels better than V-Net and 3D U-Net.


Fig. 4. Typical segmentation results. (a) The original 3D volume. (b) The manually annotated ground-truth structure. (c)-(e) Corresponding predicted results by V-Net, 3D U-Net, and MS-3D U-Net, respectively. (f)-(h) Zoomed-in views of the correspondingly colored box regions in (b)-(e).


To better demonstrate the segmentation accuracy, we calculated error maps of the segmentation results against the ground-truth model, as shown in Fig. 5. Figures 5(a), (c), and (e) show the results of V-Net, 3D U-Net, and MS-3D U-Net on the same slice in the y-z plane. The results of V-Net and 3D U-Net are clearly worse than those of MS-3D U-Net, with much larger pink and orange areas at the tails of the ISVs. All three models make errors at the fractured parts of the DLAVs (blue and yellow areas), but MS-3D U-Net performs best. Figures 5(b), (d), and (f) show slices in the x-z plane, demonstrating fewer erroneous pink ISV regions for MS-3D U-Net.


Fig. 5. Error maps of the segmentation results of V-Net, 3D U-Net, and MS-3D U-Net against the ground-truth model. (a) Error map in the y-z plane for V-Net. (b) Error map in the x-z plane for V-Net. (c), (d) Corresponding error maps in the y-z and x-z planes for 3D U-Net. (e), (f) Corresponding error maps in the y-z and x-z planes for MS-3D U-Net. Red represents correctly segmented ISVs; green represents correctly segmented DLAVs; orange represents false-positive ISVs; pink represents false-negative ISVs; blue represents false-positive DLAVs; yellow represents false-negative DLAVs. Scale bars: 100 µm.


The better performance of our MS-3D U-Net is also reflected quantitatively in the evaluation metrics. Table 2 lists the voxel-wise segmentation metrics PA, MPA, IoU, and AUC for the ISVs and DLAVs. Although the PA, MPA, and AUC values of MS-3D U-Net are only slightly higher than those of V-Net, the IoU for ISVs obtained by MS-3D U-Net is 13.78% and 23.63% higher than those obtained by 3D U-Net and V-Net, respectively. The IoU is a better indicator of network performance because it directly measures the overlap between the predicted segmentation and the ground truth. IoU values of 82.3% and 83.0% were obtained by MS-3D U-Net, owing to its multi-scale features and hard attention mechanism. Such high segmentation accuracy of the blood vessels is essential for the subsequent quantitative analysis.


Table 2. Evaluation Metrics of the Networks

4.2 Quantitative analysis of zebrafish vascular development

In cardiovascular studies, disease mechanisms and drug responses can be inferred from the process of vascular development [37]; measuring the growth curves of different blood vessels is therefore required for vascular development research. As a demonstration, we used the light-sheet fluorescence microscope to image a Zebrafish continuously from 42 hpf to 62 hpf. The seven pairs of ISVs nearest the head, and the DLAVs in the same region, were imaged and segmented using the MS-3D U-Net. The segmentation results are shown in Visualization 1, Visualization 2, and Visualization 3, which present the Zebrafish vasculature at 42 hpf, 52 hpf, and 62 hpf, respectively. From the segmented results, the vessels' length, surface area, and volume were readily calculated.

The orange curve in Fig. 6(a) presents the length growth curve of the ISVs. The curve is almost flat from 42 hpf to 52 hpf and increases rapidly after 52 hpf. Figure 6(b) shows a similar trend in the surface area and volume growth curves of the ISVs. The difference between the maximum and minimum values grows larger over time, indicating that the ISVs developed at different rates. The blue curve in Fig. 6(a) shows the length of the DLAV segments between neighboring ISVs: it grows quickly from 42 hpf to 46 hpf, continues to grow slowly from 46 hpf to 56 hpf, and stops increasing after 56 hpf, indicating the completion of DLAV growth. From the length, surface area, and volume growth curves, it can be concluded that the primary development period of the ISVs is after 52 hpf, while the length of the DLAV segments between neighboring ISVs remains almost constant after 56 hpf.


Fig. 6. Growth curves of the trunk vessels of a Zebrafish imaged continuously for 20 hours. (a) Lengths of the ISVs and the DLAV segments between neighboring ISVs. (b) Surface area and volume of the ISVs. The middle points and the upper and lower bounds represent the average, maximum, and minimum values over the first seven pairs of ISVs and the DLAV segments between neighboring ISVs, respectively.


5. Conclusions

Vessel segmentation is essential for studying Zebrafish vasculature development. This paper targets the precise 3D segmentation of Zebrafish ISVs and DLAVs. To extract the multi-scale information in 3D light-sheet images, a multi-scale 3D U-Net based on an encoder-decoder structure was developed. Compared with 3D U-Net and V-Net, MS-3D U-Net achieves better segmentation accuracy. We performed segmentation and quantitative analysis of the ISVs and DLAVs of a Zebrafish imaged over 20 consecutive hours. The vascular volume, surface area, and length can be quantified from the segmentations and related to the biological growth process. Our work will promote the quantitative study of vascular development and disease mechanisms, as well as the evaluation of drug effects.

With more pre-labeled training data, the segmentation accuracy of MS-3D U-Net is expected to improve further. However, labeling 3D image data is time-consuming, and there are currently no easy-to-use, highly efficient tools for the task; networks that require less labeled data will therefore be more practical. Meanwhile, our MS-3D U-Net has been trained and tested on only two types of vessels in the Zebrafish trunk. In the future, it will be further optimized and evaluated for segmentation of the entire vascular network and identification of each vessel in Zebrafish, which will greatly accelerate quantitative development studies and drug screening using Zebrafish as a model animal. In principle, MS-3D U-Net can also be applied to 3D light-sheet images of specimens other than Zebrafish.

Funding

National Key Research and Development Program of China (2017YFC0110100); National Natural Science Foundation of China (61805272).

Disclosures

The authors declare no conflicts of interest related to this article.

Data availability

Data and codes underlying the results presented in this paper are available upon reasonable request from the corresponding author.

References

1. A. V. Gore, K. Monzo, Y. R. Cha, W. Pan, and B. M. Weinstein, “Vascular development in the zebrafish,” Cold Spring Harb. Perspect. Med. 2(5), a006684 (2012). [CrossRef]  

2. J. F. Amatruda, J. L. Shepard, H. M. Stern, and L. I. Zon, “Zebrafish as a cancer model system,” Cancer Cell 1(3), 229–231 (2002). [CrossRef]  

3. M. Rajabi and S. A. Mousa, “The role of angiogenesis in cancer treatment,” Biomedicines 5(2), 34 (2017). [CrossRef]  

4. B. M. Hogan and S. Schulte-Merker, “How to Plumb a Pisces: Understanding Vascular Development and Disease Using Zebrafish Embryos,” Dev. Cell 42(6), 567–583 (2017). [CrossRef]  

5. M. Weber and J. Huisken, “Light sheet microscopy for real-time developmental biology,” Curr. Opin. Genet. Dev. 21(5), 566–572 (2011). [CrossRef]  

6. Y. Wan, K. McDole, and P. J. Keller, “Light-sheet microscopy and its potential for understanding developmental processes,” Annu. Rev. Cell Dev. Biol. 35(1), 655–681 (2019). [CrossRef]  

7. J. Huisken, J. Swoger, F. Del Bene, J. Wittbrodt, and E. H. K. Stelzer, “Optical sectioning deep inside live embryos by selective plane illumination microscopy,” Science 305(5686), 1007–1009 (2004). [CrossRef]  

8. J. Huisken and D. Y. R. Stainier, “Selective plane illumination microscopy techniques in developmental biology,” Development 136(12), 1963–1975 (2009). [CrossRef]  

9. F. O. Fahrbach, F. F. Voigt, B. Schmid, F. Helmchen, and J. Huisken, “Rapid 3D light-sheet microscopy with a tunable lens,” Opt. Express 21(18), 21010 (2013). [CrossRef]  

10. X. Qin, C. Chen, L. Wang, X. Chen, Y. Liang, X. Jin, W. Pan, Z. Liu, H. Li, and G. Yang, “In-vivo 3D imaging of Zebrafish’s intersegmental vessel development by a bi-directional light-sheet illumination microscope,” Biochem. Biophys. Res. Commun. 557, 8–13 (2021). [CrossRef]  

11. P. Ricci, V. Gavryusev, C. Müllenbroich, L. Turrini, G. de Vito, L. Silvestri, G. Sancataldo, and F. S. Pavone, “Removing striping artifacts in light-sheet fluorescence microscopy: a review,” Prog. Biophys. Mol. Biol. 168, 52–65 (2022). [CrossRef]  

12. R. Tomer, K. Khairy, F. Amat, and P. J. Keller, “Quantitative high-speed imaging of entire developing embryos with simultaneous multiview light-sheet microscopy,” Nat. Methods 9(7), 755–763 (2012). [CrossRef]  

13. P. J. Keller and M. B. Ahrens, “Visualizing whole-brain activity and development at the single-cell level using light-sheet microscopy,” Neuron 85(3), 462–483 (2015). [CrossRef]  

14. E. J. Gualda, H. Pereira, T. Vale, M. F. Estrada, C. Brito, and N. Moreno, “SPIM-fluid: open source light-sheet based platform for high-throughput imaging,” Biomed. Opt. Express 6(11), 4447–4456 (2015). [CrossRef]  

15. J. Feng, S. Han Cheng, P. K. Chan, and H. H. S. Ip, “Reconstruction and representation of caudal vasculature of zebrafish embryo from confocal scanning laser fluorescence microscopic images,” Comput. Biol. Med. 35(10), 915–931 (2005). [CrossRef]  

16. O. Ronneberger, P. Fischer, and B. Thomas, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” in 18th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) (2015), 9351, pp. 234–241.

17. K. Zhang, H. Zhang, H. Zhou, D. Crookes, L. Li, Y. Shao, and D. Liu, “Zebrafish Embryo Vessel Segmentation Using a Novel Dual ResUNet Model,” Comput. Intell. Neurosci. 2019 (2019).

18. J. Yun, J. Park, D. Yu, J. Yi, M. Lee, H. J. Park, J. G. Lee, J. B. Seo, and N. Kim, “Improvement of fully automated airway segmentation on volumetric computed tomographic images using a 2.5 dimensional convolutional neural net,” Med. Image Anal. 51, 13–20 (2019). [CrossRef]  

19. Y. Han, X. Li, B. Wang, and L. Wang, “Boundary loss-based 2.5D fully convolutional neural networks approach for segmentation: A case study of the liver and tumor on computed tomography,” Algorithms 14(5), 144 (2021). [CrossRef]  

20. E. Kugler, K. Plant, T. Chico, and P. Armitage, “Enhancement and segmentation workflow for the developing zebrafish vasculature,” J. Imaging 5(1), 14 (2019). [CrossRef]  

21. M. I. Todorov, J. C. Paetzold, O. Schoppe, G. Tetteh, S. Shit, V. Efremov, K. Todorov-Völgyi, M. Düring, M. Dichgans, M. Piraud, B. Menze, and A. Ertürk, “Machine learning analysis of whole mouse brain vasculature,” Nat. Methods 17(4), 442–449 (2020). [CrossRef]  

22. S. Daetwyler, U. Gunther, C. D. Modes, K. Harrington, and J. Huisken, “Multi-sample SPIM image acquisition, processing and analysis of vascular growth in zebrafish,” Development 146(6), dev173757 (2019). [CrossRef]  

23. Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, “3D U-Net: learning dense volumetric segmentation from sparse annotation,” in 19th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) (2016), pp. 424–432.

24. J. Long, E. Shelhamer, and T. Darrell, “Fully Convolutional Networks for Semantic Segmentation,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015), pp. 3431–3440.

25. L. C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs,” IEEE Trans. Pattern Anal. Mach. Intell. 40(4), 834–848 (2018). [CrossRef]  

26. H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, “Pyramid Scene Parsing Network,” in 30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 6230–6239.

27. V. Mnih, N. Heess, A. Graves, and K. Kavukcuoglu, “Recurrent models of visual attention,” in Advances in Neural Information Processing Systems 27 (NIPS) (2014), pp. 2204–2212.

28. A. Fedorov, R. Beichel, J. Kalpathy-Cramer, J. Finet, J.-C. Fillion-Robin, S. Pujol, C. Bauer, D. Jennings, F. Fennessy, M. Sonka, J. Buatti, S. Aylward, J. V. Miller, S. Pieper, and R. Kikinis, “3D Slicer as an image computing platform for the Quantitative Imaging Network,” Magn. Reson. Imaging 30(9), 1323–1341 (2012). [CrossRef]  

29. C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going Deeper with Convolutions,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015), pp. 1–9.

30. C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the Inception Architecture for Computer Vision,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 2818–2826.

31. F. Milletari, N. Navab, and S.-A. Ahmadi, “V-Net: Fully convolutional neural networks for volumetric medical image segmentation,” in 4th IEEE International Conference on 3D Vision (3DV) (2016), pp. 565–571.

32. D. P. Kingma and J. L. Ba, “Adam: A method for stochastic optimization,” in 3rd International Conference on Learning Representations (ICLR) (2015), pp. 1–15.

33. A. Garcia-Garcia, S. Orts-Escolano, S. Oprea, V. Villena-Martinez, P. Martinez-Gonzalez, and J. Garcia-Rodriguez, “A survey on deep learning techniques for image and video semantic segmentation,” Appl. Soft Comput. 70, 41–65 (2018). [CrossRef]  

34. A. P. Bradley, “The use of the area under the ROC curve in the evaluation of machine learning algorithms,” Pattern Recognit. 30(7), 1145–1159 (1997). [CrossRef]  

35. T. Fawcett, “An introduction to ROC analysis,” Pattern Recognit. Lett. 27(8), 861–874 (2006). [CrossRef]  

36. J. Schindelin, I. Arganda-Carreras, E. Frise, V. Kaynig, M. Longair, T. Pietzsch, S. Preibisch, C. Rueden, S. Saalfeld, B. Schmid, J.-Y. Tinevez, D. J. White, V. Hartenstein, K. Eliceiri, P. Tomancak, and A. Cardona, “Fiji: an open-source platform for biological-image analysis,” Nat. Methods 9(7), 676–682 (2012). [CrossRef]

37. Y. Blum, H. G. Belting, E. Ellertsdottir, L. Herwig, F. Lüders, and M. Affolter, “Complex cell rearrangements during intersegmental vessel sprouting and vessel fusion in the zebrafish embryo,” Dev. Biol. 316(2), 312–322 (2008). [CrossRef]  

Supplementary Material (3)

Visualization 1: Segmentation result of the Zebrafish vasculature at 42 hours post-fertilization.
Visualization 2: Segmentation result of the Zebrafish vasculature at 52 hours post-fertilization.
Visualization 3: Segmentation result of the Zebrafish vasculature at 62 hours post-fertilization.
