
Automatic boundary segmentation of vascular Doppler optical coherence tomography images based on cascaded U-net architecture

Open Access

Abstract

Doppler optical coherence tomography (OCT) imaging of vessels after anastomosis procedures can simultaneously provide high-resolution structural and blood flow imaging of the vessel for objective surgical evaluation. Automatic segmentation of the outer vessel wall boundary and the inner lumen contour is a crucial and fundamental step for the rapid and complex quantitative analysis required in future clinical applications of Doppler OCT imaging. In this work, we propose a cascaded U-net (CU-net) architecture that segments the vascular intensity image for the outer vessel wall boundary and the corresponding phase image for the inner blood flowing lumen contour. The CU-net architecture was developed by training two specific U-net frameworks in coordination: the first performs intensity image segmentation and the second performs phase image segmentation. The output of the first framework is fed to the input of the second framework as a mask to select the area of interest. Cascading the two U-net frameworks effectively reduces model training time. Testing segmentation accuracy was calculated to be 96.7%±0.2% for the outer vessel wall boundary and 94.8%±0.2% for the inner lumen contour. The CU-net architecture requires no pre-processing for noise inherent in OCT images, such as random noise and speckle noise; the segmentation is automatic and end-to-end. 250 Doppler OCT images from one in-vivo mouse femoral artery imaging session were successfully segmented automatically, with an average processing time of 0.68 s per image pair covering both the outer vessel boundary and the inner lumen contour. Thrombosis morphology, the inner blood flowing lumen area, and its radius variation were quantitatively analyzed based on the segmentation results, demonstrating the potential for objective clinical evaluation.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Vascular anastomosis, connecting two vessels end to end, is the foundation of organ transplantation and reconstructive plastic surgery, and was made possible by Dr. Carrel around a hundred years ago [1]. Microvascular anastomosis for vessels with an outer diameter smaller than 1.0 mm, performed mostly in pediatric and hand surgery, is extremely challenging. Effective evaluation of vessel patency is of great importance: problems hindering the long-term success of the procedure, such as weak blood flow restoration, luminal narrowing, and thrombus formation, need to be detected as soon as possible to allow early re-intervention [2]. Devices that can assist the surgeon's evaluation, in addition to their accumulated experience, are thus in demand.

Traditional diagnostic imaging modalities such as computed tomographic angiography, magnetic resonance angiography, and ultrasound Doppler imaging are difficult to apply to real-time intraoperative imaging because of their relatively low spatial resolution, slow imaging speed, and poor temporal resolution [3,4]. The idea of using optical coherence tomography (OCT), a non-invasive, high-resolution, three-dimensional imaging modality, for intraoperative assessment of microsurgery was proposed as early as 1998 [5]. As the imaging speed of OCT approached one hundred thousand A-scans per second, and with real-time phase-resolved Doppler OCT signal processing enabled by GPU technology, the topic was revisited in 2013 [6]. Towards the goal of clinical translation, a miniature handheld OCT imaging probe using a MEMS mirror was developed for intraoperative evaluation of microvascular anastomosis [7]. OCT has demonstrated unique advantages as a tool to assist surgeons in objective, reliable evaluation of vessel reconnection outcomes in in-vivo animal studies.

Boundary segmentation of vascular Doppler OCT images, including the inner blood flowing lumen contour from the phase image and the outer vessel wall boundary from the intensity image, is very important for objective and quantitative surgical evaluation. Based on accurate segmentation results, crucial factors that affect the long-term surgical success rate, such as 3D thrombosis morphology and inner vessel lumen morphology, can be extracted in addition to blood flow velocity. All these parameters can be fed into a subsequent computational fluid dynamics algorithm to produce an objective assessment of the surgical site and thus a prediction of long-term surgical success.

Due to the strong scattering of blood within the vessel and the depth-dependent sensitivity roll-off of the OCT imaging system, in-vivo extra-vascular imaging suffers from low contrast and weak boundaries at the bottom part of the vessel. In addition, OCT images suffer from speckle noise and random noise. In particular, the noise level is generally higher in phase images than in intensity images, which increases the complexity of segmenting the inner blood flowing lumen boundary. Existing conventional segmentation techniques such as Chan-Vese-based boundary models, random walks, active contour models, and multiscale super-pixel chain tracking require complicated pre-processing to handle image noise, which limits their robustness [8-10]. In addition, some methods require manual selection of seeding points [11,12].

Deep learning methods have recently shown great potential for robust and automatic segmentation based on feature extraction from gold-standard image pools [13]. For example, deep convolutional neural networks (CNNs) have shown significant improvement over other approaches for biomedical image segmentation [14,15]. A CNN takes an image as input and applies sequential convolution operations to extract features for the segmentation task; it has been used for the segmentation of retinal layers [16-18]. The learnable weights and biases of the neurons act as filters, and the placement of these convolution filters can be organized into different frameworks for various segmentation tasks. The fully convolutional network (FCN), a variant of the CNN, not only achieves pixel-wise prediction but also forms global semantic information from local details; multichannel FCN architectures have been adopted for coronary artery segmentation in X-ray angiograms [19]. Furthermore, the U-net architecture based on the FCN, proposed by Ronneberger et al., has met with great success in segmenting cell walls in electron microscopy images [20] and liver and lesions in CT [21], achieving effective end-to-end segmentation on full-size images [22,23].

Compared to other FCN networks, the U-net architecture is fast, works with very few training images, and yields more precise segmentations [24]. The U-net framework, which uses a series of convolutional layers to extract features and predict the class of each pixel of the input image, is an end-to-end training model [20]. With pooling layers, the framework achieves multi-scale feature recognition, which improves segmentation accuracy and allows complex feature information to be extracted. Besides, the U-net utilizes strong data augmentation to enlarge the available labeled datasets efficiently. It is composed of a contracting path to capture context and a symmetric expanding path that enables precise localization, yielding a u-shaped framework [25].

In this work, we propose a cascaded U-net (CU-net) architecture to automatically extract the contour of the inner blood flowing lumen area and the outer vessel wall boundary from in-vivo extra-vascular Doppler OCT images. The CU-net contains two U-net frameworks working in sequence: the first segments the outer vessel wall boundary from intensity images, and the second segments the contour of the blood flowing lumen area from phase images. The output of the first framework is used as a mask to remove the background from the input of the second framework. Each framework was trained and optimized separately. The segmentation accuracy (SA) adopted for quantitative analysis of the segmentation results reaches 96.7%±0.2% for the outer vessel boundary and 94.8%±0.2% for the inner contour. After the CU-net architecture was implemented, it was tested on 250 in-vivo mouse femoral artery images. Based on the segmentation results, 3D thrombosis morphology and lumen area were calculated for quantitative outcome evaluation.

The rest of the paper is organized as follows. Section 2 describes the methods, including the CU-net architecture, data pre-processing, and network training and testing. Section 3 presents the segmentation results. Section 4 is the discussion, and conclusions are drawn in Section 5.

2. Methods

2.1 Cascaded U-net (CU-net) architecture

The cascaded U-net architecture, shown schematically in Figs. 1 and 2, contains two U-net based frameworks. Each customized U-net in this study contains 20 convolutional layers, 5 pooling layers, and 4 up-convolutional operators. In the contracting path, every two 3×3 convolutional layers are followed by one 2×2 pooling layer; the numbers of feature channels are 64, 64, 128, 128, 256, 256, 512, 512, 1024, and 1024, respectively. In the expanding path, each up-sampling step enlarges the feature maps with a 2×2 up-convolutional operator before two 3×3 convolutional layers; the corresponding numbers of feature channels are 512, 512, 256, 256, 128, 128, 64, 64, 2, and 1, where the final channel eigenvectors are converted to the required number of classes. All activation functions are ReLU.
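For concreteness, a minimal sketch of one such U-net framework is given below in the Keras functional API used in our implementation (Section 2.3). The layer arrangement is condensed to a standard four-level contracting/expanding structure and the final classification stage is folded into a single sigmoid output, so it should be read as an illustration of the framework rather than the exact model.

from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D, concatenate

def conv_block(x, n_filters):
    # Two 3x3 convolutions with ReLU activation, as in each stage of the path.
    x = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(x)
    x = Conv2D(n_filters, (3, 3), activation='relu', padding='same')(x)
    return x

def build_unet(input_shape=(512, 512, 1)):
    inputs = Input(input_shape)
    x, skips = inputs, []
    # Contracting path: feature channels double after each 2x2 pooling.
    for n in (64, 128, 256, 512):
        x = conv_block(x, n)
        skips.append(x)
        x = MaxPooling2D((2, 2))(x)
    x = conv_block(x, 1024)  # bottleneck (1024 feature channels)
    # Expanding path: 2x2 up-sampling, concatenation with the symmetric
    # contracting-path features, then two 3x3 convolutions.
    for n, skip in zip((512, 256, 128, 64), reversed(skips)):
        x = UpSampling2D((2, 2))(x)
        x = concatenate([x, skip])
        x = conv_block(x, n)
    # 1x1 convolution converts the channel eigenvectors to a probability map.
    outputs = Conv2D(1, (1, 1), activation='sigmoid')(x)
    return Model(inputs=inputs, outputs=outputs)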

The first U-net framework takes the intensity image shown in Fig. 1(a) as input and automatically generates a probability map as output. After simple threshold processing, the outer vessel boundary is segmented as shown in Fig. 1(b). This outer boundary result, shown in Fig. 2(b), is then used as a mask on the corresponding phase image shown in Fig. 2(a) to select the area of interest, which is fed as the input shown in Fig. 2(c) to the second U-net framework. After passing through the network, a probability map is generated and the inner blood flowing lumen area is segmented, as shown in Fig. 2(d). The two U-net frameworks are cascaded by connecting the output of the first to the input of the second framework. This cascaded architecture alleviates the burden on the second U-net framework by removing the non-targeted area.
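At inference time, the complete cascade then reduces to a few lines. The sketch below assumes two trained Keras models (intensity_model and phase_model are hypothetical names) and a probability threshold of 0.5; since only "simple threshold processing" is specified above, the exact threshold value is an assumption.

import numpy as np

def cascade_segment(intensity_img, phase_img, intensity_model, phase_model,
                    threshold=0.5):
    # First U-net: probability map of the outer vessel region from the
    # intensity image, binarized by simple thresholding.
    prob_outer = intensity_model.predict(intensity_img[None, :, :, None])[0, :, :, 0]
    outer_mask = prob_outer > threshold
    # The outer-boundary result masks the phase image so that only the
    # vessel interior is passed to the second U-net.
    masked_phase = np.where(outer_mask, phase_img, 0.0)
    prob_lumen = phase_model.predict(masked_phase[None, :, :, None])[0, :, :, 0]
    lumen_mask = prob_lumen > threshold
    return outer_mask, lumen_mask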

Fig. 1. Schematic of the first U-net framework of CU-net for intensity image segmentation.

Fig. 2. Schematic of the second U-net framework of CU-net for phase image segmentation.

2.2 Data set

A traceable dataset of Doppler OCT images of mouse artery was adopted for this study; an expanded description of the imaging procedures is available in our previous article [7]. In brief, the data were obtained from the exposed femoral artery after anastomosis on six- to eight-week-old male BALB/C mice using a handheld Doppler OCT imaging probe. A 3D volume of the anastomosed vessel site was acquired covering a range of 1.5 mm×1.5 mm×5 mm (lateral X×lateral Y×axial Z). The 3D dataset consists of 250 B-frames, each composed of 1000 A-scans.

2.3 Implementation

The system implementation was performed on a Lenovo (Beijing, China) P900 workstation with one NVIDIA (Santa Clara, California, USA) GeForce GTX TITAN Z graphics processing unit (GPU) and 128 GB of RAM. The models were implemented in Python (v3.5.2) using Keras (v2.1.6) and TensorFlow (v1.4.0) with the NVIDIA CUDA (v8.0) and cuDNN (v6.1) libraries.

2.4 Training of CU-net

The training process of the CU-net architecture consists of two separate U-net framework training processes. The first is the intensity image segmentation model training, as shown in Fig. 3. It contains three procedures: data augmentation, model training, and model testing.

Fig. 3. Flowchart of the U-net model training for intensity image segmentation.

Data augmentation was first applied to the intensity training image set and the corresponding gold standard images. The intensity training images were then fed into the raw U-net model, while the intensity labels for the outer boundary were used as references for minimizing the loss function and optimizing the U-net parameters. After every training epoch, the U-net model was tested for its segmentation accuracy on images from the test set.

Phase image segmentation model training shared the same three procedures (data augmentation, model training, and model testing), as shown in Fig. 4. The main difference is that the phase training set consisted of the original phase images masked with the outer vessel boundary.

Fig. 4. Flowchart of the U-net model training for phase image segmentation.

2.4.1 Data pre-processing

Vascular Doppler OCT intensity images and their corresponding phase images were first resampled into grayscale images of 512×512 pixels to fit the U-net model. Then 190 image pairs were manually delineated by specialists to form the gold standards for the outer vessel wall boundary and the inner blood flowing lumen contour. The capability of Doppler OCT imaging to detect thrombosis has been investigated and validated against gold-standard histology in our previous publications [2,6], which forms the basis of our manual delineation of the gold standard.
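As an illustration of this resampling step, the following sketch converts a raw B-frame to a normalized 512×512 grayscale input; the use of scikit-image here is an assumption, since the pre-processing library is not specified above.

import numpy as np
from skimage.transform import resize

def preprocess_bframe(bframe):
    # Resample a raw B-frame (e.g., 1000 A-scans wide) to the 512x512
    # grayscale input size expected by the U-net model.
    img = resize(bframe.astype(np.float64), (512, 512), anti_aliasing=True)
    # Normalize intensities to [0, 1].
    return ((img - img.min()) / (img.ptp() + 1e-12)).astype(np.float32)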

Each pair contains one intensity image and one corresponding phase image. Of the 190 image pairs, 150 were used for framework training and 40 for framework testing. Note that the 40 testing image pairs were selected from a 3D data volume different from the source of the 150 training pairs, to avoid artificially high accuracy caused by shared inherent features during training.

To teach the network the desired invariance and robustness properties, data augmentation was applied to the 150 image pairs and their corresponding segmentation labels. This helps alleviate over-fitting at the cost of some additional computation time, as has been demonstrated in the scope of unsupervised feature extraction. 300 additional image pairs and corresponding segmentation labels were generated through translation (width, height, and zoom coefficients set to 0.05), rotation (coefficient set to 0.2), shear (coefficient set to 0.05), and flipping, based on experimental experience.
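One way to realize this augmentation stage with the stated coefficients is Keras's ImageDataGenerator, where two identical generators with a shared random seed keep each image aligned with its label. In the sketch below, train_images and train_labels are assumed arrays of shape (150, 512, 512, 1), and the batch size of 2 is an assumption.

from keras.preprocessing.image import ImageDataGenerator

aug_args = dict(rotation_range=0.2,      # rotation coefficient
                shear_range=0.05,        # shear coefficient
                width_shift_range=0.05,  # translation coefficients
                height_shift_range=0.05,
                zoom_range=0.05,
                horizontal_flip=True,    # flipping
                fill_mode='nearest')
image_gen = ImageDataGenerator(**aug_args)
label_gen = ImageDataGenerator(**aug_args)
seed = 1  # shared seed so image and label transforms stay identical
image_flow = image_gen.flow(train_images, batch_size=2, seed=seed)
label_flow = label_gen.flow(train_labels, batch_size=2, seed=seed)
train_flow = zip(image_flow, label_flow)  # yields (image, label) batches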

2.4.2 Model training and testing

The intensity and phase segmentation U-net frameworks shared almost the same training procedure. During training, images and their corresponding labels were the network inputs. The augmented data were divided into a training subset of 360 pairs and a validation subset of 90 pairs to analyze the learning process. The conventional weight map and binary cross-entropy loss function of the U-net framework were adopted from [20]. By comparing the label with the probability map, the loss value was calculated to adjust the weights of the convolutional layers; in every training epoch such an adjustment optimizes the network model. A high momentum of 0.99 was used during training so that a large number of previously seen training samples determine the update in the current optimization step. The Adam optimizer [26], a variant of stochastic gradient descent (SGD), was adopted to update the weights with the learning rate set to 1×10−4. Training stopped when the loss function converged. Segmentation testing was performed after every epoch. The segmentation accuracy (SA) adopted for quantitative analysis of the segmentation results is described in [27] as Eq. (1):

$$SA(S_{seg}, S_{gt}) = 2\frac{|S_{seg} \cap S_{gt}|}{|S_{seg}| + |S_{gt}|}$$
where Sseg and Sgt are the segmented region and the ground truth region (the delineated gold standard region), respectively. When the segmentation result is close to the ground truth, the SA is close to one; the higher the segmentation accuracy, the better the performance of the network. The average segmentation accuracy of the model on the 40 testing image pairs was used as the selection criterion for the U-net model in the follow-up segmentation task.
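Eq. (1) is the Dice similarity coefficient; for binary masks it can be computed directly, for example with the following NumPy sketch.

import numpy as np

def segmentation_accuracy(seg, gt):
    # Eq. (1): SA = 2|Sseg ∩ Sgt| / (|Sseg| + |Sgt|), i.e. the Dice
    # coefficient between the segmented mask and the gold-standard mask.
    seg, gt = seg.astype(bool), gt.astype(bool)
    intersection = np.logical_and(seg, gt).sum()
    return 2.0 * intersection / (seg.sum() + gt.sum())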

Figures 5(a) and 5(b) show the learning curves during training for the intensity and phase image segmentation U-net frameworks. The blue curves represent the training loss and the red curves the validation loss. The two frameworks were trained for up to 40 and 60 epochs, respectively, to achieve stable loss values. The phase image segmentation framework likely required more epochs than the intensity image segmentation framework because of the higher noise level in phase images and the more sophisticated boundary features. As shown in Fig. 5, both loss values decreased rapidly in the first few epochs and then became steady. The validation loss followed the trend of the training loss, and the difference between training and validation loss was very small at the end, confirming that no serious over-fitting occurred on our datasets.
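A sketch of this training configuration, reusing the hypothetical build_unet and train_flow names from the sketches above, is given below. The steps-per-epoch value assumes the 360-pair training subset with the assumed batch size of 2, and val_images and val_labels stand for the 90-pair validation subset.

from keras.optimizers import Adam

model = build_unet()
# Binary cross-entropy loss with the Adam optimizer at a learning rate of 1e-4.
model.compile(optimizer=Adam(lr=1e-4), loss='binary_crossentropy',
              metrics=['accuracy'])
# 360 training pairs / batch size 2 = 180 steps per epoch.
history = model.fit_generator(train_flow, steps_per_epoch=180, epochs=40,
                              validation_data=(val_images, val_labels))
# history.history['loss'] and history.history['val_loss'] give the
# training and validation curves plotted in Fig. 5.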

Fig. 5. Loss value vs. epoch for two U-net models in training process.

The training time for the intensity and phase segmentation U-net frameworks was 5 hours and 7.5 hours, respectively. In general, the training time is proportional to the number of epochs required to reach stabilization and also depends on the hardware configuration; as computing power continues to increase, training will become faster. From the perspective of the final application, training time is a secondary concern compared to accuracy.

3. Results

3.1 Quantitative evaluation of the CU-net

The segmentation accuracy (SA) on the 40 testing image pairs using the CU-net architecture after every epoch is a good quantitative indicator of model performance. The average SA of the intensity segmentation model and the phase segmentation model after each epoch is shown in Figs. 6(a) and 6(b). The average SA of the intensity segmentation model saturates at 96.7%±0.2%, while that of the phase segmentation model saturates at 94.8%±0.2%. Once the SA reaches saturation, the performance of the U-net model becomes insensitive to the number of epochs. These two SA values are close to one, and the segmentation results were almost consistent with the gold standard, which means the CU-net model achieved very good performance. It overcame the weak boundary, random noise, and speckle noise issues of Doppler OCT images and holds great potential for intraoperative microvascular anastomosis evaluation. One likely reason is that features such as the outer vessel boundary and the inner blood flowing area contour are relatively simple and easy to learn. The training accuracy is higher than the test accuracy by an average of 0.05%; this consistent difference arises because the training data were seen by the network during training while the test data were not, so the model was optimized more towards the training data.

Fig. 6. Segmentation accuracy vs. epoch for two U-net models in training process.

3.2 Segmentation results

An independent 3D data volume consisting of 250 in-vivo Doppler OCT mouse femoral artery image pairs was fed into the trained CU-net architecture for further validation. Segmentation results at four selected positions of the mouse femoral artery are shown in Fig. 7. Figures 7(a-1) to 7(a-4) show the intensity images with the outer vessel boundary marked by red curves. Figures 7(b-1) to 7(b-4) show the phase images with both the outer vessel boundary and the inner lumen contour marked by red curves. By visual observation, the outer vessel wall boundary and the inner blood flowing lumen contour were successfully segmented for different degrees of thrombotic occlusion. Specifically, no thrombosis is detected in Fig. 7(b-1), while thrombi of different occlusion degrees and morphologies are detected in Figs. 7(b-2), 7(b-3), and 7(b-4). Figures 7(c-1) to 7(c-4) are the final segmented binary masks showing the vessel wall and the thrombus attached to the inner wall. As for processing time, the CU-net took an average of 0.31 s to segment one intensity image and 0.37 s for one phase image; in total, it took 170.5 s to process all 250 image pairs.

Fig. 7. Segmentation results of four selected Doppler OCT image pairs using CU-net. (a-1)–(a-4): intensity image with outer boundary outlined by red curve; (b-1)–(b-4): phase image with outer and inner boundary outlined by red curve; (c-1)–(c-4): segmented binary masks. (Scale bar: 500 μm)

To demonstrate the effectiveness of the proposed method for surgical outcome evaluation, we performed registration and 3D reconstruction with these 250 segmented binary masks to visualize the blood vessel intuitively. A sub-pixel image registration algorithm [28,29] was used to register the images, and ImageJ (v2.0.0, National Institutes of Health, USA) was used to render the 3D visualization of the vessel shown in Fig. 8. Note that the motion artifacts in these images came from the investigator's hand motion and from the mouse's breathing and heartbeat during the handheld probe imaging process.
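A possible implementation of this registration step is sketched below using the scikit-image implementation of the Guizar-Sicairos sub-pixel phase-correlation algorithm [29]; the exact registration code and the upsampling factor used here are assumptions, not details given above.

import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def register_stack(masks, upsample_factor=100):
    # Align every segmented binary mask to the first frame via sub-pixel
    # phase correlation, then assemble the registered 3D stack.
    reference = masks[0].astype(float)
    registered = [reference]
    for frame in masks[1:]:
        moving = frame.astype(float)
        offset, _, _ = phase_cross_correlation(reference, moving,
                                               upsample_factor=upsample_factor)
        # order=0 interpolation keeps the shifted masks binary.
        registered.append(nd_shift(moving, offset, order=0))
    return np.stack(registered)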

Fig. 8. 3D volume rendering of the segmented vessel from two different views: (a) front view, (b) skewed front view, (c) slice image of the P1 plane of the vessel, (d) slice image of the P2 plane of the vessel, and (e) slice image of the P3 plane of the vessel.

Although image registration was performed, these movements still affected the inter-frame continuity and smoothness of the OCT volumes, which results in the saw-tooth structures in Figs. 8(c)–8(e). Figures 8(a) and 8(b) present the vessel from two different view angles. We can clearly see that a certain amount of thrombus has occluded the inner blood flowing area, and its 3D morphology is clearly depicted. Figures 8(c) to 8(e) show the slices of the planes marked by the dashed lines P1, P2, and P3 in Fig. 8(a), respectively. These slices help evaluate vessel stenosis based on the size, shape, and location of the thrombus.

Quantitative analysis of the vessel condition became possible with the segmentation results and is of great importance for objective assessment of the long-term surgical outcome. Along the direction of blood flow, we calculated the inner blood flowing lumen area and vessel radius for these 250 images covering a physical range of 1.5 mm. The normalized inner blood flowing lumen area along the blood flow axis is plotted in Fig. 9(a). Two valleys of the curve, marked by circles in Fig. 9(a), indicate the positions of two thrombi. The larger thrombus caused a 48% drop in flow area at the narrowest position of the vessel. The mean radius of the inner blood flowing lumen in 10 angular directions within each cross-sectional image and its standard deviation are plotted in Fig. 9(b). The average inner radius of the blood vessel decreases distinctly at the positions of the thrombi. In addition, the standard deviation fluctuates considerably along the vessel; the larger the standard deviation of the radius, the more likely the flow is to become turbulent, which may contribute to the development of thrombosis. In future work, computational fluid dynamics (CFD) algorithms can be used to study the state of blood flow and analyze the wall shear stress of the vessel [30], especially for arteries with serious stenosis or thrombotic occlusion [31,32].
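The per-frame quantities plotted in Fig. 9 can be derived directly from the segmented lumen masks. The sketch below shows one plausible computation of the lumen area and of the radius sampled in 10 angular directions around the lumen centroid; the exact measurement procedure is not detailed above, so this is an illustrative reconstruction.

import numpy as np

def lumen_metrics(lumen_mask, n_angles=10):
    # Lumen area in pixels and the lumen radius in n_angles angular
    # directions around the centroid (assumes a non-empty mask).
    ys, xs = np.nonzero(lumen_mask)
    area = ys.size
    cy, cx = ys.mean(), xs.mean()
    angles = np.arctan2(ys - cy, xs - cx)  # direction of each lumen pixel
    dists = np.hypot(ys - cy, xs - cx)     # distance to the centroid
    bins = np.linspace(-np.pi, np.pi, n_angles + 1)
    # Radius per direction: farthest lumen pixel within each angular bin.
    radii = np.array([dists[(angles >= lo) & (angles < hi)].max(initial=0.0)
                      for lo, hi in zip(bins[:-1], bins[1:])])
    return area, radii.mean(), radii.std()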

Fig. 9. (a) The inner blood flowing lumen area and (b) the average inner blood flowing lumen area radius and its variation along the blood flow axis.

4. Discussion

The main advantages of cascading two U-net frameworks are as follows. First, during training, the ground truths of the outer vessel wall boundary and the inner lumen contour can be combined to narrow down the area of interest for optimizing the phase image segmentation model, thus reducing the training time needed for convergence. Second, during segmentation, since the output of the intensity image segmentation model is generated first, it can be used as a mask to remove the irrelevant background region; it therefore cooperates with the phase image segmentation model and saves segmentation time. This is based on the fact that the inner blood flowing area must lie within the outer vessel wall boundary. Third, based on the experimental results, the segmentation accuracy of the phase image segmentation is improved by 5.7%.

Currently the training data set contains only 450 image pairs and their corresponding labels, which is relatively small. As the data set grows with the deployment of more imaging systems and thus more available images, effective reduction of training time will be important for future CU-net model updates.

To illustrate the advantage of the proposed CU-net architecture, we trained the phase image segmentation model using the original phase images without the outer vessel boundary mask, which we refer to as the single U-net phase image model. Figures 10(a) and 10(b) respectively show the loss value and accuracy of the single U-net phase image model on original phase images during training. The loss value did not converge until 120 epochs, and an average difference of 10% was observed between the training accuracy and the test accuracy. Figures 10(c) and 10(d) compare the segmentation accuracy of the CU-net phase segmentation model and the single U-net phase image segmentation model on the 40 testing images. The CU-net phase segmentation model achieves a higher segmentation accuracy of 94.8%±0.2%, compared with 89.1%±1.6% for the single U-net phase segmentation model.

Fig. 10. Loss value (a) and accuracy (b) of the single U-net with the original phase images as architecture inputs in training process and the segmentation accuracy vs. epochs of the CU-net phase model (c) and single U-net phase model (d).

Figure 11 compares the segmentation results of the CU-net phase segmentation model (left column), the single U-net phase segmentation model (middle column), and the gold standard (right column) by visual inspection for two selected phase images. The CU-net phase segmentation model is clearly more accurate than the single U-net phase segmentation model for the blood flowing lumen contour. In Fig. 11(a), a sharp thrombus intrusion indicated by the black arrow was detected by the CU-net phase model but missed by the single U-net phase model. In Fig. 11(b), the single U-net phase model forms an artifact of thrombus intrusion and creates an obviously erroneous boundary.

Fig. 11. Segmentation results comparison between CU-net phase model (left column) and single U-net phase model (middle column) and gold standard (right column) for two selected phase images: (a) single U-net phase model fails to detect a sharp thrombus intrusion; (b) single U-net phase model forms an artifact of thrombus intrusion and creates an obviously erroneous boundary. Black arrows point out the positions of segmentation artifacts. (Scale bar: 250 μm)

To further improve the segmentation accuracy, on one hand the architecture can be further optimized by altering the network structure and updating the network with larger and richer training data in the future; on the other hand, improvements in sensitivity and SNR are required from the imaging system. Advanced denoising methods such as sparse reconstruction [33] and multi-scale sparsity-based tomographic denoising [34] can be applied before segmentation to improve the image SNR. However, at the current stage we did not use any complicated denoising operation on the images. In sum, the CU-net architecture yields a fully automatic end-to-end segmentation process, which is ideal for intraoperative clinical application. Currently, the total processing time for 250 image pairs is 170.5 s; it is desirable to obtain the results in less time, which calls for further study of multi-thread and multi-GPU acceleration techniques in the future.

5. Conclusion

In sum, we proposed a CU-net framework for the segmentation of extra-vascular Doppler OCT images. As an end-to-end automatic method, it achieved a testing segmentation accuracy of 96.7%±0.2% for the outer vessel wall boundary and 94.8%±0.2% for the contour of the inner blood flowing lumen area. An experimental study on in-vivo mouse femoral artery images validated its performance. Based on the segmentation results, quantitative analysis of the vessel condition, including the 3D thrombosis morphology, inner lumen area, and its irregularity, was performed, which we believe will be of great benefit in providing surgeons with an objective evaluation of the surgical outcome after vascular anastomosis.

Funding

National Natural Science Foundation of China (NSFC) (61505006); National Key Research & Development Program of China (2017YFC0107801, 2017YFC0107900); Chinese Association of Science and Technology; Central Special Funds of China for S&T Development; 111 Project (B18005).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. L. Aida, “Alexis Carrel (1873–1944): Visionary vascular surgeon and pioneer in organ transplantation,” J. Med. Biogr. 22(3), 172–175 (2014). [CrossRef]  

2. Y. Huang, D. Tong, S. Zhu, L. Wu, Q. Mao, Z. Ibrahim, W. P. Lee, G. Brandacher, and J. U. Kang, “Evaluation of microvascular anastomosis using real-time, ultra-high-resolution, Fourier domain Doppler optical coherence tomography,” Plast. Reconstr. Surg. 135(4), 711e–720e (2015). [CrossRef]  

3. F. M. Leclère, M. Schoofs, F. Auger, B. Buys, and S. R. Mordon, “Blood Flow Assessment with Magnetic Resonance Imaging After 1.9 mm Diode Laser-Assisted Microvascular Anastomosis,” Lasers Surg. Med. 42(4), 299–305 (2010). [CrossRef]  

4. E. I. Chang, M. G. Galvez, J. P. Glotzbach, C. D. Hamou, S. El-fesi, C. T. Rappleye, K. M. Sommer, J. Rajadas, O. J. Abilez, G. G. Fuller, M. T. Longaker, and G. C. Gurtner, “Vascular anastomosis using controlled phase transitions in poloxamer gels,” Nat. Med. 17(9), 1147–1152 (2011). [CrossRef]  

5. S. A. Boppart, B. E. Bouma, C. Pitris, G. J. Tearny, and J. F. Southern, “Intraoperative assessment of microsurgery with three-dimensional optical coherence tomography,” Radiology 208(1), 81–86 (1998). [CrossRef]  

6. Y. Huang, Z. Ibrahim, D. Tong, S. Zhu, Q. Mao, J. Pang, W. P. Andree Lee, G. Brandacher, and J. U. Kang, “Microvascular anastomosis guidance and evaluation using real-time three-dimensional Fourier-domain Doppler optical coherence tomography,” J. Biomed. Opt. 18(11), 111404 (2013). [CrossRef]  

7. Y. Huang, G. J. Furtmüller, D. Tong, S. Zhu, W. P. Lee, G. Brandacher, and J. U. Kang, “MEMS-based handheld fourier domain Doppler optical coherence tomography for intraoperative microvascular anastomosis imaging,” PLoS One 9(12), e114215 (2014). [CrossRef]  

8. A. Mishra, A. Wong, K. Bizheva, and D. A. Clausi, “Intra-retinal layer segmentation in optical coherence tomography images,” Opt. Express 17(26), 23719–23728 (2009). [CrossRef]  

9. A. Yazdanpanah, G. Hamar, B. R. Smith, and M. V. Sarunic, “Segmentation of intra-retinal layers from optical coherence tomgraphy images using an active contour approach,” IEEE Trans. Med. Imaging 30(2), 484–496 (2011). [CrossRef]  

10. J. Zhao, J. Yang, D. Ai, H. Song, Y. Jiang, Y. Huang, L. Zhang, and Y. Wang, “Automatic retinal vessel segmentation using multi-scale superpixel chain tracking,” Digital Signal Processing 81, 26–42 (2018). [CrossRef]  

11. L. Grady, “Random walks for image segmentation,” IEEE Trans. Pattern Anal. Mach. Intell. 28(11), 1768–1783 (2006). [CrossRef]  

12. A. Guha Roy, S. Conjeti, S. G. Carlier, P. K. Dutta, A. Kastrati, A. F. Laine, N. Navab, A. Katouzian, and D. Sheet, “Lumen Segmentation in Intravascular Optical Coherence Tomography Using Backscattering Tracked and Initialized Random Walks,” IEEE J Biomed. Health Inform. 20(2), 606–614 (2016). [CrossRef]  

13. C. S. Lee, A. J. Tyring, N. P. Deruyter, Y. Wu, A. Rokem, and A. Y. Lee, “Deep-learning based automated segmentation of macular edema in optical coherence tomography,” Biomed. Opt. Express 8(7), 3440–3448 (2017). [CrossRef]  

14. K. Kamnitsas, L. Chen, C. Ledig, D. Rueckert, and B. Glocker, “Multiscale 3D convolutional neural networks for lesion segmentation in brain MRI,” in Proc of MICCAI Brain Lesion Workshop (2015).

15. Y. Lequan, H. Chen, Q. Dou, J. Qin, and P. A. Heng, “Automated melanoma recognition in dermoscopy images via very deep residual networks,” IEEE Trans. Med. Imag. 36(4), 994–1004 (2017). [CrossRef]  

16. L. Fang, D. Cunefare, C. Wang, R. H. Guymer, S. Li, and S. Farsiu, “Automatic segmentation of nine retinal layer boundaries in OCT images of non-exudative AMD patients using deep learning and graph search,” Biomed. Opt. Express 8(5), 2732–2744 (2017). [CrossRef]  

17. A. Shah, L. Zhou, M. D. Abramoff, and X. Wu, “Multiple surface segmentation using convolution neural nets: application to retinal layer segmentation in OCT images,” Biomed. Opt. Express 9(9), 4509–4526 (2018). [CrossRef]  

18. J. Hamwood, D. Alonso-Caneiro, S. A. Read, S. J. Vincent, and M. J. Collins, “Effect of patch size and network architecture on a convolutional neural network approach for automatic segmentation of OCT retinal layers,” Biomed. Opt. Express 9(7), 3049–3066 (2018). [CrossRef]  

19. J. Fan, J. Yang, Y. Wang, S. Yang, D. Ai, Y. Huang, H. Song, A. Hao, and Y. Wang, “Multichannel Fully Convolutional Network for Coronary Artery Segmentation in X-ray Angiograms,” IEEE Access 6, 44635–44643 (2018). [CrossRef]  

20. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” arXiv:1505.04597 [cs.CV] (2015).

21. P. F. Christ, M. E. A. Elshaer, F. Ettlinger, S. Tatavarty, M. Bickel, P. Bilic, M. Rempfler, M. Armbruster, F. Hofmann, M. D’Anastasi, W. H. Sommer, S. A. Ahmadi, and B. H. Menze, “Automatic Liver and Lesion Segmentation in CT Using Cascaded Fully Convolutional Neural Networks and 3D Conditional Random Fields,” arXiv:1610.02177 [cs.CV] (2016).

22. G. N. Girish, T. Bibhash, R. Sohini, R. Abhishek, and R. Jeny, “Segmentation of Intra-Retinal Cysts from Optical Coherence Tomography Images using a Fully Convolutional Neural Network Model,” IEEE J Biomed. Health Inform. 23(1), 296–304 (2019). [CrossRef]  

23. A. G. Roy, S. Conjeti, S. P. K. Karri, D. Sheet, A. Katouzian, C. Wachinger, and N. Navab, “ReLayNet: retinal layer and fluid segmentation of macular optical coherence tomography using fully convolutional networks,” Biomed. Opt. Express 8(8), 3627–3642 (2017). [CrossRef]  

24. J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3431–3440 (2015). [CrossRef]

25. F. G. Venhuizen, B. van Ginneken, B. Liefers, F. van Asten, V. Schreur, S. Fauser, C. Hoyng, T. Theelen, and C. I. Sanchez, “Deep learning approach for the detection and quantification of intraretinal cystoid fluid in multivendor optical coherence tomography,” Biomed. Opt. Express 9(4), 1545–1569 (2018). [CrossRef]  

26. D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint 1412.6980 (2014).

27. F. Yi, I. Moon, and B. Javidi, “Automated red blood cells extraction from holographic images using fully convolutional neural networks,” Biomed. Opt. Express 8(10), 4466–4479 (2017). [CrossRef]  

28. J. R. Fienup, “Invariant error metrics for image reconstruction,” Appl. Opt. 36(32), 8352–8357 (1997). [CrossRef]  

29. M. Guizar-Sicairos, S. T. Thurman, and J. R. Fienup, “Efficient subpixel image registration algorithms,” Opt. Lett. 33(2), 156–158 (2008). [CrossRef]  

30. D. Tang, C. Yang, J. Zheng, P. K. Woodard, J. E. Saffitz, J. D. Petruccelli, G. A. Sicard, and C. Yuan, “Local Maximal Stress Hypothesis and Computational Plaque Vulnerability Index for Atherosclerotic Plaque Assessment,” Ann. Biomed. Eng. 33(12), 1789–1801 (2005). [CrossRef]  

31. A. Liu, X. Yin, L. Shi, P. Li, K. L. Thornburg, R. K. Wang, and S. Rugonyi, “Biomechanics of the Chick Embryonic Heart Outflow Tract at HH18 Using 4D Optical Coherence Tomography Imaging and Computational Modeling,” PLoS One 7(7), e40869 (2012). [CrossRef]

32. Y. Mei, M. Müller-Eschner, J. Yi, Z. Zhang, D. Chen, M. Kronlage, H. Tengg-Kobligk, H. U. Kauczor, D. Böckler, and S. Demirel, “Hemodynamics analyses in treated and untreated carotid arteries of the same patient: a preliminary study based on three patient cases,” Bio-Med. Mater. Eng. 26(s1), S299–S309 (2015). [CrossRef]

33. L. Fang, S. Li, D. Cunefare, and S. Farsiu, “Segmentation Based Sparse Reconstruction of Optical Coherence Tomography Images,” IEEE Trans. Med. Imaging 36(2), 407–421 (2017). [CrossRef]  

34. L. Fang, S. Li, R. P. McNabb, Q. Nie, A. N. Kuo, C. A. Toth, J. A. Izatt, and S. Farsiu, “Fast acquisition and reconstruction of optical coherence tomography images via sparse representation,” IEEE Trans. Med. Imaging 32(11), 2034–2049 (2013). [CrossRef]  
