D. Abdullah, F. Fajriana, M. Maryana, L. Rosnita, A. P. U. Siahaan, R. Rahim, P. Harliana, H. Harmayani, Z. Ginting, and C. I. Erliana et al., “Application of interpolation image by using bi-cubic algorithm,” in Journal of Physics: Conference Series, vol. 1114 (IOP Publishing, 2018), p. 012066.

S. J. Ahn, W. Rauh, and H.-J. Warnecke, “Least-squares orthogonal distances fitting of circle, sphere, ellipse, hyperbola, and parabola,” Pattern Recognit. 34(12), 2283–2303 (2001).

The Theano Development Team, R. Al-Rfou, G. Alain, A. Almahairi, C. Angermueller, D. Bahdanau, N. Ballas, F. Bastien, J. Bayer, and A. Belikov et al., “Theano: A Python framework for fast computation of mathematical expressions,” arXiv preprint arXiv:1605.02688 (2016).

I. R. Titze and F. Alipour, The myoelastic aerodynamic theory of phonation (National Center for Voice and Speech, 2006).

F. H. Araújo, R. R. Silva, D. M. Ushizima, M. T. Rezende, C. M. Carneiro, A. G. C. Bianchi, and F. N. Medeiros, “Deep learning for cell image segmentation and ranking,” Comput. Med. Imag. Grap. 72, 13–21 (2019).

D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980 (2014).

J. Lin, E. S. Walsted, V. Backer, J. H. Hull, and D. S. Elson, “Quantification and analysis of laryngeal closure from endoscopic videos,” IEEE Trans. Biomed. Eng. 66(4), 1127–1136 (2019).

V. Badrinarayanan, A. Kendall, and R. Cipolla, “SegNet: A deep convolutional encoder-decoder architecture for image segmentation,” IEEE Trans. Pattern Anal. Mach. Intell. 39(12), 2481–2495 (2017).

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, and M. Bernstein, “ImageNet large scale visual recognition challenge,” Int. J. Comput. Vis. 115(3), 211–252 (2015).

M.-H. Laves, J. Bicker, L. A. Kahrs, and T. Ortmaier, “A dataset of laryngeal endoscopic images with comparative study on convolution neural network-based semantic segmentation,” Int. J. CARS 14(3), 483–492 (2019).

R. Hemelings, B. Elen, I. Stalmans, K. Van Keer, P. De Boever, and M. B. Blaschko, “Artery–vein segmentation in fundus images using a fully convolutional network,” Comput. Med. Imag. Grap. 76, 101636 (2019).

L. Bottou, “Large-scale machine learning with stochastic gradient descent,” in Proceedings of COMPSTAT’2010, (Springer, 2010), pp. 177–186.

O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (Springer, 2015), pp. 234–241.

J. J. Cerrolaza, V. Osma-Ruiz, N. Sáenz-Lechón, A. Villanueva, J. M. Gutiérrez-Arriola, J. I. Godino-Llorente, and R. Cabeza, “Fully-automatic glottis segmentation with active shape models,” in MAVEBA, (2011), pp. 35–38.

W. R. Crum, O. Camara, and D. L. Hill, “Generalized overlap measures for evaluation and validation in medical image analysis,” IEEE Trans. Med. Imaging 25(11), 1451–1461 (2006).

F. Chollet et al., “Keras,” (2015).

J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (2015), pp. 3431–3440.

J. Demeyer, T. Dubuisson, B. Gosselin, and M. Remacle, “Glottis segmentation with a high-speed glottography: a fully automatic method,” in 3rd Adv. Voice Funct. Assess. Int. Workshop, (2009).

L. R. Dice, “Measures of the amount of ecologic association between species,” Ecology 26(3), 297–302 (1945).

J. Lohscheller, H. Toy, F. Rosanowski, U. Eysholdt, and M. Döllinger, “Clinically evaluated procedure for the reconstruction of vocal fold vibrations from endoscopic digital high-speed videos,” Med. Image Anal. 11(4), 400–413 (2007).

M. K. Fehling, F. Grosch, M. E. Schuster, B. Schick, and J. Lohscheller, “Fully automatic segmentation of glottis and vocal folds in endoscopic laryngeal high-speed videos using a deep convolutional LSTM network,” PLoS One 15(2), e0227791 (2020).

A. Rao MV, R. Krishnamurthy, P. Gopikishore, V. Priyadharshini, and P. K. Ghosh, “Automatic glottis localization and segmentation in stroboscopic videos using deep neural network,” in Proc. Interspeech 2018, (2018), pp. 3007–3011.

O. Gloger, B. Lehnert, A. Schrade, and H. Völzke, “Fully automated glottis segmentation in endoscopic videos using local color and shape features of glottal regions,” IEEE Trans. Biomed. Eng. 62(3), 795–806 (2015).

R. M. Haralick and L. G. Shapiro, Computer and Robot Vision, vol. 1 (Addison-Wesley, 1992).

Z. Jiang, H. Zhang, Y. Wang, and S.-B. Ko, “Retinal blood vessel segmentation using fully convolutional network with transfer learning,” Comput. Med. Imag. Grap. 68, 1–15 (2018).

D. F. Williamson, R. A. Parker, and J. S. Kendrick, “The box plot: a simple visual method to interpret data,” Ann. Intern. Med. 110(11), 916–921 (1989).

C. Shorten and T. M. Khoshgoftaar, “A survey on image data augmentation for deep learning,” J. Big Data 6(1), 60 (2019).

T. Nawka and U. Konerding, “The interrater reliability of stroboscopy evaluations,” J. Voice 26(6), 812.E1–812.E10 (2012).

D. Owen, “The power of student’s t-test,” J. Am. Stat. Assoc. 60(309), 320–333 (1965).

L. Rudmik, Evidence-based Clinical Practice in Otolaryngology (Elsevier Health Sciences, 2018).

A. Sadovski, “Algorithm AS 74: L1-norm fit of a straight line,” J. Royal Stat. Soc. Ser. C (Applied Stat.) 23(2), 244–248 (1974).

K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556 (2014).
