I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Proc. Int. Conf. Neural Inf. Process. Syst. (NIPS), Montreal, QC, Canada, Dec. 2014, pp. 2672–2680. https://arxiv.org/abs/1406.2661
I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. Courville, “Improved training of Wasserstein GANs,” in Proc. Int. Conf. Neural Inf. Process. Syst. (NIPS), Long Beach, CA, USA, Dec. 2017, pp. 5767–5777. https://arxiv.org/abs/1704.00028
T. Karras, T. Aila, S. Laine, and J. Lehtinen, “Progressive growing of GANs for improved quality, stability, and variation,” in Proc. Int. Conf. Learn. Represent. (ICLR), 2018, pp. 1–26. [Online]. Available: http://arxiv.org/abs/1710.10196
S. Akçay, M. E. Kundegorski, M. Devereux, and T. P. Breckon, “Transfer learning using convolutional neural networks for object classification within X-ray baggage security imagery,” in Proc. IEEE Int. Conf. Image Process. (ICIP), Phoenix, AZ, USA, Sep. 2016, pp. 1057–1061. https://dx.doi.org/10.1109/ICIP.2016.7532519
S. Akçay, M. E. Kundegorski, M. Devereux, and T. P. Breckon, “Using Deep Convolutional Neural Network Architectures for Object Classification and Detection Within X-Ray Baggage Security Imagery,” IEEE Trans. Inform. Forensic Secur. 13(9), 2203–2215 (2018).
M. E. Kundegorski, S. Akçay, M. Devereux, A. Mouton, and T. P. Breckon, “On using feature descriptors as visual words for object detection within X-ray baggage security screening,” in Proc. Int. Conf. Imaging for Crime Detection and Prevention (ICDP), Nov. 2016, pp. 12–18. https://dx.doi.org/10.1049/ic.2016.0080
M. Frid-Adar, E. Klang, M. Amitai, J. Goldberger, and H. Greenspan, “Synthetic data augmentation using GAN for improved liver lesion classification,” IEEE International Symposium on Biomedical Imaging (ISBI), Washington, DC, 2018, pp. 289–293. http://arxiv.org/abs/1801.02385
J. An, H. Zhang, Y. Zhu, and J. Yang, “Semantic segmentation for prohibited items in baggage inspection,” in Proc. Int. Conf. Intell. Sci. Big Data Eng. (ISCIDE), Nanjing, China, Oct. 2019, pp. 495–505. https://dx.doi.org/10.1007/978-3-030-36189-1_41
H. Shin, N. A. Tenenholtz, J. K. Rogers, C. G. Schwarz, M. L. Senjem, J. L. Gunter, K. Andriole, and M. Michalski, “Medical image synthesis for data augmentation and anonymization using generative adversarial networks,” International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Granada, Spain, 2018, pp. 1–11. https://dx.doi.org/10.1007/978-3-030-00536-8_1
W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Y. Fu, and A. C. Berg, “SSD: Single shot multibox detector,” in Proc. Eur. Conf. Comput. Vis. (ECCV), Amsterdam, The Netherlands, Oct. 2016, pp. 21–37. https://dx.doi.org/10.1007/978-3-319-46448-0_2
C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Boston, MA, USA, Jun. 2015, pp. 1–9. https://dx.doi.org/10.1109/CVPR.2015.7298594
M. Arjovsky, S. Chintala, and L. Bottou, “Wasserstein GAN,” 2017, pp. 1–32, [Online]. Available: http://arxiv.org/abs/1701.07875
S. Gurumurthy, R. K. Sarvadevabhatla, and R. V. Babu, “DeLiGAN: generative adversarial networks for diverse and limited data,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, 2017, pp. 4941–4949. https://dx.doi.org/10.1109/CVPR.2017.525
M. Baştan, “Multi-view object detection in dual-energy X-ray images,” Machine Vision and Applications 26(7-8), 1045–1060 (2015).
M. Baştan, M. R. Yousefi, and T. M. Breuel, “Visual words on baggage X-ray images,” in Proc. 14th Int. Conf. Computer Analysis of Images and Patterns (CAIP 2011), Seville, Spain, Aug. 2011, pp. 360–368. https://dx.doi.org/10.1007/978-3-642-23672-3_44
M. Baştan, W. Byeon, and T. M. Breuel, “Object recognition in multi-view dual energy X-ray images,” in Proc. Brit. Mach. Vis. Conf. (BMVC), 2013, pp. 130–131. https://dx.doi.org/10.1007/978-3-642-32717-9_15
J. Hosang, R. Benenson, and B. Schiele, “How good are detection proposals, really?” in Proc. Brit. Mach. Vis. Conf. (BMVC), 2014. https://dx.doi.org/10.5244/C.28.24
D. Turcsany, A. Mouton, and T. P. Breckon, “Improving feature-based object recognition for X-ray baggage security screening using primed visual words,” IEEE International Conference on Industrial Technology (ICIT), Cape Town, 2013, pp. 1140–1145. https://dx.doi.org/10.1109/ICIT.2013.6505833
A. Brock, J. Donahue, and K. Simonyan, “Large scale GAN training for high fidelity natural image synthesis,” in Proc. Int. Conf. Learn. Represent. (ICLR), 2019, pp. 1–35. http://arxiv.org/abs/1809.11096
F. Calimeri, A. Marzullo, C. Stamile, and G. Terracina, “Biomedical data augmentation using generative adversarial neural networks,” in Proc. Int. Conf. Artif. Neural Netw. (ICANN), Alghero, Italy, Sep. 2017, pp. 626–634. https://dx.doi.org/10.1007/978-3-319-68612-7_71
T. Tran, T. Pham, G. Carneiro, L. Palmer, and I. Reid, “A Bayesian data augmentation approach for learning deep models,” in Proc. Int. Conf. Neural Inf. Process. Syst. (NIPS), Long Beach, CA, USA, Dec. 2017, pp. 2797–2806. https://arxiv.org/abs/1710.10564
D. Mery, V. Riffo, U. Zscherpel, G. Mondragón, I. Lillo, I. Zuccar, H. Lobel, and M. Carrasco, “GDXray: The database of X-ray images for nondestructive testing,” J. Nondestruct. Eval. 34(4), 42 (2015).
T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen, “Improved techniques for training GANs,” International Conference on Neural Information Processing Systems (NIPS), Barcelona, Spain, 2016, pp. 2234–2242. http://papers.nips.cc/paper/6125
A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” 2015, [Online]. Available: http://arxiv.org/abs/1511.06434
M. Mathieu, C. Couprie, and Y. LeCun, “Deep multi-scale video prediction beyond mean square error,” in Proc. Int. Conf. Learn. Represent. (ICLR), 2016, pp. 1–14. [Online]. Available: http://arxiv.org/abs/1511.05440
R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Columbus, OH, USA, Jun. 2014, pp. 580–587. https://dx.doi.org/10.1109/CVPR.2014.81
J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Las Vegas, NV, USA, Jun. 2016, pp. 779–788. https://dx.doi.org/10.1109/CVPR.2016.91
P. Isola, J. Zhu, T. Zhou, and A. Efros, “Image-to-image translation with conditional adversarial networks,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Honolulu, HI, USA, Jul. 2017, pp. 1125–1134. https://dx.doi.org/10.1109/CVPR.2017.632
J. Zhu, T. Park, P. Isola, and A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), Venice, Italy, Oct. 2017, pp. 2223–2232. https://dx.doi.org/10.1109/ICCV.2017.244
M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, “The Pascal Visual Object Classes Challenge,” in Machine Learning Challenges: Evaluating Predictive Uncertainty, Visual Object Classification and Recognizing Textual Entailment (MLCW 2005), Southampton, UK, Apr. 2005, pp. 117–176. https://dx.doi.org/10.1007/11736790_8
J. Uijlings, K. van de Sande, T. Gevers, and A. Smeulders, “Selective search for object recognition,” Int. J. Comput. Vis. 104(2), 154–171 (2013). https://dx.doi.org/10.1007/s11263-013-0620-5
K. He, X. Zhang, S. Ren, and J. Sun, “Spatial pyramid pooling in deep convolutional networks for visual recognition,” IEEE Trans. Pattern Anal. Mach. Intell. 37(9), 1904–1916 (2015).
S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” in Proc. Adv. Neural Inf. Proces. Syst., Montreal, QC, Canada, 2015, pp. 91–99. https://dx.doi.org/10.1109/TPAMI.2016.2577031
R. Girshick, “Fast R-CNN,” in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), Santiago, Chile, Dec. 2015, pp. 1440–1448. https://dx.doi.org/10.1109/ICCV.2015.169
H. Zhang, I. Goodfellow, D. Metaxas, and A. Odena, “Self-attention generative adversarial networks,” in Proc. 36th Int. Conf. Mach. Learn. (ICML), Long Beach, CA, USA, PMLR 97, 2019, pp. 1–10. http://arxiv.org/abs/1805.08318
Q. Xu, G. Huang, Y. Yuan, C. Guo, Y. Sun, F. Wu, and K. Q. Weinberger, “An empirical study on evaluation metrics of generative adversarial networks,” 2018, [Online]. Available: http://arxiv.org/abs/1806.07755
Z. Ma, Y. Lai, W. B. Kleijn, Y. Song, L. Wang, and J. Guo, “Variational Bayesian learning for Dirichlet process mixture of inverted Dirichlet distributions in non-Gaussian image feature modeling,” IEEE Trans. Neural Netw. Learn. Syst. 30(2), 449–463 (2019).
Z. Ma, A. E. Teschendorff, A. Leijon, Y. Qiao, H. Zhang, and J. Guo, “Variational Bayesian matrix factorization for bounded support data,” IEEE Trans. Pattern Anal. Mach. Intell. 37(4), 876–889 (2015).
M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter, “GANs trained by a two time-scale update rule converge to a local Nash equilibrium,” in Proc. Adv. Neural Inf. Process. Syst., 2017, pp. 6626–6637. http://arxiv.org/abs/1706.08500v1
C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Las Vegas, NV, USA, Jun. 2016, pp. 2818–2826. https://dx.doi.org/10.1109/CVPR.2016.308
N. Kodali, J. Abernethy, J. Hays, and Z. Kira, “On Convergence and Stability of GANs,” 2018, pp. 1–18, [Online]. Available: http://arxiv.org/abs/1705.07215
X. Mao, Q. Li, H. Xie, R. Lau, Z. Wang, and S. Smolley, “Least squares generative adversarial networks,” in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), Venice, Italy, Oct. 2017, pp. 2813–2821.
M. Mirza and S. Osindero, “Conditional generative adversarial nets,” 2014, pp. 1–7, [Online]. Available: https://arxiv.org/abs/1411.1784
M. Xu, H. Zhang, and J. Yang, “Prohibited item detection in airport X-ray security images via attention mechanism based CNN,” in Proc. Chinese Conf. Pattern Recognit. Comput. Vis. (PRCV), Guangzhou, China, 2018, pp. 429–439. https://dx.doi.org/10.1007/978-3-030-03335-4_37
G. Zentai, “X-ray imaging for homeland security,” in Proc. IEEE Int. Workshop Imaging Systems and Techniques (IST 2008), Chania, Greece, Sep. 2008, pp. 1–6. https://dx.doi.org/10.1109/IST.2008.4659929