|
[1]P. Correa, "Helicobacter pylori and gastric carcinogenesis," The American journal of surgical pathology, vol. 19, pp. S37-43, 1995. [2]S. Suerbaum and P. Michetti, "Helicobacter pylori infection," New England Journal of Medicine, vol. 347, pp. 1175-1186, 2002. [3]M. J. Blaser and J. C. Atherton, "Helicobacter pylori persistence: biology and disease," The Journal of clinical investigation, vol. 113, pp. 321-333, 2004. [4]J. G. Kusters, A. H. Van Vliet, and E. J. Kuipers, "Pathogenesis of Helicobacter pylori infection," Clinical microbiology reviews, vol. 19, pp. 449-490, 2006. [5]F. E. Rihane, D. Erguibi, O. Elyamine, B. Abumsimir, M. M. Ennaji, and F. Chehab, "Helicobacter pylori co-infection with Epstein-Barr virus and the risk of developing gastric adenocarcinoma at an early age: Observational study infectious agents and cancer," Annals of Medicine and Surgery, vol. 68, p. 102651, 2021. [6]S. Shichijo, S. Nomura, K. Aoyama, Y. Nishikawa, M. Miura, T. Shinagawa, et al., "Application of convolutional neural networks in the diagnosis of Helicobacter pylori infection based on endoscopic images," EBioMedicine, vol. 25, pp. 106-111, 2017. [7]S. Zhou, H. Marklund, O. Blaha, M. Desai, B. Martin, D. Bingham, et al., "Deep learning assistance for the histopathologic diagnosis of Helicobacter pylori," Intelligence-Based Medicine, vol. 1, p. 100004, 2020. [8]J. Parsonnet, G. D. Friedman, D. P. Vandersteen, Y. Chang, J. H. Vogelman, N. Orentreich, et al., "Helicobacter pylori infection and the risk of gastric carcinoma," New England Journal of Medicine, vol. 325, pp. 1127-1131, 1991. [9]K. Togashi, "Applications of artificial intelligence to endoscopy practice: The view from Japan Digestive Disease Week 2018," ed: Wiley Online Library, 2019. [10]G. Urban, P. Tripathi, T. Alkayali, M. Mittal, F. Jalali, W. Karnes, et al., "Deep learning localizes and identifies polyps in real time with 96% accuracy in screening colonoscopy," Gastroenterology, vol. 155, pp. 1069-1078. e8, 2018. [11]X. Jia and M. Q.-H. Meng, "A deep convolutional neural network for bleeding detection in wireless capsule endoscopy images," in 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2016, pp. 639-642. [12]Y. Zhu, Q.-C. Wang, M.-D. Xu, Z. Zhang, J. Cheng, Y.-S. Zhong, et al., "Application of convolutional neural network in the diagnosis of the invasion depth of gastric cancer based on conventional endoscopy," Gastrointestinal endoscopy, vol. 89, pp. 806-815. e1, 2019. [13]G. Liu, J. Hua, Z. Wu, T. Meng, M. Sun, P. Huang, et al., "Automatic classification of esophageal lesions in endoscopic images using a convolutional neural network," Annals of translational medicine, vol. 8, 2020. [14]Z. Sobri and H. A. M. Sakim, "Texture color fusion based features extraction for endoscopic gastritis images classification," International Journal of Computer and Electrical Engineering, vol. 4, pp. 674-678, 2012. [15]S. Jain, A. Seal, A. Ojha, A. Yazidi, J. Bures, I. Tacheci, et al., "A deep CNN model for anomaly detection and localization in wireless capsule endoscopy images," Computers in Biology and Medicine, p. 104789, 2021. [16]C. Zhang, J. Wu, Z. Chen, W. Liu, M. Li, and S. Jiang, "Dense-CNN: Dense convolutional neural network for stereo matching using multiscale feature connection," Signal Processing: Image Communication, vol. 95, p. 116285, 2021. [17]P. M. Narendra and R. C. Fitch, "Real-time adaptive contrast enhancement," IEEE transactions on pattern analysis and machine intelligence, pp. 655-661, 1981. [18]S. M. Pizer, E. P. Amburn, J. D. Austin, R. Cromartie, A. Geselowitz, T. Greer, et al., "Adaptive histogram equalization and its variations," Computer vision, graphics, and image processing, vol. 39, pp. 355-368, 1987. [19]I. Goodfellow, Y. Bengio, and A. Courville, Deep learning: MIT press, 2016. [20]J. Wu, "Introduction to convolutional neural networks," National Key Lab for Novel Software Technology. Nanjing University. China, vol. 5, p. 495, 2017. [21]S. Albawi, T. A. Mohammed, and S. Al-Zawi, "Understanding of a convolutional neural network," in 2017 International Conference on Engineering and Technology (ICET), 2017, pp. 1-6. [22]S. Woo, J. Park, J.-Y. Lee, and I. S. Kweon, "Cbam: Convolutional block attention module," in Proceedings of the European conference on computer vision (ECCV), 2018, pp. 3-19. [23]A. G. Roy, N. Navab, and C. Wachinger, "Concurrent spatial and channel ‘squeeze & excitation’in fully convolutional networks," in International conference on medical image computing and computer-assisted intervention, 2018, pp. 421-429. [24]R. F. Murray, "Classification images: A review," Journal of vision, vol. 11, pp. 2-2, 2011. [25]E. Fix, Discriminatory analysis: nonparametric discrimination, consistency properties vol. 1: USAF school of Aviation Medicine, 1985. [26]G. Guo, H. Wang, D. Bell, Y. Bi, and K. Greer, "KNN model-based approach in classification," in OTM Confederated International Conferences" On the Move to Meaningful Internet Systems", 2003, pp. 986-996. [27]C. Cortes and V. Vapnik, "Support-vector networks," Machine learning, vol. 20, pp. 273-297, 1995. [28]W. S. Noble, "What is a support vector machine?," Nature biotechnology, vol. 24, pp. 1565-1567, 2006. [29]Y. Freund and R. E. Schapire, "A decision-theoretic generalization of on-line learning and an application to boosting," Journal of computer and system sciences, vol. 55, pp. 119-139, 1997. [30]R. E. Schapire, "Explaining adaboost," in Empirical inference, ed: Springer, 2013, pp. 37-52. [31]T. K. Ho, "Random decision forests," in Proceedings of 3rd international conference on document analysis and recognition, 1995, pp. 278-282. [32]L. Breiman, "Random forests," Machine learning, vol. 45, pp. 5-32, 2001. [33]J. H. Friedman, "Greedy function approximation: a gradient boosting machine," Annals of statistics, pp. 1189-1232, 2001. [34]Y. Xi, X. Zhuang, X. Wang, R. Nie, and G. Zhao, "A research and application based on gradient boosting decision tree," in International Conference on Web Information Systems and Applications, 2018, pp. 15-26. [35]T. Chen and C. Guestrin, "Xgboost: A scalable tree boosting system," in Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining, 2016, pp. 785-794. [36]G. Ke, Q. Meng, T. Finley, T. Wang, W. Chen, W. Ma, et al., "Lightgbm: A highly efficient gradient boosting decision tree," Advances in neural information processing systems, vol. 30, pp. 3146-3154, 2017. [37]J. Cai, X. Li, Z. Tan, and S. Peng, "An assembly-level neutronic calculation method based on LightGBM algorithm," Annals of Nuclear Energy, vol. 150, p. 107871, 2021. [38]L. Prokhorenkova, G. Gusev, A. Vorobev, A. V. Dorogush, and A. Gulin, "CatBoost: unbiased boosting with categorical features," arXiv preprint arXiv:1706.09516, 2017. [39]B. Dhananjay and J. Sivaraman, "Analysis and classification of heart rate using CatBoost feature ranking model," Biomedical Signal Processing and Control, vol. 68, p. 102610, 2021. [40]N. Chen and D. Blostein, "A survey of document image classification: problem statement, classifier architecture and performance evaluation," International Journal of Document Analysis and Recognition (IJDAR), vol. 10, pp. 1-16, 2007. [41]J. Lever, M. Krzywinski, and N. Altman, "Classification evaluation," ed: Nature Publishing Group, 2016. [42]J. M. Lobo, A. Jiménez‐Valverde, and R. Real, "AUC: a misleading measure of the performance of predictive distribution models," Global ecology and Biogeography, vol. 17, pp. 145-151, 2008.
|