[1] 徐宏民, “Machine (Deep) Learning with Few Labeled Samples”, DIGITIMES: https://www.digitimes.com.tw/col/article.asp?id=1106, 2019.
[2] KEEL-dataset repository, https://sci2s.ugr.es/keel/datasets.php, 2005.
[3] Z. Niu, F. Li, X. Zhang, Y. Fan, & X. Wei, “Improved Under-sampling Method and Its Application in the Classification of Imbalanced Data Sets”, Computer Engineering, vol. 45, no. 06, pp. 218-224, 2019.
[4] A. D. Pozzolo, G. Boracchi, O. Caelen, C. Alippi, & G. Bontempi, “Credit card fraud detection: A realistic modeling and a novel learning strategy”, IEEE Trans. Neural Netw. Learn. Syst., vol. 28, no. 08, pp. 1–14, 2018.
[5] P. K. Chan, & S. J. Stolfo, “Toward scalable learning with non-uniform class and cost distributions: a case study in credit card fraud detection”, KDD, pp. 164–168, 1998.
[6] Y. Sun, A. K. Wong, & M. S. Kamel, “Classification of imbalanced data: A review”, International Journal of Pattern Recognition and Artificial Intelligence, vol. 23, no. 04, pp. 687–719, 2009.
[7] M. Galar, A. Fernandez, E. Barrenechea, H. Bustince, & F. Herrera, “A review on ensembles for the class imbalance problem: bagging-, boosting-, and hybrid-based approaches”, IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 42, no. 4, pp. 463–484, 2011.
[8] X. Wen, J. Chen, W. Jing, & K. Xu, “Research on Optimization of Classification Model for Imbalanced Data Set”, Computer Engineering, vol. 44, no. 04, pp. 268-273, 293, 2018.
[9] U. Bhowan, M. Johnston, & M. Zhang, “Developing New Fitness Functions in Genetic Programming for Classification With Imbalanced Data”, IEEE Transactions on Systems, Man, and Cybernetics—Part B, vol. 42, no. 02, pp. 406-421, 2011.
[10] 徐宏民, “What to Do When Labeled Training Data Is Insufficient? On the New Trend of Self-Supervised Learning”, DIGITIMES: https://www.digitimes.com.tw/col/article.asp?id=1101, 2019.
[11] 侯品如, “Does AI Discrimination Come from Human Nature? Machine Learning from the Perspective of Gender and Racial Inequality”, bnext: https://www.bnext.com.tw/article/68942/ai-discrimination, 2022.
[12] 施威銘研究室, “How Severe Are the Consequences of Imbalanced Training Data?”, medium: https://flag-editors.medium.com/%E8%A8%93%E7%B7%B4%E8%B3%87%E6%96%99%E4%B8%8D%E5%B9%B3%E8%A1%A1-%E5%BE%8C%E6%9E%9C%E6%9C%89%E5%A4%9A%E5%9A%B4%E9%87%8D-bdfea227eeaa, 2021.
[13] A. Singh, & A. Purohit, “A Survey on Methods for Solving Data Imbalance Problem for Classification”, International Journal of Computer Applications (0975–8887), vol. 127, no. 15, pp. 37-41, 2015.
[14] J. L. Leevy, T. M. Khoshgoftaar, R. A. Bauder, & N. Seliya, “A survey on addressing high-class imbalance in big data”, J. Big Data, vol. 5, no. 42, pp. 1-30, 2018.
[15] G. E. Batista, R. C. Prati, & M. C. Monard, “A study of the behavior of several methods for balancing machine learning training data”, SIGKDD Explor. Newsl., vol. 6, no. 1, pp. 20–29, 2004.
[16] R. Luo, Q. Feng, C. Wang, et al., “Feature learning with a divergence-encouraging autoencoder for imbalanced data classification”, IEEE Access, vol. 6, no. 6, pp. 70197–70211, 2018.
[17] N. Chawla, K. Bowyer, L. Hall, & W. Kegelmeyer, “SMOTE: synthetic minority over-sampling technique”, J. Artif. Intell. Res., vol. 16, no. 1, pp. 321–357, 2002.
[18] D. L. Wilson, “Asymptotic Properties of Nearest Neighbor Rules Using Edited Data”, IEEE Trans. Syst. Man Cybern., vol. SMC-2, no. 3, pp. 408-421, Jul. 1972.
[19] J. Ge, Y. Qiu, C. M. Wu, & G. Pu, “Summary of genetic algorithms research”, Appl. Res. Comput., vol. 25, no. 10, pp. 2911–2916, 2008.
[20] R. Poli, J. Kennedy, & T. Blackwell, “Particle swarm optimization”, Swarm Intell., vol. 1, no. 1, pp. 33–57, 2007.
[21] S. Mirjalili, S. M. Mirjalili, & A. Lewis, “Grey wolf optimizer”, Adv. Eng. Softw., vol. 69, pp. 46–61, 2014.
[22] A. Ali, S. M. Shamsuddin, & A. L. Ralescu, “Classification with class imbalance problem: a review”, Int. J. Adv. Soft Comput. Appl., vol. 7, no. 3, pp. 176-204, 2015.
[23] M. Galar, A. Fernandez, E. Barrenechea, H. Bustince, & F. Herrera, “A review on ensembles for the class imbalance problem: Bagging-, boosting-, and hybrid-based approaches”, IEEE Trans. Syst. Man Cybern. Syst. Part C, vol. 42, no. 4, pp. 463–484, Jul. 2012.
[24] B. Wang, & J. Pineau, “Online bagging and boosting for imbalanced data streams”, IEEE Trans. Knowl. Data Eng., vol. 28, no. 12, pp. 3353–3366, 2016.
[25] S. Wang, L. Minku, & X. Yao, “Resampling-based ensemble methods for online class imbalance learning”, IEEE Trans. Knowl. Data Eng., vol. 27, no. 5, pp. 1356–1368, 2015.
[26] S. E. Gómez, L. Hernandez-Callejo, B. C. Martinez, & A. J. Sánchez-Esguevillas, “Exploratory study on class imbalance and solutions for network traffic classification”, Neurocomputing, vol. 343, pp. 100–119, 2019.
[27] K. Jiang, J. Lu, & K. Xia, “A Novel Algorithm for Imbalance Data Classification Based on Genetic Algorithm Improved SMOTE”, Arabian Journal for Science and Engineering, vol. 41, no. 8, pp. 3255–3266, 2016.
[28] J. Li, S. Fong, & Y. Zhuang, “Optimizing SMOTE by Metaheuristics with Neural Network and Decision Tree”, 2015 3rd International Symposium on Computational and Business Intelligence (ISCBI), pp. 26-32, 2015.
[29] J. Cervantes, F. García-Lamont, L. Rodríguez-Mazahua, A. López Chau, J. S. R. Castilla, & A. Trueba, “PSO-based method for SVM classification on skewed data sets”, Neurocomputing, vol. 228, pp. 187-197, 2017.
[30] S. Sawangarreerak, & P. Thanathamathee, “Random forest with sampling techniques for handling imbalanced prediction of university student depression”, Inf., vol. 11, no. 11, pp. 1-13, 2020.
[31] I. Tomek, “Two modifications of CNN”, IEEE Transactions on Systems, Man and Cybernetics, vol. 6, pp. 769-772, 1976.
[32] M. Zeng, B. Zou, F. Wei, X. Liu, & L. Wang, “Effective prediction of three common diseases by combining SMOTE with Tomek links technique for imbalanced medical data”, 2016 IEEE International Conference of Online Analysis and Computing Science (ICOACS), IEEE, pp. 225-228, 2016.
[33] H. He, Y. Bai, E. A. Garcia, & S. Li, “ADASYN: Adaptive synthetic sampling approach for imbalanced learning”, 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), IEEE, pp. 1322–1328, 2008.
[34] H. Han, W.-Y. Wang, & B.-H. Mao, “Borderline-SMOTE: a new over-sampling method in imbalanced data sets learning”, in International Conference on Intelligent Computing, Springer, pp. 878–887, 2005.
[35] J. Kennedy, “Particle swarm optimization”, Encyclopedia of Machine Learning, Springer US, pp. 760-766, 2010.
[36] X.-S. Yang, “A new metaheuristic bat-inspired algorithm”, Nature Inspired Cooperative Strategies for Optimization (NICSO 2010), Springer Berlin Heidelberg, pp. 65-74, 2010.
[37] M. Zeng, B. Zou, F. Wei, X. Liu, & L. Wang, “Effective prediction of three common diseases by combining SMOTE with Tomek links technique for imbalanced medical data”, 2016 IEEE International Conference of Online Analysis and Computing Science (ICOACS), IEEE, pp. 225-228, 2016.
[38] J. Li, S. Fong, R. K. Wong, & V. W. Chu, “Adaptive multi-objective swarm fusion for imbalanced data classification”, Information Fusion, vol. 39, pp. 1-24, 2018.
[39] C. Ling, V. Sheng, & Q. Yang, “Test strategies for cost-sensitive decision trees”, IEEE Trans. Knowl. Data Eng., vol. 18, no. 8, pp. 1055–1067, 2006.
[40] N. V. Chawla, A. Lazarevic, L. O. Hall, & K. W. Bowyer, “SMOTEBoost: improving prediction of the minority class in boosting”, European Conference on Principles of Data Mining and Knowledge Discovery, Springer, pp. 107-119, 2003.
[41] S. Wang, & X. Yao, “Diversity analysis on imbalanced data sets by using ensemble models”, 2009 IEEE Symposium on Computational Intelligence and Data Mining, IEEE, pp. 324–331, 2009.
[42] Z. Liu, W. Cao, Z. Gao, J. Bian, H. Chen, Y. Chang, & T. Y. Liu, “Self-paced ensemble for highly imbalanced massive data classification”, 2020 IEEE 36th International Conference on Data Engineering (ICDE), IEEE, pp. 841-852, 2020.
[43] 刘芷宁, “IMBENS: A Convenient Library of Ensemble Learning Models for Long-Tailed/Imbalanced Data”, zhihu: https://zhuanlan.zhihu.com/p/376572330, 2021.
[44] P. Domingos, “MetaCost: A general method for making classifiers cost-sensitive”, in Proc. 5th ACM SIGKDD Int. Conf. Knowl. Discovery Data Mining, pp. 155–164, 1999.
[45] W. Fan, S. J. Stolfo, J. Zhang, & P. K. Chan, “AdaCost: misclassification cost-sensitive boosting”, ICML, vol. 99, pp. 97-105, 1999.
[46] P. Viola, & M. Jones, “Fast and robust classification using asymmetric AdaBoost and a detector cascade”, Advances in Neural Information Processing Systems, vol. 14, pp. 1311-1318, 2001.
[47] J. Kennedy, & R. Eberhart, “Particle swarm optimization”, in Proc. Int. Conf. Neural Netw. (ICNN), vol. 4, pp. 1942–1948, 1995.
[48] F. Busetti, “Genetic algorithms overview”, 2007.
[49] D. Whitley, “A Genetic Algorithm Tutorial”, Statistics and Computing, vol. 4, no. 2, pp. 65-85, 1993.
[50] J. G. Moreno-Torres, J. A. Saez, & F. Herrera, “Study on the impact of partition-induced dataset shift on k-fold cross-validation”, IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 8, pp. 1304–1312, 2012.
[51] 葉松林, “An Improved Algorithm for Solving Fuzzy Relation Equations and Its Application to Nonlinear Optimization Problems”, National Taiwan Normal University, 2007.