[1] 陳美娟. (2009). A study of factors related to diabetes among older adults in Taiwan [in Chinese]. https://www.airitilibrary.com/Publication/alDetailedMesh1?DocID=U0118-1511201215455957#Summary
[2] 蘇昭安. (2003). Application of back-propagation neural networks to typhoon wave forecasting [in Chinese] (Doctoral dissertation, National Taiwan University, Department of Engineering Science and Ocean Engineering).
[3] 衛生福利部國民健康署 (Health Promotion Administration, Ministry of Health and Welfare, Taiwan). Diabetes data [in Chinese]. https://www.hpa.gov.tw/Pages/List.aspx?nodeid=359
[4] Acharya, U. R., Krishnan, S. M., Spaan, J. A., & Suri, J. S. (Eds.). (2007). Advances in cardiac signal processing (pp. 121-165). Berlin: Springer.
[5] American Diabetes Association Professional Practice Committee. (2022). 1. Improving care and promoting health in populations: Standards of Medical Care in Diabetes—2022. Diabetes Care, 45(Supplement_1), S8-S16. https://doi.org/10.2337/dc22-S001
[6] International Diabetes Federation. (2015). IDF Diabetes Atlas (7th ed.). Brussels, Belgium: International Diabetes Federation. https://suckhoenoitiet.vn/download/Atla-benh-dai-thao-duong-2-1511669800.pdf
[7] Cai, J., Luo, J., Wang, S., & Yang, S. (2018). Feature selection in machine learning: A new perspective. Neurocomputing, 300, 70-79. https://doi.org/10.1016/j.neucom.2017.11.077
[8] Camm, A. J., Malik, M., Bigger, J. T., Breithardt, G., Cerutti, S., Cohen, R. J., ... & Singer, D. H. (1996). Heart rate variability: Standards of measurement, physiological interpretation and clinical use. Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology. Circulation, 93(5), 1043-1065.
[9] Chandrashekar, G., & Sahin, F. (2014). A survey on feature selection methods. Computers & Electrical Engineering, 40(1), 16-28. https://doi.org/10.1016/j.compeleceng.2013.11.024
[10] Chang, V., Bailey, J., Xu, Q. A., & Sun, Z. (2022). Pima Indians diabetes mellitus classification based on machine learning (ML) algorithms. Neural Computing and Applications, 1-17. https://doi.org/10.1007/s00521-022-07049-z
[11] Chen, C., Wan, Y., Ma, A., Zhang, L., & Zhong, Y. (2022). A decomposition-based multiobjective clonal selection algorithm for hyperspectral image feature selection. IEEE Transactions on Geoscience and Remote Sensing, 60, 5541516. https://ieeexplore.ieee.org/document/9927462
[12] Chen, T., & Guestrin, C. (2016, August). XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 785-794). https://doi.org/10.1145/2939672.2939785
[13] Chen, T., He, T., Benesty, M., Khotilovich, V., Tang, Y., Cho, H., & Chen, K. (2015). xgboost: Extreme gradient boosting. R package version 0.4-2, 1(4), 1-4. https://cran.microsoft.com/snapshot/2017-12-11/web/packages/xgboost/vignettes/xgboost.pdf
[14] Dash, M., & Liu, H. (1997). Feature selection for classification. Intelligent Data Analysis, 1(1-4), 131-156.
[15] De'ath, G., & Fabricius, K. E. (2000). Classification and regression trees: A powerful yet simple technique for ecological data analysis. Ecology, 81(11), 3178-3192. https://www.researchgate.net/profile/Katharina-Fabricius/publication/312976423_Classification_and_regression_trees_A_powerful_yet_simple_technique_for_ecological_data_analysis/links/593c16fba6fdcc17a9e819fd/Classification-and-regression-trees-A-powerful-yet-simple-technique-for-ecological-data-analysis.pdf
[16] Deng, Z., Zhu, X., Cheng, D., Zong, M., & Zhang, S. (2016). Efficient kNN classification algorithm for big data. Neurocomputing, 195, 143-148. https://doi.org/10.1016/j.neucom.2015.08.112
[17] Engelse, W. A., & Zeelenberg, C. (1979). A single scan algorithm for QRS-detection and feature extraction. Computers in Cardiology, 6, 37-42.
[18] Fallon, B., Ma, J., Allan, K., Pillhofer, M., Trocmé, N., & Jud, A. (2013). Opportunities for prevention and intervention with young children: Lessons from the Canadian incidence study of reported child abuse and neglect. Child and Adolescent Psychiatry and Mental Health, 7(1), 1-13. https://capmh.biomedcentral.com/articles/10.1186/1753-2000-7-4
[19] Freund, Y., Schapire, R., & Abe, N. (1999). A short introduction to boosting. Journal of Japanese Society for Artificial Intelligence, 14(5), 771-780. http://www.yorku.ca/gisweb/eats4400/boost.pdf
[20] Friesen, G. M., Jannett, T. C., Jadallah, M. A., Yates, S. L., Quint, S. R., & Nagle, H. T. (1990). A comparison of the noise sensitivity of nine QRS detection algorithms. IEEE Transactions on Biomedical Engineering, 37(1), 85-98. https://ieeexplore.ieee.org/document/43620
[21] Gershman, A., Meisels, A., Lüke, K. H., Rokach, L., Schclar, A., & Sturm, A. (2010). A decision tree based recommender system. In 10th International Conference on Innovative Internet Community Systems (I2CS), Jubilee Edition 2010. https://dl.gi.de/bitstream/handle/20.500.12116/19012/170.pdf?sequence=1
[22] Guijt, A. M., Sluiter, J. K., & Frings-Dresen, M. H. (2007). Test-retest reliability of heart rate variability and respiration rate at rest and during light physical activity in normal subjects. Archives of Medical Research, 38(1), 113-120. https://doi.org/10.1016/j.arcmed.2006.07.009
[23] Hamby, R. I., Zoneraich, S., & Sherman, L. (1974). Diabetic cardiomyopathy. JAMA, 229(13), 1749-1754. https://doi.org/10.1001/jama.1974.03230510023016
[24] Hastie, T., Tibshirani, R., & Friedman, J. H. (2009). The elements of statistical learning: Data mining, inference, and prediction (Vol. 2, pp. 1-758). New York: Springer. https://doi.org/10.1007/978-0-387-21606-5
[25] Ho, T. K. (1995, August). Random decision forests. In Proceedings of 3rd International Conference on Document Analysis and Recognition (Vol. 1, pp. 278-282). IEEE. https://doi.org/10.1109/ICDAR.1995.598994
[26] Jiang, S., Pang, G., Wu, M., & Kuang, L. (2012). An improved K-nearest-neighbor algorithm for text categorization. Expert Systems with Applications, 39(1), 1503-1509. https://doi.org/10.1016/j.eswa.2011.08.040
[27] Jović, A., Brkić, K., & Bogunović, N. (2015, May). A review of feature selection methods with applications. In 2015 38th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO) (pp. 1200-1205). IEEE. https://doi.org/10.1109/MIPRO.2015.7160458
[28] Khandoker, A. H., Jelinek, H. F., & Palaniswami, M. (2008, August). Heart rate variability and complexity in people with diabetes associated cardiac autonomic neuropathy. In 2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (pp. 4696-4699). IEEE. https://doi.org/10.1109/IEMBS.2008.4650261
[29] Komeili, M., Louis, W., Armanfard, N., & Hatzinakos, D. (2017). Feature selection for nonstationary data: Application to human recognition using medical biometrics. IEEE Transactions on Cybernetics, 48(5), 1446-1459. https://doi.org/10.1109/TCYB.2017.2702059
[30] Kuo, P. H., Lu, S. S., Kuo, J. C., Yang, Y. J., Wang, T., Ho, Y. L., & Chen, M. F. (2012, May). A hydrogel-based implantable wireless CMOS glucose sensor SoC. In 2012 IEEE International Symposium on Circuits and Systems (ISCAS) (pp. 994-997). IEEE. https://doi.org/10.1109/ISCAS.2012.6272214
[31] Li, J., Cheng, K., Wang, S., Morstatter, F., Trevino, R. P., Tang, J., & Liu, H. (2017). Feature selection: A data perspective. ACM Computing Surveys (CSUR), 50(6), 1-45. https://doi.org/10.1145/3136625
[32] Liu, H., Dougherty, E. R., Dy, J. G., Torkkola, K., Tuv, E., Peng, H., ... & Forman, G. (2005). Evolving feature selection. IEEE Intelligent Systems, 20(6), 64-76. https://doi.org/10.1109/MIS.2005.105
[33] López, V., Fernández, A., García, S., Palade, V., & Herrera, F. (2013). An insight into classification with imbalanced data: Empirical results and current trends on using data intrinsic characteristics. Information Sciences, 250, 113-141. https://doi.org/10.1016/j.ins.2013.07.007
[34] Makowski, D., Pham, T., Lau, Z. J., Brammer, J. C., Lespinasse, F., Pham, H., ... & Chen, S. H. (2021). NeuroKit2: A Python toolbox for neurophysiological signal processing. Behavior Research Methods, 53(4), 1689-1696. https://doi.org/10.3758/s13428-020-01516-y
[35] Markuszewski, L., & Bissinger, A. (2005). Application of heart rate variability in prognosis of patients with diabetes mellitus. Polski Merkuriusz Lekarski, 19(112), 548-552. https://europepmc.org/article/med/16379323
[36] Moisen, G. G. (2008). Classification and regression trees. In S. E. Jørgensen & B. D. Fath (Eds.), Encyclopedia of Ecology (Vol. 1, pp. 582-588). Oxford, UK: Elsevier. https://www.fs.usda.gov/research/treesearch/30645
[37] Niskanen, J. P., Tarvainen, M. P., Ranta-Aho, P. O., & Karjalainen, P. A. (2004). Software for advanced HRV analysis. Computer Methods and Programs in Biomedicine, 76(1), 73-81. https://doi.org/10.1016/j.cmpb.2004.03.004
[38] Nobre, J., & Neves, R. F. (2019). Combining principal component analysis, discrete wavelet transform and XGBoost to trade in the financial markets. Expert Systems with Applications, 125, 181-194. https://doi.org/10.1016/j.eswa.2019.01.083
[39] Nunan, D., Sandercock, G. R., & Brodie, D. A. (2010). A quantitative systematic review of normal values for short-term heart rate variability in healthy adults. Pacing and Clinical Electrophysiology, 33(11), 1407-1417. https://doi.org/10.1111/j.1540-8159.2010.02841.x
[40] Ogurtsova, K., Guariguata, L., Barengo, N. C., Ruiz, P. L. D., Sacre, J. W., Karuranga, S., ... & Magliano, D. J. (2022). IDF Diabetes Atlas: Global estimates of undiagnosed diabetes in adults for 2021. Diabetes Research and Clinical Practice, 183, 109118. https://doi.org/10.1016/j.diabres.2021.109118
[41] Pahwa, R., Goyal, A., & Jialal, I. (2021). Chronic inflammation. In StatPearls [Internet]. https://www.ncbi.nlm.nih.gov/books/NBK493173/
[42] Pandya, D. H., Upadhyay, S. H., & Harsha, S. P. (2013). Fault diagnosis of rolling element bearing with intrinsic mode function of acoustic emission data using APF-KNN. Expert Systems with Applications, 40(10), 4137-4145. https://doi.org/10.1016/j.eswa.2013.01.033
[43] Pappachan, J. M., Sebastian, J., Bino, B. C., Jayaprakash, K., Vijayakumar, K., Sujathan, P., & Adinegara, L. A. (2008). Cardiac autonomic neuropathy in diabetes mellitus: Prevalence, risk factors and utility of corrected QT interval in the ECG for its diagnosis. Postgraduate Medical Journal, 84(990), 205-210. https://doi.org/10.1136/pgmj.2007.064048
[44] Pham, T., Lau, Z. J., Chen, S. A., & Makowski, D. (2021). Heart rate variability in psychology: A review of HRV indices and an analysis tutorial. Sensors, 21(12), 3998. https://doi.org/10.3390/s21123998
[45] Poungponsri, S., & Yu, X. H. (2013). An adaptive filtering approach for electrocardiogram (ECG) signal noise reduction using neural networks. Neurocomputing, 117, 206-213. https://doi.org/10.1016/j.neucom.2013.02.010
[46] Pudil, P., Novovičová, J., & Kittler, J. (1994). Floating search methods in feature selection. Pattern Recognition Letters, 15(11), 1119-1125. https://doi.org/10.1016/0167-8655(94)90127-9
[47] Remeseiro, B., & Bolon-Canedo, V. (2019). A review of feature selection methods in medical applications. Computers in Biology and Medicine, 112, 103375. https://doi.org/10.1016/j.compbiomed.2019.103375
[48] Reunanen, J. (2003). Overfitting in making comparisons between variable selection methods. Journal of Machine Learning Research, 3, 1371-1382. https://www.jmlr.org/papers/volume3/reunanen03a/reunanen03a.pdf
[49] Saeys, Y., Inza, I., & Larrañaga, P. (2007). A review of feature selection techniques in bioinformatics. Bioinformatics, 23(19), 2507-2517. https://doi.org/10.1093/bioinformatics/btm344
[50] Saini, I., Singh, D., & Khosla, A. (2013). QRS detection using K-nearest neighbor algorithm (KNN) and evaluation on standard ECG databases. Journal of Advanced Research, 4(4), 331-344. https://doi.org/10.1016/j.jare.2012.05.007
[51] Sculley, D. (2010, April). Web-scale k-means clustering. In Proceedings of the 19th International Conference on World Wide Web (pp. 1177-1178). https://doi.org/10.1145/1772690.1772862
[52] Shaffer, F., & Ginsberg, J. P. (2017). An overview of heart rate variability metrics and norms. Frontiers in Public Health, 5, 258. https://doi.org/10.3389/fpubh.2017.00258
[53] Song, Y. Y., & Lu, Y. (2015). Decision tree methods: Applications for classification and prediction. Shanghai Archives of Psychiatry, 27(2), 130. https://doi.org/10.11919/j.issn.1002-0829.215044
[54] Sredniawa, B., Musialik-Lydka, A., Herdyńska-Was, M., & Pasyk, S. (1999). The assessment and clinical significance of heart rate variability. Polski Merkuriusz Lekarski, 7(42), 283-288. https://europepmc.org/article/med/10710956
[55] Duda, R. O., Hart, P. E., & Stork, D. G. (2001). Pattern classification (2nd ed.). A Wiley-Interscience Publication.
[56] Stys, A., & Stys, T. (1998). Current clinical applications of heart rate variability. Clinical Cardiology, 21(10), 719-724. https://doi.org/10.1002/clc.4960211005
[57] Swapna, G., Soman, K. P., & Vinayakumar, R. (2020). Diabetes detection using ECG signals: An overview. In Deep Learning Techniques for Biomedical and Health Informatics (pp. 299-327). https://doi.org/10.1007/978-3-030-33966-1_14
[58] Sztajzel, J. (2004). Heart rate variability: A noninvasive electrocardiographic method to measure the autonomic nervous system. Swiss Medical Weekly, 134(35-36), 514-522. https://boccignone.di.unimi.it/CompAff2016_files/Heart-rate-variability_a-noninvasive-electrocardiographic-_2004.pdf
[59] Tayefi, M., Esmaeili, H., Karimian, M. S., Zadeh, A. A., Ebrahimi, M., Safarian, M., ... & Ghayour-Mobarhan, M. (2017). The application of a decision tree to establish the parameters associated with hypertension. Computer Methods and Programs in Biomedicine, 139, 83-91. https://doi.org/10.1016/j.cmpb.2016.10.020
[60] Timofeev, R. (2004). Classification and regression trees (CART) theory and applications. Humboldt University, Berlin, 54.
[61] Torlay, L., Perrone-Bertolotti, M., Thomas, E., & Baciu, M. (2017). Machine learning–XGBoost analysis of language networks to classify patients with epilepsy. Brain Informatics, 4(3), 159-169. https://doi.org/10.1007/s40708-017-0065-7
[62] Xue, B., Zhang, M., Browne, W. N., & Yao, X. (2015). A survey on evolutionary computation approaches to feature selection. IEEE Transactions on Evolutionary Computation, 20(4), 606-626. https://doi.org/10.1109/TEVC.2015.2504420
[63] Yazdani, A., Ebrahimi, T., & Hoffmann, U. (2009, April). Classification of EEG signals using Dempster-Shafer theory and a k-nearest neighbor classifier. In 2009 4th International IEEE/EMBS Conference on Neural Engineering (pp. 327-330). IEEE. https://doi.org/10.1109/NER.2009.5109299
[64] Zanelli, S., Ammi, M., Hallab, M., & El Yacoubi, M. A. (2022). Diabetes detection and management through photoplethysmographic and electrocardiographic signals analysis: A systematic review. Sensors, 22(13), 4890. https://doi.org/10.3390/s22134890
[65] Zhang, H., & Sun, G. (2002). Optimal reference subset selection for nearest neighbor classification by tabu search. Pattern Recognition, 35(7), 1481-1490. https://doi.org/10.1016/S0031-3203(01)00137-6
[66] Zhang, M. L., & Zhou, Z. H. (2007). ML-KNN: A lazy learning approach to multi-label learning. Pattern Recognition, 40(7), 2038-2048. https://doi.org/10.1016/j.patcog.2006.12.019