[1]M. Cornacchia, K. Ozcan, Y. Zheng and S. Velipasalar, “A survey on activity detection and classification using wearable sensors,” IEEE Sensors J., vol. 17, pp. 386-403, 2017.
[2]M. Seiffert, F. Holstein, R. Schlosser and J. Schiller, “Next generation cooperative wearables: Generalized activity assessment computed fully distributed within a wireless body area network,” IEEE Access, vol. 5, pp. 16793-16807, 2017.
[3]Y. Chen and C. Shen, “Performance analysis of smartphone-sensor behavior for human activity recognition,” IEEE Access, vol. 5, pp. 3095-3110, 2017.
[4]M. Janidarmian, A. Roshan-Fekr, K. Radecka and Z. Zilic, “A comprehensive analysis on wearable acceleration sensors in human activity recognition,” Sensors, vol. 17, no. 3, pp. 529, 2017.
[5]S. Khalifa, G. Lan, M. Hassan, A. Seneviratne and S. K. Das, “Human activity recognition from kinetic energy harvesting data in wearable devices,” IEEE Transactions on Mobile Computing, vol. 17, no. 6, pp. 1353-1368, 2018.
[6]J. Margarito, R. Helaoui, A. M. Bianchi, F. Sartor and A. G. Bonomi, “User-independent recognition of sports activities from a single wrist-worn accelerometer: A template-matching-based approach,” IEEE Trans. Biomed. Eng., vol. 63, no. 4, pp. 788-796, 2016.
[7]Y.-L. Hsu, J.-S. Wang and C.-W. Chang, “A wearable inertial pedestrian navigation system with quaternion-based extended Kalman filter for pedestrian localization,” IEEE Sensors J., vol. 17, no. 10, pp. 3193-3206, 2017.
[8]M. Ueda, H. Negoro, Y. Kurihara and K. Watanabe, “Measurement of angular motion in golf swing by a local sensor at the grip end of a golf club,” IEEE Trans. Human-Mach. Syst., vol. 43, no. 4, pp. 398-404, 2013.
[9]L. Cantelli, G. Muscato, M. Nunnari and D. Spina, “A joint-angle estimation method for industrial manipulators using inertial sensors,” IEEE/ASME Trans. Mechatron., vol. 20, no. 5, pp. 2486-2495, 2015.
[10]J.-S. Wang, Y.-L. Hsu and J.-N. Liu, “An inertial-measurement-unit-based pen with a trajectory reconstruction algorithm and its applications,” IEEE Trans. Ind. Electron., vol. 57, no. 10, pp. 3508-3521, 2010.
[11]S. J. Preece, J. Y. Goulermas, L. P. J. Kenney, D. Howard, K. Meijer and R. Crompton, “Activity identification using body-mounted sensors—a review of classification techniques,” Physiol. Meas., vol. 30, pp. R1-R33, 2009.
[12]A. Wang, G. Chen, J. Yang, S. Zhao and C.-Y. Chang, “A comparative study on human activity recognition using inertial sensors in a smartphone,” IEEE Sensors J., vol. 16, no. 11, pp. 4566-4578, 2016.
[13]S. Zhang, R. Feng, Y. Wu and N. Yu, “Adaptive compressed sensing for acceleration data transmission in human motion capture,” 2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), Shanghai, pp. 1-6, 2017.
[14]E. Kańtoch, “Human activity recognition for physical rehabilitation using wearable sensors fusion and artificial neural networks,” 2017 Computing in Cardiology (CinC), Rennes, pp. 1-4, 2017.
[15]F. Yang and L. Zhang, “Real-time human activity classification by accelerometer embedded wearable devices,” 2017 4th International Conference on Systems and Informatics (ICSAI), Hangzhou, pp. 469-473, 2017.
[16]M. Ermes, J. Pärkkä, J. Mäntyjärvi and I. Korhonen, “Detection of daily activities and sports with wearable sensors in controlled and uncontrolled conditions,” IEEE Transactions on Information Technology in Biomedicine, vol. 12, no. 1, pp. 20-26, 2008.
[17]X. Liu, L. Liu, S. J. Simske and J. Liu, “Human daily activity recognition for healthcare using wearable and visual sensing data,” 2016 IEEE International Conference on Healthcare Informatics (ICHI), Chicago, IL, pp. 24-31, 2016.
[18]陳俊瑋, “A multi-person posture recognition system based on wireless sensor networks,” M.S. thesis (in Chinese), Dept. of Electrical and Control Engineering, National Chiao Tung University, 2007.
[19]林恩德, “Long-term gym workout analysis based on a triaxial accelerometer and gyroscope,” M.S. thesis (in Chinese), Institute of Computer Science and Engineering, National Chiao Tung University, 2018.
[20]楊于進, “Multi-sensor systems and real-time activity recognition,” M.S. thesis (in Chinese), Institute of Computer Science and Engineering, National Chiao Tung University, 2017.
[21]J. Mantyjarvi, J. Himberg and T. Seppanen, “Recognizing human motion with multiple acceleration sensors,” IEEE International Conference on Systems, Man and Cybernetics. e-Systems and e-Man for Cybernetics in Cyberspace (Cat. No. 01CH37236), Tucson, AZ, USA, vol. 2, pp. 747-752, 2001.
[22]J.-Y. Yang, J.-S. Wang and Y.-P. Chen, “Using acceleration measurements for activity recognition: An effective learning algorithm for constructing neural classifiers,” Pattern Recognition Letters, vol. 29, no. 16, pp. 2213-2220, 2008.
[23]K. Altun, B. Barshan and O. Tuncel, “Comparative study on classifying human activities with miniature inertial and magnetic sensors,” Pattern Recognition, vol. 43, no. 10, pp. 3605-3620, 2010.
[24]I. C. Gyllensten and A. G. Bonomi, “Identifying types of physical activity with a single accelerometer: Evaluating laboratory-trained algorithms in daily life,” IEEE Transactions on Biomedical Engineering, vol. 58, no. 9, pp. 2656-2663, 2011.
[25]S. Chernbumroong, S. Cang, A. Atkins and H. Yu, “Elderly activities recognition and classification for applications in assisted living,” Expert Systems with Applications, vol. 40, no. 5, pp. 1662-1674, 2013.
[26]A. M. Torres, K. Leuenberger, R. Gonzenbach, A. Luft and R. Gassert, “Activity classification based on inertial and barometric pressure sensors at different anatomical locations,” Physiological Measurement, vol. 35, no. 7, pp. 1245-1263, 2014.
[27]F. Attal, S. Mohammed, M. Dedabrishvili, F. Chamroukhi, L. Oukhellou and Y. Amirat, “Physical human activity recognition using wearable sensors,” Sensors, vol. 15, no. 12, pp. 31314-31338, 2015.
[28]M. Arif and A. Kattan, “Physical activities monitoring using wearable acceleration sensors attached to the body,” PLoS One, vol. 10, no. 7, pp. 1-16, 2015.
[29]W. Xiao and Y. Lu, “Daily human physical activity recognition based on kernel discriminant analysis and extreme learning machine,” Mathematical Problems in Engineering, vol. 2015, no. 1, pp. 1-8, 2015.
[30]Y. Hsu, S. Yang, H. Chang and H. Lai, “Human daily and sport activity recognition using a wearable inertial sensor network,” IEEE Access, vol. 6, pp. 31715-31728, 2018.
[31]M. Elhoushi, J. Georgy, A. Noureldin and M. J. Korenberg, “A survey on approaches of motion mode recognition using sensors,” IEEE Trans. Intell. Transp. Syst., vol. 18, no. 7, pp. 1662-1686, 2016.
[32]B.-C. Kuo and D. A. Landgrebe, “Nonparametric weighted feature extraction for classification,” IEEE Trans. Geosci. Remote Sens., vol. 42, no. 5, pp. 1096-1105, 2004.
[33]B. C. Kuo, C. H. Li and J. M. Yang, “Kernel nonparametric weighted feature extraction for hyperspectral image classification,” IEEE Trans. Geosci. Remote Sens., vol. 47, no. 4, pp. 1139-1155, 2009.
[34]J.-L. Reyes-Ortiz, L. Oneto, A. Ghio, A. Sama, D. Anguita and X. Parra, “Human activity recognition on smartphones with awareness of basic activities and postural transitions,” In Artificial Neural Networks and Machine Learning–ICANN 2014, pp. 177-184, 2014.
[35]N. Davies, D. P. Siewiorek and R. Sukthankar, “Activity-based computing,” IEEE Pervasive Computing, vol. 7, no. 2, pp. 20-21, 2008.
[36]R. Poppe, “Vision-based human motion analysis: An overview,” Computer Vision and Image Understanding, vol. 108, no. 1-2, pp. 4-18, 2007.
[37]N. Ravi, N. Dandekar, P. Mysore and M. L. Littman, “Activity recognition from accelerometer data,” in Proc. AAAI, vol. 5, pp. 1541-1546, 2005.
[38]A. S. A. Sukor, A. Zakaria and N. A. Rahim, “Activity recognition using accelerometer sensor and machine learning classifiers,” 2018 IEEE 14th International Colloquium on Signal Processing & Its Applications (CSPA), Batu Feringghi, pp. 233-238, 2018.
[39]F. Gu, A. Kealy, K. Khoshelham and J. Shang, “User-independent motion state recognition using smartphone sensors,” Sensors, vol. 15, no. 12, pp. 30636-30652, 2015.
[40]A. Wang, G. Chen, J. Yang, S. Zhao and C. Chang, “A comparative study on human activity recognition using inertial sensors in a smartphone,” IEEE Sensors Journal, vol. 16, no. 11, pp. 4566-4578, 2016.
[41]G. Baudat and F. Anouar, “Generalized discriminant analysis using a kernel approach,” Neural Computation, vol. 12, no. 10, pp. 2385-2404, 2000.
[42]C. Zou and D. Hou, “LDA Analyzer: A tool for exploring topic models,” 2014 IEEE International Conference on Software Maintenance and Evolution, Victoria, BC, pp. 593-596, 2014.
[43]J. Kong, J. Cao, Y. Liu, Y. Guo and W. Shao, “Smarter wheelchairs who can talk to each other: An integrated and collaborative approach,” 2012 IEEE 14th International Conference on e-Health Networking, Applications and Services (Healthcom), pp. 522-525, 2012.
[44]陳涵宇, “A real-time voice-assistance system for exercise posture based on three viewing angles,” M.S. thesis (in Chinese), Dept. of Engineering Science, National Cheng Kung University, 2018.
[45]劉建賢, “A fall detection system using an accelerometer and a gyroscope,” M.S. thesis (in Chinese), Institute of Computer Science and Engineering, Tatung University, 2011.
[46]J.-S. Jang, “Data clustering and pattern recognition,” (in Chinese) available at the links for on-line courses at the author’s homepage at http://www.cs.nthu.edu.tw/~jang.
[47]J. Yang and J. Y. Yang, “From image vector to matrix: A straightforward image projection technique—IMPCA vs. PCA,” Pattern Recognition, vol. 35, no. 9, pp. 1997–1999, 2002.
[48]J. Yang, D. Zhang, A. F. Frangi and J. Y. Yang, “Two-dimensional PCA: a new approach to appearance-based face representation and recognition,” IEEE Trans. On Pattern Analysis and Machine Intelligence, vol. 26, no. 1, pp. 131-137, 2004.
[49]J. Yang and C. Liu, “Horizontal and vertical 2DPCA-based discriminant analysis for face verification on a large-scale database,” IEEE Trans. on Information Forensics and Security, vol. 2, no. 4, pp. 781-792, 2007.
[50]R. A. Fisher, “The use of multiple measurements in taxonomic problems,” Ann. Eugenics, vol. 7, pp. 179-188, 1936.
[51]M. Li and B. Yuan, “2D-LDA: A novel statistical linear discriminant analysis for image matrix,” Proc. of the IEEE Int. Conf. on Signal Processing, vol. 2, no. 31, pp. 1419-1422, 2002.
[52]J. Yang, A. F. Frangi and D. Zhang, “Uncorrelated projection discriminant analysis and its application to face image feature extraction,” Int. J. of Pattern Recognition and Artificial Intelligence, vol. 17, no. 8, pp. 1325–1347, 2003.
[53]J. Yang and D. Zhang, “Two-dimensional discriminant transform for face recognition,” Pattern Recognition, vol. 38, no. 7, pp. 1125-1129, 2005.
[54]J. Yang and C. Liu, “On image matrix based feature extraction algorithms,” IEEE Trans. on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 36, no. 1, pp. 194-197, 2006.
[55]全啟安, “A study on iris recognition combining 2D-PCA with 2D-LDA,” M.S. thesis (in Chinese), Institute of Communication Engineering, National Chi Nan University, 2008.
[56]雷祖強, 周天穎, 萬絢, 楊龍士 and 許晉嘉, “A study on the support vector machine as a spatial feature classifier,” (in Chinese) Journal of Photogrammetry and Remote Sensing, vol. 12, no. 2, pp. 145-163, 2007.
[57]莊建福, “Design and implementation of an online interactive expert system for arrhythmia analysis based on support vector machines,” M.S. thesis (in Chinese), Dept. of Computer Science and Engineering, National Chung Hsing University, 2016.
[58]C. J. C. Burges, “A tutorial on support vector machines for pattern recognition,” Data Mining and Knowledge Discovery, vol. 2, no. 2, pp. 121-167, 1998.
[59]S. Kikuchi, S. Matsuyama, A. Sano and H. Tsuji, “Subspace-based indoor mobile positioning using support vector regressor,” Proceedings of 2006 IEEE 64th Vehicular Technology Conference, pp. 1-5, 2006.
[60]V. Cherkassky and F. Mulier, “Learning from Data: Concepts, Theory and Methods,” 2nd ed., Wiley-IEEE Press, 2007.
[61]H. Liu , H. Darabi , P. Banerjee and J. Liu, “Survey of wireless indoor positioning techniques and systems,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 37, no. 6, pp. 1067-1080, 2007.
[62]C.-C. Chang and C.-J. Lin, “LIBSVM: A Library for Support Vector Machines,” 2001 [online] Available: http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[63]余勝威, “Case Analysis and Applications of MATLAB Optimization Algorithms (Advanced Volume),” (in Chinese) Tsinghua University Press, 2015.
[64]許翔智, “A study on improving local positioning accuracy with a back-propagation neural network in a ZigBee indoor positioning system,” M.S. thesis (in Chinese), Dept. of Electrical Engineering, National Formosa University, 2016.
[65]王進德, “Introduction and Applications of Neural Networks and Fuzzy Control Theory,” (in Chinese) Chuan Hwa Book Co., 2013.
[66]鄭傑, “A study on indoor positioning methods using neural networks,” M.S. thesis (in Chinese), Dept. of Electrical Engineering, I-Shou University, 2011.
[67]葉怡成, “Applications and Implementation of Neural Network Models,” (in Chinese) 儒林圖書, 2009.
[68]C.-W. Hsu, C.-C. Chang and C.-J. Lin, “A practical guide to support vector classification,” [online] Available: http://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf.
[69]H.-Q. Zhang, X.-W. Shi, L.-G. Cao and G.-H. Deng, “A new indoor location technology using back propagation neural network and improved centroid algorithm,” 2012 31st Chinese Control Conference (CCC), pp. 5460-5463, 2012.
[70]林孝融, “Indoor positioning using a Kalman filter and support vector machines based on Bluetooth Low Energy devices,” M.S. thesis (in Chinese), Dept. of Engineering Science, National Cheng Kung University, 2015.
[71]J. Han and M. Kamber, “Data Mining: Concepts and Techniques,” 2nd ed., San Francisco, CA: Morgan Kaufmann, 2006.
[72]P.-N. Tan, M. Steinbach and V. Kumar, “Introduction to Data Mining,” Addison-Wesley, 2005.
[73]王柏忠, “A fast pattern classification method based on nearest-neighbor search,” M.S. thesis (in Chinese), Dept. of Computer and Communication Engineering, National Kaohsiung First University of Science and Technology, 2010.
[74]P. Harrington, “Machine Learning in Action,” Manning Publications, ISBN 9781617290183, 2012.
[75]孫昱程, “A CAN FD in-vehicle network system with logistic regression and scheduling methods,” M.S. thesis (in Chinese), Dept. of Computer Science and Information Engineering, National Formosa University, 2018.
[76]余豪, “Applying decision tree and neural network techniques to fault diagnosis of vehicle starting systems,” M.S. thesis (in Chinese), Graduate Institute of Vehicle Technology, National Changhua University of Education, 2011.
[77]顏利憲, “Analyzing the causes of and solutions to railway delays with decision trees,” M.S. thesis (in Chinese), Dept. of Transportation and Communication Management Science, National Cheng Kung University, 2013.
[78]葉建良, “Building a consumer credit loan default risk assessment model with CART classification and regression trees: A case study of domestic Bank A,” M.S. thesis (in Chinese), Graduate Institute of Applied Statistics, Fu Jen Catholic University, 2006.
[79]張添瑋, “Improving a teaching evaluation system with expert decision trees,” M.S. thesis (in Chinese), Institute of Computer Science and Information Engineering, National Chiayi University, 2009.
[80]盧瑜芳, “Comparing the performance of three data mining algorithms (neural networks, logistic regression, and decision trees) in predicting the survival of breast cancer patients,” M.S. thesis (in Chinese), Graduate Institute of Public Health, National Defense Medical Center, 2006.
[81]葉子維, “Customer consumption behavior analysis and mobile banking usage prediction: A comparison of decision trees, random forests, and discriminant analysis,” M.S. thesis (in Chinese), Dept. of Statistics, National Taipei University, 2018.
[82]J. Ekholm and S. Fabre, “Forecast: Mobile data traffic and revenue, worldwide, 2010-2015,” Gartner, Inc., 2011.
[83]李宥瑾, “Analyzing electronic gas-sensing data with machine learning methods to distinguish chronic obstructive pulmonary disease from asthma patients,” M.S. thesis (in Chinese), Dept. of Electrical Engineering, National Tsing Hua University, 2015.
[84]張光佑, “Investigating feature extraction factors in small-sample classification problems,” M.S. thesis (in Chinese), Graduate Institute of Educational Measurement and Statistics, National Taichung University of Education, 2006.
[85]S. N. Borade and R. P. Adgaonkar, “Comparative analysis of PCA and LDA,” 2011 International Conference on Business, Engineering and Industrial Applications, Kuala Lumpur, pp. 203-206, 2011.
[86]D. Anguita, A. Ghio, L. Oneto, X. Parra and J. L. Reyes-Ortiz, “Human activity recognition on smartphones using a multiclass hardware-friendly support vector machine,” Ambient Assisted Living and Home Care, 2012.
[87]F. J. Valverde-Albacete and C. Peláez-Moreno, “100% classification accuracy considered harmful: The normalized information transfer factor explains the accuracy paradox,” PLoS ONE, 2014.
[88]M. Manasee, R. Anuroop G and S. Aanchal, “Human activity recognition using smartphones data set,” 2018.