[1]K. Murakami, T. Shibano, Y. Fujimoto and T. Yamaguchi, “Information Recommendation System for the Care Prevention Using a Communication Robot,” Proceedings of the SICE Annual Conference, Taipei, pp. 388-389, 2010.
[2]M. Ding, R. Ikeura, Y. Mori, T. Mukai and S. Hosoe, “Measurement of Human Body Stiffness for Lifting-Up Motion Generation Using Nursing-Care Assistant Robot — RIBA,” Proceedings of the IEEE Sensors, Baltimore, MD, pp. 1-4, 2013.
[3]H. S. Ahn, M. H. Lee, E. Broadbent and B. A. MacDonald, “Gathering Healthcare Service Robot Requirements from Young People’s Perceptions of an Older Care Robot,” Proceedings of the First IEEE International Conference on Robotic Computing (IRC), Taichung, pp. 22-27, 2017.
[4]H. Lee, S. Kim, J. Kim, J. Lee, A. Byun, H. Ryu and H. J. Kong, “Assessment of User Needs for the Teleconsultation Robot and the Bedside Robot Using Simulation,” Proceedings of the International Conference on Intelligent Informatics and Biomedical Sciences (ICIIBMS), Okinawa, pp. 126-128, 2017.
[5]S. Sempena, N. U. Maulidevi and P. R. Aryan, “Human Action Recognition Using Dynamic Time Warping,” Proceedings of the International Conference on Electrical Engineering and Informatics, Bandung, pp. 1-5, 2011.
[6]Y. Chen, Q. Wu and X. He, “Using Dynamic Programming to Match Human Behavior Sequences,” Proceedings of the 10th International Conference on Control, Automation, Robotics and Vision, Hanoi, pp. 1498-1503, 2008.
[7]B. Bhanu and X. Zhou, “Face Recognition from Face Profile Using Dynamic Time Warping,” Proceedings of the 17th International Conference on Pattern Recognition (ICPR), Cambridge, vol. 4, pp. 499-502, 2004.
[8]F. Dornaika and F. Davoine, “View and Texture-Independent Facial Expression Recognition in Videos Using Dynamic Programming,” Proceedings of the IEEE International Conference on Image Processing, Genova, pp. II-1314, 2005.
[9]A. Brahme and U. Bhadade, “Marathi Digit Recognition Using Lip Geometric Shape Features and Dynamic Time Warping,” Proceedings of the TENCON 2017 - IEEE Region 10 Conference, Penang, pp. 974-979, 2017.
[10]G. García-Bautista, F. Trujillo-Romero and S. O. Caballero-Morales, “Mexican Sign Language Recognition Using Kinect and Data Time Warping Algorithm,” Proceedings of the International Conference on Electronics, Communications and Computers (CONIELECOMP), Cholula, pp. 1-5, 2017.
[11]G. Plouffe and A. Cretu, “Static and Dynamic Hand Gesture Recognition in Depth Data Using Dynamic Time Warping,” IEEE Transactions on Instrumentation and Measurement, vol. 65, no. 2, pp. 305-316, Feb. 2016.
[12]L. Patras, I. Giosan and S. Nedevschi, “Body Gesture Validation Using Multi-Dimensional Dynamic Time Warping on Kinect Data,” Proceedings of the IEEE International Conference on Intelligent Computer Communication and Processing (ICCP), Cluj-Napoca, pp. 301-307, 2015.
[13]S. Riofrío, D. Pozo, J. Rosero and J. Vásquez, “Gesture Recognition Using Dynamic Time Warping and Kinect: A Practical Approach,” Proceedings of the International Conference on Information Systems and Computer Science (INCISCOS), Quito, pp. 302-308, 2017.
[14]M. Salagar, P. Kulkarni and S. Gondane, “Implementation of Dynamic Time Warping for Gesture Recognition in Sign Language Using High Performance Computing,” Proceedings of the International Conference on Human Computer Interactions (ICHCI), Chennai, pp. 1-6, 2013.
[15]S. Giraldo, A. Ortega, A. Perez, R. Ramirez, G. Waddell and A. Williamon, “Automatic Assessment of Violin Performance Using Dynamic Time Warping Classification,” Proceedings of the 26th Signal Processing and Communications Applications Conference (SIU), Izmir, pp. 1-3, 2018.
[16]B. Huang and W. Kinsner, “ECG Frame Classification Using Dynamic Time Warping,” Proceedings of the IEEE Canadian Conference on Electrical and Computer Engineering (CCECE), Winnipeg, Manitoba, Canada, vol. 2, pp. 1105-1110, 2002.
[17]S. Tsevas and D. K. Iakovidis, “Dynamic Time Warping Fusion for the Retrieval of Similar Patient Cases Represented by Multimodal Time-Series Medical Data,” Proceedings of the 10th IEEE International Conference on Information Technology and Applications in Biomedicine, Corfu, pp. 1-4, 2010.
[18]T. Nishino, Y. Kajikawa and M. Muneyasu, “Multimodal Person Authentication System Using Features of Utterance,” Proceedings of the International Symposium on Intelligent Signal Processing and Communications Systems, Taipei, pp. 43-47, 2012.
[19]D. K. Vishwakarma and S. Ansari, “A Framework for Human-Computer Interaction Using Dynamic Time Warping and Neural Network,” Proceedings of the International Conference on Inventive Computing and Informatics (ICICI), Coimbatore, pp. 242-246, 2017.
[20]C. Khorinphan and S. Saiyod, “Tone Detection of Thai Phonemes for Home Robot Based on Fundamental Frequency Analysis with Dynamic Time Warping,” Proceedings of the 5th International Conference on Business and Industrial Research (ICBIR), Bangkok, pp. 167-171, 2018.
[21]C. Y. Yeo, S. A. R. Al-Haddad and C. K. Ng, “Animal Voice Recognition for Identification (ID) Detection System,” Proceedings of the IEEE 7th International Colloquium on Signal Processing and its Applications, Penang, pp. 198-201, 2011.
[22]D. A. A. Tuasikal, H. Fakhrurroja and C. Machbub, “Voice Activation Using Speaker Recognition for Controlling Humanoid Robot,” Proceedings of the 2018 IEEE 8th International Conference on System Engineering and Technology (ICSET), Bandung, pp. 79-84, 2018.
[23]T. Dutta, “Dynamic Time Warping Based Approach to Text-Dependent Speaker Identification Using Spectrograms,” Proceedings of the Congress on Image and Signal Processing, Sanya, Hainan, pp. 354-360, 2008.
[24]B. M. Eskofier, S. I. Lee, J. F. Daneault, F. N. Golabchi, G. F. Carvalho, G. V. Diaz, S. Sapienza, G. Costante, J. Klucken, T. Kautz and P. Bonato, “Recent Machine Learning Advancements in Sensor-Based Mobility Analysis: Deep Learning for Parkinson's Disease Assessment,” Proceedings of the 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, pp. 655-658, 2016.
[25]Z. Guo, M. Shen, L. Duan, Y. Zhou, J. Xiang, H. Ding, S. Chen, O. Deussen and G. Dan, “Deep Assessment Process: Objective Assessment Process for Unilateral Peripheral Facial Paralysis Via Deep Convolutional Neural Network,” Proceedings of the IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), Melbourne, VIC, pp. 135-138, 2017.
[26]M. A. Haque, R. B. Batista, F. Noroozi, K. Kulkarni, C. B. Laursen, R. Irani, M. Bellantonio, S. Escalera, G. Anbarjafari, K. Nasrollahi, O. K. Andersen, E. G. Spaich and T. B. Moeslund, “Deep Multimodal Pain Recognition: A Database and Comparison of Spatio-Temporal Visual Modalities,” Proceedings of the 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), Xi'an, pp. 250-257, 2018.
[27]S. Kumar, S. Conjeti, A. G. Roy, C. Wachinger and N. Navab, “InfiNet: Fully Convolutional Networks for Infant Brain MRI Segmentation,” Proceedings of the IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, pp. 145-148, 2018.
[28]A. Antoniades, L. Spyrou, D. M. Lopez, A. Valentin, G. Alarcon, S. Sanei and C. C. Took, “Detection of Interictal Discharges with Convolutional Neural Networks Using Discrete Ordered Multichannel Intracranial EEG,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 25, no. 12, pp. 2285-2294, Dec. 2017.
[29]X. Zhang, W. Pan and P. Xiao, “In-Vivo Skin Capacitive Image Classification Using AlexNet Convolution Neural Network,” Proceedings of the IEEE 3rd International Conference on Image, Vision and Computing (ICIVC), Chongqing, pp. 439-443, 2018.
[30]S. Roy, J. A. Butman, L. Chan and D. L. Pham, “TBI Contusion Segmentation from MRI Using Convolutional Neural Networks,” Proceedings of the IEEE 15th International Symposium on Biomedical Imaging (ISBI), Washington, DC, pp. 158-162, 2018.
[31]L. B. Maia, A. Lima, R. M. Pinheiro Pereira, G. B. Junior, J. Dallyson Sousa de Almeida and A. C. de Paiva, “Evaluation of Melanoma Diagnosis Using Deep Features,” Proceedings of the 25th International Conference on Systems, Signals and Image Processing (IWSSIP), Maribor, pp. 1-4, 2018.
[32]D. Ahmedt-Aristizabal, K. Nguyen, S. Denman, S. Sridharan, S. Dionisio and C. Fookes, “Deep Motion Analysis for Epileptic Seizure Classification,” Proceedings of the 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, pp. 3578-3581, 2018.
[33]N. Churamani, P. Barros, E. Strahl and S. Wermter, “Learning Empathy-Driven Emotion Expressions Using Affective Modulations,” Proceedings of the International Joint Conference on Neural Networks (IJCNN), Rio de Janeiro, pp. 1-8, 2018.
[34]D. O. Pop, A. Rogozan, F. Nashashibi and A. Bensrhair, “Pedestrian Recognition Through Different Cross-Modality Deep Learning Methods,” Proceedings of the IEEE International Conference on Vehicular Electronics and Safety (ICVES), Vienna, pp. 133-138, 2017.
[35]H. El-Ghaish, M. E. Hussein, A. Shoukry and R. Onai, “Human Action Recognition Based on Integrating Body Pose, Part Shape, and Motion,” IEEE Access, vol. 6, pp. 49040-49055, 2018.
[36]S. Katakis, N. Barotsis, D. Kastaniotis, C. Theoharatos, D. Tsourounis, S. Fotopoulos and E. Panagiotopoulos, “Muscle Type Classification on Ultrasound Imaging Using Deep Convolutional Neural Networks,” Proceedings of the IEEE 13th Image, Video, and Multidimensional Signal Processing Workshop (IVMSP), Zagorochoria, pp. 1-5, 2018.
[37]A. Rueda and S. Krishnan, “Augmenting Dysphonia Voice Using Fourier-Based Synchrosqueezing Transform for a CNN Classifier,” Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, United Kingdom, pp. 6415-6419, 2019.
[38]H. Liang, X. Lin, Q. Zhang and X. Kang, “Recognition of Spoofed Voice Using Convolutional Neural Networks,” Proceedings of the IEEE Global Conference on Signal and Information Processing (GlobalSIP), Montreal, QC, pp. 293-297, 2017.
[39]H. Huang, W. Chen, C. Liu and S. D. You, “Singing Voice Detection Based on Convolutional Neural Networks,” Proceedings of the 7th International Symposium on Next Generation Electronics (ISNE), Taipei, pp. 1-4, 2018.
[40]M. Wang, T. Sirlapu, A. Kwasniewska, M. Szankin, M. Bartscherer and R. Nicolas, “Speaker Recognition Using Convolutional Neural Network with Minimal Training Data for Smart Home Solutions,” Proceedings of the 11th International Conference on Human System Interaction (HSI), Gdansk, pp. 139-145, 2018.
[41]B. Shi, “An Intelligent System for English Pronunciation Correction,” Proceedings of the International Conference on Virtual Reality and Intelligent Systems (ICVRIS), Changsha, pp. 255-258, 2018.
[42]B. Lin, H. Huang, R. Sheu and Y. Chang, “Speech Recognition for People with Dysphasia Using Convolutional Neural Network,” Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC), Miyazaki, Japan, pp. 2164-2169, 2018.
[43]E. Franti, I. Ispas and M. Dascalu, “Testing the Universal Baby Language Hypothesis - Automatic Infant Speech Recognition with CNNs,” Proceedings of the 41st International Conference on Telecommunications and Signal Processing (TSP), Athens, pp. 1-4, 2018.
[44]O. Abdel-Hamid, A. Mohamed, H. Jiang, L. Deng, G. Penn and D. Yu, “Convolutional Neural Networks for Speech Recognition,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 22, no. 10, pp. 1533-1545, Oct. 2014.
[45]T. Fan, Z. Mu and R. Yang, “Multi-Modality Recognition of Human Face and Ear Based on Deep Learning,” Proceedings of the International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR), Ningbo, pp. 38-42, 2017.
[46]C. M. A. Ilyas, K. Nasrollahi, M. Rehm and T. B. Moeslund, “Rehabilitation of Traumatic Brain Injured Patients: Patient Mood Analysis from Multimodal Video,” Proceedings of the 25th IEEE International Conference on Image Processing (ICIP), Athens, pp. 2291-2295, 2018.
[47]A. Choromanska, M. Henaff, M. Mathieu, G. B. Arous and Y. LeCun, “The Loss Surfaces of Multilayer Networks,” Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 192-204, 2015.
[48]Y. N. Dauphin, R. Pascanu, C. Gulcehre, K. Cho, S. Ganguli and Y. Bengio, “Identifying and Attacking the Saddle Point Problem in High-Dimensional Non-Convex Optimization,” Advances in Neural Information Processing Systems, pp. 2933-2941, 2014.
[49]I. J. Goodfellow, O. Vinyals and A. M. Saxe, “Qualitatively Characterizing Neural Network Optimization Problems,” Proceedings of the International Conference on Learning Representations, San Diego, CA, 2015.
[50]A. M. Saxe, J. L. McClelland and S. Ganguli, “Exact Solutions to the Nonlinear Dynamics of Learning in Deep Linear Neural Networks,” Proceedings of the 2nd International Conference on Learning Representations, Banff, Canada, 2014.
[51]I. Goodfellow, Y. Bengio and A. Courville, Deep Learning, Chinese edition translated by 趙申劍, 黎彧君, 符天凡 and 李凱, 1st ed., Beijing: Posts & Telecom Press (人民邮电出版社), 2017.
[52]A. Gaber, M. F. Taher and M. A. Wahed, “Quantifying Facial Paralysis Using the Kinect v2,” Proceedings of the 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, pp. 2497-2501, 2015.
[53]R. C. Carro, E. B. Huerta, R. M. Caporal, J. C. Hernandez and F. R. Cruz, “Facial Expression Analysis with Kinect for the Diagnosis of Paralysis Using Nottingham Grading System,” IEEE Latin America Transactions, vol. 14, no. 7, pp. 3418-3426, July 2016.
[54]A. Gaber, M. F. Taher and M. A. Wahed, “Automated Grading of Facial Paralysis Using the Kinect v2: A Proof of Concept Study,” Proceedings of the International Conference on Virtual Rehabilitation (ICVR), Valencia, pp. 258-264, 2015.
[55]M. Hakata, M. Seo, Y. Chen and N. Matsushiro, “Facial Paralysis Modeling Based on Image Morphing,” Proceedings of the 6th International Conference on Biomedical Engineering and Informatics, Hangzhou, pp. 806-810, 2013.
[56]H. Yoshihara, M. Seo, T. H. Ngo, N. Matsushiro and Y. Chen, “Automatic Feature Point Detection Using Deep Convolutional Networks for Quantitative Evaluation of Facial Paralysis,” Proceedings of the 9th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), Datong, pp. 811-814, 2016.
[57]T. H. Ngo, M. Seo, Y. Chen and N. Matsushiro, “Quantitative Evaluation of Facial Paralysis Using Tracking Method,” Proceedings of the IEEE RIVF International Conference on Computing & Communication Technologies - Research, Innovation, and Vision for Future (RIVF), Can Tho, pp. 100-105, 2015.
[58]J. Ahlberg, “Candide-3 - an Updated Parameterised Face,” Technical Report LiTH-ISY-R-2326, Image Coding Group, Dept. of Electrical Engineering, Linkoping University, Sweden, 2001.
[59]Kinect Face Tracking SDK, http://brightguo.com/kinect-face-tracking/
[60]許晏銘, “A Study of Machine Learning Methods Based on Dynamic Warping for a Small-Vocabulary DTW Speech Recognition System,” Master's thesis, Department of Electrical Engineering, National Formosa University, 2013.
[61]Thiang and S. Wijoyo, “Speech Recognition Using Linear Predictive Coding and Artificial Neural Network for Controlling Movement of Mobile Robot,” Proceedings of the International Conference on Information and Electronics Engineering (ICIEE), Bangkok, Thailand, pp. 28-29, 2011.
[62]W. S. M. Sanjaya, D. Anggraeni and I. P. Santika, “Speech Recognition Using Linear Predictive Coding (LPC) and Adaptive Neuro-Fuzzy (ANFIS) to Control 5 DoF Arm Robot,” Proceedings of the International Conference on Computation in Science and Engineering, Malaysia, 2018.
[63]K. Simonyan and A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” Proceedings of the International Conference on Learning Representations (ICLR), San Diego, CA, 2015.
[64]B. Selbes and M. Sert, “Multimodal Vehicle Type Classification Using Convolutional Neural Network and Statistical Representations of MFCC,” Proceedings of the 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Lecce, pp. 1-6, 2017.
[65]J. Jiang, X. Feng, F. Liu, Y. Xu and H. Huang, “Multi-Spectral RGB-NIR Image Classification Using Double-Channel CNN,” IEEE Access, vol. 7, pp. 20607-20613, 2019.