[1]A. A. Patil, S. A, A. R. R, N. N. V and G. R, “Human Action Recognition Using Skeleton Features,” Proc. IEEE Int. Symp. Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), pp. 289–296, 2022.
[2]T. Lu, L. Peng and S. Miao, “Human Action Recognition of Hidden Markov Model Based on Depth Information,” Proc. 15th Int. Symp. Parallel and Distributed Computing (ISPDC), pp. 354–357, 2016.
[3]Y. Fan, S. Weng, Y. Zhang, B. Shi and Y. Zhang, “Context-Aware Cross-Attention for Skeleton-Based Human Action Recognition,” IEEE Access, vol. 8, pp. 15280–15290, 2020.
[4]J. Liu, N. Akhtar and A. Mian, “Adversarial Attack on Skeleton-Based Human Action Recognition,” IEEE Transactions on Neural Networks and Learning Systems, vol. 33, no. 4, pp. 1609–1622, 2022.
[5]Z. Deng, Q. Gao, Z. Ju and X. Yu, “Skeleton-Based Multifeatures and Multistream Network for Real-Time Action Recognition,” IEEE Sensors Journal, vol. 23, no. 7, pp. 7397–7409, 2023.
[6]M.-F. Tsai and C.-H. Chen, “Spatial Temporal Variation Graph Convolutional Networks (STV-GCN) for Skeleton-Based Emotional Action Recognition,” IEEE Access, vol. 9, pp. 13870–13877, 2021.
[7]Y. Han, S.-L. Chung, Q. Xiao, W. Y. Lin and S.-F. Su, “Global Spatio-Temporal Attention for Action Recognition Based on 3D Human Skeleton Data,” IEEE Access, vol. 8, pp. 88604–88616, 2020.
[8]X. Jiang, K. Xu and T. Sun, “Action Recognition Scheme Based on Skeleton Representation with DS-LSTM Network,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 30, no. 7, pp. 2129–2140, 2020.
[9]L. Wang, X. Zhao and Y. Liu, “Skeleton Feature Fusion Based on Multi-Stream LSTM for Action Recognition,” IEEE Access, vol. 6, pp. 50788–50800, 2018.
[10]P. T. Hai and H. H. Kha, “An Efficient Star Skeleton Extraction for Human Action Recognition Using Hidden Markov Models,” Proc. IEEE Sixth Int. Conf. Commun. and Electron. (ICCE), pp. 351–356, 2016.
[11]T. Z. Wint Cho, M. T. Win and A. Win, “Human Action Recognition System Based on Skeleton Data,” Proc. IEEE Int. Conf. Agents (ICA), pp. 93–98, 2018.
[12]林宗毅, "A Study on an Optimal Positioning Method Based on a Bayesian Network Model Mechanism for 3D Sensor Image Applications," Master's thesis, National Formosa University, 2018.
[13]張育瑞, "A Study on a Reinforcement-Based Method for Human Posture Command Recognition and Identity Recognition," Master's thesis, National Formosa University, 2016.
[14]張哲維, "A Study on Human Posture Recognition and Learning Methods Using 3D Motion Features," Master's thesis, National Formosa University, 2014.
[15]D. Qu, Z. Huang, Z. Gao, Y. Zhao, X. Zhao and G. Song, "An Automatic System for Smile Recognition Based on CNN and Face Detection," Proc. IEEE Int. Conf. Robotics and Biomimetics (ROBIO), pp. 243–247, 2018.
[16]Y. Zhou, H. Ni, F. Ren and X. Kang, "Face and Gender Recognition System Based on Convolutional Neural Networks," Proc. IEEE Int. Conf. Mechatronics and Automation (ICMA), pp. 1091–1095, 2019.
[17]R. Szmurlo and S. Osowski, “Deep CNN Ensemble for Recognition of Face Images,” Proc. 22nd Int. Conf. Comput. Probl. Electr. Eng. (CPEE), pp. 1–4, 2021.
[18]A.-P. Song, Q. Hu, X.-H. Ding, X.-Y. Di and Z.-H. Song, “Similar Face Recognition Using the IE-CNN Model,” IEEE Access, vol. 8, pp. 45244–45253, 2020.
[19]D. S. Breland, A. Dayal, A. Jha, P. K. Yalavarthy, O. J. Pandey and L. R. Cenkeramaddi, “Robust Hand Gestures Recognition Using a Deep CNN and Thermal Images,” IEEE Sensors Journal, vol. 21, no. 23, pp. 26602–26614, 2021.
[20]G. Lingyun, Z. Lin and W. Zhaokui, “Hierarchical Attention-Based Astronaut Gesture Recognition: A Dataset and CNN Model,” IEEE Access, vol. 8, pp. 68787–68798, 2020.
[21]D. Fan, H. Lu, S. Xu and S. Cao, “Multi-Task and Multi-Modal Learning for RGB Dynamic Gesture Recognition,” IEEE Sensors Journal, vol. 21, no. 23, pp. 27026–27036, 2021.
[22]S. Meshram, R. Singh, P. Pal and S. K. Singh, "Convolution Neural Network based Hand Gesture Recognition System," Proc. Third Int. Conf. Adv. Electr. Comput. Commun. Sustain. Technol. (ICAECT), pp. 1–5, 2023.
[23]D. Kollias and S. Zafeiriou, “Exploiting Multi-CNN Features in CNN-RNN Based Dimensional Emotion Recognition on the OMG in-the-Wild Dataset,” IEEE Transactions on Affective Computing, vol. 12, no. 3, pp. 595–606, 2021.
[24]J. Zhang, M. Xing and Y. Xie, “FEC: A Feature Fusion Framework for SAR Target Recognition Based on Electromagnetic Scattering Features and Deep CNN Features,” IEEE Transactions on Geoscience and Remote Sensing, vol. 59, no. 3, pp. 2174–2187, 2021.
[25]J. Zhang, M. Xing, G.-C. Sun and Z. Bao, “Integrating the Reconstructed Scattering Center Feature Maps With Deep CNN Feature Maps for Automatic SAR Target Recognition,” IEEE Geoscience and Remote Sensing Letters, vol. 19, pp. 1–5, 2022.
[26]A. I. Shahin and S. Almotairi, “An Accurate and Fast Cardio-Views Classification System Based on Fused Deep Features and LSTM,” IEEE Access, vol. 8, pp. 135184–135194, 2020.
[27]S. Mohsen, A. Elkaseer and S. G. Scholz, “Industry 4.0-Oriented Deep Learning Models for Human Activity Recognition,” IEEE Access, vol. 9, pp. 150508–150521, 2021.
[28]H. Riaz, M. Uzair, H. Ullah and M. Ullah, "Anomalous Human Action Detection Using a Cascade of Deep Learning Models," Proc. 9th Eur. Workshop Visual Inf. Process. (EUVIP), pp. 1–5, 2021.
[29]T. Nyajowi, N. Oyie and M. Ahuna, "CNN Real-Time Detection of Vandalism Using a Hybrid-LSTM Deep Learning Neural Networks," Proc. IEEE AFRICON, pp. 1–6, 2021.
[30]Y.-H. Byeon, D. Kim, J. Lee and K.-C. Kwak, "Ensemble Three-Stream RGB-S Deep Neural Network for Human Behavior Recognition Under Intelligent Home Service Robot Environments," IEEE Access, vol. 9, pp. 73240–73250, 2021.
[31]E. Martinez-Martin and M. Cazorla, “A Socially Assistive Robot for Elderly Exercise Promotion,” IEEE Access, vol. 7, pp. 75515–75529, 2019.
[32]莊亞澄, "A Study on the Design of a Human–Machine Interaction Application System Based on Human Body Posture Features," Master's thesis, National Formosa University, 2022.
[33]Gong, C. Chen and M. Peng, "Human Interaction Recognition Based on Deep Learning and HMM," IEEE Access, vol. 7, pp. 161123–161130, 2019.
[34]M. Ramadan and A. El-Jaroudi, “Action Detection and Classification in Kitchen Activities Videos Using Graph Decoding,” The Visual Computer, vol. 39, pp. 799–812, 2022.
[35]Y. Gu et al., "Sensor Fusion Based Manipulative Action Recognition," Autonomous Robots, vol. 45, pp. 1–13, 2020.
[36]J. Yu et al., “A Discriminative Deep Model with Feature Fusion and Temporal Attention for Human Action Recognition,” IEEE Access, vol. 8, pp. 43243–43255, 2020.
[37]鄭乃瑋, "A Deep Learning Recognition Study on Sensing Fusion of Gesture Images from Color and Depth Cameras," Master's thesis, National Formosa University, 2021.
[38]Y.-H. Byeon, D. Kim, J. Lee and K.-C. Kwak, "Ensemble Three-Stream RGB-S Deep Neural Network for Human Behavior Recognition Under Intelligent Home Service Robot Environments," IEEE Access, vol. 9, pp. 73240–73250, 2021.
[39]O. Koller, N. C. Camgoz, H. Ney and R. Bowden, “Weakly Supervised Learning with Multi-Stream CNN-LSTM-HMMs to Discover Sequential Parallelism in Sign Language Videos,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 42, no. 9, pp. 2306–2320, 2020.
[40]U. Haroon et al., “A Multi-Stream Sequence Learning Framework for Human Interaction Recognition,” IEEE Transactions on Human-Machine Systems, vol. 52, no. 3, pp. 435–444, 2022.
[41]A. K. Talukdar and M. K. Bhuyan, “Vision-Based Continuous Sign Language Spotting Using Gaussian Hidden Markov Model,” IEEE Sensors Letters, vol. 6, no. 7, pp. 1–4, 2022.
[42]T. Simões Dias, J. J. A. M. Júnior and S. F. Pichorim, “An Instrumented Glove for Recognition of Brazilian Sign Language Alphabet,” IEEE Sensors Journal, vol. 22, no. 3, pp. 2518–2529, 2022.
[43]蘇俊麟, "A Study on Gesture Recognition Methods Based on Image Depth Information," Master's thesis, National Formosa University, 2016.
[44]Y. Liu, F. Jiang and M. Gowda, "Application Informed Motion Signal Processing for Finger Motion Tracking Using Wearable Sensors," Proc. IEEE Int. Conf. Acoust. Speech and Signal Process. (ICASSP), pp. 8334–8338, 2020.
[45]C. Mizera, T. Delrieu, V. Weistroffer, C. Andriot, A. Decatoire and J.-P. Gazeau, “Evaluation of Hand-Tracking Systems in Teleoperation and Virtual Dexterous Manipulation,” IEEE Sensors Journal, vol. 20, no. 3, pp. 1642–1655, 2020.
[46]J. P. Sahoo, S. P. Sahoo, S. Ari and S. K. Patra, “Hand Gesture Recognition Using Densely Connected Deep Residual Network and Channel Attention Module for Mobile Robot Control,” IEEE Transactions on Instrumentation and Measurement, vol. 72, pp. 1–11, 2023.
[47]K. Haratiannejadi and R. R. Selmic, “Smart Glove and Hand Gesture-Based Control Interface for Multi-Rotor Aerial Vehicles in a Multi-Subject Environment,” IEEE Access, vol. 8, pp. 227667–227677, 2020.
[48]H. He and Y. Dan, “The Research and Design of Smart Mobile Robotic Arm Based on Gesture Controlled,” Proc. Int. Conf. Adv. Mechatronic Systems (ICAMechS), pp. 308–312, 2020.
[49]X. Zhao, Y. He, X. Chen, and Z. Liu, “Human–Robot Collaborative Assembly Based on Eye-Hand and a Finite State Machine in a Virtual Environment,” Applied Sciences, vol. 11, no. 12, pp. 5754–5772, 2021.
[50]王信傑, "A Study on the Design of a Sign Language Recognition System Based on RGB-D Image Sensing," Master's thesis, National Formosa University, 2022.
[51]Y. Lecun, L. Bottou, Y. Bengio and P. Haffner, "Gradient-Based Learning Applied to Document Recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
[52]S. Hochreiter and J. Schmidhuber, “Long Short-Term Memory,” Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
[53]G. D. Forney, “The Viterbi Algorithm,” Proceedings of the IEEE, vol. 61, no. 3, pp. 268–278, 1973.
[54]張育瑞, "A Study on a Reinforcement-Based Method for Human Posture Command Recognition and Identity Recognition," Master's thesis, National Formosa University, 2016.