[1] A. Banjar, Z. Ahmed, A. Daud, R. A. Abbasi, and H. Dawood, "Aspect-Based Sentiment Analysis for Polarity Estimation of Customer Reviews on Twitter," Computers, Materials & Continua, vol. 67, no. 2, pp. 2203-2225, 2021, doi: 10.32604/cmc.2021.014226.
[2] S. Consoli, L. Barbaglia, and S. Manzan, "Fine-grained, aspect-based sentiment analysis on economic and financial lexicon," Knowledge-Based Systems, vol. 247, 2022, doi: 10.1016/j.knosys.2022.108781.
[3] J. Zhou, J. X. Huang, Q. Chen, Q. V. Hu, T. Wang, and L. He, "Deep Learning for Aspect-Level Sentiment Classification: Survey, Vision, and Challenges," IEEE Access, vol. 7, pp. 78454-78483, 2019, doi: 10.1109/ACCESS.2019.2920075.
[4] F. Arias, M. Zambrano Nunez, A. Guerra-Adames, N. Tejedor-Flores, and M. Vargas-Lombardo, "Sentiment Analysis of Public Social Media as a Tool for Health-Related Topics," IEEE Access, vol. 10, pp. 74850-74872, 2022, doi: 10.1109/ACCESS.2022.3187406.
[5] M. Birjali, M. Kasri, and A. Beni-Hssane, "A comprehensive survey on sentiment analysis: Approaches, challenges and trends," Knowledge-Based Systems, vol. 226, p. 107134, 2021, doi: 10.1016/j.knosys.2021.107134.
[6] Y. Zhou, L. Liao, Y. Gao, R. Wang, and H. Huang, "TopicBERT: A Topic-Enhanced Neural Language Model Fine-Tuned for Sentiment Classification," IEEE Transactions on Neural Networks and Learning Systems, pp. 1-14, 2021, doi: 10.1109/TNNLS.2021.3094987.
[7] N. R. de Oliveira, P. S. Pisa, M. A. Lopez, D. S. V. de Medeiros, and D. M. F. Mattos, "Identifying fake news on social networks based on natural language processing: Trends and challenges," Information (Switzerland), vol. 12, no. 1, pp. 1-32, 2021, doi: 10.3390/info12010038.
[8] F. Sadeghi, A. J. Bidgoly, and H. Amirkhani, "Fake news detection on social media using a natural language inference approach," Multimedia Tools and Applications, vol. 81, no. 23, pp. 33801-33821, 2022, doi: 10.1007/s11042-022-12428-8.
[9] A. Kumar and N. Sachdeva, "A Bi-GRU with attention and CapsNet hybrid model for cyberbullying detection on social media," World Wide Web, vol. 25, no. 4, pp. 1537-1550, 2022, doi: 10.1007/s11280-021-00920-4.
[10] J. Ramirez Sanchez et al., "Uncovering Cybercrimes in Social Media through Natural Language Processing," Complexity, vol. 2021, 2021, doi: 10.1155/2021/7955637.
[11] B. Gupta, P. Prakasam, and T. Velmurugan, "Integrated BERT embeddings, BiLSTM-BiGRU and 1-D CNN model for binary sentiment classification analysis of movie reviews," Multimedia Tools and Applications, vol. 81, no. 23, pp. 33067-33086, 2022, doi: 10.1007/s11042-022-13155-w.
[12] I. Steinke, J. Wier, L. Simon, and R. Seetan, "Sentiment Analysis of Online Movie Reviews using Machine Learning," International Journal of Advanced Computer Science and Applications, vol. 13, no. 9, 2022. [Online]. Available: https://thesai.org/Downloads/Volume13No9/Paper_73-Sentiment_Analysis_of_Online_Movie_Reviews.pdf
[13] M. Z. Naeem, F. Rustam, A. Mehmood, D. Mui Zzud, I. Ashraf, and G. S. Choi, "Classification of movie reviews using term frequency-inverse document frequency and optimized machine learning algorithms," PeerJ Computer Science, vol. 8, p. e914, 2022, doi: 10.7717/peerj-cs.914.
[14] K. Ullah, A. Rashad, M. Khan, Y. Ghadi, H. Aljuaid, and Z. Nawaz, "A Deep Neural Network-Based Approach for Sentiment Analysis of Movie Reviews," Complexity, vol. 2022, pp. 1-9, 2022, doi: 10.1155/2022/5217491.
Hassanien, "Sentiment Analysis of COVID-19 tweets by Deep Learning ClassifiersA study to show how popularity is affecting accuracy in social media," Applied Soft Computing Journal, vol. 97, 2020, doi: 10.1016/j.asoc.2020.106754. [16] D. Dangi, D. K. Dixit, and A. Bhagat, "Sentiment analysis of COVID-19 social media data through machine learning," Multimedia Tools and Applications, vol. 81, no. 29, pp. 42261-42283, 2022, doi: 10.1007/s11042-022-13492-w. [17] Z. Jalil et al., "COVID-19 Related Sentiment Analysis Using State-of-the-Art Machine Learning and Deep Learning Techniques," (in English), Frontiers in Public Health, Original Research vol. 9, 2022-January-14 2022, doi: 10.3389/fpubh.2021.812735. [18] M. A. Kausar, A. Soosaimanickam, and M. Nasar, "Public Sentiment Analysis on Twitter Data during COVID-19 Outbreak," International Journal of Advanced Computer Science and Applications, vol. 12, no. 2, pp. 415-422, 2021, doi: 10.14569/IJACSA.2021.0120252. [19] D. Mehanović, Z. Mašetić, and A. Vatreš, "Covid-19 Twitter Data Analysis Using Natural Language Processing," in IAT 2021, 2021: Springer International Publishing, pp. 203-212, doi: 10.1007/978-3-030-90055-7_15. [Online]. Available: https://dx.doi.org/10.1007/978-3-030-90055-7_15 [20] L. Nemes and A. Kiss, "Social media sentiment analysis based on COVID-19," Journal of Information and Telecommunication, vol. 5, no. 1, pp. 1-15, 2021, doi: 10.1080/24751839.2020.1790793. [21] O. Oyebode et al., "COVID-19 Pandemic: Identifying Key Issues Using Social Media and Natural Language Processing," Journal of Healthcare Informatics Research, vol. 6, no. 2, pp. 174-207, 2022, doi: 10.1007/s41666-021- 00111-w. [22] A. R. Rahmanti et al., "Social media sentiment analysis to monitor the performance of vaccination coverage during the early phase of the national COVID-19 vaccine rollout," Computer Methods and Programs in Biomedicine, vol. 221, 2022, doi: 10.1016/j.cmpb.2022.106838. [23] S. H. A. Samsudin, N. M. Sabri, N. Isa, and U. F. M. Bahrin, "Sentiment Analysis on Acceptance of New Normal in COVID-19 Pandemic using Naive Bayes Algorithm," International Journal of Advanced Computer Science and Applications, vol. 13, no. 9, pp. 581-588, 2022, doi: 10.14569/IJACSA.2022.0130968. [24] H. Yin, X. Song, S. Yang, and J. Li, "Sentiment analysis and topic modeling for COVID-19 vaccine discussions," World Wide Web, vol. 25, no. 3, pp. 1067-1083, 2022, doi: 10.1007/s11280-022-01029-y. [25] X. Han et al., "Pre-trained models: Past, present and future," AI Open, vol. 2, pp. 225-250, 2021/01/01/ 2021, doi: https://doi.org/10.1016/j.aiopen.2021.08.002. [26] X. Qiu, T. Sun, Y. Xu, Y. Shao, N. Dai, and X. Huang, "Pre-trained models for natural language processing: A survey," Science China Technological Sciences, vol. 63, no. 10, pp. 1872-1897, 2020, doi: 10.1007/s11431-020- 1647-3. [27] A. L. Maas, R. E. Daly, P. T. Pham, D. Huang, A. Y. Ng, and C. Potts, "Learning word vectors for sentiment analysis," presented at the Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, Portland, Oregon, 2011. [28] H. Zhang, "The Optimality of Naive Bayes," in FLAIRS Conference, 2004. [29] R. Gandhi. "Naive Bayes Classifier." Towards Data Science (Medium). https://towardsdatascience.com/naive-bayes-classifier-81d512f50a7c (accessed March 29, 2022). [30] M. Tope, "Email Spam Detection using Naive Bayes Classifier," INTERNATIONAL JOURNAL OF SCIENTIFIC DEVELOPMENT AND RESEARCH, vol. 4, no. 6, 2019. [31] S. 
Gupta. "Email Spam Filtering Using Naive Bayes Classifier." Springboard. https://www.springboard.com/blog/data-science/bayes-spam-filter/ (accessed July 18, 2022). [32] R. Team. "Sentiment Analysis Challenges And How To Overcome Them." https://www.repustate.com/blog/sentiment-analysis-challenges-with-solutions/ (accessed July 23, 2022). [33] A. Ligthart, C. Catal, and B. Tekinerdogan, "Systematic reviews in sentiment analysis: a tertiary study," Artificial Intelligence Review, vol. 54, no. 7, pp. 4997-5053, 2021, doi: 10.1007/s10462-021-09973-3. [34] "Negation: Glossary of Linguistic Terms." SIL Glossary. https://glossary.sil.org/term/negation (accessed 6 Aug, 2022). [35] L. Horn. "Negation." Oxford Bibliographies. https://www.oxfordbibliographies.com/view/document/obo-9780199772810/obo-9780199772810-0032.xml (accessed 6 Aug, 2022). [36] I. Gupta and N. Joshi, "A Review on Negation Role in Twitter Sentiment Analysis," International Journal of Healthcare Information Systems and Informatics (IJHISI), vol. 16, no. 4, pp. 1-19, 2021, doi: 10.4018/IJHISI.20211001.oa14. [37] S. Kiranyaz, O. Avci, O. Abdeljaber, T. Ince, M. Gabbouj, and D. J. Inman, "1D convolutional neural networks and applications: A survey," Mechanical Systems and Signal Processing, vol. 151, p. 107398, 2021/04/01/ 2021, doi: https://doi.org/10.1016/j.ymssp.2020.107398. [38] A. Vaswani et al., "Attention is all you need," presented at the Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, California, USA, 2017. [39] D. Bahdanau, K. Cho, and Y. Bengio, "Neural Machine Translation by Jointly Learning to Align and Translate," presented at the CoRR, 2015. [40] I. Sutskever, O. Vinyals, and Q. V. Le, "Sequence to Sequence Learning with Neural Networks," in NIPS, 2014. [41] K. Cho et al., "Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation," in EMNLP, 2014. [42] 李謦伊. "Transformer — Attention Is All You Need." medium. (accessed 9 Aug, 2022). [43] H.-y. Lee. "Transformer." http://speech.ee.ntu.edu.tw/~tlkagk/courses/ML_2019/Lecture/Transformer%20(v5).pdf (accessed 10 Aug, 2022). [44] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding," in North American Chapter of the Association for Computational Linguistics, 2019-05-24 2019: Association for Computational Linguistics, pp. 4171 - 4186, doi: None arxiv:1810.04805. [Online]. Available: https://arxiv.org/abs/1810.04805 [45] C. Sun, X. Qiu, Y. Xu, and X. Huang, "How to Fine-Tune BERT for Text Classification?," in CCL, 2019. [46] Y. Hu, J. Ding, Z. Dou, and H. Chang, "Short-Text Classification Detector: A Bert-Based Mental Approach," Computational Intelligence and Neuroscience, vol. 2022, pp. 1-11, 2022, doi: 10.1155/2022/8660828. [47] D. Weijie, L. Yunyi, Z. Jing, and S. Xuchen, "Long Text Classification Based on BERT," in 2021 IEEE 5th Information Technology,Networking,Electronic and Automation Control Conference (ITNEC), 15-17 Oct. 2021 2021, vol. 5, pp. 1147-1151, doi: 10.1109/ITNEC52019.2021.9587007. [48] D. Moonat. "Fine-tune BERT Model for Sentiment Analysis in Google Colab." Analytics Vidhya. https://www.analyticsvidhya.com/blog/2021/12/fine-tune-bert-model-for-sentiment-analysis-in-google-colab/ (accessed 12 Aug, 2022). [49] S. B. Alex Ratner, Paroma Varma, and Chris Ré. "Weak Supervision: The New Programming Paradigm for Machine Learning." 
[49] A. Ratner, S. H. Bach, P. Varma, and C. Ré. "Weak Supervision: The New Programming Paradigm for Machine Learning." Stanford DAWN. https://dawn.cs.stanford.edu/2017/07/16/weak-supervision/ (accessed 13 Aug, 2022).
[50] A. Ratner, P. Varma, B. Hancock, C. Ré, and other members of Hazy Lab. "Weak Supervision: A New Programming Paradigm for Machine Learning." Stanford AI Lab Blog. https://ai.stanford.edu/blog/weak-supervision/ (accessed 13 Aug, 2022).
[51] R. Poyiadzi, D. Bacaicoa-Barber, J. Cid-Sueiro, M. Perello-Nieto, P. Flach, and R. Santos-Rodriguez, "The Weak Supervision Landscape," in 2022 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops), 2022, pp. 218-223, doi: 10.1109/PerComWorkshops53856.2022.9767420.
[52] L.-M. Chen, B.-X. Xiu, and Z.-Y. Ding, "Multiple weak supervision for short text classification," Applied Intelligence, vol. 52, no. 8, pp. 9101-9116, 2022, doi: 10.1007/s10489-021-02958-3.
[53] Z.-H. Zhou, "A brief introduction to weakly supervised learning," National Science Review, vol. 5, no. 1, pp. 44-53, 2018, doi: 10.1093/nsr/nwx106.
[54] B. Tunguz. "Six Levels of Auto ML." Medium. https://medium.com/@tunguz/six-levels-of-auto-ml-a277aa1f0f38 (accessed 16 Aug, 2022).
[55] M. L. (hibayesian). Awesome-AutoML-Papers. [Online]. Available: https://github.com/hibayesian/awesome-automl-papers
[56] Y.-W. Chen, Q. Song, and X. Hu, "Techniques for Automated Machine Learning," SIGKDD Explor. Newsl., vol. 22, no. 2, pp. 35-50, 2021, doi: 10.1145/3447556.3447567.
[57] R. S. M. L. Patibandla, V. S. Srinivas, S. N. Mohanty, and C. R. Pattanaik, "Automatic Machine Learning: An Exploratory Review," in 2021 9th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO), 2021, pp. 1-9, doi: 10.1109/ICRITO51393.2021.9596483.
[58] T. Nagarajah and G. Poravi, "A Review on Automated Machine Learning (AutoML) Systems," in 2019 IEEE 5th International Conference for Convergence in Technology (I2CT), 2019, pp. 1-6, doi: 10.1109/I2CT45611.2019.9033810.
[59] Z. Weng, "From Conventional Machine Learning to AutoML," in International Conference on Control Engineering and Artificial Intelligence (CCEAI 2019), Los Angeles, USA, 2019, vol. 1207: IOP Publishing, doi: 10.1088/1742-6596/1207.
[60] L. Vaccaro, G. Sansonetti, and A. Micarelli, "Automated Machine Learning: Prospects and Challenges," presented at ICCSA 2020, 2020, doi: 10.1007/978-3-030-58811-3_9.
[61] M. Blohm, M. Hanussek, and M. Kintz, "Leveraging Automated Machine Learning for Text Classification: Evaluation of AutoML Tools and Comparison with Human Performance," presented at ICAART, 2020. [Online]. Available: https://arxiv.org/abs/2012.03575
[62] X. He, K. Zhao, and X. Chu, "AutoML: A survey of the state-of-the-art," Knowledge-Based Systems, vol. 212, p. 106622, 2021, doi: 10.1016/j.knosys.2020.106622.
[63] D. Gopagoni and P. V., "Automated Machine Learning Tool: The First Stop for Data Science and Statistical Model Building," International Journal of Advanced Computer Science and Applications, vol. 11, no. 2, 2020, doi: 10.14569/ijacsa.2020.0110253.
[64] R. Elshawi and S. Sakr, "Automated Machine Learning: Techniques and Frameworks," Springer International Publishing, 2020, pp. 40-69.
Hutter, "Auto-Pytorch: Multi-Fidelity MetaLearning for Efficient and Robust AutoDL," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 9, pp. 3079-3090, 2021, doi: 10.1109/TPAMI.2021.3067763. [66] E. Brochu, V. M. Cora, and N. d. Freitas, "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning," presented at the ArXiv, 2010. [67] G. Li. "貝葉斯超參優化方法." CSDN. https://blog.csdn.net/buptgshengod/article/details/81906225 (accessed 24 Aug, 2022). [68] "Counter objects." https://docs.python.org/3/library/collections.html#collections.Counter (accessed 12 Dec, 2022). [69] "Natural Language Toolkit (NLTK)." https://www.nltk.org/index.html (accessed. [70] E. K. Steven Bird, Edward Loper, Natural Language Processing with Python: Analyzing Text with the Natural Language Toolkit, 1 ed.: O'Rielly Media, 2009. [Online]. Available: https://www.nltk.org/book/. [71] "nltk.classify.naivebayes module." https://www.nltk.org/api/nltk.classify.naivebayes.html?highlight=naivebayesclassifier (accessed 21 Dec, 2022). [72] A. Ratner, S. H. Bach, H. Ehrenberg, J. Fries, S. Wu, and C. Ré, "Snorkel: rapid training data creation with weak supervision," The VLDB Journal, vol. 29, no. 2-3, pp. 709-730, 2020, doi: 10.1007/s00778-019-00552-1. [73] A. J. Ratner, S. H. Bach, H. R. Ehrenberg, and C. Ré, "Snorkel: Fast Training Set Generation for Information Extraction," presented at the Proceedings of the 2017 ACM International Conference on Management of Data, Chicago, Illinois, USA, 2017. [Online]. Available: https://doi.org/10.1145/3035918.3056442. [74] S. Evensen, C. Ge, D. Choi, and c. Demiralp, "Data Programming by Demonstration: A Framework for Interactively Learning Labeling Functions," presented at the ArXiv, 2020. [75] A. J. Ratner, C. D. Sa, S. Wu, D. Selsam, and C. Ré, "Data Programming: Creating Large Training Sets, Quickly," Advances in neural information processing systems, vol. 29, pp. 3567-3575, 2016. [76] A. Bansal, Advanced Natural Language Processing with TENSORFLOW 2: Build real-world effective NLP... applications using NER, RNNS, seq2seq models, Tran. Birmingham-Mumbai: Packt Publishing, 2021. [77] H. Jin, Q. Song, and X. Hu, "Auto-Keras: An Efficient Neural Architecture Search System," presented at the Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA, 2019. [Online]. Available: https://doi.org/10.1145/3292500.3330648. [78] L. Sobrecueva, Automated Machine Learning with AutoKeras: Deep Learning made accessible for everyone with just few lines of coding. Birmingham: Packt Publishing, 2021. [79] "K-fold." https://scikit-learn.org/stable/modules/cross_validation.html#k-fold (accessed 26 Dec, 2022).