[1] Small and Medium Enterprise Administration, Ministry of Economic Affairs (經濟部中小企業處), 2019, 2019 White Paper on Small and Medium Enterprises in Taiwan, Taipei.
[2] A. Kloptchenko, 2003, Text Mining Based on the Prototype Matching Method, Ph.D. dissertation, Turku Centre for Computer Science (TUCS).
[3] ACL 2019 acceptance rates. [Online] Available: https://acl2019pcblog.fileli.unipi.it/?p=161
[4] "What's new, different and challenging in ACL 2019?". [Online] Available: https://acl2019pcblog.fileli.unipi.it/?p=156
[5] R. Grishman, B. Sundheim, 1996, "Message Understanding Conference-6: A Brief History", Proceedings of the 16th International Conference on Computational Linguistics (COLING 1996), Volume 1.
[6] J. Shang, J. Liu, M. Jiang, X. Ren, C.R. Voss, J. Han, 2018, "Automated Phrase Mining from Massive Text Corpora", IEEE Transactions on Knowledge and Data Engineering, pp. 1825-1837.
[7] P. Sun, X. Yang, X. Zhao, Z. Wang, 2018, "An Overview of Named Entity Recognition", International Conference on Asian Language Processing (IALP).
[8] S. Sekine, 2004, "Named Entity: History and Future". [Online] Available: http://cs.nyu.edu/~sekine/papers/NEsurvey200402.pdf, accessed 2020/07/03.
[9] OwnThink chatbot (思知機器人). [Online] Available: https://www.ownthink.com, accessed 2020/07/03.
[10] S. Kulkarni, A. Singh, G. Ramakrishnan, S. Chakrabarti, 2009, "Collective Annotation of Wikipedia Entities in Web Text", Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 457-466, Paris, France.
[11] F. Suchanek, 2010, "Information Extraction". [Online] Available: https://www.slideserve.com/marcin/information-extraction, accessed 2020/07/03.
[12] A. Mikheev, M. Moens, C. Grover, 1999, "Named Entity Recognition without Gazetteers", Ninth Conference of the European Chapter of the Association for Computational Linguistics, pp. 1-8, Bergen, Norway.
[13] H.H. Chen, J.C. Lee, 1996, "Identification and Classification of Proper Nouns in Chinese Texts", Proceedings of the 16th International Conference on Computational Linguistics (COLING 1996), Volume 1.
[14] C. Cortes, V. Vapnik, 1995, "Support-Vector Networks", Machine Learning, 20(3), pp. 273-297.
[15] S. Guiasu, A. Shenitzer, 1985, "The Principle of Maximum Entropy", The Mathematical Intelligencer, 7(1), pp. 42-48, Springer Science+Business Media.
[16] L.R. Rabiner, 1989, "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition", Proceedings of the IEEE, Vol. 77, pp. 257-286.
[17] J. Lafferty, A. McCallum, F.C.N. Pereira, 2001, "Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data", Proceedings of the 18th International Conference on Machine Learning, Morgan Kaufmann.
[18] N. Xue, 2003, "Chinese Word Segmentation as Character Tagging", International Journal of Computational Linguistics & Chinese Language Processing, Volume 8, pp. 29-48.
[19] G. Zhou, J. Su, 2004, "Exploring Deep Knowledge Resources in Biomedical Name Recognition", Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications, pp. 99-102.
[20] D.E. Rumelhart, G.E. Hinton, R.J. Williams, 1986, "Learning Representations by Back-propagating Errors", Nature, 323(6088), pp. 533-536.
[21] S.A. Althubiti, E.M. Jones, K. Roy, 2018, "LSTM for Anomaly-Based Network Intrusion Detection", 2018 28th International Telecommunication Networks and Applications Conference (ITNAC), pp. 1-3, Sydney.
[22] G. Bekoulis, et al., 2018, "Joint Entity Recognition and Relation Extraction as a Multi-head Selection Problem", Expert Systems with Applications, Vol. 114, pp. 34-45.
[23] D.M. Harris, S.L. Harris, 2007, "One-Hot Encoding", in Digital Design and Computer Architecture, 2nd ed., Morgan Kaufmann, San Francisco.
[24] Y. Bengio, et al., 2003, "A Neural Probabilistic Language Model", Journal of Machine Learning Research, pp. 1137-1155.
[25] T. Mikolov, K. Chen, G. Corrado, J. Dean, 2013, "Efficient Estimation of Word Representations in Vector Space", arXiv preprint arXiv:1301.3781.
[26] M.E. Peters, et al., 2018, "Deep Contextualized Word Representations", arXiv preprint arXiv:1802.05365.
[27] J. Devlin, M.W. Chang, K. Lee, K. Toutanova, 2018, "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", arXiv preprint arXiv:1810.04805.
[28] A. Vaswani, et al., 2017, "Attention Is All You Need", Proceedings of the 31st International Conference on Neural Information Processing Systems, pp. 6000-6010.
[29] Z. Yang, Z. Dai, Y. Yang, J. Carbonell, R. Salakhutdinov, Q.V. Le, 2019, "XLNet: Generalized Autoregressive Pretraining for Language Understanding", Advances in Neural Information Processing Systems, pp. 5753-5763.
[30] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, V. Stoyanov, 2019, "RoBERTa: A Robustly Optimized BERT Pretraining Approach", arXiv preprint arXiv:1907.11692.
[31] A. Wang, A. Singh, J. Michael, F. Hill, O. Levy, S. Bowman, 2018, "GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding", Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 353-355.
[32] Z. Zhang, X. Han, Z. Liu, X. Jiang, M. Sun, Q. Liu, 2019, "ERNIE: Enhanced Language Representation with Informative Entities", Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 1441-1451.
[33] A. Wang, et al., 2019, "SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems", Advances in Neural Information Processing Systems, pp. 3266-3280.
[34] Z. Lan, M. Chen, S. Goodman, K. Gimpel, P. Sharma, R. Soricut, 2019, "ALBERT: A Lite BERT for Self-supervised Learning of Language Representations", arXiv preprint arXiv:1909.11942.
[35] A.V. Aho, M.J. Corasick, 1975, "Efficient String Matching: An Aid to Bibliographic Search", Communications of the ACM, pp. 333-340.
[36] A. Singhal, 2012, "Introducing the Knowledge Graph: Things, Not Strings". [Online] Available: https://www.blog.google/products/search/introducing-knowledge-graph-things-not/, accessed 2020/07/03.
[37] T. Berners-Lee, 2006, "Linked Data". [Online] Available: https://www.w3.org/DesignIssues/LinkedData.html, accessed 2020/07/03.
[38] X. Chen, S. Jia, Y. Xiang, 2020, "A Review: Knowledge Reasoning over Knowledge Graph", Expert Systems with Applications, Volume 141.
[39] G.A. Miller, 1995, "WordNet: A Lexical Database for English", Communications of the ACM, pp. 39-41.
[40] F.M. Suchanek, G. Kasneci, G. Weikum, 2008, "YAGO: A Large Ontology from Wikipedia and WordNet", Journal of Web Semantics, pp. 203-217.
[41] K. Bollacker, C. Evans, P. Paritosh, T. Sturge, J. Taylor, 2008, "Freebase: A Collaboratively Created Graph Database for Structuring Human Knowledge", Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, pp. 1247-1250.
[42] T. Wu, G. Qi, C. Li, M. Wang, 2018, "A Survey of Techniques for Constructing Chinese Knowledge Graphs and Their Applications", Sustainability, 10(9), 3245.
[43] C. Bizer, T. Heath, T. Berners-Lee, 2009, "Linked Data: The Story So Far", International Journal on Semantic Web and Information Systems, pp. 1-22.
[44] A. Vukotic, et al., 2014, Neo4j in Action, Manning Publications, United States.
[45] "Neo4j Powers Intelligent Commerce for eBay App on Google Assistant". [Online] Available: https://neo4j.com/case-studies/ebay/, accessed 2020/07/03.
[46] "Sociolinguistics in the Internet Age: Text Data Mining Based on SNS" (互聯網時代的社會語言學:基於SNS的文本數據挖掘), 2012. [Online] Available: http://www.matrix67.com/blog/archives/5044, accessed 2020/07/03.
[47] Jieba. [Online] Available: https://github.com/fxsjy/jieba, accessed 2020/07/03.
[48] CkipTagger. [Online] Available: https://github.com/ckiplab/ckiptagger, accessed 2020/07/03.
[49] MONPA. [Online] Available: https://github.com/monpa-team/monpa, accessed 2020/07/03.
[50] HanLP. [Online] Available: https://github.com/hankcs/HanLP, accessed 2020/07/03.
[51] Tornado Technologies Text Miner (龍捲風科技 Text Miner). [Online] Available: https://www.tornado.com.tw/text-miner/, accessed 2020/07/03.