Azuma, R. T. (1997), “A survey of augmented reality”, Presence: Teleoperators and Virtual Environments, Vol. 6, No. 4, pp. 355-385.
Bimber, O. & Raskar, R. (2005), “Spatial Augmented Reality: Merging Real and Virtual Worlds”, CRC Press.
Bochkovskiy, A., Wang, C.-Y. & Liao, H.-Y. M. (2020), “YOLOv4: Optimal speed and accuracy of object detection”, arXiv preprint arXiv:2004.10934.
Calonder, M., Lepetit, V., Strecha, C. & Fua, P. (2010), “BRIEF: Binary robust independent elementary features”. Paper presented at the European Conference on Computer Vision, pp. 778-792.
Chen, Q., Zhuo, Z. & Wang, W. (2019), “BERT for joint intent classification and slot filling”, arXiv preprint arXiv:1902.10909.
Devlin, J., Chang, M.-W., Lee, K. & Toutanova, K. (2018), “BERT: Pre-training of deep bidirectional transformers for language understanding”. Paper presented at the Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol. 1, pp. 4171-4186.
Faccio, M., Ferrari, E., Galizia, F. G., Gamberi, M. & Pilati, F. (2019), “Real-time assistance to manual assembly through depth camera and visual feedback”, Procedia CIRP, Vol. 81, pp. 1254-1259.
Fiorentino, M., Uva, A. E., Gattullo, M., Debernardis, S. & Monno, G. (2014), “Augmented reality on large screen for interactive maintenance instructions”, Computers in Industry, Vol. 65, No. 2, pp. 270-278.
Hochreiter, S. & Schmidhuber, J. (1997), “Long short-term memory”, Neural Computation, Vol. 9, No. 8, pp. 1735-1780.
Lai, Z.-H., Tao, W., Leu, M. C. & Yin, Z. (2020), “Smart augmented reality instructional system for mechanical assembly towards worker-centered intelligent manufacturing”, Journal of Manufacturing Systems, Vol. 55, pp. 69-81.
Lowe, D. G. (2004), “Distinctive image features from scale-invariant keypoints”, International Journal of Computer Vision, Vol. 60, No. 2, pp. 91-110.
Malinowski, M. & Fritz, M. (2014), “A multi-world approach to question answering about real-world scenes based on uncertain input”. Paper presented at the Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, pp. 1682-1690.
Milgram, P., Takemura, H., Utsumi, A. & Kishino, F. (1994), “Augmented reality: A class of displays on the reality-virtuality continuum”, International Society for Optics and Photonics, Vol. 2351, pp. 282-292.
Prabhavalkar, R., Rao, K., Sainath, T. N., Li, B., Johnson, L. & Jaitly, N. (2017), “A comparison of sequence-to-sequence models for speech recognition”. Paper presented at Interspeech, pp. 939-943.
Rosten, E. & Drummond, T. (2006), “Machine learning for high-speed corner detection”. Paper presented at the European Conference on Computer Vision, pp. 430-443.
Rublee, E., Rabaud, V., Konolige, K. & Bradski, G. (2011), “ORB: An efficient alternative to SIFT or SURF”. Paper presented at the 2011 International Conference on Computer Vision, pp. 2564-2571.
Shao, C. C., Liu, T., Lai, Y., Tseng, Y. & Tsai, S. (2018), “DRCD: A Chinese machine reading comprehension dataset”, arXiv preprint arXiv:1806.00920.
Sutskever, I., Vinyals, O. & Le, Q. V. (2014), “Sequence to sequence learning with neural networks”. Paper presented at the Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, pp. 3104-3112.
Vinyals, O., Toshev, A., Bengio, S. & Erhan, D. (2015), “Show and tell: A neural image caption generator”. Paper presented at the Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3156-3164.
Wu, Q., Wang, P., Shen, C., Dick, A. & Van Den Hengel, A. (2016a), “Ask me anything: Free-form visual question answering based on knowledge from external sources”. Paper presented at the Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4622-4630.
Wu, Y., Schuster, M., Chen, Z., Le, Q. V., Norouzi, M., Macherey, W. & Macherey, K. (2016b), “Google's neural machine translation system: Bridging the gap between human and machine translation”, arXiv preprint arXiv:1609.08144.
Xiang, Y., Schmidt, T., Narayanan, V. & Fox, D. (2017), “PoseCNN: A convolutional neural network for 6D object pose estimation in cluttered scenes”, arXiv preprint arXiv:1711.00199.
Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhudinov, R. & Bengio, Y. (2015), “Show, attend and tell: Neural image caption generation with visual attention”. Paper presented at the International Conference on Machine Learning, pp. 2048-2057.
Zheng, L., Liu, X., An, Z., Li, S. & Zhang, R. (2020), “A smart assistance system for cable assembly by combining wearable augmented reality with portable visual inspection”, Virtual Reality & Intelligent Hardware, Vol. 2, No. 1, pp. 12-27.