[1] F. Ding, K. Yu, Z. Gu, X. Li, and Y. Shi, "Perceptual enhancement for autonomous vehicles: Restoring visually degraded images for context prediction via adversarial training," IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 7, pp. 9430-9441, 2021.
[2] S. Gu, E. Holly, T. Lillicrap, and S. Levine, "Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates," in 2017 IEEE International Conference on Robotics and Automation (ICRA), 2017, pp. 3389-3396.
[3] A. Gaidon, Q. Wang, Y. Cabon, and E. Vig, "Virtual worlds as proxy for multi-object tracking analysis," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 4340-4349.
[4] G. C. Burdea and P. Coiffet, Virtual Reality Technology. John Wiley & Sons, 2003.
[5] R. T. Azuma, "A survey of augmented reality," Presence: Teleoperators & Virtual Environments, vol. 6, no. 4, pp. 355-385, 1997.
[6] R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction (complete draft in progress). MIT Press, 2017.
[7] C. J. C. H. Watkins, "Learning from delayed rewards," Ph.D. dissertation, University of Cambridge, 1989.
[8] J. Kober, J. A. Bagnell, and J. Peters, "Reinforcement learning in robotics: A survey," The International Journal of Robotics Research, vol. 32, no. 11, pp. 1238-1274, 2013.
[9] V. Mnih et al., "Playing Atari with deep reinforcement learning," arXiv preprint arXiv:1312.5602, 2013.
[10] V. Mnih et al., "Human-level control through deep reinforcement learning," Nature, vol. 518, no. 7540, pp. 529-533, 2015.
[11] H. van Hasselt, A. Guez, and D. Silver, "Deep reinforcement learning with double Q-learning," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 30, no. 1, 2016.
[12] D. Silver et al., "Mastering the game of Go with deep neural networks and tree search," Nature, vol. 529, no. 7587, pp. 484-489, 2016.
[13] A. A. Lydia and F. S. Francis, "Convolutional neural network with an optimized backpropagation technique," in 2019 IEEE International Conference on System, Computation, Automation and Networking (ICSCAN), 2019, pp. 1-5.
[14] S. Albawi, T. A. Mohammed, and S. Al-Zawi, "Understanding of a convolutional neural network," in 2017 International Conference on Engineering and Technology (ICET), 2017, pp. 1-6.
[15] GGWithRabitLIFE, "[Machine Learning ML Note] Reinforcement Learning (DQN Principles)," Medium (in Chinese). [Online]. Available: https://medium.com/%E9%9B%9E%E9%9B%9E%E8%88%87%E5%85%94%E5%85%94%E7%9A%84%E5%B7%A5%E7%A8%8B%E4%B8%96%E7%95%8C/%E6%A9%9F%E5%99%A8%E5%AD%B8%E7%BF%92-ml-note-reinforcement-learning-%E5%BC%B7%E5%8C%96%E5%AD%B8%E7%BF%92-dqn-%E5%AF%A6%E4%BD%9Catari-game-7f9185f833b0 (accessed).
[16] Z. Wang, T. Schaul, M. Hessel, H. van Hasselt, M. Lanctot, and N. de Freitas, "Dueling network architectures for deep reinforcement learning," in International Conference on Machine Learning (ICML), PMLR, 2016, pp. 1995-2003.
[17] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired image-to-image translation using cycle-consistent adversarial networks," in Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2223-2232.
[18] J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P. Abbeel, "Domain randomization for transferring deep neural networks from simulation to the real world," in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017, pp. 23-30.
[19] X. B. Peng, P. Abbeel, S. Levine, and M. van de Panne, "DeepMimic: Example-guided deep reinforcement learning of physics-based character skills," ACM Transactions on Graphics (TOG), vol. 37, no. 4, pp. 1-14, 2018.
[20] T. P. Lillicrap et al., "Continuous control with deep reinforcement learning," arXiv preprint arXiv:1509.02971, 2015.
[21] P. Chang and T. Padir, "Sim2Real2Sim: Bridging the gap between simulation and real world in flexible object manipulation," in 2020 Fourth IEEE International Conference on Robotic Computing (IRC), 2020, pp. 56-62.
[22] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, "Image-to-image translation with conditional adversarial networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 1125-1134.
[23] H.-y. Lee, "GAN Lecture 2 (2017): CycleGAN," YouTube. [Online]. Available: https://www.youtube.com/watch?v=9N_uOIPghuo (accessed).
[24] X. Huang, M.-Y. Liu, S. Belongie, and J. Kautz, "Multimodal unsupervised image-to-image translation," in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 172-189.
[25] Y. Taigman, A. Polyak, and L. Wolf, "Unsupervised cross-domain image generation," arXiv preprint arXiv:1611.02200, 2016.
[26] L. Paull et al., "Duckietown: An open, inexpensive and flexible platform for autonomy education and research," in 2017 IEEE International Conference on Robotics and Automation (ICRA), 2017, pp. 1497-1504.
[27] M. Chevalier-Boisvert, F. Golemo, Y. Cao, B. Mehta, and L. Paull, "Duckietown environments for OpenAI Gym," GitHub. [Online]. Available: https://github.com/duckietown/gym-duckietown (accessed).
[28] ASUS, "RT-AC1200 V2 | Wireless Routers | ASUS Taiwan." [Online]. Available: https://www.asus.com/tw/networking-iot-servers/wifi-routers/asus-wifi-routers/rt-ac1200-v2/techspec/ (accessed).
[29] M. Bojarski et al., "Explaining how a deep neural network trained with end-to-end learning steers a car," arXiv preprint arXiv:1704.07911, 2017.