[1] “中光電智能機器人 Coretronic Intelligent Robotics (CIRC)”, available from: https://www.coretronic-robotics.com/tw/product2.
[2] “Bear Robotics”, available from: https://www.bearrobotics.ai/product.
[3] “Automated Guided Vehicle (AGV) - TECO”, available from: https://www.teco.com.tw/en/products/agv.
[4] H. van Hasselt, A. Guez, and D. Silver, 2016, “Deep reinforcement learning with double Q-learning”, Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, pp. 2094–2100, February.
[5] Z. Wang, T. Schaul, M. Hessel, H. van Hasselt, M. Lanctot, and N. de Freitas, 2016, “Dueling network architectures for deep reinforcement learning”, Proceedings of the 33rd International Conference on Machine Learning, vol. 48, pp. 1995–2003, June.
[6] Y. Wang and T. Jiang, 2021, “Design and PLC Realization of Elevator Control Based on LOOK Algorithm”, Journal of Physics: Conference Series, vol. 1754, p. 012082, February.
[7] “RMF”, available from: https://osrf.github.io/ros2multirobotbook/intro.html#robotics-middleware-framework-rmf.
[8] A. A. Abdulla, H. Liu, N. Stoll, and K. Thurow, 2016, “An automated elevator management and multi-floor estimation for indoor mobile robot transportation based on a pressure sensor”, 17th International Conference on Mechatronics - Mechatronika (ME), pp. 1–7, December.
[9] J.-G. Kang, S.-Y. An, and S.-Y. Oh, 2007, “Navigation strategy for the service robot in the elevator environment”, International Conference on Control, Automation and Systems, pp. 1092–1097, October.
[10] K. T. Islam, G. Mujtaba, R. G. Raj, and H. F. Nweke, 2017, “Elevator button and floor number recognition through hybrid image classification approach for navigation of service robot in buildings”, International Conference on Engineering Technology and Technopreneurship (ICE2T), pp. 1–4, September.
[11] X. Yuan, L. Buşoniu, and R. Babuška, 2008, “Reinforcement Learning for Elevator Control”, IFAC Proceedings Volumes, vol. 41, no. 2, pp. 2212–2217, July.
[12] M. Brand and D. Nikovski, 2004, “Risk-Averse Group Elevator Scheduling”, April.
[13] W. Liu, N. Liu, H. Sun, G. Xing, Y. Dong, and H. Chen, 2013, “Dispatching algorithm design for elevator group control system with Q-learning based on a recurrent neural network”, 25th Chinese Control and Decision Conference (CCDC), pp. 3397–3402, May.
[14] R. H. Crites and A. G. Barto, 1998, “Elevator Group Control Using Multiple Reinforcement Learning Agents”, Machine Learning, vol. 33, no. 2, pp. 235–262, November.
[15] 楊聖智, 2002, “Applying Genetic Algorithms to Elevator Group Control Systems” (in Chinese), Master's thesis, Graduate Institute of Information and Computer Education, National Taiwan Normal University.
[16] A. Haj-Ali, N. K. Ahmed, T. Willke, J. Gonzalez, K. Asanovic, and I. Stoica, 2019, “A View on Deep Reinforcement Learning in System Optimization”, arXiv, September.
[17] S. Macenski, T. Foote, B. Gerkey, C. Lalancette, and W. Woodall, 2022, “Robot Operating System 2: Design, architecture, and uses in the wild”, Science Robotics, vol. 7, no. 66, p. eabm6074, May.
[18] “Gazebo”, available from: http://gazebosim.org/.
[19] “RViz”, available from: http://wiki.ros.org/rviz.
[20] “ROS2 topics”, available from: https://docs.ros.org/en/foxy/Tutorials/Beginner-CLI-Tools/Understanding-ROS2-Topics/Understanding-ROS2-Topics.html.
[21] “ROS2 services”, available from: https://docs.ros.org/en/foxy/Tutorials/Beginner-CLI-Tools/Understanding-ROS2-Services/Understanding-ROS2-Services.html.
[22] “ROS2 actions”, available from: https://docs.ros.org/en/foxy/Tutorials/Beginner-CLI-Tools/Understanding-ROS2-Actions/Understanding-ROS2-Actions.html.
[23] “Multicast”, available from: https://en.wikipedia.org/w/index.php?title=Multicast&oldid=1071824741.
[24] “RTPS”, available from: https://en.wikipedia.org/w/index.php?title=RTPS&oldid=933098532.
[25] “DDS API”, available from: https://fast-dds.docs.eprosima.com/en/latest/.
[26] Y. Maruyama, S. Kato, and T. Azumi, 2016, “Exploring the performance of ROS2”, Proceedings of the 13th International Conference on Embedded Software, pp. 1–10, October.
[27] “Open Robotics”, available from: https://www.openrobotics.org.
[28] “RMF Web”, available from: https://osrf.github.io/ros2multirobotbook/rmf-web.html.
[29] “Markov chain”, available from: https://en.wikipedia.org/w/index.php?title=Markov_chain&oldid=1087473341.
[30] C. J. C. H. Watkins and P. Dayan, 1992, “Q-learning”, Machine Learning, vol. 8, no. 3, pp. 279–292, May.
[31] V. Mnih et al., 2013, “Playing Atari with Deep Reinforcement Learning”, arXiv, December.
[32] H. van Hasselt, 2010, “Double Q-learning”, Advances in Neural Information Processing Systems, vol. 23, June.
[33] “Pareto principle”, available from: https://en.wikipedia.org/w/index.php?title=Pareto_principle&oldid=1083488474.