[1] F. H. Kamaru Zaman, S. A. Che Abdullah, N. A. Razak, J. Johari, I. Pasya, and K. A. Abu Kassim, “Visual-Based Motorcycle Detection using You Only Look Once (YOLO) Deep Network,” IOP Conf. Ser.: Mater. Sci. Eng., vol. 1051, no. 1, p. 012004, 2021, doi: 10.1088/1757-899X/1051/1/012004.
[2] G. Kumarasamy, N. K. Prakash, and P. S. Mohan, “Rider assistance system with an active safety mechanism,” in Proc. 2015 IEEE Int. Conf. Comput. Intell. Comput. Res. (ICCIC), 2015, doi: 10.1109/ICCIC.2015.7435675.
[3] G.-S. Hong, J.-H. Lee, Y.-W. Lee, and B.-G. Kim, “New Vehicle Verification Scheme for Blind Spot Area Based on Imaging Sensor System,” J. Multimed. Inf. Syst., vol. 4, no. 1, pp. 9–18, 2017.
[4] J. Kuwana and M. Itoh, “Dynamic angling side-view mirror for supporting recognition of a vehicle in the blind spot,” in Proc. Int. Conf. Control, Autom. Syst. (ICCAS), 2008, pp. 2913–2918, doi: 10.1109/ICCAS.2008.4694254.
[5] R. E. Izzaty, B. Astuti, and N. Cholimah, “Perancangan Notifikasi Deteksi Kendaraan di Area Blind Spot Kendaraan Berat Berbasis Arduino Uno” [Design of an Arduino Uno-based vehicle detection notification for the blind spot area of heavy vehicles], undergraduate thesis (skripsi), no. 16040032, pp. 5–24, 2019.
[6] M. Schoenherr, M. Grelaud, and A. Hirano, “Side View Assist: The World’s First Rider Assistance System for Two-Wheelers,” SAE Int. J. Veh. Dyn., Stab., NVH, vol. 1, no. 1, pp. 38–43, 2016, doi: 10.4271/2016-32-0052.
[7] L. Bombini, P. Cerri, and P. Medici, “Radar-vision fusion for vehicle detection,” in Proc. Int. Workshop Intell. Transp., 2006, pp. 65–70. [Online]. Available: http://www.ce.unipr.it/people/bertozzi/publications/cr/wit2006-crf-radar.pdf
[8] G. Liu, L. Wang, and S. Zou, “A radar-based blind spot detection and warning system for driver assistance,” in Proc. 2017 IEEE 2nd Adv. Inf. Technol., Electron. Autom. Control Conf. (IAEAC), 2017, pp. 2204–2208, doi: 10.1109/IAEAC.2017.8054409.
[9] F. Zhang, D. Clarke, and A. Knoll, “Vehicle detection based on LiDAR and camera fusion,” in Proc. 17th IEEE Int. Conf. Intell. Transp. Syst. (ITSC), 2014, pp. 1620–1625, doi: 10.1109/ITSC.2014.6957925.
[10] A. Asvadi, L. Garrote, C. Premebida, P. Peixoto, and U. J. Nunes, “Multimodal vehicle detection: fusing 3D-LIDAR and color camera data,” Pattern Recognit. Lett., vol. 115, pp. 20–29, 2018, doi: 10.1016/j.patrec.2017.09.038.
[11] B. Li, T. Zhang, and T. Xia, “Vehicle detection from 3D lidar using fully convolutional network,” in Proc. Robot.: Sci. Syst. (RSS), vol. 12, 2016, doi: 10.15607/rss.2016.xii.042.
[12] H. Yang, Q. Zhou, J. Ni, H. Li, and X. Shen, “Accurate Image-Based Pedestrian Detection with Privacy Preservation,” IEEE Trans. Veh. Technol., vol. 69, no. 12, pp. 14494–14509, 2020.
[13] T. F. Gonzalez, Ed., Handbook of Approximation Algorithms and Metaheuristics. Boca Raton, FL, USA: Chapman & Hall/CRC, 2007, doi: 10.1201/9781420010749.
[14] C. Szegedy et al., “Going deeper with convolutions,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2015, pp. 1–9, doi: 10.1109/CVPR.2015.7298594.
[15] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in Proc. 3rd Int. Conf. Learn. Represent. (ICLR), 2015, pp. 1–14.
[16] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2016, pp. 770–778, doi: 10.1109/CVPR.2016.90.
[17] A. G. Howard et al., “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications,” arXiv preprint, 2017. [Online]. Available: http://arxiv.org/abs/1704.04861
[18] R. Girshick, “Fast R-CNN,” in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), 2015, pp. 1440–1448, doi: 10.1109/ICCV.2015.169.
[19] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 39, no. 6, pp. 1137–1149, 2017, doi: 10.1109/TPAMI.2016.2577031.
[20] W. Liu et al., “SSD: Single shot multibox detector,” in Proc. Eur. Conf. Comput. Vis. (ECCV), Lect. Notes Comput. Sci., vol. 9905, 2016, pp. 21–37, doi: 10.1007/978-3-319-46448-0_2.
[21] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2016, pp. 779–788, doi: 10.1109/CVPR.2016.91.
[22] R. Montanari, L. Minin, and S. Marzani, “Design of Warning Delivery Strategies in Advanced Rider Assistance Systems,” ASME paper WINVR2011-554, pp. 1–10, 2011.
[23] Y. Zhao, L. Bai, Y. Lyu, and X. Huang, “Camera-based blind spot detection with a general purpose lightweight neural network,” Electronics, vol. 8, no. 2, 2019, doi: 10.3390/electronics8020233.
[24] I. C. Chang, W. R. Chen, X. M. Kuo, Y. J. Song, P. H. Liao, and C. Kuo, “An artificial intelligence-based proactive blind spot warning system for motorcycles,” in Proc. Int. Symp. Comput., Consum. Control (IS3C), 2020, pp. 404–407, doi: 10.1109/IS3C50286.2020.00110.
[25] A. P. Nezhad, M. Ghatee, and H. Sajedi, “Blind Spot Warning System based on Vehicle Analysis in Stream Images by a Real-Time Self-Supervised Deep Learning Model,” TechRxiv preprint, 2021, doi: 10.36227/techrxiv.14806290.v1.
[26] N. Wojke, A. Bewley, and D. Paulus, “Simple online and realtime tracking with a deep association metric,” in Proc. IEEE Int. Conf. Image Process. (ICIP), 2017, pp. 3645–3649, doi: 10.1109/ICIP.2017.8296962.
[27] C.-Y. Wang, A. Bochkovskiy, and H.-Y. M. Liao, “YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors,” arXiv preprint, 2022, pp. 1–15. [Online]. Available: http://arxiv.org/abs/2207.02696
[28] K. O’Shea and R. Nash, “An Introduction to Convolutional Neural Networks,” arXiv preprint, 2015, pp. 1–11. [Online]. Available: http://arxiv.org/abs/1511.08458
[29] C. Michaelis et al., “Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming,” arXiv preprint, 2019. [Online]. Available: http://arxiv.org/abs/1907.07484
[30] J. Redmon and A. Farhadi, “YOLOv3: An Incremental Improvement,” arXiv preprint, 2018. [Online]. Available: http://arxiv.org/abs/1804.02767
[31] A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, “YOLOv4: Optimal Speed and Accuracy of Object Detection,” arXiv preprint, 2020. [Online]. Available: http://arxiv.org/abs/2004.10934
[32] M. Megawaty and M. Ulfa, “Decision Support System Methods: A Review,” J. Inf. Syst. Informatics, vol. 2, no. 1, pp. 192–201, 2020, doi: 10.33557/journalisi.v2i1.63.
[33] T. A. Dompeipen and M. E. I. Najoan, “SSD, Mobile-net,” Jur. Tek. Elektro, Univ. Sam Ratulangi, Manado, vol. 16, no. 1, pp. 65–76, 2021.
[34] B. Nath et al., “A Sentiment Analysis of Food Review using Logistic Regression,” vol. 2, no. 7, pp. 251–260, 2017.
[35] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, “The PASCAL Visual Object Classes (VOC) Challenge,” Int. J. Comput. Vis., vol. 88, no. 2, pp. 303–338, 2010, doi: 10.1007/s11263-009-0275-4.
[36] T.-Y. Lin et al., “Microsoft COCO: Common objects in context,” in Proc. Eur. Conf. Comput. Vis. (ECCV), Lect. Notes Comput. Sci., vol. 8693, 2014, pp. 740–755, doi: 10.1007/978-3-319-10602-1_48.
[37] M. S. M. Hashim et al., “Determination of blind spot zone for motorcycles,” IOP Conf. Ser.: Mater. Sci. Eng., vol. 670, no. 1, p. 012075, 2019, doi: 10.1088/1757-899X/670/1/012075.
[38] E. Trow, “Stayin’ Safe: Proper Motorcycle Mirror Positioning,” Rider Magazine, Sep. 28, 2018. [Online]. Available: https://ridermagazine.com/2018/09/28/proper-motorcycle-mirror-positioning/