|
[1] J. McCarthy, M. L. Minsky, N. Rochester, and C. E. Shannon, “A proposal for the dartmouth summer research project on artificial intelligence, august 31, 1955,” AI magazine, vol. 27, no. 4, pp. 12–12, 2006. [2] R. A. Kowalski, “The early years of logic programming,” Communications of the ACM, vol. 31, no. 1, pp. 38–43, 1988. [3] K. Kahn and N. Winters, “Constructionism and ai: A history and possible futures,” British Journal of Educational Technology, vol. 52, no. 3, pp. 1130 1142, 2021. [4] A. W. Sadek, “Artificial intelligence applications in transportation,” Transportation Research Circular, pp. 1–7, 2007. [5] T. C. Lin, “Artificial intelligence, finance, and the law,” Fordham L. Rev., vol. 88, p. 531, 2019. [6] T. Panch, H. Mattie, and L. A. Celi, “The ¡§inconvenient truth¡¨ about ai in healthcare,” NPJ digital medicine, vol. 2, no. 1, p. 77, 2019. [7] B.-h. Li, B.-c. Hou, W.-t. Yu, X.-b. Lu, and C.-w. Yang, “Applications of artificial intelligence in intelligent manufacturing: a review,” Frontiers of Information Technology & Electronic Engineering, vol. 18, pp. 86–96, 2017. [8] Y. Liu, X. Ma, L. Shu, G. P. Hancke, and A. M. Abu-Mahfouz, “From industry 4.0 to agriculture 4.0: Current status, enabling technologies, and research challenges,” IEEE Transactions on Industrial Informatics, vol. 17, no. 6, pp. 4322–4334, 2020. [9] M. A. Chaudhry and E. Kazim, “Artificial intelligence in education (aied): A high-level academic and industry note 2021,” AI and Ethics, pp. 1–9, 2022. [10] L. L. Har, U. K. Rashid, L. Te Chuan, S. C. Sen, and L. Y. Xia, “Revolution of retail industry: from perspective of retail 1.0 to 4.0,” Procedia Computer Science, vol. 200, pp. 1615–1625, 2022. [11] G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” science, vol. 313, no. 5786, pp. 504–507, 2006. [12] L. Fei-Fei, J. Deng, and K. Li, “Imagenet: Constructing a large-scale image database,” Journal of vision, vol. 9, no. 8, pp. 1037–1037, 2009. [13] A. Lally and P. Fodor, “Natural language processing with prolog in the ibm watson system,” The Association for Logic Programming (ALP) Newsletter, vol. 9, p. 2011, 2011. [14] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” Advances in neural information processing systems, vol. 25, pp. 1097–1105, 2012. [15] J. Powles and H. Hodson, “Google deepmind and healthcare in an age of algorithms,” Health and technology, vol. 7, no. 4, pp. 351–367, 2017. [16] L. Ma and Y. Zhang, “Using word2vec to process big text data,” in 2015 IEEE International Conference on Big Data (Big Data). IEEE, 2015, pp. 2895–2897. [17] F.-Y. Wang, J. J. Zhang, X. Zheng, X. Wang, Y. Yuan, X. Dai, J. Zhang, and L. Yang, “Where does alphago go: From church-turing thesis to alphago thesis and beyond,” IEEE/CAA Journal of Automatica Sinica, vol. 3, no. 2, pp. 113–120, 2016. [18] H. D. Cheng, X. H. Jiang, Y. Sun, and J. Wang, “Color image segmentation: advances and prospects,” Pattern recognition, vol. 34, no. 12, pp. 2259–2281, 2001. [19] N. Plath, M. Toussaint, and S. Nakajima, “Multi-class image segmentation using conditional random fields and global classification,” in Proceedings of the 26th annual international conference on machine learning, 2009, pp. 817–824. [20] N. Dhanachandra, K. Manglem, and Y. J. Chanu, “Image segmentation using k-means clustering algorithm and subtractive clustering algorithm,” Procedia Computer Science, vol. 54, pp. 764–771, 2015. [21] P. Arbelaez, B. Hariharan, C. Gu, S. Gupta, L. Bourdev, and J. Malik, “Semantic segmentation using regions and parts,” pp. 3378–3385, 2012. [22] N. Silberman, D. Sontag, and R. Fergus, “Instance segmentation of indoor scenes using a coverage loss,” pp. 616–631, 2014. [23] Z. Zhang, A. G. Schwing, S. Fidler, and R. Urtasun, “Monocular object instance segmentation and depth ordering with cnns,” in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 2614–2622. [24] A. Kirillov, K. He, R. Girshick, C. Rother, and P. Dollár, “Panoptic segmentation,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2019, pp. 9404–9413. [25] S. Sra, S. Nowozin, and S. J. Wright, Optimization for machine learning. Mit Press, 2012. [26] S. Khoram and J. Li, “Adaptive quantization of neural networks,” in International Conference on Learning Representations, 2018. [27] D. Banik, A. Ekbal, and P. Bhattacharyya, “Machine learning based optimized pruning approach for decoding in statistical machine translation,” IEEE Access, vol. 7, pp. 1736–1751, 2018. [28] A. Heidari, M. A. Jabraeil Jamali, N. Jafari Navimipour, and S. Akbarpour, “Deep qlearning technique for offloading offline/online computation in blockchain-enabled green iot-edge scenarios,” Applied Sciences, vol. 12, no. 16, p. 8232, 2022. [29] A. Heidari, S. Toumaj, N. J. Navimipour, and M. Unal, “A privacy-aware method for covid-19 detection in chest ct images using lightweight deep conventional neural network and blockchain,” Computers in Biology and Medicine, vol. 145, p. 105461, 2022. [30] C. P. Filho, E. Marques Jr, V. Chang, L. Dos Santos, F. Bernardini, P. F. Pires, L. Ochi, and F. C. Delicato, “A systematic literature review on distributed machine learning in edge computing,” Sensors, vol. 22, no. 7, p. 2665, 2022. [31] C.-F. Lin and S.-D. Wang, “Fuzzy support vector machines,” IEEE transactions on neural networks, vol. 13, no. 2, pp. 464–471, 2002. [32] F. Schroff, A. Criminisi, and A. Zisserman, “Object class segmentation using random forests.” in BMVC, 2008, pp. 1–10. [33] Y. Gao, Y. Shao, J. Lian, A. Z. Wang, R. C. Chen, and D. Shen, “Accurate segmentation of ct male pelvic organs via regression-based deformable models and multi-task random forests,” IEEE transactions on medical imaging, vol. 35, no. 6, pp. 1532–1543, 2016. [34] F. Han and S.-C. Zhu, “Bottom-up/top-down image parsing with attribute grammar,” IEEE transactions on pattern analysis and machine intelligence, vol. 31, no. 1, pp. 59–73, 2008. [35] S.-C. Zhu, D. Mumford et al., “A stochastic grammar of images. foundations and trends®,” Computer Graphics and Vision, 2007. [36] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 3431–3440. [37] W. Sun and R. Wang, “Fully convolutional networks for semantic segmentation of very high resolution remotely sensed images combined with dsm,” IEEE Geoscience and Remote Sensing Letters, vol. 15, no. 3, pp. 474–478, 2018. [38] F. Visin, M. Ciccone, A. Romero, K. Kastner, K. Cho, Y. Bengio, M. Matteucci, and A. Courville, “Reseg: A recurrent neural network-based model for semantic segmentation,” in Proceedings of the IEEE conference on computer vision and pattern recognition workshops, 2016, pp. 41–48. [39] F. Visin, K. Kastner, K. Cho, M. Matteucci, A. Courville, and Y. Bengio, “Renet: A recurrent neural network based alternative to convolutional networks,” arXiv preprint arXiv:1505.00393, 2015. [40] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, “Pyramid scene parsing network,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 2881–2890. [41] G. Lin, A. Milan, C. Shen, and I. Reid, “Refinenet: Multi-path refinement networks for high-resolution semantic segmentation,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 1925–1934. [42] C. Peng, X. Zhang, G. Yu, G. Luo, and J. Sun, “Large kernel matters–improve semantic segmentation by global convolutional network,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 4353–4361. [43] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs,” IEEE transactions on pattern analysis and machine intelligence, vol. 40, no. 4, pp. 834–848, 2017. [44] T. Zhou, W. Wang, E. Konukoglu, and L. Van Gool, “Rethinking semantic segmentation: A prototype view,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 2582–2593. [45] L.-C. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam, “Encoder-decoder with atrous separable convolution for semantic image segmentation,” in Proceedings of the European conference on computer vision (ECCV), 2018, pp. 801–818. [46] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18. Springer, 2015, pp. 234–241. [47] Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, “3d u-net: learning dense volumetric segmentation from sparse annotation,” in Medical Image Computing and Computer-Assisted Intervention–MICCAI 2016: 19th International Conference, Athens, Greece, October 17-21, 2016, Proceedings, Part II 19. Springer, 2016, pp.424–432. [48] X. Li, H. Chen, X. Qi, Q. Dou, C.-W. Fu, and P.-A. Heng, “H-denseunet: hybrid densely connected unet for liver and tumor segmentation from ct volumes,” IEEE transactions on medical imaging, vol. 37, no. 12, pp. 2663–2674, 2018. [49] H. Huang, L. Lin, R. Tong, H. Hu, Q. Zhang, Y. Iwamoto, X. Han, Y.-W. Chen, and J. Wu, “Unet 3+: A full-scale connected unet for medical image segmentation,” in ICASSP 2020-2020 IEEE international conference on acoustics, speech and signal processing (ICASSP). IEEE, 2020, pp. 1055–1059. [50] F. Bougourzi, C. Distante, F. Dornaika, and A. Taleb-Ahmed, “Pdatt-unet: Pyramid dualdecoder attention unet for covid-19 infection segmentation from ct-scans,” Medical Image Analysis, vol. 86, p. 102797, 2023. [51] H. Cao, Y. Wang, J. Chen, D. Jiang, X. Zhang, Q. Tian, and M. Wang, “Swin-unet: Unet-like pure transformer for medical image segmentation,” in European conference on computer vision. Springer, 2022, pp. 205–218. [52] V. Badrinarayanan, A. Kendall, and R. Cipolla, “Segnet: A deep convolutional encoderdecoder architecture for image segmentation,” IEEE transactions on pattern analysis and machine intelligence, vol. 39, no. 12, pp. 2481–2495, 2017. [53] W. Wang, T. Zhou, F. Yu, J. Dai, E. Konukoglu, and L. Van Gool, “Exploring cross-image pixel contrast for semantic segmentation,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 7303–7313. [54] T. Zhou, L. Li, G. Bredell, J. Li, J. Unkelbach, and E. Konukoglu, “Volumetric memory network for interactive medical image segmentation,” Medical Image Analysis, vol. 83, p. 102599, 2023. [55] J. Wang, K. Sun, T. Cheng, B. Jiang, C. Deng, Y. Zhao, D. Liu, Y. Mu, M. Tan, X. Wang et al., “Deep high-resolution representation learning for visual recognition,” IEEE transactions on pattern analysis and machine intelligence, vol. 43, no. 10, pp. 3349–3364, 2020. [56]L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “Semantic image segmentation with deep convolutional nets and fully connected crfs,” arXiv preprint arXiv:1412.7062, 2014. [57] H. Noh, S. Hong, and B. Han, “Learning deconvolution network for semantic segmentation,” in Proceedings of the IEEE international conference on computer vision, 2015, pp. 1520–1528. [58] D. Eigen and R. Fergus, “Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture,” in Proceedings of the IEEE international conference on computer vision, 2015, pp. 2650–2658. [59] G. Papandreou, L.-C. Chen, K. P. Murphy, and A. L. Yuille, “Weakly-and semisupervised learning of a deep convolutional network for semantic image segmentation,” in Proceedings of the IEEE international conference on computer vision, 2015, pp. 1742–1750. [60] N. Hft, H. Schulz, and S. Behnke, “Fast semantic segmentation of rgb-d scenes with gpu-accelerated deep neural networks,” in Joint German/Austrian Conference on Artificial Intelligence (Künstliche Intelligenz). Springer, 2014, pp. 80–85. [61] R. Strudel, R. Garcia, I. Laptev, and C. Schmid, “Segmenter: Transformer for semantic segmentation,” in Proceedings of the IEEE/CVF international conference on computer vision, 2021, pp. 7262–7272. [62] A. Hatamizadeh, Y. Tang, V. Nath, D. Yang, A. Myronenko, B. Landman, H. R. Roth, and D. Xu, “Unetr: Transformers for 3d medical image segmentation,” in Proceedings of the IEEE/CVF winter conference on applications of computer vision, 2022, pp. 574–584. [63] A. Tigadi, R. Gujanatti, A. Gonchi, and B. Klemsscet, “Advanced driver assistance systems,” International Journal of Engineering Research and General Science, vol. 4, no. 3, pp. 151–158, 2016. [64] F. Jiang, A. Grigorev, S. Rho, Z. Tian, Y. Fu, W. Jifara, K. Adil, and S. Liu, “Medical image semantic segmentation based on deep learning,” Neural Computing and Applications, vol. 29, no. 5, pp. 1257–1265, 2018. [65] T. Cane and J. Ferryman, “Evaluating deep semantic segmentation networks for object detection in maritime surveillance,” in 2018 15th IEEE international conference on advanced video and signal based surveillance (AVSS). IEEE, 2018, pp. 1–6. [66] L. Schmarje, M. Santarossa, S.-M. Schröder, and R. Koch, “A survey on semi-self-and unsupervised learning for image classification,” IEEE Access, vol. 9, pp. 82 146–82 168, 2021. [67] X. Ran, X. Zhou, M. Lei, W. Tepsan, and W. Deng, “A novel k-means clustering algorithm with a noise algorithm for capturing urban hotspots,” Applied Sciences, vol. 11, no. 23, p. 11202, 2021. [68] J. Kaur, S. Agrawal, and R. Vig, “A comparative analysis of thresholding and edge detection segmentation techniques,” International journal of computer applications, vol. 39,no. 15, pp. 29–34, 2012. [69] B. Koonce and B. Koonce, “Vgg network,” Convolutional Neural Networks with Swift for Tensorflow: Image Recognition and Dataset Categorization, pp. 35–50, 2021. [70] P. Tang, H. Wang, and S. Kwong, “G-ms2f: Googlenet based multi-stage feature fusion of deep cnn for scene recognition,” Neurocomputing, vol. 225, pp. 188–197, 2017. [71] G. Du, X. Cao, J. Liang, X. Chen, and Y. Zhan, “Medical image segmentation based on u-net: A review.” Journal of Imaging Science & Technology, vol. 64, no. 2, 2020. [72] A. Fabijańska, “Segmentation of corneal endothelium images using a u-net-based convolutional neural network,” Artificial intelligence in medicine, vol. 88, pp. 1–13, 2018. [73] B. Yu, F. Chen, C. Xu, L. Wang, and N. Wang, “Matrix segnet: a practical deep learning framework for landslide mapping from images of different areas with different spatial resolutions,” Remote Sensing, vol. 13, no. 16, p. 3158, 2021. [74] J. Wang, K. Sun, T. Cheng, B. Jiang, C. Deng, Y. Zhao, D. Liu, Y. Mu, M. Tan, X. Wang et al., “Deep high-resolution representation learning for visual recognition,” IEEE transactions on pattern analysis and machine intelligence, 2020. [75] C. Liu, H. Ding, and X. Jiang, “Towards enhancing fine-grained details for image matting,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2021, pp. 385–393. [76] S. Saito, T. Simon, J. Saragih, and H. Joo, “Pifuhd: Multi-level pixel-aligned implicit function for high-resolution 3d human digitization,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 84–93. [77] S. Mo, Y. Zhu, N. Zabaras, X. Shi, and J. Wu, “Deep convolutional encoder-decoder networks for uncertainty quantification of dynamic multiphase flow in heterogeneous media,” Water Resources Research, vol. 55, no. 1, pp. 703–728, 2019. [78] X. Zhou, Y. Hu, W. Liang, J. Ma, and Q. Jin, “Variational lstm enhanced anomaly detection for industrial big data,” IEEE Transactions on Industrial Informatics, vol. 17, no. 5, pp. 3469–3477, 2020. [79] L.-P. Wong, M. Y. H. Low, and C. S. Chong, “An efficient bee colony optimization algorithm for traveling salesman problem using frequency-based pruning,” in 2009 7th IEEE International Conference on Industrial Informatics. IEEE, 2009, pp. 775–782. [80] V. Sze, Y.-H. Chen, T.-J. Yang, and J. S. Emer, “Efficient processing of deep neural networks: A tutorial and survey,” Proceedings of the IEEE, vol. 105, no. 12, pp. 2295–2329,2017. [81] M. Denil, B. Shakibi, L. Dinh, M. Ranzato, and N. De Freitas, “Predicting parameters in deep learning,” Advances in neural information processing systems, vol. 26, 2013. [82] D. Banik, A. Ekbal, and P. Bhattacharyya, “Machine learning based optimized pruning approach for decoding in statistical machine translation,” IEEE Access, vol. 7, pp. 1736–1751, 2018. [83] Y. He, X. Dong, G. Kang, Y. Fu, C. Yan, and Y. Yang, “Asymptotic soft filter pruning for deep convolutional neural networks,” IEEE transactions on cybernetics, vol. 50, no. 8, pp. 3594–3604, 2019. [84] J. Liu, B. Zhuang, Z. Zhuang, Y. Guo, J. Huang, J. Zhu, and M. Tan, “Discrimination aware network pruning for deep model compression,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 8, pp. 4035–4051, 2021. [85] J.-Y. Wu, C. Yu, S.-W. Fu, C.-T. Liu, S.-Y. Chien, and Y. Tsao, “Increasing compactness of deep learning based speech enhancement models with parameter pruning and quantization techniques,” IEEE Signal Processing Letters, vol. 26, no. 12, pp. 1887–1891, 2019. [86] T. Choudhary, V. Mishra, A. Goswami, and J. Sarangapani, “Inference-aware convolutional neural network pruning,” Future Generation Computer Systems, vol. 135, pp.44–56, 2022. [87] T. Zhang, S. Ye, K. Zhang, J. Tang, W. Wen, M. Fardad, and Y. Wang, “A systematic dnn weight pruning framework using alternating direction method of multipliers,” in Proceedings of the European conference on computer vision (ECCV), 2018, pp. 184–199. [88] J. Ye, X. Lu, Z. Lin, and J. Z. Wang, “Rethinking the smaller-norm-less-informative assumption in channel pruning of convolution layers,” arXiv preprint arXiv:1802.00124,2018. [89] D. Blalock, J. J. Gonzalez Ortiz, J. Frankle, and J. Guttag, “What is the state of neural network pruning,” Proceedings of machine learning and systems, vol. 2, pp. 129–146,2020. [90] Y. Lin, Y. Tu, and Z. Dou, “An improved neural network pruning technology for automatic modulation classification in edge devices,” IEEE Transactions on Vehicular Technology, vol. 69, no. 5, pp. 5703–5706, 2020. [91] W. Hu, Z. Che, N. Liu, M. Li, J. Tang, C. Zhang, and J. Wang, “: Channel pruning via class-aware trace ratio optimization,” IEEE Transactions on Neural Networks and Learning Systems, 2023. [92] J. Gou, B. Yu, S. J. Maybank, and D. Tao, “Knowledge distillation: A survey,” International Journal of Computer Vision, vol. 129, pp. 1789–1819, 2021. [93] N. Kishore Kumar and J. Schneider, “Literature survey on low rank approximation of matrices,” Linear and Multilinear Algebra, vol. 65, no. 11, pp. 2212–2244, 2017. [94] S. Ravanbakhsh, J. Schneider, and B. Poczos, “Equivariance through parameter-sharing,” in International conference on machine learning. PMLR, 2017, pp. 2892–2901. [95] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998. [96] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural computation, vol. 9, no. 8, pp. 1735–1780, 1997. [97] S. Gupta, A. Agrawal, K. Gopalakrishnan, and P. Narayanan, “Deep learning with limited numerical precision,” in International conference on machine learning. PMLR, 2015, pp. 1737–1746. [98] B. Jacob, S. Kligys, B. Chen, M. Zhu, M. Tang, A. Howard, H. Adam, and D. Kalenichenko, “Quantization and training of neural networks for efficient integer arithmetic-only inference,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 2704–2713. [99] K. Liu, E. Fridman, and K. H. Johansson, “Dynamic quantization of uncertain linear networked control systems,” Automatica, vol. 59, pp. 248–255, 2015. [100] H. Fan, G. Wang, M. Ferianc, X. Niu, and W. Luk, “Static block floating-point quantization for convolutional neural networks on fpga,” in 2019 International Conference on Field-Programmable Technology (ICFPT). IEEE, 2019, pp. 28–35. [101] J. Fang, A. Shafiee, H. Abdel-Aziz, D. Thorsley, G. Georgiadis, and J. H. Hassoun, “Posttraining piecewise linear quantization for deep neural networks,” in Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part II 16. Springer, 2020, pp. 69–86. [102] M. Kirtas, A. Oikonomou, N. Passalis, G. Mourgias-Alexandris, M. Moralis-Pegios, N. Pleros, and A. Tefas, “Quantization-aware training for low precision photonic neural networks,” Neural Networks, vol. 155, pp. 561–573, 2022. [103] J. L. Rosa, A. Robin, M. Silva, C. A. Baldan, and M. P. Peres, “Electrodeposition of copper on titanium wires: Taguchi experimental design approach,” Journal of materials processing technology, vol. 209, no. 3, pp. 1181–1188, 2009. [104] S. Athreya and Y. Venkatesh, “Application of taguchi method for optimization of process parameters in improving the surface roughness of lathe facing operation,” International Refereed Journal of Engineering and Science, vol. 1, no. 3, pp. 13–19, 2012. [105] M. Everingham, S. A. Eslami, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman, “The pascal visual object classes challenge: A retrospective,” International journal of computer vision, vol. 111, pp. 98–136, 2015. [106] G. J. Brostow, J. Fauqueur, and R. Cipolla, “Semantic object classes in video: A high-definition ground truth database,” Pattern Recognition Letters, vol. 30, no. 2, pp. 88–97,2009. [107] C. Farabet, C. Couprie, L. Najman, and Y. LeCun, “Scene parsing with multiscale feature learning, purity trees, and optimal covers,” arXiv preprint arXiv:1202.2160, 2012. [108] D. Grangier, L. Bottou, and R. Collobert, “Deep convolutional networks for scene parsing,” in ICML 2009 deep learning workshop, vol. 3, no. 6, 2009, p. 109. [109] C. Gatta, A. Romero, and J. van de Veijer, “Unrolling loopy top-down semantic feedback in convolutional deep networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2014, pp. 498–505. [110] L. Wen, X. Li, X. Li, and L. Gao, “A new transfer learning based on vgg-19 network for fault diagnosis,” in 2019 IEEE 23rd international conference on computer supported cooperative work in design (CSCWD). IEEE, 2019, pp. 205–209. [111] D. Eigen, C. Puhrsch, and R. Fergus, “Depth map prediction from a single image using a multi-scale deep network,” Advances in neural information processing systems, vol. 27,2014. [112] K. Simonyan and A. Zisserman, “Very deep convnets for large-scale image recognition,” Computing Research Repository, 2014. [113] S. Liu, D. Huang et al., “Receptive field block net for accurate and fast object detection,” in Proceedings of the European conference on computer vision (ECCV), 2018, pp. 385–400. [114] R. Kukunuri, A. Aglawe, J. Chauhan, K. Bhagtani, R. Patil, S. Walia, and N. Batra, “Edgenilm: towards nilm on edge devices,” in Proceedings of the 7th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation, 2020, pp. 90–99. [115] L. Ma, S. Yi, N. Carter, and Q. Li, “Efficient live migration of edge services leveraging container layered storage,” IEEE Transactions on Mobile Computing, vol. 18, no. 9, pp. 2020–2033, 2018. [116] J. Hou and J.-E. Schmitt, “All change in edge computing: As amd buys xilinx and nvidia acquires arm, we ask. two industry experts what this could mean for the vision sector.” Imaging and Machine Vision Europe, no. 102, pp. 12–14, 2020. [117] R. Szeliski, “Computer vision: algorithms and applications.” Springer Science & Business Media, 2010. [118] W. Wang, E. Xie, X. Li, D.-P. Fan, K. Song, D. Liang, T. Lu, P. Luo, and L. Shao, “Pvtv2: Improved baselines with pyramid vision transformer,” Computational Visual Media, vol. 8, no. 3, pp. 415–424, 2022. [119] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo, “Swin transformer: Hierarchical vision transformer using shifted windows,” in Proceedings of the IEEE/CVF international conference on computer vision, 2021, pp. 10 012–10 022. [120] J. Qiu, J. Wang, S. Yao, K. Guo, B. Li, E. Zhou, J. Yu, T. Tang, N. Xu, S. Song et al., “Going deeper with embedded fpga platform for convolutional neural network,” in Proceedings of the 2016 ACM/SIGDA international symposium on field-programmable gate arrays, 2016, pp. 26–35. [121] Y. Wang, J. Xu, Y. Han, H. Li, and X. Li, “Deepburning: Automatic generation of FPGA-based learning accelerators for the neural network family,” in Proceedings of the 53rd Annual Design Automation Conference, 2016, pp. 1–6. [122] N. Suda, V. Chandra, G. Dasika, A. Mohanty, Y. Ma, S. Vrudhula, J.-s. Seo, and Y. Cao,“Throughput-optimized opencl-based fpga accelerator for large-scale convolutional neural networks,” in Proceedings of the 2016 ACM/SIGDA international symposium on field-programmable gate arrays, 2016, pp. 16–25. [123] S. I. Venieris and C.-S. Bouganis, “fpgaconvnet: A framework for mapping convolutional neural networks on fpgas,” in 2016 IEEE 24th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM). IEEE, 2016, pp. 40–47. [124] Z. Liu, Y. Dou, J. Jiang, J. Xu, S. Li, Y. Zhou, and Y. Xu, “Throughput-optimized fpga accelerator for deep convolutional neural networks,” ACM Transactions on Reconfigurable Technology and Systems (TRETS), vol. 10, no. 3, pp. 1–23, 2017. [125] Y. Guan, H. Liang, N. Xu, W. Wang, S. Shi, X. Chen, G. Sun, W. Zhang, and J. Cong, “Fp-dnn: An automated framework for mapping deep neural networks onto fpgas with rtl-hls hybrid templates,” in 2017 IEEE 25th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM). IEEE, 2017, pp. 152–159. [126] K. Guo, L. Sui, J. Qiu, J. Yu, J. Wang, S. Yao, S. Han, Y. Wang, and H. Yang, “Angeleye: A complete design flow for mapping cnn onto embedded fpga,” IEEE transactions on computer-aided design of integrated circuits and systems, vol. 37, no. 1, pp. 35–47,2017.
|