References

[1] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, "Communication-Efficient Learning of Deep Networks from Decentralized Data," in Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, pp. 1273–1282, 2017.
[2] A. N. Bhagoji, S. Chakraborty, P. Mittal, and S. Calo, "Analyzing Federated Learning through an Adversarial Lens," in International Conference on Machine Learning, pp. 634–643, 2019.
[3] H. Kim, J. Park, M. Bennis, and S.-L. Kim, "Blockchained On-Device Federated Learning," IEEE Communications Letters, vol. 24, no. 6, pp. 1279–1283, 2020.
[4] N. Rodríguez-Barroso, E. Martínez-Cámara, M. V. Luzón, and F. Herrera, "Dynamic Defense Against Byzantine Poisoning Attacks in Federated Learning," Future Generation Computer Systems, vol. 133, pp. 1–9, 2022.
[5] M. Song, Z. Wang, Z. Zhang, Y. Song, Q. Wang, J. Ren, and H. Qi, "Analyzing User-Level Privacy Attack Against Federated Learning," IEEE Journal on Selected Areas in Communications, vol. 38, no. 10, pp. 2430–2444, 2020.
[6] A. Lall, "Data Streaming Algorithms for the Kolmogorov-Smirnov Test," in 2015 IEEE International Conference on Big Data (Big Data), pp. 95–104, 2015.
[7] Q. Yang, Y. Liu, T. Chen, and Y. Tong, "Federated Machine Learning: Concept and Applications," ACM Transactions on Intelligent Systems and Technology, vol. 10, no. 2, pp. 1–19, 2019.
[8] D. Reis, P. Flach, S. Matwin, and G. Batista, "Fast Unsupervised Online Drift Detection Using Incremental Kolmogorov-Smirnov Test," in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1545–1554, 2016.
[9] S. Schelter, T. Rukat, and F. Biessmann, "Learning to Validate the Predictions of Black Box Classifiers on Unseen Data," in Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data, pp. 1289–1299, 2020.
[10] M. Hay, G. Miklau, D. Jensen, D. Towsley, and P. Weis, "Resisting Structural Re-identification in Anonymized Social Networks," in Proceedings of the VLDB Endowment, pp. 102–114, 2008.
[11] R. J. Santos, J. Bernardino, and M. Vieira, "Approaches and Challenges in Database Intrusion Detection," ACM SIGMOD Record, vol. 43, no. 3, pp. 36–47, 2014.
[12] A. Bhowmick, J. Duchi, J. Freudiger, G. Kapoor, and R. Rogers, "Protection Against Reconstruction and Its Applications in Private Federated Learning," arXiv:1812.00984, 2018.
[13] L. Melis, C. Song, E. De Cristofaro, and V. Shmatikov, "Exploiting Unintended Feature Leakage in Collaborative Learning," in 2019 IEEE Symposium on Security and Privacy (SP), pp. 691–706, 2019.
[14] L. T. Phong, Y. Aono, T. Hayashi, L. Wang, and S. Moriai, "Privacy-Preserving Deep Learning via Additively Homomorphic Encryption," IEEE Transactions on Information Forensics and Security, vol. 13, no. 5, pp. 1333–1345, 2018.
[15] A. Qayyum, M. U. Janjua, and J. Qadir, "Making Federated Learning Robust to Adversarial Attacks by Learning Data and Model Association," Computers & Security, vol. 121, 2022.
[16] M. Yang, H. Cheng, F. Chen, X. Liu, M. Wang, and X. Li, "Model Poisoning Attack in Differential Privacy-Based Federated Learning," Information Sciences, vol. 630, 2023.
[17] Q. Xia, Z. Tao, and Q. Li, "Defenses Against Byzantine Attacks in Distributed Deep Neural Networks," IEEE Transactions on Network Science and Engineering, vol. 8, no. 3, pp. 2025–2035, 2021.
[18] Y. Liu, Y. Xie, and A. Srivastava, "Neural Trojans," in 2017 IEEE 35th International Conference on Computer Design (ICCD), pp. 45–48, 2017.
[19] Y. Liu, X. Ma, J. Bailey, and F. Lu, "Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks," in Computer Vision – ECCV 2020: European Conference on Computer Vision, pp. 182–199, 2020.
[20] Y. Chen, L. Su, and J. Xu, "Distributed Statistical Machine Learning in Adversarial Settings: Byzantine Gradient Descent," Proceedings of the ACM on Measurement and Analysis of Computing Systems, vol. 1, no. 2, pp. 1–25, 2017.
[21] D. Yin, Y. Chen, K. Ramchandran, and P. L. Bartlett, "Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates," in Proceedings of the 35th International Conference on Machine Learning, pp. 5650–5659, 2018.
[22] P. M. Djuric and J. Miguez, "Assessment of Nonlinear Dynamic Models by Kolmogorov-Smirnov Statistics," IEEE Transactions on Signal Processing, vol. 58, no. 10, pp. 5069–5079, 2010.
[23] Y. Zhao, J. Chen, J. Zhang, D. Wu, M. Blumenstein, and S. Yu, "Detecting and Mitigating Poisoning Attacks in Federated Learning Using Generative Adversarial Networks," Concurrency and Computation: Practice and Experience, vol. 34, 2020.
[24] Y. Li, C. Chen, N. Liu, H. Huang, Z. Zheng, and Q. Yan, "A Blockchain-Based Decentralized Federated Learning Framework with Committee Consensus," IEEE Network, vol. 35, no. 1, pp. 234–241, 2021.
[25] Y. J. Kim and C. S. Hong, "Blockchain-Based Node-Aware Dynamic Weighting Methods for Improving Federated Learning Performance," in 2019 20th Asia-Pacific Network Operations and Management Symposium (APNOMS), pp. 1–4, 2019.
[26] U. Majeed and C. S. Hong, "FLchain: Federated Learning via MEC-enabled Blockchain Network," in 2019 20th Asia-Pacific Network Operations and Management Symposium (APNOMS), pp. 1–4, 2019.
[27] V. Tolpegin, S. Truex, M. E. Gursoy, and L. Liu, "Data Poisoning Attacks Against Federated Learning Systems," in Computer Security – ESORICS 2020, Lecture Notes in Computer Science, vol. 12308, pp. 480–501, 2020.
[28] J. Lin, M. Du, and J. Liu, "Free-Riders in Federated Learning: Attacks and Defenses," arXiv:1911.12560, 2019.
[29] N. Moustafa and J. Slay, "UNSW-NB15: A Comprehensive Data Set for Network Intrusion Detection Systems (UNSW-NB15 Network Data Set)," in 2015 Military Communications and Information Systems Conference (MilCIS), pp. 1–6, 2015.
[30] I. Sharafaldin, A. H. Lashkari, and A. A. Ghorbani, "Toward Generating a New Intrusion Detection Dataset and Intrusion Traffic Characterization," in 4th International Conference on Information Systems Security and Privacy (ICISSP), Portugal, 2018.
[31] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, "Communication-Efficient Learning of Deep Networks from Decentralized Data," in Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, pp. 1273–1282, 2017.
[32] H. B. McMahan, E. Moore, D. Ramage, and B. A. y Arcas, "Federated Learning of Deep Networks Using Model Averaging," arXiv:1602.05629, 2017.
[33] L. T. Phong, Y. Aono, T. Hayashi, L. Wang, and S. Moriai, "Privacy-Preserving Deep Learning via Additively Homomorphic Encryption," IEEE Transactions on Information Forensics and Security, vol. 13, no. 5, pp. 1333–1345, 2018.
[34] A. Shafahi, W. R. Huang, M. Najibi, O. Suciu, C. Studer, T. Dumitras, and T. Goldstein, "Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks," in 32nd Conference on Neural Information Processing Systems (NIPS 2018), pp. 6103–6113, 2018.
[35] M. Nasr, R. Shokri, and A. Houmansadr, "Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-Box Inference Attacks Against Centralized and Federated Learning," in 2019 IEEE Symposium on Security and Privacy (SP), pp. 739–753, 2019.
[36] H. Yang, M. Ge, K. Xiang, and J. Li, "Using Highly Compressed Gradients in Federated Learning for Data Reconstruction Attacks," IEEE Transactions on Information Forensics and Security, vol. 18, pp. 818–830, 2023.
[37] J. Zhang, C. Ge, F. Hu, and B. Chen, "RobustFL: Robust Federated Learning Against Poisoning Attacks in Industrial IoT Systems," IEEE Transactions on Industrial Informatics, vol. 18, no. 9, pp. 6388–6397, 2022.
[38] J. Zhao, H. Zhu, F. Wang, R. Lu, Z. Liu, and H. Li, "PVD-FL: A Privacy-Preserving and Verifiable Decentralized Federated Learning Framework," IEEE Transactions on Information Forensics and Security, vol. 17, pp. 2059–2073, 2022.
[39] Z. Zhang, L. Wu, C. Ma, J. Li, J. Wang, Q. Wang, and S. Yu, "LSFL: A Lightweight and Secure Federated Learning Scheme for Edge Computing," IEEE Transactions on Information Forensics and Security, vol. 18, pp. 365–379, 2023.