4268 Journal of Software (Ruan Jian Xue Bao), 2025, Vol. 36, No. 9
the contribution of each attacker so as to ensure fairness, and it also reduces the room attackers have to craft malicious yet usable updates, thereby mitigating their impact on global performance. Extensive experimental results show that FedDiscrete is an effective and robust defense, exhibiting clear advantages against dynamic attack capabilities, diverse attack forms, and varied attack scenarios. We believe this work deepens the understanding of poisoning attacks and defense strategies in practical CEFL settings. In future work, we will (1) exploit the characteristics of data distributions to design discrete-update-space methods optimized for specific datasets, improving training efficiency; and (2) starting from dataset feature complexity and the granularity of defense methods, explore a defense that remains robust under the discrete update space for datasets with more complex local feature patterns, offering new directions for developing more efficient and robust defense and detection techniques applicable to a wider range of domains.
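The core idea summarized above, restricting client updates to a discrete space so that attackers have less freedom to craft malicious yet plausible updates, can be illustrated with a minimal sketch. This is not the paper's actual FedDiscrete algorithm; the function names and the `step`/`clip` parameters are hypothetical choices for illustration only:

```python
import numpy as np

def discretize_update(update, step=0.01, clip=0.1):
    """Project a client's model update onto a bounded discrete grid.

    Clipping bounds the update's magnitude; rounding to multiples of
    `step` shrinks the space of distinct updates a client can submit.
    (Illustrative sketch only -- `step` and `clip` are assumed
    parameters, not the paper's FedDiscrete configuration.)
    """
    clipped = np.clip(update, -clip, clip)
    return np.round(clipped / step) * step

def aggregate(updates):
    """FedAvg-style mean over discretized client updates."""
    return np.mean([discretize_update(u) for u in updates], axis=0)

# Honest clients submit small updates; an attacker submits an extreme one,
# which is clipped to +/-0.1 and snapped to the grid before averaging.
honest = [np.array([0.013, -0.027]), np.array([0.011, -0.031])]
malicious = np.array([5.0, -5.0])
agg = aggregate(honest + [malicious])
```

Even this simple projection caps how far a single malicious update can pull the aggregate, since every submitted vector is forced into the same bounded, finite set before averaging.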