Page 358 - Journal of Software (《软件学报》), 2025, No. 9

Zhao YR, et al.: Defense method against poisoning attacks in cloud-edge federated learning systems                                                      4269


                     Trans. on Computational Social Systems, 2020, 7(3): 818–826. [doi: 10.1109/TCSS.2019.2960824]
                 [19]   Cao XY, Zhang ZX, Jia JY, Gong NZ. FLCert: Provably secure federated learning against poisoning attacks. IEEE Trans. on Information
                     Forensics and Security, 2022, 17: 3691–3705. [doi: 10.1109/TIFS.2022.3212174]
                 [20]   Ma ZR, Ma JF, Miao YB, Li YJ, Deng RH. ShieldFL: Mitigating model poisoning attacks in privacy-preserving federated learning. IEEE
                     Trans. on Information Forensics and Security, 2022, 17: 1639–1654. [doi: 10.1109/TIFS.2022.3169918]
                 [21]   Shi ZS, Ding XY, Li FG, Chen YN, Li CR. Mitigation of poisoning attack in federated learning by using historical distance detection. In:
                     Proc. of the 5th Cyber Security in Networking Conf. (CSNet). Abu Dhabi: IEEE, 2021. 10–17. [doi: 10.1109/CSNet52717.2021.9614278]
                 [22]   Zhao YR, Zhang JB, Cao YH. Manipulating vulnerability: Poisoning attacks and countermeasures in federated cloud-edge-client learning
                     for image classification. Knowledge-Based Systems, 2023, 259: 110072. [doi: 10.1016/j.knosys.2022.110072]
                 [23]   Al-Maslamani NM, Ciftler BS, Abdallah M, Mahmoud MMEA. Toward secure federated learning for IoT using DRL-enabled reputation
                     mechanism. IEEE Internet of Things Journal, 2022, 9(21): 21971–21983. [doi: 10.1109/JIOT.2022.3184812]
                 [24]   Mnih V, Kavukcuoglu K, Silver D, Rusu AA, Veness J, Bellemare MG, Graves A, Riedmiller M, Fidjeland AK, Ostrovski G, Petersen S,
                     Beattie C, Sadik A, Antonoglou I, King H, Kumaran D, Wierstra D, Legg S, Hassabis D. Human-level control through deep
                     reinforcement learning. Nature, 2015, 518(7540): 529–533. [doi: 10.1038/nature14236]
                 [25]   Liu YN, Li KQ, Jin YW, Zhang Y, Qu WY. A novel reputation computation model based on subjective logic for mobile ad hoc networks.
                     Future Generation Computer Systems, 2011, 27(5): 547–554. [doi: 10.1016/j.future.2010.03.006]
                 [26]   Wang WX, Levine A, Feizi S. Improved certified defenses against data poisoning with (deterministic) finite aggregation. arXiv:
                     2202.02628, 2022.
                 [27]   Mozaffari H, Shejwalkar V, Houmansadr A. Every vote counts: Ranking-based training of federated learning to resist poisoning attacks.
                     In: Proc. of the 32nd USENIX Conf. on Security Symp. Anaheim: USENIX Association, 2023. 1721–1738.
                 [28]   Zhang JL, Chen JJ, Wu D, Chen B, Yu S. Poisoning attack in federated learning using generative adversarial nets. In: Proc. of the 18th
                     IEEE Int’l Conf. on Trust, Security and Privacy in Computing and Communications and the 13th IEEE Int’l Conf. on Big Data Science
                     and Engineering (TrustCom/BigDataSE). Rotorua: IEEE, 2019. 374–380. [doi: 10.1109/TrustCom/BigDataSE.2019.00057]
                 [29]   Zhao Y, Chen JJ, Zhang JL, Wu D, Blumenstein M, Yu S. Detecting and mitigating poisoning attacks in federated learning using
                     generative adversarial networks. Concurrency and Computation: Practice and Experience, 2022, 34(7): e5906. [doi: 10.1002/cpe.5906]
                 [30]   Liu XY, Li HW, Xu GW, Chen ZQ, Huang XM, Lu RX. Privacy-enhanced federated learning against poisoning adversaries. IEEE Trans.
                     on Information Forensics and Security, 2021, 16: 4574–4588. [doi: 10.1109/TIFS.2021.3108434]
                 [31]   Tolpegin V, Truex S, Gursoy ME, Liu L. Data poisoning attacks against federated learning systems. In: Proc. of the 25th European Symp.
                     on Research in Computer Security. Guildford: Springer, 2020. 480–501. [doi: 10.1007/978-3-030-58951-6_24]
                 [32]   Fraboni Y, Vidal R, Lorenzi M. Free-rider attacks on model aggregation in federated learning. In: Proc. of the 24th Int’l Conf. on
                     Artificial Intelligence and Statistics. San Diego: PMLR, 2021. 1846–1854.
                 [33]   Fung C, Yoon CJM, Beschastnikh I. Mitigating sybils in federated learning poisoning. arXiv:1808.04866, 2020.
                 [34]   Jagielski M, Oprea A, Biggio B, Liu C, Nita-Rotaru C, Li B. Manipulating machine learning: Poisoning attacks and countermeasures for
                     regression learning. In: Proc. of the 2018 IEEE Symp. on Security and Privacy (SP). San Francisco: IEEE, 2018. 19–35. [doi: 10.1109/
                     SP.2018.00057]
                 [35]   Jebreel NM, Domingo-Ferrer J. FL-Defender: Combating targeted attacks in federated learning. Knowledge-Based Systems, 2023, 260:
                     110178. [doi: 10.1016/j.knosys.2022.110178]
                 [36]   Sharipuddin, Purnama B, Kurniabudi, Winanto EA, Stiawan D, Hanapi D, Idris MYB, Budiarto R. Features extraction on IoT intrusion
                     detection system using principal components analysis (PCA). In: Proc. of the 7th Int’l Conf. on Electrical Engineering, Computer
                     Sciences and Informatics. Yogyakarta: IEEE, 2020. 114–118. [doi: 10.23919/EECSI50503.2020.9251292]
                 [37]   Zhao YR, Cao YH, Zhang JB, Huang HX, Liu YH. FlexibleFL: Mitigating poisoning attacks with contributions in cloud-edge federated
                     learning systems. Information Sciences, 2024, 664: 120350. [doi: 10.1016/j.ins.2024.120350]
                 [38]   Ramanujan V, Wortsman M, Kembhavi A, Farhadi A, Rastegari M. What’s hidden in a randomly weighted neural network? In: Proc. of
                     the 2020 IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR). Seattle: IEEE, 2020. 11890–11899. [doi: 10.1109/
                     CVPR42600.2020.01191]
                 [39]   He KM, Zhang XY, Ren SQ, Sun J. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In:
                     Proc. of the 2015 IEEE Int’l Conf. on Computer Vision. Santiago: IEEE, 2015. 1026–1034. [doi: 10.1109/ICCV.2015.123]
                 [40]   Li XY, Qu Z, Zhao SQ, Tang B, Lu Z, Liu Y. LoMar: A local defense against poisoning attack on federated learning. IEEE Trans. on
                     Dependable and Secure Computing, 2023, 20(1): 437–450. [doi: 10.1109/TDSC.2021.3135422]
                 [41]   Lyu L, Xu XY, Wang Q, Yu H. Collaborative fairness in federated learning. In: Yang Q, Fan LX, Yu H, eds. Federated Learning: Privacy