
Journal of Software (软件学报) ISSN 1000-9825, CODEN RUXUEW    E-mail: jos@iscas.ac.cn
2025, 36(5): 2254−2269 [doi: 10.13328/j.cnki.jos.007186] [CSTR: 32375.14.jos.007186]    http://www.jos.org.cn
© Institute of Software, Chinese Academy of Sciences. All rights reserved.    Tel: +86-10-62562563



Reward Fraud Attack and Defense for Federated Learning Based on Gradient Scale-up*

YUE Zi-Ying 1,2,3, CHEN Ke 1,2,3, SHOU Li-Dan 1,2,3, LUO Xin-Yuan 1,2,3, CHEN Gang 1,2,3


1 (Zhejiang University, Hangzhou 310027, China)
2 (State Key Laboratory of Blockchain and Data Security (Zhejiang University), Hangzhou 310027, China)
3 (Key Laboratory of Big Data Intelligent Computing of Zhejiang Province (Zhejiang University), Hangzhou 310027, China)
Corresponding author: CHEN Ke, E-mail: chenk@zju.edu.cn

Abstract: In federated learning, incentive mechanisms are an important tool for attracting high-quality data holders to participate and thereby obtaining better models. However, existing federated learning research rarely considers that participants may abuse these incentive mechanisms, i.e., they may manipulate the local model information they upload in order to obtain more rewards. This study investigates this problem in depth. First, the problem of participant reward fraud attacks in federated learning is clearly defined, and the reward-cost ratio is introduced to evaluate the effectiveness of different reward fraud attacks as well as of the corresponding defenses. Second, an attack method named the "gradient scale-up attack" is proposed, which commits reward fraud via the model gradients: it computes appropriate scaling factors and uses them to amplify the measured contribution of the local model gradients so as to obtain more rewards. Finally, an efficient defense is proposed that identifies fraudulent participants by checking the L2-norms of model gradients, effectively preventing gradient scale-up attacks. Extensive analysis and experiments on MNIST and other datasets show that the proposed attack significantly increases rewards, while the corresponding defense effectively resists the attacks of fraudulent participants.
Keywords: federated learning; reward fraud attack; gradient scale-up attack; malicious participant detection; security protection
CLC number: TP309

Citation format (Chinese): 乐紫莹, 陈珂, 寿黎但, 骆歆远, 陈刚. 基于梯度放大的联邦学习激励欺诈攻击与防御. 软件学报, 2025, 36(5): 2254–2269. http://www.jos.org.cn/1000-9825/7186.htm
Citation format (English): Yue ZY, Chen K, Shou LD, Luo XY, Chen G. Reward Fraud Attack and Defense for Federated Learning Based on Gradient Scale-up. Ruan Jian Xue Bao/Journal of Software, 2025, 36(5): 2254–2269 (in Chinese). http://www.jos.org.cn/1000-9825/7186.htm

Reward Fraud Attack and Defense for Federated Learning Based on Gradient Scale-up
YUE Zi-Ying 1,2,3, CHEN Ke 1,2,3, SHOU Li-Dan 1,2,3, LUO Xin-Yuan 1,2,3, CHEN Gang 1,2,3
1 (Zhejiang University, Hangzhou 310027, China)
2 (State Key Laboratory of Blockchain and Data Security (Zhejiang University), Hangzhou 310027, China)
3 (Key Laboratory of Big Data Intelligent Computing of Zhejiang Province (Zhejiang University), Hangzhou 310027, China)
Abstract: In the field of federated learning, incentive mechanisms play a crucial role in enticing high-quality data contributors to engage in federated learning and acquire superior models. However, existing research in federated learning often neglects the potential misuse of these incentive mechanisms. Specifically, participants may manipulate their locally trained models to dishonestly maximize their rewards. This issue is thoroughly examined in this study. Firstly, the problem of reward fraud in federated learning is clearly defined, and the concept of the reward-cost ratio is introduced to assess the effectiveness of various reward fraud techniques and defense mechanisms. Following this, an attack method named the "gradient scale-up attack" is proposed, focusing on manipulating model gradients to exploit the incentive system. This attack method calculates corresponding scaling factors and utilizes them to increase the contribution of the local model to gain more rewards. Finally, an efficient defense mechanism is proposed, which identifies malicious participants by examining the L2-norms of model updates, effectively thwarting gradient scale-up attacks. Through extensive analysis and experimental validation on MNIST and other datasets, the results show that the proposed attack significantly increases rewards, while the corresponding defense effectively resists the attacks of fraudulent participants.
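The attack and defense summarized in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the fixed norm threshold, the function names, and the use of a plain magnitude check are all assumptions for illustration only.

```python
import numpy as np

def scale_up(gradient, factor):
    """Fraudulent participant multiplies its local update by a scaling
    factor, inflating any contribution measure based on update magnitude."""
    return factor * np.asarray(gradient)

def l2_norm_defense(updates, threshold):
    """Server-side check (illustrative): discard any update whose L2-norm
    exceeds a threshold, filtering out scaled-up updates."""
    return [u for u in updates if np.linalg.norm(u) <= threshold]

honest = [np.array([0.10, -0.20, 0.05]) for _ in range(3)]
attacker = scale_up([0.10, -0.20, 0.05], 10.0)  # 10x amplification
kept = l2_norm_defense(honest + [attacker], threshold=1.0)
print(len(kept))  # 3: the scaled-up update (norm ≈ 2.29) is rejected
```

Honest updates here have an L2-norm of about 0.23, so the 10x-scaled update stands out clearly against the threshold; the paper's actual detection criterion is developed from the gradients themselves rather than a hand-picked constant.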


* Funding: "Pioneer" R&D Program of Zhejiang Province (2024C01021)
Received 2023-09-28; revised 2023-11-10, 2024-01-12; accepted 2024-03-26; published online (jos) 2024-09-14
Published online first (CNKI) 2024-09-18