
1664                                                       Journal of Software (软件学报), 2025, Vol. 36, No. 4


                     35th Int’l Conf. on Neural Information Processing Systems. Online: Curran Associates Inc., 2021. 18075–18086.
                 [83]  Chen M, Zhang ZK, Wang TH, Backes M, Humbert M, Zhang Y. Graph unlearning. In: Proc. of the 2022 ACM SIGSAC Conf. on
                     Computer and Communications Security. Los Angeles: ACM, 2022. 499–513. [doi: 10.1145/3548606.3559352]
                 [84]  Zhu XR, Li GY, Hu W. Heterogeneous federated knowledge graph embedding learning and unlearning. In: Proc. of the 2023 ACM
                     Web Conf. Austin: ACM, 2023. 2444–2454. [doi: 10.1145/3543507.3583305]
                 [85]  Li YT, Wang CH, Cheng G. Online forgetting process for linear regression models. In: Proc. of the 24th Int’l Conf. on Artificial
                     Intelligence and Statistics. 2021. 217–225.
                 [86]  Mirzasoleiman B, Karbasi A, Krause A. Deletion-robust submodular maximization: Data summarization with “the right to be forgotten”.
                     In: Proc. of the 34th Int’l Conf. on Machine Learning. 2017. 2449–2458.
                 [87]  Yu C, Jeoung S, Kasi A, Yu PF, Ji H. Unlearning bias in language models by partitioning gradients. In: Findings of the Association for
                     Computational Linguistics. Toronto: ACL, 2023. 6032–6048. [doi: 10.18653/v1/2023.findings-acl.375]
                 [88]  Yao YS, Xu XJ, Liu Y. Large language model unlearning. arXiv:2310.10683, 2024.
                 [89]  Chen JA, Yang DY. Unlearn what you want to forget: Efficient unlearning for LLMs. arXiv:2310.20150, 2023.
                 [90]  Liu Z, Kalinli O. Forgetting private textual sequences in language models via leave-one-out ensemble. arXiv:2309.16082, 2023.
                 [91]  Ni SW, Chen DW, Li CM, Hu XP, Xu RF, Yang M. Forgetting before learning: Utilizing parametric arithmetic for knowledge updating
                     in large language models. arXiv:2311.08011, 2024.
                 [92]  Liu GY, Ma XQ, Yang Y, Wang C, Liu JC. FedEraser: Enabling efficient client-level data removal from federated learning models. In:
                     Proc. of the 29th IEEE/ACM Int’l Symp. on Quality of Service. Tokyo: IEEE, 2021. 1–10. [doi: 10.1109/IWQOS52092.2021.9521274]
                 [93]  Liu Y, Xu L, Yuan XL, Wang C, Li B. The right to be forgotten in federated learning: An efficient realization with rapid retraining. In:
                     Proc. of the IEEE INFOCOM 2022 - IEEE Conf. on Computer Communications. London: IEEE, 2022. 1749–1758. [doi: 10.1109/
                     INFOCOM48880.2022.9796721]
                 [94]  Liu Y, Ma Z, Liu XM, Ma JF. Learn to forget: User-level memorization elimination in federated learning. arXiv:2003.10933, 2021.
                 [95]  Gong J, Kang J, Simeone O, Kassab R. Forget-SVGD: Particle-based Bayesian federated unlearning. In: Proc. of the 2022 IEEE Data
                     Science and Learning Workshop. Singapore: IEEE, 2022. 1–6. [doi: 10.1109/DSLW53931.2022.9820602]
                 [96]  Wang JX, Guo S, Xie X, Qi H. Federated unlearning via class-discriminative pruning. In: Proc. of the 2022 ACM Web Conf. Virtual
                     Event: ACM, 2022. 622–632. [doi: 10.1145/3485447.3512222]
                 [97]  Che TS, Zhou Y, Zhang ZJ, Lyu LJ, Liu J, Yan D, Dou DJ, Huan J. Fast federated machine unlearning with nonlinear functional theory.
                     In: Proc. of the 40th Int’l Conf. on Machine Learning. 2023. 4241–4268.
                 [98]  Chen M, Zhang ZK, Wang TH, Backes M, Humbert M, Zhang Y. When machine unlearning jeopardizes privacy. In: Proc. of the 2021
                     ACM SIGSAC Conf. on Computer and Communications Security. Virtual Event: ACM, 2021. 896–911. [doi: 10.1145/3460120.3484756]

                 Appendix: References in Chinese:
                 [6]  General Administration of Quality Supervision, Inspection and Quarantine of the People's Republic of China, Standardization
                    Administration of China. GB/T 35273-2020 Information security technology - Personal information security specification.
                    Beijing: Standards Press of China, 2020. https://std.samr.gov.cn/gb/search/gbDetailed?id=A0280129495AEBB4E05397BE0A0AB6FE


                 李梓童 (1999-), female, M.S. candidate. Her research interests include privacy protection.

                 王雷霞 (1994-), female, Ph.D. candidate. Her research interests include data privacy protection.

                 郝新丽 (1995-), female, Ph.D. candidate. Her research interests include big data analytics.

                 孟小峰 (1964-), male, Ph.D., professor, doctoral supervisor, CCF Fellow. His research interests include cloud data management, web data management, and privacy protection.