《软件学报》 (Journal of Software), 2026, No. 1, p. 328

Liu LW, et al.: Key privacy technologies across the whole process of data element circulation: Current status, challenges, and prospects


                 [96]   Shokri R, Stronati M, Song CZ, Shmatikov V. Membership inference attacks against machine learning models. In: Proc. of the 2017
                      IEEE Symp. on Security and Privacy (SP). San Jose: IEEE, 2017. 3–18. [doi: 10.1109/SP.2017.41]
                 [97]   Melis L, Song CZ, De Cristofaro E, Shmatikov V. Exploiting unintended feature leakage in collaborative learning. In: Proc. of the 2019
                      IEEE Symp. on Security and Privacy (SP). San Francisco: IEEE, 2019. 691–706. [doi: 10.1109/SP.2019.00029]
                 [98]   Wang Y, Zhao YY, Dong YS, Chen HY, Li JD, Derr T. Improving fairness in graph neural networks via mitigating sensitive attribute
                      leakage. In: Proc. of the 28th ACM SIGKDD Conf. on Knowledge Discovery and Data Mining. Washington: ACM, 2022. 1938–1948.
                      [doi: 10.1145/3534678.3539404]
                 [99]   Liu TT, Yao HW, Wu T, Qin Z, Lin F, Ren K, Chen C. Mitigating privacy risks in LLM embeddings from embedding inversion.
                      arXiv:2411.05034, 2024.
                 [100]   Jagielski M, Carlini N, Berthelot D, Kurakin A, Papernot N. High accuracy and high fidelity extraction of neural networks. In: Proc. of
                      the 29th USENIX Conf. on Security Symp. USENIX Association, 2020. 76.
                 [101]   Tang MX, Dai AN, DiValentin L, Ding A, Hass A, Gong NZ, Chen YR, Li HH. MODELGUARD: Information-theoretic defense
                      against model extraction attacks. In: Proc. of the 33rd USENIX Conf. on Security Symp. Philadelphia: USENIX Association, 2024. 297.
                 [102]   Yong ZX, Menghini C, Bach SH. Low-resource languages jailbreak GPT-4. In: Proc. of the 37th Int’l Conf. on Neural Information
                      Processing Systems. New Orleans: Curran Associates Inc., 2023.
                 [103]   Wei A, Haghtalab N, Steinhardt J. Jailbroken: How does LLM safety training fail? In: Proc. of the 37th Int’l Conf. on Neural
                      Information Processing Systems. New Orleans: Curran Associates Inc., 2023. 1–32.
                 [104]   Perez F, Ribeiro I. Ignore previous prompt: Attack techniques for language models. In: Proc. of the 36th Int’l Conf. on Neural
                      Information Processing Systems. New Orleans: Curran Associates Inc., 2022.
                 [105]   Liu YP, Jia YQ, Geng RP, Jia JY, Gong NZ. Formalizing and benchmarking prompt injection attacks and defenses. In: Proc. of the 33rd
                      USENIX Conf. on Security Symp. Philadelphia: USENIX Association, 2024. 103.

                 Appended Chinese References
                  [1]   Li FH, Li H, Jia Y, Yu NH, Weng J. Research scope and development trends of privacy computing. Journal on Communications,
                     2016, 37(4): 1–11 (in Chinese). [doi: 10.11959/j.issn.1000-436x.2016078]
                  [2]   Guo ZJ, Li ML, Zhou YM, Peng WL, Li S, Qian ZX, Zhang XP. Research progress on digital watermarking for artificial
                     intelligence generated content models. 网络空间安全科学学报, 2024, 2(1): 13–39 (in Chinese). [doi: 10.20172/j.issn.2097-3136.240102]
                  [4]   Huo W, Yu Y, Yang K, Zheng ZX, Li XX, Yao L, Xie J. Research progress and applications of cryptographic techniques for
                     privacy-preserving computation. Scientia Sinica Informationis, 2023, 53(9): 1688–1733 (in Chinese). [doi: 10.1360/SSI-2022-0434]
                 [31]   Dai YR, Zhang J, Xiang BW, Deng Y. Survey of the research status and development roadmap of fully homomorphic encryption.
                     Journal of Electronics & Information Technology, 2024, 46(5): 1774–1789 (in Chinese). [doi: 10.11999/JEIT230703]

                 About the Authors
                 刘立伟 (Liu Liwei), master's student, with research interests in data privacy, personalized federated learning, and AI security.
                 傅超豪 (Fu Chaohao), Ph.D., with research interests in machine unlearning, federated learning, and AI security.
                 孙泽堃 (Sun Zekun), Ph.D., CCF student member, with research interests in computer vision, AI security, and data privacy.
                 周耘 (Zhou Yun), bachelor's degree, with research interests in blockchain and AI security.
                 阮娜 (Ruan Na), Ph.D., associate professor, doctoral supervisor, CCF distinguished member, with research interests in data privacy, blockchain, and AI security.
                 蒋昌俊 (Jiang Changjun), Ph.D., professor, doctoral supervisor, CCF fellow, with research interests in network computing technology and online transaction risk control.