

                 6   Conclusion

                    Targeting the subset selection mechanism and the ring mechanism, two utility-optimal LDP protocols, this paper designs fake data attack schemes that maximize the attack gain, and uses these attacks to demonstrate that both mechanisms are vulnerable to fake data attacks. An attacker can inject fake users into the LDP mechanism and have them send fabricated data to the server, thereby significantly inflating the estimated frequency of a target item. Theoretical analysis and experimental evaluation confirm the effectiveness of the designed attacks. Finally, this paper proposes defense methods against fake data attacks. Future work includes an in-depth analysis of the impact of fake data attacks on other LDP mechanisms, as well as the design of secure and efficient defenses against such attacks.
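
                    As a minimal illustrative sketch only (not the paper's specific attack on the subset selection or ring mechanism), the following Python snippet assumes a generic k-ary randomized response (GRR) frequency estimation protocol and hypothetical parameter values, and simulates how injecting fake users who all report a chosen target item inflates that item's frequency estimate.

                    # Illustrative sketch: fake-user injection against a generic GRR
                    # frequency estimator (hypothetical parameters, not the paper's scheme).
                    import numpy as np

                    def grr_perturb(v, k, eps, rng):
                        # Report the true item with probability p, otherwise a uniformly
                        # chosen different item (standard k-ary randomized response).
                        p = np.exp(eps) / (np.exp(eps) + k - 1)
                        if rng.random() < p:
                            return v
                        other = rng.integers(0, k - 1)
                        return other if other < v else other + 1

                    def grr_estimate(reports, k, eps):
                        # Unbiased frequency estimates: (observed freq - q) / (p - q).
                        n = len(reports)
                        p = np.exp(eps) / (np.exp(eps) + k - 1)
                        q = 1.0 / (np.exp(eps) + k - 1)
                        counts = np.bincount(reports, minlength=k)
                        return (counts / n - q) / (p - q)

                    rng = np.random.default_rng(0)
                    k, eps, n, m, target = 16, 1.0, 10000, 500, 3
                    genuine = rng.integers(0, k, size=n)            # genuine users' true items
                    honest = [grr_perturb(v, k, eps, rng) for v in genuine]
                    fake = [target] * m                             # fake users all report the target item
                    before = grr_estimate(np.array(honest), k, eps)[target]
                    after = grr_estimate(np.array(honest + fake), k, eps)[target]
                    print(f"target frequency estimate: {before:.4f} -> {after:.4f}")

                    Because the estimator inverts the perturbation probabilities, each fake report contributes roughly 1/(p - q) > 1 to the target's estimated count, which is why even a modest fraction of fake users can shift the estimate substantially.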

